Hi-tech health


When Swedish chemist and engineer Alfred Nobel invented dynamite, he had mining and the demolition of buildings in mind, all while making the handling of explosives safer. But as we know, his discoveries were also used for warfare. The fear of a similar scenario repeating itself is something many associate with the rise of artificial intelligence (AI).

The computer systems capable of performing tasks that normally require human intelligence, such as visual perception or decision-making, have been causing waves – think ChatGPT – sparking debate on the technology’s future impact on art and culture, the workplace, perhaps even our entire way of life.

With AI expected to permeate every aspect of our lives, it comes as no surprise that the health sector is affected too. But did you know that AI is already in use at numerous hospitals and doctors’ surgeries, assisting with procedures such as cancer screenings or radiology diagnostics?

It’s not about being pro or against AI. It is there, and it needs to be regulated in a way that lets us reap the benefits

“When I go to my dentist, he doesn’t manually make a model of the crown any more, he uses AI-powered 3D-imaging. When I get an ultrasound of my unborn baby, and I can see its blood flow on the screen, it’s not something actually visible in the image but knowledge extracted from that image,” explains Narges Ahmidi, who studied AI and medical robotics at Johns Hopkins University in Baltimore, and currently heads the department of reasoned AI decisions at the Fraunhofer Institute in Munich.

In her view, AI is the next revolution in healthcare. “There was a time when we didn’t have anaesthetics or antibiotics or vaccinations. Think of the invention of MRI, which lets you look inside the body,” she says. “There have been many revolutions in medicine, and we’re now witnessing another one.”

The current buzz in the research world mirrors this assessment. Google, which is working on a breast cancer screening tool, recently hosted HLTH 2023, where 10,000 industry professionals discussed the topic. Other tech giants such as Apple and IBM are also working on AI-powered health tools. Research institutes are upscaling or opening new departments, and even the European Union wants in on the action, heavily investing in the area.

The Commission supports small and medium-sized enterprises and startups that want to launch innovations in AI and robotics for healthcare, for example by providing testing facilities. And as part of its Europe’s Beating Cancer Plan, it launched the European Cancer Imaging Initiative, aiming to unleash the potential of AI to combat cancer. “Digital technologies and artificial intelligence are key in our battle against cancer,” Thierry Breton, commissioner for the internal market, said at the initiative’s launch last January.

Despite all the noise, new products arriving on the market are rare, Ahmidi notes. One recent addition is a software package that can predict the occurrence of complications after an operation, such as bleeding or kidney failure. The idea, which merges AI and medicine, comes from Alexander Meyer of Berlin’s renowned Charité hospital. As both a surgeon and a computer scientist, no one personifies this intersection better than he does. “It does feel like a little superpower, this knowledge of how to deal with data,” he says. “It’s very practical when you speak both languages.”

While his early warning system for intensive care units is now a certified medical product, it has been a long time coming. When Meyer first floated the idea of data analysis in 2014, he was told to just use a spreadsheet – so he changed hospitals. “I was flabbergasted by how much data we collect and generate, but don’t use. We have all these screens showing measurements, but no machine that can extract something meaningful. We’re lost in data,” he says.

The newly developed device solves that problem. Trained on data from more than 50,000 patients, it uses that accumulated knowledge to analyse measurements in real time and make predictions about a patient’s status. “We can intervene much earlier,” Meyer explains. “It makes it possible to work proactively instead of reactively. Often when you investigate certain mortality cases afterwards, looking at the data, you can see that it announced itself earlier, and you wonder why nobody reacted.”

Saving lives is undoubtedly a strong argument. Nevertheless, there are calls for caution. Hannah van Kolfschooten, a researcher and lecturer at the Law Centre for Health and Life at the University of Amsterdam, warns of the possibility of discrimination or medical errors programmed into the algorithms.

“My interest in this research area got sparked when I learnt how certain groups are being excluded. For example, skin cancer tools often don’t work on people with Black skin,” she says, referring to the problem of equal representation in data sets.

“Often, the people that are already being discriminated against and already have problems accessing healthcare will have the same thing happen when it comes to the development of the algorithms,” van Kolfschooten adds.

There are measures that could prevent this, she says, such as insisting on full transparency around the data being used, requiring developers to have greater diversity in their data sets, and integrating many different patient subsets.

Janneke van Oirschot, a research officer at NGO Health Action International, agrees: “Discrimination in healthcare is already a huge problem, and AI is formalising it.” She also points to the potential danger of inventions that make use of AI in the health sector without being classified as medical devices – a fact that lets them circumvent heavy regulation.

Discrimination in healthcare is already a huge problem, and AI is formalising it

While the EU’s AI Act seeks to address this as the first regulation worldwide to rein in AI, the law is not expected to be passed before the end of the current European Parliament term in 2024. In the meantime, new products are entering the market. And even when the AI Act becomes law, it will contain certain loopholes, according to van Oirschot: “The EU classifies medical devices as high-risk, which subjects them to the strictest safety rules. But many products don’t qualify as that.”

Things such as menstruation apps, medical chatbots, and health watches fall outside the classification, as do devices in use in elderly care, such as smart beds, which closely monitor a person’s sleep and movements, and even make predictions about behaviour.

“For one, there’s the privacy issue – they’re quite intrusive,” van Oirschot says. “Of course, it can be useful when they alert caregivers about restlessness or a fall. But when the person doesn’t match the training data very well – when they are shorter, or lighter than average – then maybe they will fall through the cracks.”

Another worry is the question of liability. Who will be responsible if something goes wrong – machine or human? “Imagine that AI suggests something you disagree with as a doctor, but it turns out the AI system was right. And is it fair for doctors to take responsibility for AI devices when they are not exactly sure how they work?” asks van Oirschot.

Ahmidi, who sits at the intersection of theory and practice, is excited about AI’s potential, but also thinks it’s important to address people’s fears. She wants people to understand that it’s a tool that can be responsibly used by doctors to make things easier and better, for patients and medicine professionals alike.

“It’s not about a red-eyed robot,” she says. “In the same way it’s making your life easier with text correction or face recognition on your cellphone, it can make things easier in healthcare.”

With the general population ageing and many European countries short of healthcare personnel, it could take the pressure off the ‘demi-gods in white’. Ahmidi urges us to reappraise our perception of medical staff: “Who do we think they are – Superman? They’re supposed to look at me and just know what’s wrong with me?”

In the same vein, Meyer adds: “When you go to a doctor, you don’t know their decision-making process. It depends on their personal experience, their studies and so on. AI will let us manage the enormous knowledge that’s already there and make this data cosmos we created accessible.”

It’s a fine line between setting up regulations that protect the patient and trying to not stifle innovation. This is exactly what Italian S&D MEP Brando Benifei has been working on as a negotiator for the AI Act.

I’m in favour of diving into it, and integrating the concerns into the science

“The sector is already one of the most regulated ones, and rightly so,” he says. “The regulation on medical devices is one of the most advanced and up-to-date laws for the harmonisation of products in the internal market, and the AI Act intervenes by adding another safety component.

“It is necessary to adopt a concrete approach without exalting or demonising these new opportunities. The right path is a regulation that respects our values, protects citizens and improves performance.”

According to Ahmidi, taking controlled risks is necessary, but trustworthiness is key. “We are, as a whole community, trying to figure out how this can be measured. There will be a standard certification process in the future. It’s a wild force developing so fast, and we don’t have sufficient regulations yet. The question is, what do we do in between? Wait and watch or become part of the conversation? I’m in favour of diving into it, and integrating the concerns into the science.”

Even critics can’t deny the enormous potential AI brings. Van Kolfschooten acknowledges: “The question is, how can we live with this technology? It’s not about being pro or against AI; AI is there, and it needs to be regulated in a way that lets us reap the benefits.”
