[Image: A human hand and a robotic hand touching a screen]

The Ethics of AI in Healthcare

As artificial intelligence (AI) continues its long journey from pipe dream to mainstream, few industries stand to benefit as much as healthcare. AI technologies are rapidly changing the landscape for patients, healthcare workers and researchers alike through increasingly sophisticated means of capturing and interpreting patient information in real time, allowing machines to learn and make accurate predictions from large sets of data.

The recent Viva Technology exhibition in Paris, a meeting place for startups and major companies, gave us the chance to launch collaborations with five startups working in AI to deepen our expertise and ability in this area.

However, ethical and practical concerns about AI have to be addressed so that enthusiasm about its benefits does not turn to dread over potential side effects. 

Privacy issues, along with concerns about bias and about the quality and diversity of data, loom large. The so-called “black box” problem–the fact that many of the most effective AI systems cannot explain how they reach their conclusions–further dents trust in AI.

Looking for guidance

Increasingly, executives, politicians and even AI practitioners are calling for oversight of the technology’s use in the life sciences. 

“AI doesn't make judgments, it gives you an output,” Ameet Nathwani, Chief Digital Officer at Sanofi, said. “And so the key thing is the data that is fed into the AI. Is the information that is fed in free of bias? Is it based on legitimate data sources?” 

Examples of biased data abound. Until recently, the fact that most clinical trial participants were white and male raised little concern.

The study of human genomics promises to revolutionize the treatment of cancer and other diseases. Yet a 2016 analysis of 2,511 studies from around the world found that more than 80% of participants in genome-mapping studies were of European descent. Alice Popejoy, the Stanford postdoctoral researcher who produced the analysis, told the news website Quartz that this was “not just an ethical or moral problem, it’s really a scientific problem.”

Meanwhile, an MIT researcher’s widely reported finding that commercial facial recognition software performed dismally on darker-skinned women cast a pall over medical AI systems that rely on computer vision, such as those designed to help diagnose skin cancers.
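
Such a gap can be caught before deployment with a routine audit of model accuracy broken down by demographic group. Below is a minimal sketch in Python of that kind of check; the dataset, group labels and numbers are purely hypothetical:

    import pandas as pd

    # Hypothetical evaluation results: one row per test image, with the
    # subject's demographic group and whether the model was correct.
    results = pd.DataFrame({
        "group":   ["lighter_male", "lighter_female",
                    "darker_male", "darker_female"] * 3,
        "correct": [1, 1, 1, 0,
                    1, 1, 0, 0,
                    1, 1, 1, 0],
    })

    # Accuracy per group; a large gap between the best- and worst-served
    # groups is a red flag for biased training or evaluation data.
    per_group = results.groupby("group")["correct"].mean()
    print(per_group)
    print("accuracy gap:", per_group.max() - per_group.min())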

Adversarial attacks

Dr. Nathwani pointed to additional concerns: privacy and protecting healthcare data systems from bad actors. 

He pointed out that with trillions of dollars at stake, there is an undeniable risk of adversarial attacks on healthcare systems. “The issues of fake news or nudges–the ability to imperceptibly influence an AI algorithm–have to be thought through. How do you govern it? How do you protect it?”
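
To make the idea of a “nudge” concrete, the sketch below shows the fast gradient sign method, a standard adversarial technique chosen here purely for illustration (the source does not name a specific attack); the toy model and its weights are hypothetical:

    import numpy as np

    w = np.array([0.8, -1.2, 0.5])    # hypothetical model weights
    b = 0.1

    def predict(x):
        # Toy logistic-regression "risk score" in [0, 1].
        return 1.0 / (1.0 + np.exp(-(w @ x + b)))

    x = np.array([0.2, 0.4, -0.1])    # a benign input
    eps = 0.05                        # imperceptibly small step size

    # The gradient of the score w.r.t. the input is proportional to w,
    # so stepping each feature by eps in the sign of w raises the score.
    x_adv = x + eps * np.sign(w)

    print("before nudge:", predict(x))
    print("after nudge: ", predict(x_adv))

The perturbation is far too small to notice in the data, yet it systematically shifts the model’s output, which is what makes such attacks hard to govern or detect.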

Most experts agree that trustworthiness is a key to successful AI, and that ethics and diversity issues must be addressed early and often. An early and consistent focus on ethics and transparency, combined with accountability and a human-centric approach, can head off some of these issues.

Different values

Of course, ethics, like beauty, is in the eye of the beholder. Values differ from country to country and from company to company. For the average Chinese city dweller, facial recognition technology is just part of day-to-day reality, while in the West the technology raises hackles, especially in a law-enforcement context.

Amazon, for example, suffered a public relations blow last summer after American workers demanded that the company stop selling facial recognition software to law enforcement. Google and Microsoft have experienced similar internal revolts. And last week, the city of San Francisco banned local government agencies’ use of facial recognition.

[Infographic: The ethical challenges and the diversity of stakeholders involved]

Ethics guidance

Companies seeking to ask the right questions and set the right course on AI ethics have a growing number of sources for inspiration and guidance. A good start is the European Union’s Ethics Guidelines for Trustworthy AI.  

The EU guide establishes seven key requirements: 

  • human agency and oversight 
  • technical robustness and safety 
  • privacy and data governance 
  • transparency
  • diversity, non-discrimination and fairness
  • environmental and societal well-being
  • accountability

Sanofi is developing its own policy on the use and governance of AI, based on three principles that should be upheld when it comes to AI in healthcare, said Dr. Nathwani, who is also Sanofi’s Chief Medical Officer: 

  • AI should be used in the interest of patients
  • AI should not treat any group of patients unfairly
  • Patient dignity must be preserved: patients should retain autonomy of thought, intention and action when making decisions about their healthcare.

Dr. Nathwani emphasized the importance Sanofi places on the treatment of data. 

“One of our obsessions is ensuring that before any data goes into our systems, it goes through a curation process and we understand what the data is fit for,” he said. “We have to categorize data very carefully. Robust data from a clinical trial is very different to data that comes out of social media listening, where there may be lots of gaps, and we don’t treat them the same way.”
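
In the simplest terms, the curation step he describes might look like the sketch below, in which every source is tagged with what it is fit for before anything enters a model. The tiers, field names and thresholds are illustrative assumptions, not Sanofi’s actual taxonomy:

    from dataclasses import dataclass

    @dataclass
    class DataSource:
        name: str
        provenance: str       # e.g. "clinical_trial", "social_listening"
        completeness: float   # fraction of non-missing fields, 0 to 1

    def categorize(src: DataSource) -> str:
        """Assign a fitness-for-use tier before any data is ingested."""
        if src.provenance == "clinical_trial" and src.completeness > 0.95:
            return "evidence-grade"   # fit for rigorous analyses
        if src.completeness > 0.70:
            return "exploratory"      # fit for hypothesis generation
        return "signal-only"          # too many gaps for modeling

    for s in [DataSource("TRIAL-042", "clinical_trial", 0.99),
              DataSource("forum-scrape", "social_listening", 0.45)]:
        print(s.name, "->", categorize(s))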

Dr. Nathwani added that he hoped to soon see AI systems that can help explain other AI systems–to solve the “black box” problem and produce explainable AI, or XAI. Both doctors and patients deserve a natural, human explanation for a machine-made decision, he said. 
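
One flavor of XAI treats the model as a sealed box and probes it from the outside. The sketch below shows the simplest such probe, feature ablation, in which each input is zeroed out in turn to see how much it mattered; it is a toy stand-in for production explanation methods such as LIME or SHAP:

    import numpy as np

    def black_box(x):
        # Stand-in for an opaque model: callers see only inputs and outputs.
        return 1.0 / (1.0 + np.exp(-(0.9 * x[0] - 1.4 * x[1] + 0.3 * x[2])))

    def explain(x, baseline=0.0):
        """Score each feature by how much ablating it changes the output."""
        base = black_box(x)
        scores = []
        for i in range(len(x)):
            x_ablated = x.copy()
            x_ablated[i] = baseline     # zero out one feature at a time
            scores.append(base - black_box(x_ablated))
        return scores

    x = np.array([1.2, 0.8, -0.5])
    for i, s in enumerate(explain(x)):
        print(f"feature {i}: contribution {s:+.3f}")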
