All in on AI, Accountable to Outcomes

Published on: May 16, 2024


AI is an increasingly important tool in our chase for the miracles of science. However, without human involvement, AI lacks the qualities needed to ensure fair and just impacts. Accountability is therefore a cornerstone of our approach to Responsible AI.

AI presents a challenge for accountability: an AI system often makes human-like decisions, yet it does not possess human characteristics such as consciousness, intentionality, or moral agency, and so cannot take responsibility for its own actions.

AI systems can also be opaque and complex, making them difficult to explain. At Sanofi, we believe that a human must be involved in, and is always accountable for, the outcomes of an AI system. AI accountability is required by current and upcoming laws. However, especially in the case of AI, regulation can move more slowly than AI development, deployment, and use.

Therefore, beyond legal compliance, we at Sanofi hold ourselves accountable for upholding our “Responsible AI Pillars” of Transparency & Explainability, Fair & Ethical, Robust & Safe, Eco-Responsible, and Accountable to Outcomes. Accountable to Outcomes is viewed as an overarching pillar. Our ethical principles correspond with principles found in recent and upcoming regulations from across the globe.

Cultivating an AI Accountability Culture

In April 2023, Sanofi launched the Responsible AI Working Committee - a cross-functional team including experts from legal, privacy, procurement, ethics, policy, and cybersecurity, as well as AI and data experts from across the pharmaceutical value chain - with a mandate to set the vision, standards, and processes needed to operationalize AI governance at Sanofi in practice.

In December 2023, an internal policy document was released announcing our corporate commitment to design, develop, deploy, and use AI in accordance with our Responsible AI Pillars. The document outlines our risk-based approach to AI governance, and the policy is reflected in both our employee code of conduct and our vendor code of conduct.

In addition, a global campaign was launched in April 2024 to educate Sanofians about AI, our Responsible AI principles, and the risks of AI systems, reaching over 15,000 Sanofians worldwide.

Facilitating AI Risk Management With the Creation of New Tools and Governance Bodies

In October 2023, a multi-disciplinary Interim Responsible AI Governance (“IRAG”) body was formed to review AI use cases classified as high-risk and to advise on appropriate risk-mitigation strategies: the set of controls that would make a use case acceptable. To date, approximately 20 use cases have been reviewed.

The Sanofi AI Risk Assessment was launched in February 2024 as a tool to standardize the risk classification of AI use cases under development at Sanofi. Risk classification is based on the intended use, the data to be used within the system, the AI model, and the deployment of the AI system. An example of a high-risk use case is AI/machine learning-based software as a medical device (e.g., an imaging system that uses AI/ML for diagnosing skin cancer).
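To make the four classification dimensions concrete, here is a minimal sketch of what a risk-classification check along those lines could look like. The dimension values, tier names, and decision rules are illustrative assumptions for this example only, not Sanofi's actual assessment logic.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """An AI use case described along the four dimensions named above."""
    intended_use: str   # e.g. "medical_device", "sales_insights" (illustrative)
    data: str           # e.g. "patient_data", "aggregated_sales"
    model: str          # e.g. "deep_learning", "rule_based"
    deployment: str     # e.g. "clinical", "internal"

# Hypothetical category sets; a real assessment would be far more nuanced.
HIGH_RISK_USES = {"medical_device", "diagnosis"}
SENSITIVE_DATA = {"patient_data"}

def classify(use_case: UseCase) -> str:
    """Return a coarse risk tier based on the use case's dimensions."""
    if use_case.intended_use in HIGH_RISK_USES:
        # e.g. AI/ML-based software as a medical device
        return "high"
    if use_case.data in SENSITIVE_DATA:
        return "medium"
    return "low"

# A skin-cancer imaging system lands in the high-risk tier.
imaging = UseCase("medical_device", "patient_data", "deep_learning", "clinical")
print(classify(imaging))  # high
```

A classification like this would then drive which governance steps (such as IRAG review) apply to the use case.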

The Sanofi AI Risk Assessment is initiated by AI Product Owners during the design phase of the product lifecycle, ensuring they receive the guidance needed to mitigate the risks identified in their AI system with minimal rework to the AI product. To date, approximately 50 AI systems across Sanofi have submitted the risk assessment.

Fostering Accountability at an AI System Level

Beyond organization-level initiatives, accountability is also fostered at Sanofi at the level of individual AI systems.

Let’s look at how Turing - an AI system that provides insights to our sales representatives - has operationalized AI accountability through some of the controls implemented by the team. Those controls include the following:


Turing maintains clear and concise records of the decisions made throughout the AI system’s lifecycle and development process. This encompasses appropriate management of code versions and transparent record-keeping of the AI system’s decision-making.
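The kind of decision record described above can be sketched as a small structured log entry that ties each lifecycle decision to the code version it applies to. The field names and values here are illustrative assumptions, not Turing's actual record format.

```python
import datetime
import json

def make_record(decision: str, rationale: str, code_version: str) -> dict:
    """Build a timestamped, auditable record of a lifecycle decision.

    Linking each decision to a code version (hypothetical field names)
    is what makes the record useful for later review.
    """
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "code_version": code_version,
    }

# Example entry: values are invented for illustration.
record = make_record(
    decision="retrain model on latest quarterly data",
    rationale="drift detected in input feature distributions",
    code_version="v1.4.2",
)
print(json.dumps(record, indent=2))
```

Persisting such records alongside versioned code gives a human reviewer a transparent trail of why the system behaves as it does.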

Remaining accountable to the outcomes of the AI systems that we design, develop, deploy and use is a testament to the high moral and ethical standards that we hold ourselves to as Sanofians. 

Our ambition at Sanofi is to chase the miracles of science to improve people’s lives. We have a duty of care to those we serve, ensuring that patients and healthcare professionals can interact with our systems safely and seamlessly.   

We believe that, alongside legal regulations, every one of us at Sanofi is accountable for how our technology impacts our patients and the world. 

