Responsible AI
Responsible Artificial Intelligence (AI) refers to the ethical and accountable design, development, deployment, and use of AI systems.
It prioritizes values such as transparency, fairness, accountability, and respect for human rights, ensuring that AI technologies are developed and applied in ways that promote well-being, equity, and social good, while minimizing potential harm [1].
According to the World Health Organization (WHO, 2021), achieving responsible AI in health requires alignment with six core ethical principles [2]:
PRINCIPLES
1. Protecting human autonomy: Preserving human agency and clinical decision-making, especially in sensitive health interventions.
2. Promoting human well-being, safety, and the public interest: Ensuring that AI technologies enhance patient outcomes and do not compromise safety.
3. Ensuring transparency, explainability, and intelligibility: Making AI systems understandable and their functions interpretable by end users and stakeholders.
4. Fostering responsibility and accountability: Clearly delineating who is answerable for the performance and outcomes of AI systems.
5. Ensuring inclusiveness and equity: Designing AI systems that address, rather than exacerbate, existing health inequities across populations.
6. Promoting AI that is responsive and sustainable: Developing systems that are adaptable over time and environmentally responsible.
These principles offer a robust ethical foundation to guide the development and use of AI in healthcare, enabling equitable, trustworthy, and effective innovation.
The WHO’s 2021 guidance also outlines ten key ethical challenges that must be proactively addressed to realize the potential of AI in health care responsibly [2]:
- Assessing whether AI should be used at all in certain clinical contexts.
- Addressing the digital divide to ensure equitable access to AI technologies.
- Ensuring ethical data collection, ownership, and usage practices.
- Establishing clear lines of accountability and responsibility in decision-making.
- Managing the implications of autonomous decision-making by AI systems.
- Identifying and mitigating bias and discrimination embedded in AI algorithms.
- Safeguarding against safety and cybersecurity risks posed by AI technologies.
- Preparing for labour market disruptions within the health sector.
- Navigating commercialization pressures that may conflict with public health interests.
- Evaluating the environmental impact of AI, particularly in the context of climate change.
These are not hypothetical concerns; they are pressing issues that demand robust governance frameworks, inclusive stakeholder engagement, and ethical foresight to ensure that AI advances in health care serve all populations, especially those in low-resource settings.
References
- Lyons JB, Hobbs K, Rogers S, Clouse SH. Responsible (use of) AI. Front Neuroergon. 2023 Nov 20;4:1201777. doi: 10.3389/fnrgo.2023.1201777. PMID: 38234494; PMCID: PMC10790885.
- World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. https://www.who.int/publications/i/item/9789240029200
RESOURCES
PUBMED | LATEST AI RESEARCH
(("Artificial Intelligence"[Title/Abstract] OR "Machine Learning"[Title/Abstract] OR "Deep Learning"[Title/Abstract] OR "Natural Language Processing"[Title/Abstract]) AND ("Clinical care"[Title/Abstract] OR "clinical decision"[Title/Abstract] OR "Health"[Title/Abstract] OR "Healthcare"[Title/Abstract])) AND ("Health equit*"[Title/Abstract] OR "health disparit*"[Title/Abstract] OR "health inequalit*"[Title/Abstract] OR "ethic*"[Title/Abstract])
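A search string like the one above can also be run programmatically against NCBI's E-utilities ESearch endpoint (the documented API behind PubMed searches). A minimal sketch in Python, using only the standard library; the query shown is an abbreviated version of the full search string above, and `retmax` is an illustrative default:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (documented public API).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_search_url(term: str, retmax: int = 20) -> str:
    """Build an ESearch URL that returns matching PMIDs as JSON."""
    params = {
        "db": "pubmed",      # search the PubMed database
        "term": term,        # the PubMed query string
        "retmax": retmax,    # maximum number of PMIDs to return
        "retmode": "json",   # JSON response instead of XML
    }
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# Abbreviated form of the search strategy above, for illustration only.
query = (
    '("Artificial Intelligence"[Title/Abstract] OR "Machine Learning"[Title/Abstract]) '
    'AND ("Healthcare"[Title/Abstract]) '
    'AND ("health equit*"[Title/Abstract] OR "ethic*"[Title/Abstract])'
)
url = build_pubmed_search_url(query)
print(url)
```

Fetching the resulting URL (e.g. with `urllib.request.urlopen`) returns a JSON payload whose `esearchresult.idlist` field lists the matching PMIDs; NCBI asks that automated clients rate-limit their requests.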
WORKSHOPS & WEBINARS
Coming soon.
PARTNER RESOURCE LIBRARY
- HealthAI Resource Library: Papers, Briefs, Reports, and more
- OECD.AI Resource Library: Health Publications