Technical Document 3

12 June 2024

The relationship between artificial intelligence (AI) and ethics in the healthcare domain is a topic of growing interest and debate. AI is defined as the field of study and development of systems and technologies capable of simulating human intelligence to carry out complex tasks autonomously [1]. In healthcare, AI has become a promising tool with the potential to improve diagnosis, treatment, and disease management, and to analyze large-scale medical and health data. However, the application of AI in healthcare poses a series of ethical challenges that must be addressed carefully and thoughtfully. Chief among these are the risks associated with data handling and protection, as well as biases that could arise or worsen, placing minorities of various backgrounds at a disadvantage and exacerbating existing disparities, such as those related to gender. This document defines the concept of ethics and its associated principles; discusses the role of ethics in AI solutions; explains what biases are and why they are so critical in the development of AI models, especially in the healthcare field; and finally addresses, with examples, the application of ethical principles throughout the lifecycle of AI-based solutions: problem selection and definition, planning and design, development and validation, deployment and implementation, and operation and monitoring.

Authors:

    • Santiago Esteban
    • Rosa Angelina Pace
    • Velén Pennini
    • Adrián Santoro
    • Adolfo Rubinstein
    • Cintia Cejas 

DOI: https://doi.org/10.48060/tghn.137