Technical Document 6

July 2024

Artificial intelligence (AI) is transforming the healthcare sector globally, promising significant advances in assisted diagnosis and personalized treatment. However, its rapid adoption requires a regulatory framework that guarantees its ethical and safe use. This document, prepared by the Center for Implementation and Innovation in Health Policies (CIIPS) of the Institute for Clinical and Health Effectiveness (IECS), offers a detailed analysis of the regulation of AI in health, highlighting both regional differences and the approaches adopted by various countries and international organizations.

The paper examines how international organizations such as the World Health Organization (WHO) and the Organisation for Economic Co-operation and Development (OECD) have established ethical principles and guidelines for the use of AI. These principles, which include transparency, data protection, and human oversight, seek to ensure that AI is used responsibly and safely. The paper also reviews the legislation and regulatory practices of countries and jurisdictions such as Japan, the United Kingdom, the European Union, Canada, and the United States, which have adopted diverse approaches, ranging from binding legislation (hard law) to more flexible, non-binding instruments (soft law).

Given the socioeconomic and technological conditions in Latin America and the Caribbean, AI regulation in the region presents specific challenges. Although some countries have made progress in drafting bills and rules inspired by the European Union's AI Act, the implementation and adoption of these regulations vary significantly. The document highlights the importance of regional and international collaboration in sharing knowledge, resources, and best practices.

The document identifies several challenges facing the region, including unequal access to medical innovations, technological dependence, economic competitiveness, and legal and ethical uncertainty. To overcome these challenges, it proposes developing regulatory frameworks adapted to local contexts, encouraging investment in research and development (R&D), and establishing mechanisms for continuous monitoring and evaluation of AI systems. Key recommendations include creating regulatory agencies to oversee compliance and assess the social and ethical impact of AI. It also suggests actively pursuing AI certification standards and protocols within a flexible regulatory framework, both regionally and globally, to ensure reliable and secure AI systems.


Authors:

  • Paula Eugenia Kohan
  • Cintia Cejas
