Rules for Robots: A Global Tour of How Countries are Regulating Medical AI
By KHANIJ ARYA
GOVERNMENT MEDICAL COLLEGE PATIALA

Keywords: AI related laws, AI regulation, public safety
Artificial Intelligence (AI) is a general-purpose technology whose origins date back to the early 1950s. Its development has alternated between cycles of intense hype and innovation and periods of stagnation and disillusionment. In 2022 alone, more than thirty AI-related laws were enacted across over a hundred countries, signaling a significant global shift toward regulatory oversight.
A defining moment in this shift was the launch of ChatGPT in late 2022, which brought generative AI into mainstream consciousness. This emergence raised widespread concerns about algorithmic bias, misinformation, copyright violations, and potential disruptions to labor markets. The rapid advancements in machine learning, the powerful capabilities of large language models, and the global influence of social media have collectively alarmed policymakers and catalyzed regulatory responses. Despite this momentum, no country currently has a fully comprehensive AI-specific legal framework. Most governments are navigating the challenges of AI through a patchwork of existing sectoral laws and evolving policy initiatives.
1) United States: Innovation-First, Fragmented Regulation
At the Federal Level: The U.S. has no comprehensive AI-specific legislation. Instead, it relies on existing sectoral laws and emerging policy initiatives.
Federal Initiatives:
• National Artificial Intelligence Initiative Act of 2020 (NAIIA): Enacted to promote AI research and development across federal agencies, aiming to bolster U.S. leadership in AI innovation.
• Bipartisan House Task Force Report on AI (December 2024): This report outlines guiding principles and recommendations for future congressional actions concerning AI advancements.
Executive Orders:
● Biden Administration (2023–2025): Issued orders focusing on the safe and ethical development of AI, building on the 2022 “Blueprint for an AI Bill of Rights.”
● Trump Administration (2025): In January 2025, President Trump signed an executive order titled “Removing Barriers to American Leadership in Artificial Intelligence,” aiming to revoke previous directives perceived as restrictive and to promote innovation.
State-Level Legislation: In the absence of comprehensive federal regulation, several states have enacted their own AI-related laws. A few examples:
● Colorado AI Act (2024): Modelled after the EU AI Act, this legislation categorizes AI systems by risk level and imposes corresponding regulatory requirements.
● Tennessee’s ELVIS Act (March 2024): Aimed at protecting artists’ rights, this law addresses unauthorized AI-generated reproductions of individuals’ voices and likenesses.
● Utah’s Artificial Intelligence Policy Act (March 2024): This act establishes disclosure requirements for generative AI use and creates an Office of Artificial Intelligence Policy.
2) India: Building Amidst Fragmentation
India’s AI regulation is a work in progress—defined by ambition but hindered by fragmentation. The Ministry of Electronics and Information Technology has initiated several key programs under the India AI Mission, which has a substantial budget of ₹10,372 crore over five years. This mission aims to bolster AI research and development through the establishment of AI Centres of Excellence focusing on sectors like healthcare, agriculture, and sustainable cities.
Additionally, the creation of the India AI Safety Institute (AISI) underscores the government’s commitment to ensuring the ethical and safe application of AI models, particularly those grounded in India’s diverse socio-economic context. Despite these initiatives, India’s regulatory landscape for AI remains fragmented. The Digital Personal Data Protection Act of 2023 addresses data privacy concerns but does not specifically cater to AI-related issues. Furthermore, the absence of standardized testing and certification processes for AI systems leaves citizens vulnerable to potential biases and privacy violations.
3) China: Centralized Control with Political Alignment
China has emerged as a global frontrunner in artificial intelligence (AI) regulation, implementing a series of comprehensive measures that reflect its unique political and social priorities. The regulatory framework is characterized by a top-down approach, emphasizing state control, content governance, and alignment with the Chinese Communist Party’s (CCP) ideological values.
Regulatory Bodies: The National Medical Products Administration (NMPA) and the Cyberspace Administration of China (CAC) oversee medical AI applications and algorithms, respectively.
Key Regulations:
● 2021 Algorithm Rules: Mandate algorithm registration and transparency.
● 2022 Deep Synthesis Provisions: Require labelling of deepfakes and synthetic content.
● 2023 Interim Measures for Generative AI: Demand ideological conformity, algorithm pre-approval, and security audits.
The regulatory process in China is notably opaque, with limited public consultation and a significant role played by state agencies and party-affiliated institutions. This centralized approach allows for swift policy implementation but raises concerns about the suppression of innovation and the potential stifling of academic and commercial freedom. It offers a contrasting perspective to the more open, decentralized regulatory models seen in other parts of the world.
4) European Union: Comprehensive and Preventive
The European Union has taken a proactive, risk-based approach with the Artificial Intelligence Act (AI Act), which came into effect on August 1, 2024. The Act categorizes AI systems into four risk tiers—unacceptable, high, limited, and minimal.
Key Provisions:
● High-Risk Classification for Medical AI: Requires rigorous compliance measures, including risk management, human oversight, and robust data governance.
● European Health Data Space (EHDS): Supports secure, cross-border exchange of health data within EU member states.
This framework is designed to prioritize safety and public trust while supporting innovation across the Union.
Bottom Line: AI regulation is evolving unevenly across the globe. While the EU prioritizes safety, the US backs speed, China pushes control, and India juggles innovation with regulatory gaps. Medical innovation is racing ahead—regulators are only just lacing up their shoes.