Asia-Pacific Developing Frameworks for AI-Enabled Medical Technologies

In the Asia-Pacific (APAC) region, regulators are taking important steps to shape the future of artificial intelligence (AI) in medical technology. That progress reflects both recognition of AI's transformative potential and acknowledgment that new forms of oversight are needed. To date, no APAC country has implemented an umbrella AI law, but many are developing legal and ethical frameworks to govern the use of AI and machine learning (ML) in healthcare, particularly as AI-enabled diagnostic and therapeutic tools become more prevalent.

AI technologies, from deep learning algorithms to natural language processing, demand a distinct regulatory approach that ensures safety, fairness, and transparency. Japan stands out for its sector-focused rules, such as the 2023 Next-Generation Medical Infrastructure Act, which promotes the use of AI in medical research and diagnostic services. In addition, a proposed Basic Law for the Promotion of Responsible AI is expected to establish governance standards for large foundation models with the potential for significant societal effects. South Korea has also made headway by proposing the Digital Medical Products Act, targeted at digital and AI-based medical devices, and is developing a Basic AI Act, slated to take effect in 2026, that adopts a risk-based framework regulating AI technologies according to their societal and health impact.

Singapore, too, is taking significant steps in healthcare AI by issuing guidelines to promote the responsible use of AI in healthcare. The National AI Strategy 2.0 and the AI in Healthcare Guidelines call for ethically responsible use, validated protocols, and transparency in algorithmic decision making. Developers can also draw on tools such as the government-backed AI Verify toolkit, which lets them benchmark system performance against internationally recognized standards. Finally, Malaysia's AI Code of Ethics and India's recent guidance on responsible platform behavior set out principles relevant to medical AI systems, including data privacy, algorithmic fairness, and public accountability.

These regulatory changes are also being influenced by the European Union's AI Act, which came into force in August 2024 as the world's first comprehensive "hard" law on AI. As AI-enabled health technologies proliferate across the region, companies are encouraged to adopt stronger governance early, including compliance with emerging standards, documentation of clinical benefit, and disclosure of AI model limitations.


Written by: Ames Gross – President and Founder, Pacific Bridge Medical (PBM)

Mr. Gross founded PBM in 1988 and has helped hundreds of medical companies with regulatory and business development issues in Asia. He is recognized nationally and internationally as a leader in the Asian medical markets. Mr. Gross has a BA degree, Phi Beta Kappa, from the University of Pennsylvania and an MBA from Columbia University.