Technology can transform healthcare. But only accountability, ethics, and strong regulation can make it worthy of trust
K.V. CHANDRA MOULI
Artificial Intelligence (AI) is rapidly transforming healthcare, promising faster diagnoses, precise treatment planning, and improved patient outcomes. By processing vast volumes of medical data—patient records, laboratory reports, and imaging scans—AI systems can identify patterns with remarkable speed and accuracy. Yet, one fundamental principle must remain non-negotiable: AI assists doctors; it does not replace them. The ultimate responsibility for diagnosis and treatment rests squarely with qualified medical professionals.
From Experiment to Entrenchment
AI is no longer experimental. Since early systems like MYCIN in the 1970s, technological advances in computing power and machine learning have pushed AI into the core of modern healthcare. Today, it is embedded across hospitals—interpreting scans, suggesting diagnoses, monitoring patients in real time, and even guiding robotic surgeries.
This integration has undeniable benefits. Early detection of diseases, reduced diagnostic errors, and improved efficiency are real gains. But beneath this progress lies a more uncomfortable truth: regulation has not kept pace with innovation.
Where the System Falls Short
India’s current regulatory approach to AI in healthcare is fragmented and reactive. AI tools are treated as medical devices under the Medical Devices Rules, 2017 and the Drugs and Cosmetics Act, 1940. Patient data is governed by the Digital Personal Data Protection Act, 2023. Together, these laws create a patchwork of compliance—but not a coherent regulatory framework.
This fragmentation leaves critical gaps:
- No AI-specific legal standards addressing bias, model drift, or adaptive learning.
- No clear accountability when AI-assisted decisions cause harm.
- Inconsistent data protection enforcement.
- Limited oversight and weak auditing mechanisms.
- A glaring absence of uniform national standards.
In short, the system regulates pieces of AI—but not AI itself.
The Silence Around Patient Awareness
Perhaps the most serious omission is this: patients are not explicitly told when AI is used in their care. Existing laws require consent for data use, but they do not mandate disclosure of AI involvement in diagnosis or treatment. Neither the Drugs and Cosmetics Act, 1940, nor the Medical Devices Rules, 2017, imposes such an obligation. Even the Digital Personal Data Protection Act, 2023, focuses on data, not decision-making.
The result is a transparency vacuum. Patients may unknowingly rely on AI-driven outputs without understanding their limitations, risks, or role in clinical decisions. Consent, in such cases, is not truly informed—it is procedural.
What Hospitals Must Do—Now
Even within the existing framework, hospitals cannot afford complacency. Responsible AI adoption demands more than basic compliance:
- Use only approved and validated AI systems under existing medical device laws.
- Ensure human oversight at every stage—AI outputs must never be final.
- Maintain audit trails of AI recommendations and clinical decisions.
- Test systems for bias and accuracy before and after deployment.
- Protect patient data rigorously, with transparency in its use.
- Disclose AI involvement to patients, even where the law is silent.
- Train medical staff to understand both the capabilities and limitations of AI.
Compliance is not merely a legal obligation—it is a clinical and ethical necessity.
Lessons from Global Practice
Other jurisdictions are moving faster and more decisively. The EU AI Act introduces a risk-based framework, imposing strict obligations on high-risk systems like healthcare AI. In the United States, the Food and Drug Administration adopts a lifecycle approach, monitoring AI systems continuously rather than approving them once and ignoring future changes. These models recognise a simple reality: AI is not static—it evolves. Regulation must evolve with it.
India, with its diverse population and uneven healthcare access, faces even greater risks. AI systems trained on non-representative data can amplify bias and inequality. Without strong oversight, technology intended to improve care may deepen disparities.
The Government’s Role: From Observer to Regulator
Incremental reform will not suffice. The government must move from fragmented oversight to decisive regulation:
- Develop AI-specific regulatory standards.
- Mandate independent audits and certification.
- Establish clear liability frameworks.
- Introduce continuous monitoring mechanisms.
- Invest in training healthcare professionals.
- Launch public awareness initiatives on AI in healthcare.
Most importantly, regulation must shift from reactive enforcement to proactive governance.
Why India Needs a Dedicated AI Law
The central question is no longer whether AI should be regulated—it already is, in parts. The real question is whether this fragmented approach is sufficient. It is not.
Current laws—the Drugs and Cosmetics Act, 1940, the Medical Devices Rules, 2017, and the Digital Personal Data Protection Act, 2023—address isolated aspects of AI. They do not address its defining features: autonomy, adaptability, opacity, and systemic impact.
A dedicated AI law would change this. It would:
- Define clear accountability across developers, hospitals, and clinicians.
- Mandate algorithmic transparency and explainability.
- Introduce risk-based classification tailored to AI systems.
- Ensure uniform national standards and enforcement.
- Protect patient rights in an AI-driven environment.
More importantly, it would provide regulatory certainty, encouraging innovation while safeguarding the public interest.
Conclusion: Technology Must Follow Trust
Artificial Intelligence is no longer the future of healthcare—it is already shaping clinical decisions today. Its benefits are real, but its risks are equally serious. Without clear rules, strong oversight, and ethical discipline, AI will not strengthen healthcare—it will erode the trust on which healthcare depends. The message is direct: innovation without accountability is dangerous.
For the government, the message is urgent and unmistakable—this is not the time for incremental steps or fragmented regulation. Decisive action is required now. A dedicated AI law must be enacted without delay, with clear lines of accountability, enforceable transparency, and robust, continuous oversight built into the system. Regulatory gaps are not theoretical—they translate into real risks for patient safety and public trust. Every moment of delay allows these risks to grow, making inaction not just a lapse in policy but a failure of responsibility.
For doctors and hospitals, the responsibility is immediate and non-negotiable. AI can never be a substitute for doctors—it is only a tool to assist, not to decide. Doctors must not rely on it blindly or lower their vigilance even for a moment. Clinical judgment, responsibility, and accountability cannot be delegated to machines. Any lapse in attention, diligence, or professional duty—under the pretext of AI—is unacceptable and inexcusable.
India stands at a critical moment. The choice is not whether to use AI, but how to govern it. Get it right, and AI can expand access and improve care. Get it wrong, and it will deepen risks and inequalities. Technology can transform healthcare. But only accountability, ethics, and strong regulation can make it worthy of trust.
(The author, BE (Mech), BOE, ASME, is Deputy Director of Boilers (Retd), Mysuru)