Professor Dr Madhu Dixit Devkota
Professor Devkota is the Executive Chairperson of the Upendra Devkota Memorial National Institute of Neurological and Allied Sciences, the institution spearheading Nepal’s technological advancement in neuroscience. She is also recognised for bringing the first biplane CathLab to Nepal, fulfilling the vision of her late husband, Professor Upendra Devkota, and transforming care for stroke, aneurysms, and AV malformations in Nepal. Additionally, she brings over 35 years of public health leadership, having shaped more than ten national health policies. Her article, “When Algorithms Wear White Coats”, explores the growing role of AI in medicine through a public health lens. It highlights how data-driven systems may overlook social realities, amplify biases, and shift accountability away from health systems. The article argues that AI must remain a clinical aid, anchored in ethics, equity, and human judgment.
Artificial intelligence is rapidly becoming a powerful ally for doctors. It sharpens medical imaging, enables earlier detection of strokes and tumours, guides surgical planning, and even helps predict outcomes. These tools allow specialists to make life-saving decisions faster and with greater confidence. In many of today’s operating rooms, AI is already embedded in the equipment itself, working quietly in the background, much like a GPS or autopilot for the brain, making complex brain surgeries safer, faster, and more precise. Yet even as we marvel at these advancements, we can see how they test the resilience of public health systems. High costs, unequal access, data-privacy concerns, algorithmic bias, and over-reliance on technology risk widening the gap between well-resourced tertiary centres and under-funded public hospitals. While AI can dramatically improve outcomes for individual patients in advanced surgical settings, public health faces a harder question: how do we ensure that these powerful tools are ethical, affordable, transparent, and fairly distributed, so that innovation strengthens health systems, not just the surgeons who use them?
In Public Health, Missing Data Can Look Like Good Health
Artificial intelligence is increasingly used in public health to track diseases, predict outbreaks, and guide health programmes. These systems depend heavily on data collected from hospitals, health centres, and other health facilities. In countries like Nepal, however, many people rely on local healers, shamans, informal drug shops, or unregistered clinics. Large segments of the population, especially in rural, remote, and impoverished areas, never enter the formal health system at all. Their illnesses are treated at home, paid for out of pocket, and never recorded in digital systems or national health databases.
When data is missing, AI does not recognise absence; it assumes normality. As a result, communities with the weakest health access can appear deceptively “healthy” on dashboards and maps. Rural districts in the Terai or remote mountain regions may report low rates of hypertension, diabetes, tuberculosis, maternal complications, or mental health conditions, not because these problems are rare, but because screening is limited, diagnostic services are unavailable, and records are incomplete. Even infectious disease surveillance can be skewed when fevers, diarrhoeal illnesses, or childhood pneumonia are managed at home or by informal providers and never officially reported.
The consequences can be serious. Public health resources may be misdirected, vaccines and medicines allocated elsewhere, and awareness campaigns focused on already-visible populations, while the most vulnerable remain underserved. An AI system might conclude that a district needs fewer health workers or fewer resources, when in reality it is suffering from silent disease and unmet needs. In maternal and child health, underreporting of home births and deaths can falsely suggest improvement, masking preventable tragedies.
AI, therefore, must be contextualised. Algorithms cannot replace local knowledge, community health workers, field surveys, and human judgment. In low-resource settings like Nepal, effective use of AI requires deliberate efforts to bridge data gaps by strengthening primary care, supporting frontline workers, integrating informal care pathways, and investing in inclusive data collection. Only then can technology truly strengthen public health systems, rather than unintentionally hiding the problems they are meant to solve.
The Quiet Erosion of Human Touch
Imagine visiting a hospital with a troubling neurological problem. You sit across from the doctor, hoping to be heard. But much of the consultation is spent looking at a computer screen, reading AI-generated summaries, automated scan reports, and decision prompts. The conversation feels rushed, and your story feels secondary.
For conditions like Parkinson’s disease, epilepsy, stroke, or dementia, medicine is not just about prescriptions and reports. Reassurance, empathy, and clear explanation are often as healing as the treatment itself. Families come with fear, confusion, and many questions. No algorithm can sit patiently with them, sense their worry, or explain uncertainty with compassion. Technology can guide decisions, but it cannot replace the human connection that builds trust and comfort, especially when a diagnosis may change a person’s life forever.
There is also a risk that computers may miss the realities of daily life. An AI system may label a patient with epilepsy as “non-compliant” for missing follow-up visits or irregular medication use, without understanding the long travel distances, financial hardship, social stigma, or lack of family support that stand in the way. Data drawn mostly from urban hospitals may not reflect the challenges of rural communities, leading to decisions that unintentionally disadvantage those who are already vulnerable.
The promise of AI in medicine is real and exciting. But as we welcome smarter machines, we must protect something equally important: the human touch. AI should support doctors, not replace listening; assist decisions, not override compassion. Healthcare works best when technology strengthens systems without weakening trust, when patients feel seen, heard, and cared for, not managed by a computer screen.
Can Doctors Become Too Dependent?
Let us take the example of young doctors entering the medical world where artificial intelligence is everywhere. From reading scans and flagging abnormal laboratory results to suggesting diagnoses and treatment pathways, AI now sits quietly beside them at every step. Used well, these tools can improve efficiency, reduce errors, and support safer care. But there is a growing concern that dependence may begin to replace discernment.
Consider a junior doctor in a busy emergency room who accepts an AI-generated radiology report as final, without fully correlating it with the patient’s symptoms. A scan may be labelled “normal,” yet the patient’s worsening headache, fever, or subtle neurological change tells a different story. In low-resource settings, where imaging quality may be variable and patient histories complex, these mismatches are common and dangerous when clinical judgment is sidelined.
There is also a quieter risk: the erosion of curiosity and questioning. If a computer declares a CT scan unremarkable, will a young doctor still review the images, re-examine the patient, or seek a senior opinion when something feels wrong? Over time, habitual reliance on automated outputs can weaken confidence in one’s own clinical reasoning, the skill of integrating history, physical examination, intuition, and lived experience at the bedside.
Keeping AI on a Human Path
AI learns from the data it is given, and it does not merely support care; it actively influences clinical decisions and patient outcomes. If most of the data come from big urban hospitals in Kathmandu or from wealthier patients, the system may misdiagnose or overlook people in rural districts, women, ethnic minorities, or those who cannot travel to big hospitals. For example, a woman in Humla who misses regular check-ups because of long travel distances might be labelled “non-compliant,” even though the real problem is access, not unwillingness to follow advice or take medications. What looks “objective” can quietly reinforce existing inequalities.
Further, if a doctor follows an AI recommendation and the patient is harmed, who is responsible? The doctor, the hospital, or the software company? Ethical medicine requires clear human responsibility. AI should assist, not become a shield behind which errors are hidden. Many patients may not even know that a computer is helping decide their treatment. Transparency is essential.
Health information is deeply personal. If patient records are stored or shared without strong protection, trust in the health system can quickly weaken. People may hesitate to seek care for sensitive conditions, fearing their information could be misused.
At the end of the day, AI cannot replace listening, empathy, or judgment. Health workers in our society often care for patients facing poverty, long travel distances, stigma, and social barriers: quiet realities that only a thinking doctor notices. Understanding these realities, offering reassurance, and adapting advice to the patient’s life are things no algorithm can replicate. Medicine is not just about faster scans, smarter predictions, or bigger datasets. It is about people, families, and communities. When guided by ethics, fairness, and accountability, AI can help doctors save lives, improve public health, and make healthcare more equitable. But if we forget that behind every chart, every algorithm, and every decision is a real person, we risk turning medicine into a system that is efficient but heartless. True progress in health comes not from machines alone, but from combining the power of technology with compassion, judgment, and trust.
Medicosnext