Doctor Expounds on Principle of AI-Enabled Diagnosis and Risks
As AI-powered medical consultation gains traction, the question arises of how patients and doctors should regard diagnoses generated by artificial intelligence.
According to Zhang Wenhong, a member of the CPPCC National Committee and director of the Department of Infectious Diseases at Shanghai's Huashan Hospital, while self-diagnosis via AI can give patients some health clues, it carries the inherent risk of fostering the misconception that the technology can replace doctors.
Zhang believes that a more rational approach is to define AI as a "super assistant" to doctors.
"In other words, while we should ambitiously develop AI-enabled healthcare, we must also go on to strengthen primary healthcare, which is pivotal," said Zhang during the ongoing national Two Sessions in Beijing.
First, as Zhang observed, patient self-diagnosis can be risky because it lacks the oversight that human-machine collaboration provides.
Thanks to their professional training, doctors are well-equipped to discern AI hallucinations or misleading recommendations.
Since human clinical thinking and AI "thinking" are distinct, clinical practice requires that doctors, in their capacity as the ultimate decision-makers, remain the only ones who can ensure the safety of medical outcomes.
Another reason for caution is legal accountability. Only doctors can be held responsible for adverse medical outcomes; in the event of a misleading AI-enabled self-diagnosis, there would be no professional on hand to remedy the situation, or to be held accountable, placing patient safety in jeopardy.
Hence the delicate question of how to properly evaluate AI consultation so as to make better use of it.
In Zhang's view, while AI can serve as a tool that offers doctors alternative opinions or "risk alerts," it is by no means qualified to deliver final diagnoses directly to patients.
In view of fast-developing AI healthcare, Zhang pointed to the need for AI-enhanced training to produce more professional medical "gatekeepers" who know how to harness AI, making primary healthcare services more appealing.
At the same time, there should be a consensus regarding the legal principle of holding doctors fully accountable, with AI playing an assisting role.
This would prevent some doctors from evading their responsibility, and patients, taking the cue, would cease to place blind trust in AI recommendations.
Zhang said that optimizing service scenarios is critical to maximizing the value of AI. By freeing doctors from drudgery such as chronic disease follow-ups and data monitoring, he added, AI could allow doctors to devote more time to doctor-patient communication and to providing warm-hearted, humanistic care, thereby reducing patients' reliance on cold machines.
Editor: Liu Qi