Dr. R. Amarnath Trivedi, Surveillance Medical Officer, CMC Vellore.
Phone: 9989148116 | Email: amartriv@gmail.com

Artificial Intelligence (AI) describes functions performed by a machine that mimic human thought processes using complex algorithms. The idea that artificial intelligence will replace human workers has been around since the very first automata appeared in ancient myth. The debate on the use of AI in medicine starts with a basic question: will patients be safe or not?

The widespread introduction of new AI healthcare technology will help some patients but expose others to unforeseen risks. This raises further questions. What is the threshold for safety on this scale? How many people must be helped for every one who might be harmed? How does this compare with the standards to which a human clinician is held? Who will be responsible for harm caused by AI mistakes: the computer programmer, the tech company, the regulator, or the clinician?

Human subtleties may be hard to digitize, and machines may struggle to negotiate a pragmatic compromise between medical advice and patient wishes. As clinicians become increasingly dependent on computer algorithms, these technologies become attractive targets for malicious attacks. How, then, can we prevent them from being hacked?

Can a doctor be expected to act on the decisions made by a ‘black box’ AI algorithm? In deep neural networks, the reasons and processes underlying a decision may be difficult to establish, even for skilled developers. Do doctors need to explain this to patients? Will clinicians bear psychological stress if an AI decision causes patient harm?

Ethical issues arise as well. Is it acceptable to stratify patients by factors such as age, race, postcode, or socioeconomic group if this can improve outcomes, or would it negatively affect those patients? This is a big question for society and ethicists. Do we have an ethical duty to encourage under-represented groups to provide more of their data to train algorithms? Artificial intelligence has the potential to use the wide range of differences between us to provide truly individualized care, though this might serve some people better than others.

Whose data is it really? Does it belong to the patient (the source), the system (the collector and aggregator), or the developer (who adds value to the raw material)? Patients, for the most part, do not know that data about them and their disease is collected and used. When told, very few opt out.

Vulnerable groups, such as patients with psychiatric illness, are at particular risk from any ‘bad advice’ given by digitized systems. Should systems aimed at such groups be regulated more closely? Is there a risk that AI will drive unsustainable demand, leading to rationing?

Advances in healthcare AI have the potential to improve care globally. Do high-income countries have a humanitarian duty to share data and technologies with resource-poor countries, where the potential benefit of a higher standard of care is especially marked?

If the public begins to view some of the skills gained through medical school and clinical practice as ‘replaceable’, will this disempower the medical profession and its organizations? A reduction in the social standing of the profession could follow.

Adding artificial intelligence to the mix will indeed change the way patients interact with providers, providers interact with technology, and everyone interacts with data.  And that isn’t always a good thing.

Ensuring that artificial intelligence develops ethically, safely, and meaningfully in healthcare will be the responsibility of all stakeholders: providers, patients, payers, developers, and everyone in between. There are more questions to answer than anyone can even fathom. But unanswered questions are a reason to keep exploring, not to hang back.

It’s an exciting, confusing, frustrating, optimistic time to be in healthcare, and the continuing maturity of artificial intelligence will only add to the mixed emotions of these ongoing debates.  There may not be any clear answers to these fundamental challenges at the moment, but humans still have the opportunity to take the reins, make the hard choices, and shape the future of patient care.
