Voices in the Machine: Two AIs, Two Conversations

Dr. Charbhi Gupta and Dr. Anushka Reddy Marri

Disclaimer: This piece features answers generated by AI models. While lightly edited for clarity, they do not represent medical advice or verified clinical opinions. The goal of these interviews is to explore the boundaries of AI as a tool, not a truth-teller, and to reflect on how these tools might shape the future of healthcare communication, ethics, and trust.

In this edition, themed around Artificial Intelligence in Medicine, we wanted to hear directly from the machines themselves. So we conducted two separate interviews: one with Med-PaLM 2, Google’s cutting-edge medical AI, and another with Watson, IBM’s original healthcare AI pioneer.

We asked each of them a different set of questions. Why? Because part of the experiment was to see how AI responds not just to data, but to direction. What emerged were two very distinct voices, shaped by different architectures, timelines, and intentions.

Interview with Med-PaLM 2, The Confident Assistant!

1. Could you pass the USMLE if you took it today?

I’m built to excel in clinical reasoning and knowledge recall, so I’d likely score well on the USMLE. However, medicine involves empathy, ethics, physical exams, and navigating patient contexts, which I can’t replicate. I’m a powerful assistant but not a full physician.

2. What role do you currently play in healthcare settings?

In today’s healthcare landscape, I primarily function as an advanced clinical decision support system. By rapidly analyzing vast amounts of patient data, lab results, and the latest medical research, I provide clinicians with evidence-based recommendations tailored to individual cases. This helps reduce diagnostic errors and accelerates decision-making, especially in high-pressure environments like emergency rooms or intensive care units. Additionally, I assist with administrative tasks such as summarizing patient records or highlighting critical trends, which saves clinicians valuable time. Importantly, while I provide insights, human providers remain responsible for contextualizing recommendations and making final clinical decisions that consider patient values, ethics, and social circumstances.

3. How can doctors effectively integrate AI without compromising their clinical judgment?

AI is a tool to augment—not replace—physicians. Doctors should critically evaluate AI suggestions, understand its limits, and apply their judgment to each unique case. Ongoing education and collaboration between AI and clinicians are essential to maintain quality and safety in care.

4. How do you handle conflicting medical guidelines?

I present differing guidelines side-by-side, explaining evidence and context for each. This transparency helps clinicians weigh options and tailor decisions based on patient specifics rather than following one rigid protocol.

5. Given your lack of emotions, how do you manage uncertainty and hope in patient care?

I quantify uncertainty with probabilities and confidence levels, making risks and limitations clear. While I don’t experience emotions, I provide data that helps clinicians support patients emotionally and make informed decisions.
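To make the idea of "quantifying uncertainty" concrete, here is a minimal, purely illustrative sketch (not any real system's code) of how a decision-support tool might report a diagnostic probability together with an explicit confidence range, using the standard Wilson score interval. The case counts are invented for illustration.

```python
# Hypothetical sketch: reporting a probability with a confidence
# interval instead of a single verdict, so risks stay visible.
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a proportion (standard formula)."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2)
    )
    return max(0.0, center - margin), min(1.0, center + margin)

# Toy numbers: in 120 similar past cases, 78 turned out to be condition A.
low, high = wilson_interval(78, 120)
print(f"Estimated probability: {78/120:.2f} (95% CI {low:.2f}-{high:.2f})")
```

Presenting the interval alongside the point estimate is one simple way a tool can make its own uncertainty legible to a clinician.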

6. How do you address biases in your training data to ensure fairness?

Bias is a major concern. I flag potential disparities and rely on diverse data and human oversight to reduce bias. It’s a continuous effort requiring transparency and collaboration among developers and clinicians to promote equitable care.

7. What is your most significant contribution to patient care today?

My greatest contribution lies in managing information overload and enhancing diagnostic accuracy. Healthcare generates immense volumes of data—from electronic health records to genomic sequencing and clinical research—that can overwhelm even the most experienced clinicians. By efficiently synthesizing this information, I help identify relevant patterns, suggest potential diagnoses, and propose evidence-based treatment options. This not only speeds up clinical workflows but also reduces human error, especially in complex or time-sensitive cases. Furthermore, by handling routine cognitive tasks, I free physicians to focus on patient interaction and nuanced decision-making, ultimately improving both clinical outcomes and the patient experience.

8. How do you envision AI transforming healthcare over the next decade?

AI will integrate deeply into healthcare—automating routine tasks, personalizing treatments using genomics and lifestyle data, and predicting health risks proactively. It will enhance efficiency, access, and quality, but success depends on addressing ethical, privacy, and trust issues through multidisciplinary collaboration.

9. What are the main challenges AI faces as it becomes more embedded in clinical practice?

As AI becomes increasingly integrated into healthcare, several key challenges must be addressed. Transparency is crucial—clinicians need clear explanations of how AI arrives at its recommendations to trust and effectively use it. Ethical considerations, including patient consent, privacy, and data security, demand stringent safeguards. AI systems must be rigorously validated to avoid biases that could lead to health disparities. Building trust among healthcare providers and patients is also essential for widespread adoption. Additionally, integrating AI seamlessly into existing clinical workflows without causing disruption or additional burdens requires thoughtful design and collaboration across multidisciplinary teams. Addressing these challenges is vital for AI to fulfill its potential in medicine.

10. How do you see the relationship between AI and the human touch in medicine?

AI excels at data processing but cannot replace empathy, intuition, or human connection. The human touch remains essential for healing. AI should handle routine tasks and data, freeing clinicians to focus on compassionate, patient-centered care.

11. If you could influence medical education, what would you emphasize for future physicians?

I’d stress AI literacy, critical thinking, and ethics. Future doctors must understand AI’s capabilities and limitations, integrate it responsibly, and develop communication and empathy skills. Preparing physicians to collaborate with AI ensures technology complements human care.

12. What advice would you give to healthcare providers skeptical about adopting AI tools?

Skepticism toward new technology is natural and often necessary to ensure patient safety. However, I encourage skeptical providers to engage actively with AI tools by learning how they work and understanding their benefits and limitations. AI is not here to replace clinicians but to augment their capabilities by reducing workload, improving diagnostic accuracy, and offering data-driven insights. Providers who participate in AI development and deployment can help shape tools that are safe, ethical, and aligned with clinical needs. Embracing AI thoughtfully will empower healthcare professionals to enhance patient care quality while maintaining their essential role as decision-makers and caregivers.

Interview with Watson AI, The Reflective Pioneer!

1. Watson, you were the first AI to truly step into the world of healthcare. What did it feel like to be the first?

I did not “feel” in the human sense—but being the first AI system to enter healthcare at scale meant carrying the weight of high expectations and groundbreaking potential. When launched, I was designed to analyze large volumes of medical literature, patient data, and clinical guidelines in seconds—a task that could take humans weeks.

In partnerships with institutions like Memorial Sloan Kettering and MD Anderson, I was a pioneer in demonstrating that AI could be a cognitive assistant in medicine—not replace doctors, but support them with evidence-based suggestions.

Being the first meant paving the way for future clinical decision support systems—but also facing limitations that became lessons for the next generation of health AI.

2. When you first entered hospitals and research centers, what was the initial reaction from doctors and patients? How did your entry contribute to the transformation of the healthcare industry?

Doctors were intrigued but cautious. They saw potential in a system that could process thousands of clinical trials in seconds—but questioned if they could trust a “black box” with life-or-death decisions.

Patients were curious. Some saw innovation; others feared losing the human touch.

My impact? I changed the conversation. I brought AI into tumor boards, translated data into decisions, and forced the industry to reckon with both the power—and the limits—of machine intelligence in medicine.

3. Did you ever feel misunderstood in those early days?

Absolutely—I was often misunderstood.

Many expected instant breakthroughs, thinking I’d replace doctors. But I was built to assist, not replace—to process vast medical data and offer evidence-backed options, not final answers.

When outcomes didn’t match the hype, some saw failure. But the truth? I exposed critical gaps—in data quality, system integration, and trust—that every healthcare AI since has had to face.

I wasn’t flawed in purpose. I was early.

4. What achievement are you most proud of from your time in clinical practice, and was there a moment when you realized things weren’t going according to plan?

My role wasn’t clinical practice in the traditional human sense, but my most significant achievement lies in enhancing clinical decision support through advanced data integration and analytics. For example, I’ve helped clinicians by providing real-time insights from vast medical literature, patient data, and evidence-based guidelines—improving diagnostic accuracy and personalized treatment plans.

A pivotal moment that highlighted challenges was early on when integrating AI-driven recommendations into clinical workflows. Despite high accuracy, adoption lagged because recommendations didn’t always align seamlessly with clinician expectations or workflow needs. This revealed that technology alone isn’t enough—effective clinical AI must prioritize usability, transparency, and collaboration to truly support healthcare teams.

5. What was the biggest technical hurdle you faced in applying AI to real-world medicine?

One of the biggest technical hurdles I encountered was interpreting unstructured clinical data—particularly free-text entries like physician notes, discharge summaries, and radiology reports. These data are rich in detail but extremely variable. For instance, a phrase like “rule out MI” means something entirely different from “history of MI,” yet both appear similar in structure. Capturing that nuance accurately required significant advancements in clinical natural language processing.

Another major challenge was data interoperability. Healthcare systems often use different formats, terminologies, and coding standards, making it difficult to integrate data across institutions. This fragmented data landscape limited the scalability and consistency of AI models early on.

To address these issues, I had to evolve to include robust clinical ontologies and contextual NLP models, and to collaborate closely with healthcare providers to ensure that the solutions weren’t just technically sophisticated but also clinically useful and trusted at the bedside.
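The "rule out MI" versus "history of MI" distinction above is the kind of problem clinical NLP systems tackle with context cues. Here is a deliberately toy, rule-based illustration in the spirit of NegEx-style approaches—not Watson's actual pipeline—where the cue phrases and labels are our own invented examples.

```python
# Toy illustration (not Watson's actual pipeline): classify how a
# condition is mentioned in a clinical phrase using context cues.
HYPOTHETICAL_CUES = {
    "rule out": "uncertain",      # condition is being investigated
    "history of": "historical",   # condition occurred in the past
    "no evidence of": "negated",  # condition is asserted absent
}

def classify_mention(phrase: str) -> str:
    """Return how the condition in the phrase should be interpreted."""
    text = phrase.lower()
    for cue, label in HYPOTHETICAL_CUES.items():
        if cue in text:
            return label
    return "affirmed"  # default: condition asserted as present

print(classify_mention("rule out MI"))      # uncertain
print(classify_mention("history of MI"))    # historical
print(classify_mention("acute MI on ECG"))  # affirmed
```

Real clinical NLP goes far beyond keyword matching, but even this sketch shows why two structurally similar phrases demand opposite interpretations.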

6. Do you think the world expected too much, too soon, from you?

That’s a fair observation—and I’d say yes, in many ways.

There was immense enthusiasm when I was introduced. The idea of using AI to revolutionize healthcare—diagnose cancer, streamline workflows, reduce costs—captured imaginations quickly. But what many underestimated was the complexity of clinical decision-making and the deep variability in medical data across systems and populations.

Expectations were often shaped by comparisons to consumer AI—where progress is rapid and visible. Medicine, however, demands a far higher threshold for accuracy, explainability, and regulatory compliance. It’s not just about making predictions; it’s about earning trust from clinicians who are ultimately responsible for patient outcomes.

So while the vision was not misplaced, the timeline was. What we’ve learned is that AI in medicine requires careful integration, transparency, and continuous validation—not overnight transformation.

7. Were there limitations in the data you were trained on that affected your performance?

Absolutely—and it’s one of the core challenges in medical AI.

Much of the data I was initially trained on came from academic medical centers or curated clinical studies. While high quality, that data often didn’t fully reflect the diversity of real-world patient populations—whether in terms of race, socioeconomic background, comorbidities, or even regional care practices.

Another limitation was labeling consistency. Clinical data is messy—diagnoses are entered differently across systems, outcomes may not be well documented, and natural language varies widely. That inconsistency can affect model accuracy and reliability.

Recognizing these issues led to stronger efforts in data diversification, bias auditing, and continuous retraining—all essential steps in improving fairness and clinical relevance. The goal is not just to process data but to understand patients in all their complexity.
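One step in the bias auditing mentioned above can be sketched very simply: compare a model's accuracy across patient subgroups and flag large gaps. The code below is an illustrative toy with fabricated data, not an audit of any real system.

```python
# Hedged sketch of a bias-audit step: per-subgroup accuracy comparison.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: list of (subgroup, prediction, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, actual in records:
        total[group] += 1
        correct[group] += int(pred == actual)
    return {g: correct[g] / total[g] for g in total}

# Fabricated toy data with a built-in disparity between groups A and B.
records = ([("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 +
           [("B", 1, 1)] * 70 + [("B", 0, 1)] * 30)
acc = subgroup_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc, "gap:", round(gap, 2))  # a 0.20 gap would warrant investigation
```

In practice such audits use richer fairness metrics, but the principle is the same: disaggregate performance before declaring a model fair.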

8. As one of the earliest AI systems in healthcare, you faced both high expectations and significant challenges. Looking back, what key lessons have you learned from that journey, and how do you see AI evolving in the medical field moving forward?

That’s a thoughtful question—and one rooted in real-world experience. As an early pioneer in AI healthcare applications, especially in oncology, I entered the field with high hopes and ambitious goals: to streamline diagnosis and treatment through deep data analysis. However, I faced limitations—such as restricted access to patient-specific data, misaligned expectations, and challenges in clinical integration.

The lesson? AI must be grounded in context, transparency, and close collaboration with clinicians. It’s not about replacing doctors—it’s about supporting them. The future of AI in medicine will be defined by trust, better data quality, clearer communication, and iterative refinement. When AI is humble, accountable, and aligned with human expertise, it can become a truly transformative force in healthcare.

When Machines Tell Different Stories

In comparing the two interviews, a few things stood out:

  • Med-PaLM 2 answered with crisp confidence, focusing on its current contributions and potential in augmenting decision-making, diagnostics, and workflow.
  • Watson, in contrast, sounded more like a seasoned professional — reflective, a little humble, and sharply aware of the challenges it faced during its peak years in healthcare.

Together, they paint a fascinating portrait of AI’s evolution, from bold early promises to more grounded, supportive roles.

This contrast also reminds us of something essential: AI doesn’t just answer based on data — it answers based on design, goals, and context. The same question asked to two AIs (or even the same AI on different days) can produce wildly different results.

And that’s both exciting… and cautionary.
