PCS Blog

Beyond the Talking Head: What Actually Makes a Virtual Patient a Simulation?

Written by Michelle Castleberry | Mar 5, 2026 12:04:35 PM
As AI-powered virtual patients proliferate across healthcare simulation, the real question isn't whether they can converse. It's whether they can simulate.

The AI Wave in Healthcare Simulation

Coming home from IMSH 2026 in San Antonio, I couldn't shake one impression: AI is everywhere.

In past years, there had been steady growth — perhaps a company or two introducing AI features, or a new virtual patient platform appearing on the exhibit floor. But this year felt different. The shift felt larger, more visible.

Out of roughly 143 exhibitors at IMSH, about 29 companies referenced AI or demonstrated some level of AI capability. That’s roughly one in five vendors. It’s the highest number of AI-referencing companies we’ve seen to date. But it’s also the highest total number of exhibitors the conference has ever had. So the story isn’t simply vendor growth.

I think the more important story is the shift itself: we're not in an AI novelty phase anymore. AI in simulation is no longer impressive simply because it exists; it's valuable only if it performs. The field is moving from demo-stage novelty to capability expectation.

AI in simulation is no longer impressive because it exists. It's valuable only if it performs.

That distinction matters enormously — especially as the most visible application of AI in our field right now is conversational virtual patients powered by large language models (LLMs). These systems generate dialogue through text or voice, and they can be remarkably impressive in a demo. But impressive in a demo and effective in a simulation are not the same thing.

Conversation Is Not Simulation

One increasingly common approach is using general-purpose tools like ChatGPT as a virtual patient. To be clear — these tools are remarkable at many things. They're powerful, accessible, and astonishingly versatile.

But conversation alone does not create a clinical encounter.

A simulation has structure. It has defined objectives. It has constraints. It has measurable performance. If conversation is the only architecture, you have interaction — not necessarily simulation. There's an analogy that captures this well: all bourbon is whiskey, but not all whiskey is bourbon. Similarly, all simulations involve interaction. But not all interaction is simulation.

As the next generative AI wave approaches — bringing even more photorealistic avatars paired with conversational LLMs — we have to hold onto a fundamental truth: realism in appearance does not equal realism in simulation. The reason some AI feels impressive in a demo but falls apart in an actual clinical encounter is the same reason a charismatic standardized patient can still fail a case. Simulation isn't about intelligence alone. It's about structured performance within the constraints of a designed simulation.

Evaluation Framework: Think Like a Standardized Patient Educator

If we share a common definition of healthcare simulation — one grounded in structure, objectives, and measurable performance — then our evaluation criteria shouldn't change just because the medium has. So how should we evaluate AI virtual patients?

I propose we use one of the oldest forms of reasoning in professional education: casuistry — reasoning by comparison to known cases. And in our field, we already have the perfect analogy: the standardized patient.

Think about what you actually evaluated when hiring a human SP. It was never whether they could talk. It was whether they could simulate:

  • Are they reliable and consistent across learners?
  • Can they stay in role under unexpected questioning?
  • Do they portray the right emotional intensity?
  • Can they escalate or de-escalate appropriately?
  • Do they represent the clinical condition authentically?
  • Are they trained to perform within defined objectives and boundaries?

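The hiring analogy above can be made concrete as a simple pass/fail checklist. The sketch below is purely illustrative: the criterion names, the checklist structure, and the all-or-nothing hiring rule are my assumptions for this example, not a published evaluation instrument.

```python
# Hypothetical rubric: score an AI virtual patient against the same
# criteria you would use when hiring a human standardized patient.
# Criterion names and the pass/fail structure are illustrative only.
from dataclasses import dataclass, field

SP_CRITERIA = [
    "consistent_across_learners",
    "stays_in_role_under_pressure",
    "appropriate_emotional_intensity",
    "can_escalate_and_deescalate",
    "authentic_clinical_portrayal",
    "performs_within_defined_objectives",
]

@dataclass
class VirtualPatientEvaluation:
    """Pass/fail checklist for a candidate AI virtual patient."""
    ratings: dict[str, bool] = field(default_factory=dict)

    def rate(self, criterion: str, passed: bool) -> None:
        if criterion not in SP_CRITERIA:
            raise ValueError(f"Unknown criterion: {criterion}")
        self.ratings[criterion] = passed

    def would_hire(self) -> bool:
        # Hire only if every SP criterion has been rated and passed.
        return all(self.ratings.get(c, False) for c in SP_CRITERIA)
```

The point of the all-or-nothing rule is the argument itself: a virtual patient that converses brilliantly but fails even one simulation criterion is the charismatic SP who still fails the case.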
Apply those same criteria to an AI virtual patient. Don't ask: 'Is it smart?' Ask: 'Would I hire this as a standardized patient?'

Don't ask: 'Is it smart?' Ask: 'Would I hire this as a standardized patient?'

What Simulation-Grade AI Actually Requires

True simulation capability in an AI virtual patient isn't about conversational fluency alone. It requires:

  • Role boundaries — the virtual simulated patient stays in character, within defined clinical and behavioral parameters
  • Clinical architecture — symptoms, history, and physical findings that are internally consistent and scenario-specific
  • Physical exam logic — the ability to conduct a physical exam on a virtual simulated patient and observe the findings
  • Environmental context — a variety of clinical settings and spaces, each with its own constraints
  • Progression rules — the ability to evolve or escalate the scenario based on learner actions
  • Assessment alignment — performance that can be mapped to measurable learning objectives

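The requirements above amount to structure that exists before any conversation starts. As a sketch of what "designed from the ground up" might mean, here is a hypothetical scenario definition in which boundaries, findings, objectives, and progression rules are authored up front rather than improvised by a language model. Every field name and the example case are assumptions for illustration, not an actual platform's schema.

```python
# Illustrative sketch: a "simulation-grade" scenario whose clinical
# architecture and progression rules are defined before the encounter.
# All field names and case details are hypothetical examples.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Scenario:
    role_boundaries: list[str]      # what the patient will and won't do in role
    findings: dict[str, str]        # internally consistent exam findings
    setting: str                    # environmental context
    objectives: list[str]           # measurable learning objectives
    # Progression rules map a learner action to a defined state change.
    progression: dict[str, Callable[["Scenario"], None]] = field(default_factory=dict)
    state: str = "stable"

def escalate(s: Scenario) -> None:
    s.state = "deteriorating"

chest_pain = Scenario(
    role_boundaries=["stays in character", "declines out-of-scenario topics"],
    findings={"auscultation": "clear lungs", "ECG": "ST elevation, leads II/III/aVF"},
    setting="emergency department",
    objectives=["recognize inferior MI", "initiate ACS protocol"],
    progression={"delayed_aspirin": escalate},
)

# A learner action triggers a defined progression rule, not free-form chat.
chest_pain.progression["delayed_aspirin"](chest_pain)
```

The contrast with a conversation-first tool is visible in the last line: the scenario evolves because a rule fired, so the change is consistent, repeatable across learners, and mappable to an objective.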
This is the difference between an AI that was built for conversation and later adapted to simulation, versus one that was designed from the ground up the way you would train a standardized patient — with simulation as the foundation, not a feature.

Evaluating Partners, Not Just Products

As you evaluate virtual patient platforms, I'd encourage you to look beyond features. Evaluate foundation. Evaluate architecture. Evaluate who is actually building the intelligence behind the product — and whether their infrastructure was designed for healthcare simulation or assembled around it.

Healthcare simulation isn't just another vertical to 'add AI to.' It's a discipline built on structure, pedagogy, performance measurement, and clinical integrity. When vendors approach it as a market opportunity first and a discipline second, AI becomes a feature layered onto infrastructure that was never designed for what we do.

In a market projected to exceed $7 billion by 2030, there is real incentive to move fast. That makes careful evaluation more important, not less.

The question isn't whether AI belongs in simulation. It's how well it performs inside it.

Michelle Castleberry is Co-Founder of PCS.ai, an AI company purpose-built for healthcare simulation. PCS.ai develops simulation intelligence infrastructure including Spark virtual patients and SimVox voice AI for manikin-based simulation. 

This content was originally delivered as a webinar on March 3, 2026.