
I Attended 3 Webinars in June — & Hosted 1 — These Were the Most Common Questions About AI in Healthcare Simulation

Michelle Castleberry Jun 26, 2025 5:05:28 PM

Over the past month, I participated in four different healthcare simulation webinars—three as an attendee and one as a host. (You’ll find the webinar titles & links at the end of this article).

Across all four, generative AI and large language models (LLMs) were the core technologies in focus. Questions around tech stacks, hallucination risks, data privacy, and the limitations of general-purpose models like ChatGPT came up repeatedly from attendees.

Below, I’ve captured six of the most common and insightful questions that came up in nearly every webinar—along with clear, practical answers that highlight where the real differences (and real risks) lie when it comes to using generative AI in simulation-based education.


“Which Large Language Model (LLM) do you use?”

AKA “Which AI backend do you interface with (OpenAI, Anthropic, etc.)?” AKA “Do you have a dedicated or local install, something like Llama or Mistral?”

Answer Only: PCS.ai runs its own custom-trained LLM for response generation, hosted on its own cloud infrastructure.

Tell Me More: When PCS transitioned from a classification-based neural network AI (used from 2016 to 2023) to generative AI in 2023, we selected a mid-sized open-source model as our foundation. To train it, we leveraged eight years of real-world healthcare simulation data collected from our earlier AI products. To enhance performance, we first refined that data using larger language models—a technique known as knowledge distillation. Then, we fine-tuned the model further with insights from our expert content team through reinforcement learning from human feedback. The result is a custom-built model that powers PCS.ai today—purposefully trained for healthcare simulation from the ground up and evolving with every new use case, dataset, and expert contribution.
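To make the knowledge distillation step a little more concrete, here is a minimal, hypothetical sketch of the general idea: a larger "teacher" model refines raw simulation transcripts into cleaner training pairs for a smaller model. The model name, prompt, and file layout are illustrative assumptions, not a description of PCS.ai's actual pipeline.

```python
# Illustrative sketch only, not PCS.ai's actual pipeline.
# A larger "teacher" model cleans up raw simulation Q&A pairs, which are then
# written out as JSONL training examples for fine-tuning a smaller model.
import json
from openai import OpenAI  # assumes the official openai Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical raw data: learner questions and recorded patient replies
raw_pairs = [
    {"question": "When did the chest pain start?",
     "reply": "uh like 2 hrs ago i think, after lunch"},
]

REFINE_PROMPT = (
    "Rewrite this simulated patient's reply so it stays in character, "
    "uses lay language, and reveals only what was asked. Reply: {reply}"
)

with open("distilled_training_data.jsonl", "w") as out:
    for pair in raw_pairs:
        refined = client.chat.completions.create(
            model="gpt-4o",  # the larger "teacher" model (assumed)
            messages=[{"role": "user",
                       "content": REFINE_PROMPT.format(reply=pair["reply"])}],
        ).choices[0].message.content
        # Each line becomes one supervised fine-tuning example
        out.write(json.dumps({"prompt": pair["question"],
                              "completion": refined}) + "\n")
```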


“LLMs (like ChatGPT) are really smart and do a pretty good job at pretending to be a patient.”

Answer Only: There’s no doubt that tools like ChatGPT can generate realistic dialogue and convincingly play the role of a patient. But as general-purpose AI models, they often fall short in areas that matter most for clinical training—like staying in character, avoiding over-sharing, maintaining consistency, and integrating with educational systems.

Digging Deeper: General-purpose LLMs like ChatGPT are designed to be broadly helpful across all kinds of topics—from cooking tips to coding support. Their main goal is to provide information, not to follow the specific behavior of a standardized patient in a clinical scenario. Because of that, they may “break character,” give away too much information too soon, or respond in ways that don’t reflect how a real patient would act. They aren’t trained to follow the structured flow of a simulation, and they lack the built-in guardrails needed for educational consistency—like session control, standardized behavior, or alignment with learning objectives.


"Are most virtual patient platforms just middleware connecting to general-purpose AI?"

AKA "Isn't it common for virtual patient platforms to rely on third-party LLMs under the hood?

Answer Only: Each company has its own approach, and I can’t speak to the specifics of what others use behind the scenes. If you're curious, it’s always worth asking them directly—they may be happy to share more about their tech stack.

Our Take on It: In general, virtual patient platforms take different approaches. Some use third-party LLMs via APIs, while others rely on traditional rule-based or scripted systems. These choices often reflect a balance between ease of development, speed to market, and cost. Relying on third-party AI tools can get a product out the door quickly and offers some flexibility, but it also comes with trade-offs: limited control over the model, less domain-specific training, slower response times, and potentially higher long-term costs. At PCS.ai, we built and trained our own healthcare-specific model, designed from the ground up for simulation-based learning. This approach gives us greater control over quality, realism, performance, and cost, ensuring fast, reliable responses tailored to clinical education.
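For context, the "third-party LLMs via APIs" approach usually amounts to a thin wrapper around an external service: the platform puts a patient persona in the system prompt and forwards learner messages. The following is a generic, hypothetical sketch of that pattern (persona text, model name, and client usage are assumptions), not a description of any specific vendor's product.

```python
# Minimal sketch of the "middleware" pattern: a thin wrapper that forwards
# learner messages to a general-purpose, third-party LLM. Persona text and
# model choice are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PATIENT_PERSONA = (
    "You are a standardized patient: a 54-year-old with chest pain that began "
    "two hours ago. Stay in character, answer only what is asked, and never "
    "give medical advice or reveal unscripted details."
)

def ask_virtual_patient(history: list[dict], learner_message: str) -> str:
    """Send the conversation to the third-party model and return its reply."""
    messages = [{"role": "system", "content": PATIENT_PERSONA}]
    messages += history
    messages.append({"role": "user", "content": learner_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Trade-off in practice: every call depends on the vendor's latency, pricing,
# and model updates; the platform has no control over the model itself.
```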


"Can larger LLMs, like ChatGPT, be trained for healthcare simulation?"

Answer Only: Technically, yes. Practically, it’s another story: adapting an LLM like ChatGPT for clinical training is complex, expensive, and often limited in effectiveness.

What That Means: LLMs like ChatGPT can theoretically be fine-tuned for healthcare simulation, but they aren’t designed for it out of the box. Adapting them would require large amounts of domain-specific data, technical expertise, and strict control mechanisms to avoid issues like breaking character, over-disclosure, or inconsistent responses. Even with customization, however, they would still lack key features like session control, learner tracking, and LMS integration. That’s why purpose-built platforms—like PCS.ai—develop their own models tailored specifically for simulation-based education, alongside a suite of other tools and functionality.


"You mentioned that the LLM is learning from users, so do you have access to the questions and responses that have been done by users? Is this GDPR compliant?"

Answer Only: Yes, but with strict safeguards in place to ensure compliance with privacy laws like GDPR (the General Data Protection Regulation, a major privacy law in the EU) and FERPA (the Family Educational Rights and Privacy Act).

The Details: Yes, we can access interaction logs, but only in ways that respect user privacy and legal frameworks. While the system may learn from user interactions to improve performance, access to those questions and responses is tightly controlled. In GDPR-compliant environments:

    • User data is anonymized or pseudonymized before it’s reviewed or used for improvement (see the sketch after this list).
    • Access is restricted to authorized personnel, and only when necessary (e.g., for debugging, customer support investigation, or safety monitoring).
    • Purpose limitation applies — data is only used for specific, declared purposes like improving model accuracy.
    • Users may have rights (under GDPR) to request access, correction, or deletion of their data.
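As a rough illustration of the first point above, pseudonymizing an interaction log before review can be as simple as replacing direct identifiers with a keyed hash and dropping fields that reviewers don't need. The field names and key handling here are assumptions for the sake of the sketch, not PCS.ai's actual data schema.

```python
# Generic pseudonymization sketch (assumed field names, not PCS.ai's schema):
# direct identifiers are replaced with a keyed hash so reviewers can group a
# user's interactions without knowing who the user is.
import hmac
import hashlib
import os

# In practice the key lives in a secrets manager, never alongside the logs.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "demo-only-key").encode()

def pseudonymize(record: dict) -> dict:
    token = hmac.new(SECRET_KEY, record["user_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    return {
        "user_token": token,              # stable pseudonym, not the identity
        "question": record["question"],   # content kept for model improvement
        "response": record["response"],
        # name, email, institution, etc. are deliberately dropped
    }

log = {"user_id": "student-4821", "name": "A. Learner",
       "question": "When did the pain start?", "response": "About two hours ago."}
print(pseudonymize(log))
```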

“How do you control or limit AI hallucinations?”

Answer Only: Generative AI can hallucinate—but not in obvious, cartoonish ways. In healthcare simulation, hallucinations often look like the AI forgetting it's a patient, making up medical details, or responding out of character. It’s less about saying “I have a pet unicorn” and more about breaking realism in subtle, credibility-breaking ways.

Breaking It Down: In general use, hallucinations in AI refer to confidently generated but false or irrelevant information. In a clinical simulation setting, hallucinations are more nuanced and potentially harmful. For example:

    • The AI might forget it's supposed to be a ‘standardized patient’ and give medical advice.
    • In other words, it could "break character" and speak like an assistant or narrator instead of staying in the role.
    • It might create fake lab results, diagnoses, or medications that were never scripted.

These errors disrupt the educational goals of realism, consistency, and learner trust. That’s why platforms like PCS.ai don’t just use off-the-shelf LLMs—they build domain-specific constraints, scenario logic, and session controls to reduce hallucinations and keep interactions clinically appropriate.
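To make "domain-specific constraints and scenario logic" a little more tangible, here is a deliberately simplified, hypothetical sketch of one such guardrail: checking a generated reply against the scripted case and rejecting anything that introduces unscripted clinical details or breaks character. Real systems are far more sophisticated; the checks and field names here are assumptions for illustration only.

```python
# Simplified guardrail sketch (illustrative only): validate a generated reply
# against the scripted case before it reaches the learner.
SCENARIO = {
    "allowed_medications": {"lisinopril", "aspirin"},
    "scripted_labs": {},  # no lab results exist in this scripted case
}

KNOWN_MEDS = {"lisinopril", "aspirin", "metformin", "atorvastatin"}
OUT_OF_CHARACTER = ("as an ai", "i am a language model", "you should take")

def violates_scenario(reply: str) -> bool:
    text = reply.lower()
    # 1. Character break or medical-advice phrasing
    if any(phrase in text for phrase in OUT_OF_CHARACTER):
        return True
    # 2. Mentions a medication that is not part of the scripted case
    mentioned = {med for med in KNOWN_MEDS if med in text}
    if mentioned - SCENARIO["allowed_medications"]:
        return True
    return False

reply = "As an AI, I'd recommend you should take metformin for that."
if violates_scenario(reply):
    # Fall back to a safe, in-character response instead of the hallucination
    reply = "I'm not sure, doctor. Could you tell me what you think is going on?"
print(reply)
```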


We're always open to thoughtful conversations as this space continues to evolve. If you'd like to dig deeper or explore how PCS.ai approaches these challenges, schedule a 1:1 meeting with our team today.


Webinars attended:

Hosted: Purpose Built AI in Patient Communication Training: A PCS.AI Product Review
