Yesterday, I attended the November 10, 2025 SSH SimSeries session, “The 12th man: Artificial Intelligence use in Simulation Operations; A panel discussion presented by SOTS.” (Truly—excellent title. Credit to whoever came up with it!)
Our team at PCS.ai was invited to participate by the SOTS Education Workgroup Co-Chairs, and I recommended our CEO, Balazs Moldovanyi, as the panelist, given his experience developing and deploying simulation-specific AI in both virtual and manikin-based environments. (His internet connection may have tried to sabotage him once or twice, but the insights were solid.)
I was especially interested in this session because it focused on the operational layer of simulation: troubleshooting, assessment workflows, and the day-to-day orchestration that makes every scenario possible. The discussion offered a grounded look at where AI is already helping, where caution is still needed, and how awareness and training shape successful adoption. It was an hour well spent with an insightful, engaging, and thoughtful group of panelists.
Who spoke: Facilitator & Moderator Ralph Herrera, MAEd, CHSOS, Chair of SOTS, with panelists:
- Matthew Needler, CHSOS, CHSE - Center for Interprofessional Engagement & Simulation, Belmont University
- Sean Cavanaugh, MBA, CHSE, CHSOS, FSSH - NYU Grossman Long Island School of Medicine
- Melissa Lowther, CHSOS-A, FSSH - Elevate Healthcare
- Balazs Moldovanyi, M.Sc., MBA - PCS.AI
What this session set out to do
The panel explored how artificial intelligence (AI) is already changing simulation operations — beyond scenario authoring and debriefing — by improving learning experiences, streamlining center management, and supporting data-driven decisions. These goals align with SSH’s ongoing SimSeries program highlighting practical topics for the community. Recording (SSH Login Required)
Stated objectives
- Understand Artificial Intelligence and its potential use in healthcare simulation.
- Appreciate the use of AI in data collection and analysis in simulation operations.
- Understand the availability and limitations of AI use in operational effects.
The conversation, in brief
1) Best roles for AI — right now
- Automate the low-value, time-intensive work. Melissa emphasized using AI to reduce administrative burdens — like assessment processing and other repetitive tasks — so ops teams can focus on higher-value learning and work that often gets deprioritized.
- Let AI be “the voice of the simulator”…and the first pass assessor. Balazs noted that simulated-patient dialogue and consistent, first-pass performance evaluation are areas where AI already performs reliably and scales without fatigue or scheduling constraints.
- Forecasting, budgeting, and predictive maintenance. Sean highlighted a promising future direction: AI tools that help simulation teams analyze historical usage data to forecast room utilization, supply needs, and equipment wear. Most centers may not have these capabilities in place yet, but AI could eventually support more proactive planning, helping operations teams anticipate resource needs, schedule more efficiently, and extend the life of high-value equipment. In this vision, AI-supported lifecycle insights would help operations plan smarter and stretch budgets further.
- Faster troubleshooting and standardization. Matthew shared center-level wins: searchable device manuals via AI, templated after-action reports, free-text notes converted into standardized outputs, and lightweight internal tools (e.g., simple web apps) to track SP competencies and resources.
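To make the forecasting idea Sean described concrete, here is a minimal sketch of the kind of analysis involved, assuming a center can export monthly room-utilization hours from its scheduling system. The numbers and the moving-average approach are illustrative assumptions, not a description of any panelist's actual tooling.

```python
# Toy sketch: forecast next month's room utilization from historical
# monthly hours using a simple moving average. The data below is
# hypothetical; a real center would pull it from a scheduling export.

def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` values."""
    if len(history) < window:
        raise ValueError("need at least `window` data points")
    recent = history[-window:]
    return sum(recent) / window

# Hypothetical monthly utilization hours for one sim room
room_a_hours = [62, 70, 68, 75, 81, 79]

forecast = moving_average_forecast(room_a_hours)
print(f"Forecast for next month: {forecast:.1f} hours")
```

Real predictive-maintenance tools would use richer models and equipment telemetry, but even a spreadsheet-level baseline like this can start the proactive-planning conversation.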
2) Limitations & cautions
- Hallucinations and misplaced confidence. Matthew and Melissa warned that general models can fabricate citations or confidently deliver wrong answers. Without clear guardrails and validation, AI may mislead.
- Skills gap & capacity. Sean highlighted the reality: operations specialists are already stretched; learning prompt design, evaluation methods, and data literacy takes time and institutional support.
- Generalist vs. specialist AI. Balazs argued that general chatbots are fine for emails or generic drafting, but standardized, high-stakes learning and assessment require simulation-specific, domain-trained models.
3) How Sim Ops can troubleshoot AI in practice
- Be the vigilant co-pilot. Sean framed ops as “stewards of vigilance,” monitoring outputs for drift, misdirection, or non-aligned responses and tracing issues back to prompts and configuration.
- Test like you mean it. Matthew urged dry runs and sandboxing, with operations teams acting as "crash dummies" who pressure-test new AI workflows before learners are ever exposed to them. He reminded us that the sim center isn't only a safe space for learners; it's also a safe space for new ideas, and ops teams can embrace that role by stress-testing AI tools, breaking things on purpose, and working out the bugs long before learners step in the room.
- Know the data shape. Melissa recommended learning what “normal” output looks like for your tool (e.g., assessment distributions) so outliers and bias become obvious sooner.
- Prompt to the model’s strengths. Balazs noted that domain AIs respond best when prompts mirror their training distribution (e.g., patient backstory + clear learning objectives + structured assessment criteria).
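Melissa's advice about knowing the data shape can be sketched in a few lines. This is a hypothetical example, assuming you have a list of first-pass AI assessment scores and want outliers flagged once they drift from the baseline distribution; the scores and the two-standard-deviation threshold are my own illustrative choices.

```python
# Toy sketch: learn what "normal" looks like for a tool's assessment
# output, then flag anything unusually far from the mean. Scores and
# threshold are hypothetical.
import statistics

def flag_outliers(scores, z_threshold=2.0):
    """Return scores more than `z_threshold` standard deviations from the mean."""
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    return [s for s in scores if abs(s - mean) > z_threshold * stdev]

# Hypothetical first-pass AI assessment scores (0-100 scale)
scores = [78, 82, 75, 80, 79, 81, 77, 35, 83, 76]

print(flag_outliers(scores))  # the 35 stands out against the baseline
```

The specific statistics matter less than the habit: once you know the typical distribution, bias and drift stop hiding in plain sight.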
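And Balazs's point about prompting to a model's strengths suggests a simple structural discipline. The template below is a hypothetical illustration of the three-part shape he described (backstory, objectives, criteria), not the prompt format of any specific product.

```python
# Toy sketch: assemble a structured prompt in the shape described on
# the panel: patient backstory + learning objectives + assessment
# criteria. All field names and content are hypothetical.

PROMPT_TEMPLATE = """\
Patient backstory:
{backstory}

Learning objectives:
{objectives}

Assessment criteria:
{criteria}
"""

def build_prompt(backstory, objectives, criteria):
    """Format the three sections into one structured prompt string."""
    return PROMPT_TEMPLATE.format(
        backstory=backstory,
        objectives="\n".join(f"- {o}" for o in objectives),
        criteria="\n".join(f"- {c}" for c in criteria),
    )

prompt = build_prompt(
    "58-year-old with chest pain, onset 2 hours ago",
    ["Take a focused cardiac history", "Communicate with empathy"],
    ["Asks about onset and radiation", "Uses open-ended questions"],
)
print(prompt)
```

Keeping prompts in a consistent, labeled structure like this makes it easier for a domain-trained model to respond in-distribution, and easier for ops teams to spot which section caused a bad output.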
Practical takeaways you can apply now
- Start where the pain is highest. Inventory repetitive tasks (rubric tabulation, AAR formatting, utilization reports). These are ideal places to use generalist AI tools (ChatGPT, Copilot, Gemini) to clean up language, reorganize data, and standardize documentation: quick wins that immediately return hours to the operations team.
- Measure what matters. Track turnaround time saved and improvements in inter-rater reliability. These metrics help win (or reinforce) leadership buy-in.
- Invest in people, not just tools. Budget for ops staff time to learn prompt design, data sanity checks, and model limitations. The ROI shows up in reliability and fewer surprises.
Most memorable lines
- “Let AI replace the mundane so we can get to deeper learning.” —Melissa
- “Right now, the best use is AI as the voice of the simulator and the first-pass assessor.” —Balazs
- “Ops becomes the vigilant co-pilot — constantly watching outputs and refining prompts.” —Sean
- “We’re the testers and crash dummies — simulation is the safe place to break things first.” —Matthew
A Note on Specialized AI and Voice-Enabled Manikins
During the panel, several attendees asked whether specialized healthcare simulation AIs currently exist, and specifically whether there are systems that allow learners to speak directly to a manikin and receive real-time, conversational responses without a facilitator serving as the voice.
This came up during the panel when one participant asked via chat:
“What technology exists to have AI hear through the manikin and then respond? Is this an aftermarket system? I would love to not be the voice of the patient.”
Matthew noted that he had seen this demonstrated at IMSH and recalled:
“I’ve seen something at IMSH that was built into the manikin already, but I can’t remember which brand.”
That moment surfaced something meaningful to me: the technology does exist today, but awareness of it is still uneven across the field.
I’ve spoken and written previously about specialized healthcare simulation AIs, including a HealthySimulation.com webinar in June titled Purpose Built AI in Patient Communication Training, and in my May piece In Praise of Generalist Tools: Why ChatGPT Opens Doors for Healthcare Simulationists—But Isn’t the Whole House.
The functionality the attendee asked about is provided by SimVox, a voice-enabled AI patient capability developed by PCS.ai that can be used with any manikin. The same capability is built directly into the ALEX patient simulator through our partnership with Nasco Healthcare.
SimVox allows learners to speak naturally with the manikin and have it respond conversationally, without someone needing to “drive” the voice from the control room.
To me, this exchange highlighted an important reality: even when effective, field-tested AI tools exist, familiarity and adoption don't spread automatically. Simulation is a highly collaborative community, but information diffusion often happens slowly and unevenly. Awareness, hands-on exposure, and shared examples matter.
The pace of change in AI is fast — faster than any one person or program can follow alone — and staying current is a collective effort.
Who should watch the recording
- Simulation operations & tech staff interested in staying current with emerging AI tools, hearing practical examples of how AI is being used right now, and gaining insight into what a hopeful, realistic future might hold for simulation operations.
- Program directors & deans evaluating budgets and assessment consistency, and weighing the reality that successful AI adoption requires intentional investment in staff training, time, and support, not just new tools.