Applied AI Summit Healthcare
Free online conference | April 14-15, 2026From Guardrails to Guardians: Continuous Red Teaming and Holistic Safety for Agentic Healthcare AI
As healthcare transitions to autonomous Agentic AI and multi-step reasoning LLMs, the risk surface has shifted from surface-level hallucinations to sophisticated internal failures. Traditional observability and static guardrails often miss the Silent Failure of the Hidden Middle – where an agent produces a clinically plausible output despite harboring catastrophic errors, privacy leaks, or reasoning flaws in its intermediate steps.
This keynote introduces the Guardian Agent: an active, integrated peer within the production system that monitors the thinking as much as the result. Functioning as an independent reviewer, the Guardian Agent probes internal reasoning chains and tool-use intent to provide a layer of proactive clinical assurance.
Learn how the Pacific AI Guardian Agent operationalizes this oversight using 60+ healthcare-specific test suites, including:
- Clinical Task Performance (MedHELM): Real-world benchmarks for clinical decision support, note generation, patient communication, and workflow administration.
- Safety & Bias Foundations: Detecting demographic bias and robustness against clinical data perturbations.
- Continuous Red Teaming (Patent Pending): Real-time adversarial loops for ethical violations, HIPAA breaches, and jailbreaking.
- Medical Cognitive Biases: Identifying reasoning flaws like anchoring, confirmation, and availability bias.
- Regulatory Hardening: Enforcing 2026 legal standards (e.g., California AB 489) for emergency escalation and preventing AI impersonation of licensed professionals.
The keynote concludes by shifting from passive ML/AI observability to a Clinical Residency model, where the Guardian Agent acts as the “Senior Doctor” overseeing a “Junior Resident.” This ensures every autonomous workflow has the active, independent verification required for true clinical production.
About the speaker
Julio Bonis
Principal Data Scientist at Pacific AI
Julio Bonis is a data scientist working on NLP & LLM for Healthcare at John Snow Labs. Julio has broad experience in software development and design of complex data products within the scope of Real World Evidence (RWE) and Natural Language Processing (NLP).
He also has substantial clinical and management experience – including entrepreneurship and Medical Affairs. Julio is a medical doctor specialized in Family Medicine (registered GP), has an Executive MBA – IESE, an MSc in Bioinformatics, and an MSc in Epidemiology.