Applied AI Summit Healthcare

Free online conference | April 14-15, 2026

Forensic-Ready LLMs: Building AI Systems You Can Investigate, Trust, and Explain

Large Language Models are rapidly becoming core components in healthcare, finance, public services, and high-stakes decision-making. Yet most deployed LLMs remain “black boxes,” with no mechanism for verifying how outputs were generated, detecting whether data was compromised, or investigating harmful behavior. My talk presents a new paradigm: forensic-ready LLM systems designed with built-in evidence collection, tamper-resistant logging, and transparent decision pathways that support real-world investigations.

This session will cover emerging threats—including prompt injection, data poisoning, bias manipulation, and misuse—and demonstrate how forensic techniques such as hash-chained logs, model update attribution, anomaly detection, and signature-based provenance can be integrated directly into LLM pipelines. I will outline a practical architecture organizations can adopt to make their AI systems auditable, explainable, and accountable without compromising privacy or performance.
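To make one of the techniques above concrete, here is a minimal sketch of hash-chained logging for an LLM pipeline: each log entry commits to the hash of the previous entry, so editing or deleting any record breaks the chain. The record fields and helper names are illustrative, not taken from the talk's architecture.

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log, event):
    """Append a tamper-evident entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash; any edited, reordered, or dropped entry breaks the chain."""
    prev = GENESIS
    for rec in log:
        body = {k: rec[k] for k in ("ts", "event", "prev")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

In practice each entry would also carry prompt/response digests and a signature so provenance can be attributed to a specific model version, but the chaining shown here is the core of the tamper-evidence property.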

Attendees will learn how to design, monitor, and investigate LLMs responsibly in complex, partially trusted environments.

About the speaker

Safiia Mohammed

CEO at Cushites Canadian Enterprise

Safiia Mohammed is a Canada-based entrepreneur, researcher, and community builder whose work spans Responsible AI, beauty and wellness, and global trade. She is a PhD candidate in Computer Science at the University of Windsor, specializing in AI security, privacy, and forensic accountability, with a focus on federated learning, blockchain-based auditability, and trustworthy machine intelligence. Her research is guided by a mission to build safer AI systems for hospitals, governments, and industry.