LLM management
Learn how Large Language Models enhance healthcare with Puppeteer's HIPAA-compliant platform.
CEO - Federico Ruiz
May 10, 2024
6 min read
LLMs, at their core, are reasoning engines that will eventually power AI agents capable of operating at the same level as human knowledge workers in healthcare. On the administrative side, these agents will follow instructions, achieve goals, and solve problems, automating countless back-office tasks, improving care standards, and inevitably driving down healthcare's inflated costs.
On the actual care delivery side, progress will be slower due to regulatory and patient safety constraints, though still quick by healthcare's usual standards. While regulation lags actual capabilities, we'll eventually get to LLM-based agents that can provide digital care at the same level as a human doctor.
The Experimental Phase
The potential is immense. Every person could have a virtual medical professional in their pocket, providing guidance, answering questions, helping manage treatment, and more.
That's all to say that we know where this is going. The question is how to get there.
So, as an industry, we're currently in an experimental phase, with healthcare companies building prototypes and testing what LLMs can do.
Challenges in Healthcare LLM Deployment
But going from prototype to production is where it gets tricky. There are significant challenges:
Regulatory compliance: Healthcare software must adhere to strict regulations, like HIPAA in the US, and even more stringent requirements at the enterprise level. The LLM ecosystem isn't built for this yet.
Patient safety and liability: An LLM giving incorrect or harmful medical advice could endanger patients and expose providers to serious legal risk. LLMs aren't ready for direct medical advice.
Integration: Healthcare applications often need to integrate with electronic health record (EHR) systems. LLMs can't do this out of the box.
Expertise gap: Healthcare companies often lack experience with prompt engineering and LLM development, which require specialized skills beyond traditional machine learning.
Building a Production-Ready Healthcare LLM
Building a production-ready healthcare LLM is complex. It needs to support granular conversation scripting, flexible but constrained responses, proactive communication, EHR integration, dynamic context handling, and output supervision to prevent harmful content. This architecture isn't available off-the-shelf.
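To make one of those components concrete, here is a minimal sketch of an output-supervision step, assuming a toy keyword filter purely for illustration; a production supervisor would use proper classification, and none of the names below refer to an existing library or to Puppeteer's implementation.

```python
# Minimal sketch of an output-supervision step that sits between the LLM and
# the patient-facing response. Illustrative only: a real supervisor would use
# a trained classifier or policy model, not a keyword list.
from dataclasses import dataclass

# Toy signals that the model is drifting into direct medical advice.
BLOCKED_PATTERNS = ["you should take", "increase your dose", "stop taking"]

@dataclass
class SupervisedReply:
    text: str
    blocked: bool

def supervise(raw_reply: str) -> SupervisedReply:
    """Check a raw LLM reply before it reaches the patient."""
    lowered = raw_reply.lower()
    if any(pattern in lowered for pattern in BLOCKED_PATTERNS):
        # Swap the risky reply for a safe fallback and flag it for human review.
        return SupervisedReply(
            text="I can't advise on medication changes. Please contact your care team.",
            blocked=True,
        )
    return SupervisedReply(text=raw_reply, blocked=False)

# Example: a risky model output gets intercepted before it is shown.
print(supervise("You should take double your dose tonight.").blocked)  # True
```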
Some companies outsource the development, but finding a firm with both healthcare and LLM expertise is no easy feat. Building from scratch often takes months and a hefty investment before even reaching a minimum viable product (MVP). Ideally, companies want to experiment and iterate quickly.
Puppeteer’s Solution
That's the problem we're solving at Puppeteer. In essence, our platform provides the scaffolding that lets healthcare companies create LLM-powered applications rapidly instead of building everything from scratch.
With our framework, we can easily:
Use our dialogue scripting feature to choreograph conversations, ensuring they follow a sequence of steps and controlling how rigid the flow is at each point (see the sketch after this list).
Set up an agenda for the AI to proactively reach out to the user, defining when and how to start new conversations.
Integrate with EHRs out of the box (currently with Epic and Healthy).
Rely on built-in guardrails that prevent the AI from overreaching, giving direct medical advice, or generating harmful content.
Allow the AI to use internal models for simple tasks, and to draw only from authoritative knowledge bases, with proper citations, for health-related guidance.
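As a rough picture of what dialogue scripting and a proactive agenda could look like, here is a hypothetical configuration sketch; the structure and field names are illustrative assumptions, not Puppeteer's actual API.

```python
# Hypothetical sketch of a scripted dialogue with per-step rigidity and a
# proactive outreach agenda. Field names are illustrative, not a real API.
intake_script = {
    "steps": [
        {"id": "greet",    "goal": "Welcome the patient and confirm identity",            "rigidity": "strict"},
        {"id": "symptoms", "goal": "Collect current symptoms in the patient's own words", "rigidity": "flexible"},
        {"id": "schedule", "goal": "Offer follow-up slots pulled from the EHR",            "rigidity": "strict"},
    ],
}

outreach_agenda = {
    # When and how the assistant starts a new conversation on its own.
    "trigger": "3 days after discharge",
    "opening_message": "Hi! Just checking in on your recovery. How are you feeling today?",
    "escalate_to_human_if": "patient reports worsening symptoms",
}
```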
This feature set helps us get an MVP ready in days or weeks, not months, one that's safety-tested, production-proven, and shielded against common AI pitfalls. We provide it as SaaS with no setup costs. Today we assist customers directly in building their solutions, but we're moving toward a fully self-serve platform.
Security and Compliance
It's HIPAA-compliant by default, and we sign BAAs. We anonymize protected health information (PHI) when sending it to the LLM, like GPT-4, and de-anonymize the response, ensuring PHI never leaves our system.
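As a rough illustration of that round trip, here is a minimal sketch; the placeholder-substitution scheme and helper names are assumptions made for illustration, not a description of our production pipeline.

```python
# Minimal sketch of the anonymize -> LLM -> de-anonymize round trip.
# The substitution scheme and helpers are illustrative assumptions, not the
# production implementation.

def anonymize(text: str, phi_values: dict[str, str]) -> str:
    """Replace known PHI values with stable placeholders before the LLM call."""
    for placeholder, value in phi_values.items():
        text = text.replace(value, placeholder)
    return text

def deanonymize(text: str, phi_values: dict[str, str]) -> str:
    """Restore the original PHI values in the LLM's response."""
    for placeholder, value in phi_values.items():
        text = text.replace(placeholder, value)
    return text

phi = {"[PATIENT_NAME]": "Jane Doe", "[DOB]": "1984-03-02"}

prompt = anonymize("Summarize the visit for Jane Doe, born 1984-03-02.", phi)
# prompt == "Summarize the visit for [PATIENT_NAME], born [DOB]."

# ...the anonymized prompt goes to the LLM; suppose the reply keeps the placeholders...
llm_reply = "Here is the visit summary for [PATIENT_NAME] (DOB [DOB]): ..."
print(deanonymize(llm_reply, phi))  # PHI is restored only inside our system
```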
Future Outlook
With our platform, we are paving the way for more automated, efficient, and safe healthcare delivery systems. Our goal is to transition to a fully self-serve platform, fostering further innovation in healthcare LLM applications.