Posted May 16, 2023

“Solving consumer engagement is worth $1 trillion to our organization,” a health plan executive said to me one day. He was obviously being a bit sensationalist with the magnitude of the number, but it’s been a common thread in strategic dialogue across all of healthcare for quite a while—that so much of the high cost, waste, and poor clinical outcomes in our healthcare system stems from individuals’ lack of engagement with it, or the flip side: healthcare organizations’ inability to engage with their patients and members at scale and cost-effectively.

And it’s not just consumer engagement that needs to be solved—our country’s severe clinician burnout problem, combined with overall high rates of staff churn, is causing healthcare organizations to falter at delivering on their core services. One study shows that the average hospital has turned over 100.5% of its workforce in the last five years!

In this context, we view healthcare as the industry where generative AI holds the most potential for tangible and measurable impact. Closing the gap on a projected shortage of millions of healthcare workers over the next several years, while also increasing leverage for those already in the workforce, requires much more than the traditional paths of training or importing more human labor. We’re excited to be backing Hippocratic AI as they apply generative AI to this opportunity.

Imagine a world in which every patient, provider, and administrative staff member could interact with an immediately available, fully context-aware, completely capable, and charismatic conversationalist to help each individual pick the right path or do their job better (a form of “always-on triage,” as we’ve described in the past). Imagine that the marginal cost of engaging a patient through empathetic phone calls were on the order of $0.10 per hour, as opposed to the $50+ it might cost today. The very nature of generative AI—conversational, scalable, accessible to non-technical users—has the potential to overcome the shortcomings of previous generations of rules-based chatbots and similar products, making these concepts a reality.
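To make that cost gap concrete, here is a hedged back-of-the-envelope sketch in Python. Only the per-hour figures come from the paragraph above; the patient panel size, call cadence, and call length are purely illustrative assumptions, not data from Hippocratic AI or any health plan.

```python
# Back-of-the-envelope comparison of patient outreach costs.
# The per-hour figures come from the post above; the volume
# and call-length inputs are illustrative assumptions only.
human_cost_per_hour = 50.00   # rough fully loaded cost of a human caller
ai_cost_per_hour = 0.10       # hypothetical marginal cost of an AI caller

patients = 100_000                 # assumed member panel (illustrative)
calls_per_patient_per_year = 12    # assumed monthly check-in cadence
minutes_per_call = 15              # assumed average call length

hours = patients * calls_per_patient_per_year * minutes_per_call / 60
print(f"Total outreach hours/year: {hours:,.0f}")                         # 300,000
print(f"Human-staffed cost/year:   ${hours * human_cost_per_hour:,.0f}")  # $15,000,000
print(f"AI-agent cost/year:        ${hours * ai_cost_per_hour:,.0f}")     # $30,000
```

At these assumed volumes, the same year of monthly outreach that would cost roughly $15M with human callers would cost on the order of $30K at the quoted AI marginal rate—the kind of gap that changes what engagement programs are even considered.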

But AI applications in healthcare also carry some of the highest stakes of any industry. AI skeptics might point to the lack of focus on responsibility, safety, and regulatory compliance exhibited by many companies in this space, not to mention the challenge of assembling a cross-disciplinary team with deep expertise in LLM development, healthcare delivery, and healthcare administration to build AI products that actually work.

Hippocratic AI’s very name signals its safety-first ethos (a reference to the Hippocratic Oath that physicians commit to, whose core principles are to “do no harm” to patients and to keep patients’ medical information confidential). The team has built a unique framework that incorporates professional-grade certification, RLHF (reinforcement learning from human feedback) from a panel of healthcare professionals, and “bedside manner” into its non-diagnostic, patient-facing conversational LLMs, recognizing that passing a medical board exam is not enough to ensure a model is ready to be deployed into a real-world setting.
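As an illustration of what clinician-in-the-loop RLHF can look like at its simplest, here is a minimal, hypothetical sketch (not Hippocratic AI’s actual pipeline): a panel of clinicians ranks pairs of candidate responses, and those preferences are used to fit a tiny Bradley-Terry reward model. The features, example responses, and preference pairs are all invented for illustration.

```python
# Minimal sketch of clinician-preference-based reward modeling.
# Not Hippocratic AI's pipeline; features and data are toy stand-ins.
import numpy as np

def features(response: str) -> np.ndarray:
    """Toy features standing in for a learned embedding of the response."""
    return np.array([
        len(response) / 100.0,                          # verbosity
        float("not able to diagnose" in response),      # defers diagnosis
        float("please" in response.lower()),            # courtesy proxy
    ])

# Hypothetical preference pairs: (preferred, rejected), as ranked by a
# panel of clinician reviewers evaluating candidate patient-facing replies.
pairs = [
    ("I'm not able to diagnose that, but I can help you schedule a visit. "
     "Please let me know a time that works.",
     "It sounds like you have strep throat."),
    ("Please remember to take your medication with food tonight.",
     "Take the pill."),
]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + np.exp(-x))

# Fit a linear reward r(x) = w @ features(x) by maximizing the
# Bradley-Terry log-likelihood of the clinician preferences.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = np.zeros_like(w)
    for preferred, rejected in pairs:
        diff = features(preferred) - features(rejected)
        p = sigmoid(w @ diff)          # P(preferred beats rejected)
        grad += (1.0 - p) * diff       # gradient of the log-likelihood
    w += lr * grad

# The fitted reward now ranks the clinician-preferred replies higher.
for preferred, rejected in pairs:
    print(w @ features(preferred) > w @ features(rejected))  # True, True
```

In a production system, the toy features would be replaced by a model over the full conversation and the resulting reward signal would feed back into fine-tuning the LLM; the point here is simply that clinician judgments can become a quantitative training signal rather than an afterthought.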

We’ve known the CEO, Munjal Shah, since investing in his last company in 2017 (his third, after previously selling an AI company to Google), so we know firsthand the uniquely earned secrets he brings to building a company at the intersection of AI and healthcare. He most recently ran a Medicare brokerage with a national-scale call center that made personalized recommendations to seniors based on their individual claims histories. There, he worked through the operational pains of scaling an empathetic yet efficient consumer engagement platform in a regulated healthcare context. We believe these competencies give him and his founding team (which combines clinical, LLM development, and healthcare operations experience) an edge in understanding what it takes to bring high-impact, responsible, and safe generative AI products to market, and we consider it a privilege to be backing him again.