

The paper "Will Generative Artificial Intelligence Deliver on Its Promise in Health Care?" was published in JAMA on November 30.
Paper: https://jamanetwork.com/journals/jama/article-abstract/2812615

It is an analysis/opinion piece on whether generative AI technology can be successfully integrated into the medical field.

---- I've summarized the whole thing for convenience ----

History has shown that general-purpose technologies often fail to deliver their promised benefits for many years ("the productivity paradox of information technology"). Healthcare has several characteristics that make the successful deployment of new technologies more difficult than in other industries.

The dominant technological change in healthcare over the past 15 years has been the implementation of EHRs. Reflecting on the challenges of EHR implementation, it is also worth remembering that healthcare attempted to deploy AI decades before the release of genAI. In the 1960s, 1970s, and early 1980s, several companies and academic groups developed AI tools designed to support (or replace) clinicians in making diagnoses, but these did not prove helpful in practice, leading to an "AI winter" that significantly slowed interest and investment in healthcare AI for decades.

Healthcare poses challenges for digital innovation that are considerably harder than those seen in other industries.

First, healthcare is highly regulated, data ownership remains hotly debated, and strong privacy regulations significantly restrict the data sharing that genAI depends on.

Second, because the EHR market is highly concentrated and a small number of vendors "own" the desktops of the majority of healthcare providers, non-EHR companies specializing in genAI-related tools face significant barriers to entry.

Third, healthcare involves a huge number of participants, including doctors, hospitals, health insurers, employers, pharmaceutical companies, device manufacturers, and governments, so successfully implementing genAI is far more complex than in direct-to-consumer industries, where a tool only needs to improve the experience of individual consumers (who pay for it with their own money or through advertising).

Fourth, medical data are very messy and often shaped by the purpose for which they were collected (e.g., clinical documentation, quality reporting, compliance, claims), so using any single dataset as the source of "truth" for AI algorithms is potentially problematic.

Fifth, medicine is by no means static; new research continually changes understanding and practice in ways that must be incorporated into treatment recommendations and protocols. AI algorithms built solely on historical records can therefore become outdated and even dangerous.

So, can genAI overcome the productivity paradox in healthcare?
(Having ChatGPT interpret the table in the paper may help in assessing the possibilities.)

In some situations, introducing genAI can deliver benefits quickly on its own, but in most cases it will deliver significant benefits only when implementation is combined with substantial changes in how the work is designed.

We can expect genAI to score its initial wins in healthcare delivery systems by addressing areas of waste and administrative friction rather than patient-facing tasks (e.g., diagnosis and treatment recommendations); the experience gained there will pave the way for broader implementation in areas that more directly affect patient outcomes and experiences.

To get there, genAI developers must effectively address concerns about hallucinations, bias, safety, and affordability. Regulators must establish standards that promote trust in genAI without unduly stifling innovation. And most importantly, healthcare leaders must develop a viable roadmap that prioritizes the areas where genAI can generate the greatest benefit for their organization, pays close attention to the complementary innovations that are still needed, and works to mitigate genAI's known problems.

Paper: "Six ways large language models are changing healthcare," published in Nature Medicine.

It is not a single joint interview but rather separate interviews with six experts; I think it is a good piece for thinking about the applications and possibilities of LLMs in healthcare. Of course, there are actually more applications and possibilities…

1. Virtual nurses (Munjal Shah, co-founder and CEO of Hippocratic AI)

  • The goal is to create a virtual nurse for chronic disease management that speaks with and listens to patients through an automated voice interface
  • The LLM-backed virtual nurses can pass the NCLEX nursing licensure exam and the NAPLEX pharmacist licensure exam, speak many languages, and remember every conversation with each patient
2. Clinical note-taking (David Bates, professor of medicine at Harvard Medical School)
  • LLMs can help categorize emails and can be trained to respond to basic messages
  • They can also help identify patients with chronic conditions, write notes in patient records, and summarize a patient's problems in the intervals between routine visits with caregivers
3. Adverse-event detection (Vivek Rudrapatna, gastroenterologist at the University of California, San Francisco)
  • LLMs could automate the detection of adverse events in electronic health records and may one day draw on new data sources to support post-marketing drug safety monitoring
4. Predicting cancer metastasis (Amber Simpson, Canada Research Chair in Biomedical Computing and Informatics, Queen's University, Canada)
  • LLMs are being used to predict cancer metastasis and to support the design of clinical treatment responses. By predicting how a cancer is likely to progress, LLMs can make treatment strategies more targeted and deliberate
5. Social determinants of health (Maxim Topaz, professor of nursing at Columbia University)
  • LLMs can give clinicians useful information about patients' social determinants of health
  • Using LLMs to analyze social determinants of health is likely to find applications in healthcare well beyond nursing (a minimal extraction sketch follows this list)
6. Conversational AI diagnostics (Greg Corrado, head of research at Google AI)
  • LLMs will soon be deployed in predictive systems that integrate seamlessly into clinical practice and "provide very high accuracy in the medical imaging domain"
  • Integrating AI tools lets clinicians "talk to the system, ask questions, and ask for help." These tools can also be used to draft new reports or improve existing ones
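To make the note-mining ideas above (clinical note-taking, adverse-event detection, and social determinants of health) a little more concrete, here is a minimal sketch of how an LLM could be prompted to pull structured findings out of a free-text clinical note. Everything in it is an illustrative assumption on my part rather than anything from the papers: the `call_llm` callable stands in for whatever chat/completions client is actually used, and the label set and prompt wording are made up.

```python
import json
from typing import Callable

# Illustrative label set for social determinants of health (SDOH);
# a real system would use categories chosen by the clinical team.
SDOH_LABELS = [
    "housing instability",
    "food insecurity",
    "social isolation",
    "transportation barriers",
    "financial strain",
]

PROMPT_TEMPLATE = """You are assisting with chart review.
From the clinical note below, list any social determinants of health you find.
Respond ONLY with a JSON array of objects, each having the fields
"label" (one of: {labels}) and "evidence" (the supporting text span).

Note:
{note}
"""


def extract_sdoh(note: str, call_llm: Callable[[str], str]) -> list[dict]:
    """Ask an LLM to extract SDOH mentions from a single clinical note.

    `call_llm` is a hypothetical placeholder: it takes a prompt string and
    returns the model's raw text response.
    """
    prompt = PROMPT_TEMPLATE.format(labels=", ".join(SDOH_LABELS), note=note)
    raw = call_llm(prompt)
    try:
        findings = json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes wrap JSON in extra prose; a real pipeline would
        # retry or route the note to human review instead of dropping it.
        return []
    # Keep only well-formed findings whose label is in the allowed set.
    return [
        f for f in findings
        if isinstance(f, dict) and f.get("label") in SDOH_LABELS and f.get("evidence")
    ]


if __name__ == "__main__":
    # Stub LLM so the sketch runs without any external service.
    def fake_llm(prompt: str) -> str:
        return json.dumps([
            {"label": "housing instability",
             "evidence": "patient reports staying in a shelter"}
        ])

    note = "72F with CHF. Patient reports staying in a shelter since last month."
    print(extract_sdoh(note, fake_llm))
```

The same pattern (a constrained prompt, structured output, and validation before anything reaches the chart) is roughly what the adverse-event detection use case would look like as well, just with a different label set and evidence requirements.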

Added: Since I couldn't find a suitable image, I asked ChatGPT to draw one based on the paper's title and included it here. Not too bad ^^

Paper: https://www.nature.com/articles/s41591-023-02700-1

