ChatGPT and health care: Disruptive? Or just the same old thing?

Apr 25, 2023

Back in February, ChatGPT, the popular AI chatbot, passed a medical licensing exam. This exercise, cosponsored by Massachusetts General Hospital, highlighted the potential benefits of ChatGPT (or any AI chatbot) in the future treatment of medical conditions. 

AI’s potential in health care is not a new discussion point; forward-thinking health care organizations have been using AI in their offerings for a few years now. However, the emergence of ChatGPT has brought AI in health care back to the forefront of health care leaders’ minds and C-suite discussions.

While ChatGPT’s ability to pass the medical licensing exam may appear to be a fun, interesting data point about the latest AI craze, it raises a number of questions:

  • What is the role of ChatGPT in the future of health care? 
  • Is it a disruptive innovation? 
  • Is it a sustaining innovation? 
  • Could it be leveraged to create both types of innovations? 
  • Why does it matter? 

Disruptive Innovation

Disruptive innovations make products and services more accessible and affordable, thereby making them available to a larger population. There are three components that make up a disruptive innovation:

  1. An enabling technology that allows a product or service to be delivered at a more affordable price to a broader market;
  2. An innovative business model targeting either nonconsumers (those who previously did not buy products or services in a given market) or low-end consumers (the least profitable customers); and
  3. A value network where suppliers, partners, distributors, and customers are each better off when the disruptive technology prospers.

Policies and regulations also play a critical role, either enabling innovations to flourish, or prohibiting their ability to do so. This is especially true in health care. 

As ChatGPT gains knowledge and popularity, an essential question to answer is, “What is the disruptive potential of ChatGPT?”

Assessing ChatGPT’s disruptive potential

A technology alone cannot be disruptive; it needs to be wrapped in the proper business model and deployed in a disruptive manner. As a result, ChatGPT could be disruptive, depending on how it’s deployed in the health care industry. 

For example, AI technology such as ChatGPT could be used to provide mental health advice to patients, particularly those who find it difficult to access traditional therapy. AI has shown promise as a therapy tool, particularly in the wake of COVID-19, when many people experienced higher levels of loneliness. While AI chatbots can’t provide diagnoses or prescribe courses of treatment, embedding AI into a lower-cost business model, or one that targets nonconsumers, is a potentially disruptive application of the technology. In such a case, ChatGPT would give those without access to therapy a new way to find mental health support, even though it isn’t as good as traditional therapy. Starting out as a more accessible, if less capable, alternative is precisely how disruptive innovations gain a foothold.

ChatGPT may also disrupt patient-provider communication, as it could interact with patients on the provider’s behalf. For example, ChatGPT can answer questions about medications or at-home diagnostic tests, as well as educate patients on possible treatment options to discuss further with their doctors. Embedding ChatGPT into health systems’ business models to specifically target nonconsumers of medical care (such as those struggling to adhere to medication regimens) could be another potentially disruptive application of AI.
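
To make that second example concrete, below is a minimal sketch of what such a patient-education assistant could look like in code. It assumes access to OpenAI’s Python SDK and Chat Completions API; the model name, system prompt, and sample question are illustrative assumptions, not a production clinical design.

```python
# Minimal sketch of a patient-education assistant built on a large language model.
# Assumes the `openai` Python SDK (v1+) with an OPENAI_API_KEY in the environment;
# the model name and system prompt are illustrative, not a clinical-grade design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Constrain the assistant to education, not diagnosis or prescription,
# mirroring the limits discussed above.
SYSTEM_PROMPT = (
    "You are a patient-education assistant. Explain medications, at-home "
    "diagnostic tests, and possible treatment options in plain language. "
    "Do not diagnose conditions or prescribe treatment; instead, encourage "
    "the patient to discuss specifics with their clinician."
)

def answer_patient_question(question: str) -> str:
    """Return a plain-language, education-only answer to a patient's question."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
        temperature=0.2,  # keep answers conservative and consistent
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_patient_question(
        "How should I take my blood pressure medication if I miss a dose?"
    ))
```

Notably, the code is the easy part. Whether an assistant like this disrupts anything depends on the business model wrapped around it: who it targets, how it’s paid for, and what regulations permit.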

While these are two promising applications of ChatGPT (or AI in general), the technology alone is not enough to disrupt. In order to be truly disruptive, ChatGPT needs to be wrapped in the right business model, and it must be supported by both regulations and a value network that enables AI’s implementation in the health care space. 

The downsides of ChatGPT

While there are potential upsides to ChatGPT, it’s important to acknowledge that there are nondisruptive, and potentially dangerous, ways ChatGPT could be deployed in health care.

On a basic level, the sources feeding ChatGPT could be inaccurate, meaning users could end up trusting false health information. There is already evidence that ChatGPT can convincingly fabricate information, so the truthfulness of its advice can be called into question. There is also room for racial bias and discrimination in the use of ChatGPT: a previous study of an algorithmic solution in a health care setting found that Black patients had to be considerably sicker than white patients before the algorithm recommended the same level of care.

ChatGPT has its fun uses, from recommending books to providing recipes. More importantly, it holds potential to be a tool that powers the future of health care. As ChatGPT develops and grows, it’s important to remember that technology alone does not make an innovation disruptive. Rather, it’s how the technology is deployed within a business model and its surrounding value network, inclusive of regulations, that will determine whether AI disrupts health care or not.

Jessica is a research associate at the Clayton Christensen Institute for Disruptive Innovation, where she focuses on business model innovation in health care, including new approaches to population health management and person-centered care delivery.