The ABCDEs of Technology Adoption

GUEST POST from Arlen Meyers, M.D.

Every day, doctors must decide whether or not to adopt a new technology and add it to their clinical armamentarium, either replacing or supplementing what they already do. In doing so, they run the risk of making a Type 1 or a Type 2 adoption error.

Epistemology is a branch of philosophy generally concerned with the nature of knowledge. It asks questions such as ‘How do we know?’ and ‘What is meaningful knowledge?’. Understanding what it means to have knowledge in a particular area—and the contexts and warrants that shape knowledge—has been a fundamental quest for centuries.

Data Information Knowledge Wisdom Pyramid

In Plato’s Theaetetus, knowledge is defined as the intersection of truth and belief, where knowledge cannot be claimed if something is true but not believed or believed but not true. Using an example from neonatal intensive care, this paper adapts Plato’s definition of the concept ‘knowledge’ and applies it to the field of quality improvement in order to explore and understand where current tensions may lie for both practitioners and decision makers. To increase the uptake of effective interventions, not only does there need to be scientific evidence, there also needs to be an understanding of how people’s beliefs are changed in order to increase adoption more rapidly.

Only 18% of clinical recommendations are evidence-based, and there are significant variations in care from one doctor to the next. Physicians practicing in the same geographic area (and even the same health system) often provide vastly different levels of care in identical clinical situations, including some concerning variations, according to a new analysis.

Clinical and policy experts assessed care strategies used by more than 8,500 doctors across five municipal areas in the U.S., keying in on whether they utilized well-established, evidence-backed guidelines. They found significant differences between physicians, including some working in the same specialty and hospital.

The study results were published Jan. 28 in JAMA Health Forum.

One practice difference the authors found surprising was in arthroscopic knee surgery rates. In these cases, the top 20% of surgeons performed surgery on 2%-3% of their patients, while the bottom 20% chose this invasive option for 26%-31% of patients with the same condition being treated in the same city.

The question is why?

There’s an old joke that there are two ways everyone sees the world: those that see it as a 2×2 matrix and those that don’t.

Type 1 Type 2 Errors Kris Martin

A Type 1 error is a “false positive”: the practitioner uses or does something that is not justified by the evidence. A Type 2 error, on the other hand, is a “false negative”: the practitioner rejects or fails to do something that represents best-evidence practice.
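The 2×2 above can be made concrete in a few lines of code. This is an illustrative sketch only (the function name and labels are mine, not from any clinical framework): it maps the combination of "does the evidence support it?" and "did the clinician adopt it?" onto the two error types.

```python
# Illustrative sketch: the 2x2 matrix of evidence vs. action,
# mapped onto Type 1 and Type 2 adoption errors.

def classify_adoption(evidence_supports: bool, clinician_adopts: bool) -> str:
    """Classify an adoption decision against the evidence."""
    if clinician_adopts and not evidence_supports:
        # Adopted something the evidence does not justify.
        return "Type 1 error (false positive)"
    if not clinician_adopts and evidence_supports:
        # Rejected something best-evidence practice supports.
        return "Type 2 error (false negative)"
    return "correct decision"

print(classify_adoption(evidence_supports=False, clinician_adopts=True))
# prints "Type 1 error (false positive)"
```

The two "correct" cells of the matrix (adopt what works, skip what doesn't) both fall through to the same return value, which is the point: only the mismatches between evidence and behavior count as errors.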

The most recent example is the campaign to get doctors to stop prescribing low value interventions and tests. The Choosing Wisely campaign, which launched five years ago, hasn’t curbed the widespread use of low-value services even as physicians and health systems make big investments in the effort, a new report found.

The analysis, released in Health Affairs, said decreases in unnecessary healthcare services “appear to be slow in moving” since the campaign was formed in 2012. The report found that recent research shows only small decreases in care for certain low-value services, and even increases for some.

The reasons why American doctors keep doing expensive procedures that don’t work are many. The proportion of medical procedures unsupported by evidence may be nearly half. In addition, misuse of cannabis, supplements, nutraceuticals and vitamins is rampant.

Evidence-based practice is held as the gold standard in patient care, yet research suggests it takes hospitals and clinics about 17 years to adopt a practice or treatment after the first systematic evidence shows it helps patients. Here are some ways to speed the adoption of evidence-based care.

Unfortunately, there are many reasons why there are barriers to adoption and penetration of new technologies that can result in these errors. I call them the ABCDEs of technology adoption:

Attitudes: While the evidence may point one way, a physician may feel it does not pertain to a particular patient, or may harbor a general bias against “cookbook medicine.”

Biased Behavior: We’re all creatures of habit, and habits are hard to change. For surgeons in particular, the switching costs of adopting a new technology, and the risk of exposure to complications, lawsuits and hassles, often simply aren’t worth the effort. Doctors also suffer from confirmation bias, thinking that what they already do works, so why change?

Here are the most common psychological biases. Here are many more.

Why do you use or buy what you do? Here is an introduction to behavioral economics.

Cognition: Doctors may be unaware of a changing standard, guideline or recommendation, given the enormous amount of information produced on a daily basis, or might have an incomplete understanding of the literature. Some may simply feel the guidelines are wrong or do not apply to a particular patient or clinical situation and just reject them outright. In addition, cognitive biases and personality traits (aversion to risk or ambiguity) may lead to diagnostic inaccuracies and medical errors resulting in mismanagement or inadequate utilization of resources. Overconfidence, the anchoring effect, information and availability bias, and tolerance to risk may be associated with diagnostic inaccuracies or suboptimal management.

In addition, there is a critical misunderstanding of what information randomized trials provide us and how health care providers should respond to the important information that these trials contain.

  • Has this trend been studied?
  • If so, who conducted the study?
  • Was it somebody who may make money based on study results?
  • Did the study include a control group?
  • What population did they use to test this trend?

Do you know how to read the medical literature?

Denial: Doctors sometimes deny that their results are suboptimal and in need of improvement, based on “the last case”. More commonly, they are unwilling or unable to track short term and long term outcomes to see if their results conform to standards.

Emotions: These are perhaps the strongest motivators: fear of reprisals or malpractice suits; greed driving the use of inappropriate technologies that generate revenue; the need for peer acceptance, to “do what everyone else is doing”; or ego driving the opposite need, to be on the cutting edge, win the medical technology arms race, or create a perceived competitive marketing advantage. In other words, peer pressure and social contagion are as present in medicine as anywhere else. “Let’s do this test, just in case” is a frequent refrain from both patients and doctors, when in fact the result of the treatment or test will have little or no impact on the outcome. It is driven by a combination of fear, moral hazard and bias.

Here are some common customer fears when it comes to adopting a new product or technology and how to overcome them.

Although investment in start-ups is significant, complex barriers to implementing change and innovation remain.

These “unnecessary” barriers, which vary from complicated funding structure to emotional attitudes towards healthcare, have resulted in the uneven advancement of medical technologies – to the detriment of patients in different sectors.

Economics: What is the opportunity cost of my time and expertise, and what is the best way for me to optimize it? What are the incentives to change what I’m doing?

Here are three insights as to why physicians are still skeptical about using wearable technology to monitor patients’ health:

  1. Data access. Clinicians aren’t interested in using wearables if data from the devices isn’t connected to their organization’s EHR. Only 10 percent of physicians said they have integrated data from patient wearables, leaving clinicians unable to access the data or having to enter it manually.
  2. Data accuracy. Some physicians do not trust data from consumer wearable devices; for example, the FDA and other global regulators have cleared a smartwatch application that can alert patients who have already been diagnosed with atrial fibrillation when they are experiencing episodes. However, the capability is less useful as a mass screening tool and has generated many false positive results.
  3. User error and anxiety. If a wearable device is not worn correctly, it may generate inaccurate results. Some who use wearables to monitor their health can also become too focused on vitals such as heart rhythm and pulse rate, which can cause anxiety-induced physical reactions that mimic conditions such as atrial fibrillation.
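The false-positive problem with mass screening in point 2 is, at bottom, base-rate arithmetic: even an accurate alert mostly misfires when the condition is rare in the screened population. The sketch below uses Bayes' rule with assumed, illustrative figures for sensitivity, specificity, and prevalence; they are not taken from any specific device study.

```python
# Illustrative only: why a sensitive wearable alert still yields
# mostly false positives when used as a mass screening tool.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Probability that a positive alert is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 98%-sensitive, 95%-specific alert screening a population where
# only 1% actually have the condition (all figures assumed):
ppv = positive_predictive_value(0.98, 0.95, 0.01)
print(f"{ppv:.0%} of alerts are true positives")
# prints "17% of alerts are true positives"
```

In other words, under these assumptions roughly five out of six alerts are false alarms, which is why the same capability can be useful for patients already diagnosed with atrial fibrillation yet poorly suited to screening the general population.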

The past 600 years of human history help explain why humans often oppose new technologies and why that pattern of opposition continues to this day. Calestous Juma, a professor in Harvard University’s Kennedy School of Government, explores this phenomenon in his latest book, “Innovation and Its Enemies: Why People Resist New Technologies.”

Here are the key takeaways.

Research indicates that doctors make these kinds of errors frequently (http://ecp.acponline.org/sepoct01/pilson.pdf). Moreover, we are witnessing the development of digital health technologies like medical mobile apps, most of which are not clinically validated. So, how should a clinician decide when to adopt new technology? How much evidence is sufficient for an unsophisticated physician to begin adopting or applying a technological innovation for patient care? How do you strike a balance between innovation and evidence from a patient safety and quality standpoint?

Changing patient behavior has been described as the “next frontier”. To make that happen, we will need to change doctor behavior as well. Some interventions work, but passive interventions don’t.

Here are some suggestions:

  1. Recognize, that like most customers, surgeons buy emotionally and justify rationally.
  2. Surgeons should be introspective about how and when they adopt new technologies and try to minimize Type 1 and Type 2 errors.
  3. While an initial step is to be sure that surgeons are aware of new information that might drive an adoption decision, research indicates that simply presenting them with that information does not change behavior.
  4. Doctors should be skeptical about digital health technologies that might be technically and commercially validated, but not clinically validated.
  5. Doctors will have to resolve the conflict between best evidence and personalized medicine. We face the opportunity to individualize care yet are faced with the challenges of delivering mass customized care when it is becoming increasingly commoditized.
  6. Technology adoption, diffusion and sustainability vary depending on the product offering, such as drugs, devices, digital health products, care delivery innovation or business process innovation.
  7. Doctors often have nothing to do with choosing which technology is adopted or, more importantly, paid for by a third party.
  8. Doctors, particularly those that are employed, have to concern themselves more and more with making the numbers, which involves implicitly or covertly rationing care, as irrational as it may be.
  9. Conflicts of interest hide, in some instances, the motivation for technology adoption.
  10. Doctors have high switching costs when it comes to including something new in their therapeutic armamentarium.
  11. In most instances, dissemination and implementation has more to do with leading change than the technology. Consequently it is best to get buy in from clinicians early, define a clear unmet need, have internal champions and leadership from the C-suite.
  12. Adoption and penetration happens in organizations when there is a match between the values and skills of intrapreneurs and organizational innovation readiness as demonstrated by teams that are willing to pull the oars in the same direction.
  13. Here are some ways to change doctor prescribing habits to conform to evidence-based medicine.

The job doctors want virtual care technologists to do is to give them a QWILT: quality, workflow efficiencies, income, protection from liability, and more time to spend with patients (face to face, since, in most instances, that’s how they get paid). Increasingly, they also want to spend more time “off the clock,” instead of being overburdened with EMR pajama time and answering non-urgent emails or patient portal messages.

While monetary incentives and behavioral “nudges” both have their strengths, neither of them is sufficient to reliably change clinician behavior and improve the quality of their care. Sometimes nudging helps. Organizational culture, while diverse and complex, provides another important lens to understand why clinicians are practicing in a certain way and to put forth more comprehensive, long-term solutions.

The public shares some culpability: Americans often seem to prefer more care rather than less. But a lot of it still comes from physicians, and from our inability to stop when the evidence tells us to. Professional organizations and others that issue such guidelines also seem better at telling physicians about new practices than about abandoning old ones.

Medicine has a lot to learn from the consumer products industry when it comes to using the power of brands to change behavior. Some are using personal information technologies to give bespoke information to individual patients, much like Amazon suggesting what books to buy based on your preferences. We need to do the same thing for doctors.

Like most consumer electronics customers, doctors will almost always get more joy from technology the longer they wait for it to mature. Cutting-edge gadgets can invoke awe and temptation, but being an early adopter involves risk, and the downsides usually outweigh the benefits.

There are many barriers to the adoption and penetration of medical technologies. The history of medicine is full of examples, like the stethoscope, that took decades before they were widely adopted. Hopefully, with additional insights and data, it won’t take us that long.

Image credits: Pixabay, ResearchGate.net, Kris Martin
