Approximately 100,000 Americans die each year due to medical errors, and recent studies have found that 10 to 15 percent of all clinical decisions regarding patient diagnosis and treatment are wrong.
A team of researchers led by Damon Centola, a professor and director of the Network Dynamics Group at the Annenberg School for Communication at the University of Pennsylvania, has found a simple, effective way to reduce errors in patient diagnosis and treatment — use structured networks to connect clinicians with other clinicians.
In a study published today in the journal Proceedings of the National Academy of Sciences (PNAS), the researchers shared results from a multi-year study involving nearly 3,000 doctors across the United States.
They found that when presented with a case study and asked to provide diagnosis and treatment recommendations for a patient, clinicians who were anonymously shown the diagnostic decisions of their peers were, on average, twice as accurate in their recommendations as clinicians who made decisions on their own.
Simply put, doctors make fewer errors when they have a support network.
“The big risk with these information-sharing networks,” says Centola, who is the Elihu Katz Professor of Communication, Sociology, and Engineering, “is that while some doctors may improve, there could be an averaging effect that would lead better doctors to make worse decisions. But, that’s not what happens. Instead of regressing to the mean, there is consistent improvement: The worst clinicians get better, while the best do not get worse.”
Study co-author, Elaine Khoong of the University of California, San Francisco and the San Francisco General Hospital and Trauma Center, says, “We are increasingly recognizing that clinical decision-making should be viewed as a team effort that includes multiple clinicians and the patient as well. This study highlights that having other clinicians available for consultation at the point of decision-making improves clinical care.”
Over the course of several months, the researchers tested clinicians’ treatment and diagnostic decisions through an app that they built and distributed on Apple’s App Store specifically for this purpose.
After signing up for a trial and downloading the app, doctors were prompted to evaluate a clinical case — based on real-life, documented patient cases — over three rounds. At the start of each round, clinicians read the case study, then were given two minutes to answer two questions.
The first question had the doctors estimate the diagnostic risk for the patient (e.g., how likely is a patient with chest pains to have a heart attack within the next 30 days?) on a scale from 1 to 100. The second question prompted doctors to recommend the proper treatment among several options (e.g., send home, give aspirin, or refer for observation).
Every clinician was randomly assigned to one of two groups: either a control group whose members answered all questions in isolation, or an experimental group in which participants were connected in a social network with other anonymous clinicians whose responses they could see.
During rounds two and three, the control group participants had the same experience as in round one, answering questions in isolation. But, participants in the network condition could see the average risk estimates made by their peers in the social network during the previous round.
Every participant was given the opportunity to revise their answers from one round to the next, regardless of whether they were in a social network or not.
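The mechanism behind the network condition — revising one's own risk estimate after seeing the peer average from the previous round — can be illustrated with a toy simulation. This is a hypothetical sketch, not the study's model: the revision weight, noise level, and 40-member network size are illustrative assumptions, and real clinicians' revision behavior is an empirical matter, not a fixed rule.

```python
import random

def run_trial(true_risk=65.0, n_clinicians=40, n_rounds=3, weight=0.5, seed=0):
    """Toy simulation of one trial. Each clinician starts with a noisy 1-100
    risk estimate; between rounds, clinicians in the network condition see the
    previous round's peer average and revise partway toward it. A weight of
    0.0 reproduces the control condition (no revision toward peers).
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    # Initial independent estimates: truth plus noise, clipped to the 1-100 scale.
    estimates = [min(100.0, max(1.0, rng.gauss(true_risk, 20)))
                 for _ in range(n_clinicians)]
    for _ in range(n_rounds - 1):
        peer_avg = sum(estimates) / len(estimates)  # what the app would display
        # Each clinician moves a fraction `weight` toward the peer average.
        estimates = [e + weight * (peer_avg - e) for e in estimates]
    return estimates

def mean_abs_error(estimates, true_risk=65.0):
    """Average distance of the group's estimates from the true risk."""
    return sum(abs(e - true_risk) for e in estimates) / len(estimates)

solo = run_trial(weight=0.0)       # control: answers in isolation
networked = run_trial(weight=0.5)  # network condition: revise toward peers
print(f"solo error: {mean_abs_error(solo):.1f}, "
      f"networked error: {mean_abs_error(networked):.1f}")
```

Under these assumptions, pooling pulls individual estimates toward the group average, which — when individual errors are independent and roughly unbiased — sits closer to the truth than most individual guesses, so the group's average error falls. This is the classic wisdom-of-crowds intuition; the study's contribution is showing empirically that structured networks realize it without dragging the best clinicians down.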
Centola’s team used the same experimental design to study seven different clinical cases, each from areas of medicine known to exhibit high rates of diagnostic or treatment error.
The researchers found that the overall accuracy of clinicians’ decisions increased twice as much in the networks as in the control groups. Moreover, among the initially worst performing clinicians, the networks produced a 15% increase over controls in the fraction of clinicians who ultimately made the correct recommendation.
“We can use doctors’ networks to improve their performance,” says Centola. “Doctors talk to each other, and we’ve known that for a long time. The real discovery here is that we can structure the information-sharing networks among doctors to substantially increase their clinical intelligence.”
In-person consultation networks in medicine are typically hierarchical, with senior practitioners at the top and younger doctors at the bottom. “Younger doctors with different perspectives, culturally and personally, come into the medical community and they're influenced by these top-down networks,” Centola says. “This is how persistent biases creep into the medical community.”
The researchers made an effort to recruit clinicians of various ages, specialties, expertise, and geographical locations for the experiment.
They found that anonymized egalitarian networks erased the barriers of status and seniority that, the researchers say, restrict many facets of learning in medical networks. Centola notes, “Egalitarian online networks increase the diversity of voices influencing clinical decisions. As a result, we found that decision-making improves across the board for a wide variety of specialties.”
“We don’t have to reinvent the wheel to implement these findings,” Centola says. “Some hospitals, especially in low-resource areas, rely on e-consult technologies, in which a clinician sends a message to an outside specialist to get advice. It usually takes from 24 to 72 hours to get a response. Why not send this query to a network of specialists, instead of just a single person?”
Centola notes that each experimental trial took less than 20 minutes. What’s more, he says that the networks don’t have to be huge. In fact, 40 members is ideal.
“Forty people in a network gets you a steep jump in clinicians’ collective intelligence,” Centola says. “The increasing returns above that (going, say, from 40 to 4,000) are minimal.”
The researchers are currently working to implement their network technology in physician offices. The Hospital of the University of Pennsylvania has already funded pilot implementation of this program, set to begin within the year.
The study, “Experimental Evidence for Structured Information-Sharing Networks Reducing Medical Errors,” was authored by Damon Centola, Joshua Becker of University College London, Jingwen Zhang of the University of California, Davis, Jaya Aysola of the Penn Medicine Center for Health Equity Advancement, Douglas Guilbeault of the University of California, Berkeley, and Elaine Khoong, and published in Proceedings of the National Academy of Sciences.
Source: University of Pennsylvania