Personalizing diabetes prevention
Insights from the OptumLabs Research & Translation Forum
Identifying and treating those most at risk
84 million adults in the U.S. have prediabetes — blood sugar levels that are elevated, but not high enough to indicate diabetes. If not addressed, prediabetes can develop into type 2 diabetes (T2D).
With so many at risk, there’s a need to focus on those individuals at highest risk of near-term progression to T2D.
To address this issue, OptumLabs facilitated collaboration between teams from Tufts Medical Center and AMGA to develop a model that uses OptumLabs data to predict individual risk of developing T2D within three years.
It also provides an individualized estimate of the benefit of the two interventions — an intensive lifestyle program or the glucose management medication metformin — with the goal of connecting those at highest risk to prevention efforts.
The model is being piloted at Premier Medical Associates with strikingly positive results. AMGA is working with Mercy and Interopion to create a cloud-hosted version of the model that can be incorporated into any leading electronic health record (EHR) system.
At the patient level, this model informs shared decision-making. At the population level, it helps provider organizations risk-stratify people with prediabetes.
A unique model with potential for impact
Listen to experts discuss what makes this model different and its potential to scale nationally.
Speakers: John Cuddeback, MD, PhD, Chief Medical Informatics Officer at AMGA; David Kent, MD, MS, Professor of Medicine and Director of the PACE Center at Tufts Medical Center; and Frank Colangelo, MD, MS-HQS, Chief Quality Officer at Premier Medical Associates.
- This title slide has a lot of white space on it, and the reason is that this is a story of collaboration facilitated at OptumLabs. I'll fill up the rest of the slide with the logos of the organizations working on this. Tufts Medical Center, where David directs the Predictive Analytics and Comparative Effectiveness Research (PACE) Center. PCORI, which funded some of this work and continues to fund our work; and, by the way, PCORI has also funded a predictive analytics resource center at Tufts, the only one in the nation, which David directs as well. AMGA got involved (I'll explain in a minute why this is important to our members), and two of our members are part of this implementation-science project to test the use of a predictive model for people with prediabetes. One is Premier Medical Associates, where Frank is chief quality officer. The other is Mercy, a much larger organization based in St. Louis, with 3,200 providers spread over four states, so we'll get to test scalability in a very large organization as well. And we're working with Interopion, which is part of the scalability story; I'll come back to that at the end.

So, four parts to the talk. First, why a predictive model: why we got so interested in this, and why it matters to our members and their patients. Second, David Kent will talk about the reanalysis of a landmark clinical trial, the Diabetes Prevention Program; his work estimates the risk for an individual rather than using the averages that came out of the trial, and then adapts that for clinical use, which is where OptumLabs comes in. Third, using the results in actual practice, and Frank will describe his experience doing that. And fourth, how we can make this easier to scale and implement in other health systems, which is obviously important for spreading the benefit nationwide.
So, AMGA and about 150 of its members have embarked on a five-year national campaign to improve care for a million patients with type 2 diabetes, and Optum is actually one of the major sponsors of that campaign. I just want to let you know about the results after year three: our members have hit the goal and improved care for a million people with type 2 diabetes. The reason that matters in this context is that about two-thirds of those improvements are improvements for people who already have a diagnosis of type 2 diabetes, but one-third come from discovering that people have type 2 diabetes, that is, from the process of screening. Screening is really important, because one out of four people who have type 2 diabetes is not even aware of it, and it's really important to start treatment early to minimize future complications, the idea of a glycemic legacy.

One of the things we did as we started the campaign was to survey the 150 or so members who are participating and ask which strategies they would adopt. We were really surprised to see that almost a third said they wouldn't focus on screening, when we know that's pretty important. When we asked why, they said it's because they're already overwhelmed by the number of people with type 2 diabetes they have to treat, let alone the number with prediabetes. As Darshak has already described, prediabetes is elevated blood sugar that is not high enough to indicate diabetes, but it does carry an elevated risk of progression to type 2 diabetes; from the Diabetes Prevention Program study, we know that risk averages about 29%. And one out of three American adults, 84 million people, has prediabetes. It really is a very large population, and figuring out how to deal with a population that large at risk for diabetes is a real problem. So, is there an effective way to prevent it?
That's one of the questions, and it's answered by the study. Then, is there a way to prioritize? That's answered by the work David has done and Frank has implemented. The Diabetes Prevention Program study was a randomized controlled trial of about 3,000 adults with prediabetes, and the main outcome measure was development of diabetes over three years. It was conducted from 1996 to 2001, and it was actually stopped early because the interventions were so effective. The three arms were a very intensive lifestyle intervention, now called the DPP program (as Darshak points out, that's now a covered benefit in Medicare); metformin; and no intervention, the placebo arm of the trial. The 29% risk of progression from prediabetes to diabetes that I cited came from this study. The overall average effect of the lifestyle intervention was an absolute risk reduction of 14 percentage points, and the effect of metformin was half as much on average, an absolute risk reduction of 7 points. But the AMGA members participating in Together 2 Goal still felt very overwhelmed, because for every patient screening finds with a result in the diabetes range, it finds six patients with prediabetes. It really is an overwhelming problem. And that brings us to David's work reanalyzing this clinical trial.

- Thank you, John. So, while John was faced with this real-world problem, we at Tufts had just gotten funding from PCORI for a research project that wasn't targeted at any specific medical issue. It was targeted at a general methods problem that cuts across many medical domains: heterogeneity of treatment effect, which I know you've already heard about this morning. The main issue is that randomized clinical trials such as the DPP are really designed to tell you what the best treatment is on average.
But what doctors really want to know is the best treatment for an individual patient, and those are two very different tasks. The conventional way of looking at heterogeneity of treatment effect is subgroup analysis, what we call one variable at a time: men compared to women, old patients compared to young, those with diabetes compared to those without, those with a particular gene compared to those without. Those subgroup analyses have a lot of limitations and problems, and I don't have time to discuss them all. But we were funded to look at what was then a new approach: examining prognostic risk with a multivariable model that integrates all the available patient information to predict the risk of the outcome. We were funded to look at the variation in that risk across many different trials; we actually looked at 36, and they included the DPP study.

I'll show you what the risk distributions look like. This is a busy slide showing 36 risk distributions, and I'm showing it for two reasons only, because all the trials we looked at had two things in common. First, within any one of these trials, risk varied tremendously: the high-risk patients would typically have five-fold, ten-fold, twenty-fold, thirty-fold the risk of the low-risk patients. When patients differ in risk by that much, you can be pretty confident that the amount of benefit they get will differ too. Second, these distributions don't look like normal bell-shaped curves. They're all skewed, with many low-risk patients and few high-risk patients.
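[Editor's note] The skew described here falls out of the arithmetic of risk models: passing a roughly normal risk-factor burden through a logistic link at low baseline risk produces a right-skewed risk distribution. A minimal stdlib sketch, with purely illustrative coefficients (not the study's model):

```python
import math
import random
import statistics

def risk(x, intercept=-3.0, slope=1.2):
    # Hypothetical one-variable logistic risk model; coefficients illustrative only.
    return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

random.seed(0)
# Simulate a population whose underlying risk-factor burden is roughly normal.
risks = [risk(random.gauss(0.0, 1.0)) for _ in range(100_000)]

mean_risk = statistics.mean(risks)
median_risk = statistics.median(risks)
share_below_mean = sum(r < mean_risk for r in risks) / len(risks)
# Right-skewed: the typical (median) patient sits below the average risk,
# and a small tail of high-risk patients pulls the mean up.
```

Running this, the median risk comes out below the mean and most simulated patients sit below the average, mirroring the "typical patient is at lower-than-average risk" pattern seen across the 36 trials.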
Now, the trial highlighted with the red square is the DPP study, and you can see from the distribution that there's nothing special about it; the distribution is quite typical. Drilling down a little: the average risk in this trial, across arms, was roughly 25%, but most patients had lower-than-average risk. The typical patient is at lower-than-average risk, and that's quite typical across trials. You'll also see many patients with risks of 10%, 5%, or less, and then patients whose calculated risks are much, much higher: 40%, 50%, 60%, even 80%. It's really those few influential high-risk patients that often drive the results of the study. Risk matters because it's a mathematical determinant of the treatment effect, however you measure it, and especially of the clinically important measure, the absolute risk reduction.

That was the case with the DPP. These are the absolute risk reductions across quartiles of risk, and just as the risk distribution would lead you to anticipate, the degree of benefit differed across the risk categories. Looking at the intensive lifestyle arm on the left, the average treatment effect was, as John said, a 14-percentage-point absolute risk reduction. But that summary result, the one you'd see in the abstract of the trial, obscured the tremendous variation we saw: the lowest-risk patients got only a five-percentage-point reduction, whereas the highest-risk patients got a 30-percentage-point reduction. That's a six-fold difference in benefit in terms of the three-year risk of developing diabetes.
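[Editor's note] The quartile comparison above is just absolute risk reduction: control-arm risk minus treated-arm risk. A minimal sketch with hypothetical baseline and treated risks, chosen only to reproduce the cited five- and 30-percentage-point reductions (not the trial's exact event rates):

```python
# Hypothetical three-year diabetes risks, chosen to match the percentage-point
# reductions cited in the talk; not the DPP's exact numbers.
three_year_risk = {
    "lowest-risk quartile":  {"control": 0.08, "lifestyle": 0.03},
    "highest-risk quartile": {"control": 0.60, "lifestyle": 0.30},
}

# Absolute risk reduction = control risk - treated risk.
arr = {group: r["control"] - r["lifestyle"] for group, r in three_year_risk.items()}

# Ratio of benefit between the extreme quartiles: the six-fold difference.
benefit_ratio = arr["highest-risk quartile"] / arr["lowest-risk quartile"]
```

The same summary average (a mid-teens percentage-point reduction) can hide exactly this spread, which is why the quartile-level view changes the clinical picture.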
Metformin, shown on the right-hand side, was even more skewed than the lifestyle intervention: the low-risk patients got no benefit whatsoever from metformin, while the high-risk patients got a 21-percentage-point absolute risk reduction. All of that was obscured in the overall result, which showed only a seven-percentage-point absolute risk reduction. We were pleased with these results and published them in the BMJ, but what we really wanted to do was disseminate them into care, and that's where OptumLabs came in, in two ways. The first was through the collaborative network: it was through OptumLabs that our methods work came into contact with John's problem in implementing Together 2 Goal, and together John and I collaborated on another PCORI grant, which we received. The other way was through the OptumLabs database, which was a tremendous asset because it allowed us to generalize the results from an RCT to the very population we were hoping to treat.

One of the things we realized is that if we were going to disseminate this broadly into care, we couldn't use the exact model we published. Some of the variables we used were easily ascertainable, but others, like waist circumference or waist-to-hip ratio, cannot be ascertained in usual care; if we were disseminating this to tailors that might work, but not to physicians. So we developed a new model, again using the OptumLabs data. We retained the easily ascertainable variables, dropped the others, and replaced them with variables that can all be ascertained easily and automatically from the EHR, so the data can be pulled into the model automatically, eliminating data-entry problems. That was the first thing we had to address. The other thing about the OptumLabs data is that the database is huge.
So our development data set comprised the Northeast, South, and West regions, a million patients with prediabetes, and the model we developed on that population had a C-statistic of about 0.74. The external validation set was the Midwest region, with about another million patients with prediabetes, where the C-statistic was actually slightly higher, 0.76, and the calibration was also excellent. The last step was an important one: we took this OptumLabs-derived model back to the DPP data and risk-stratified the trial according to the new model. The results were almost identical to those I showed earlier: unbiased, risk-specific estimates of the absolute risk reduction in each risk category. We were then able to incorporate that into an EHR-based model, and we reached out to Mercy and Premier to implement it in their clinics. Frank will tell you about that.
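[Editor's note] The C-statistic quoted for development and validation is pairwise concordance: the probability that a patient who went on to develop diabetes was assigned a higher predicted risk than one who did not, with ties counted as half. A self-contained sketch on made-up toy data:

```python
from itertools import product

def c_statistic(predicted_risks, outcomes):
    """Concordance: P(case score > control score), ties counted as half."""
    cases = [p for p, y in zip(predicted_risks, outcomes) if y == 1]
    controls = [p for p, y in zip(predicted_risks, outcomes) if y == 0]
    score = sum(1.0 if c > n else 0.5 if c == n else 0.0
                for c, n in product(cases, controls))
    return score / (len(cases) * len(controls))

# Toy cohort: predicted three-year risks and observed outcomes
# (1 = developed diabetes). Values are invented for illustration.
preds    = [0.05, 0.10, 0.20, 0.40, 0.60, 0.80]
outcomes = [0,    0,    0,    1,    0,    1]
```

On this toy cohort, one case outranks all four controls and the other outranks three of four, giving 7 concordant pairs out of 8, a C-statistic of 0.875; a value of 0.5 would mean the model discriminates no better than chance.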
- Hey, thank you, Dave and John. I'm going to talk about how we've implemented this at our practice, Premier Medical Associates, a 100-provider group in the eastern suburbs of Pittsburgh. When John approached us and asked, "Would you be willing to try implementing this?", my initial response was yes, because I come from the land of the famous Primanti Brothers sandwich. We have a lot of high-risk patients to take care of, and when I tell my patients to eat healthily, they go and order a Pittsburgh salad: a little bit of lettuce, some steak, and three kinds of cheese on top. So we needed help with our patient population.

We've talked about the time lag of translation; the Institute of Medicine in 2001 said it takes 17 years for new knowledge to get incorporated into practice, and often unevenly. The DPP results were published in the New England Journal of Medicine on February 7, 2002. Do the math: it's 17 years later, and we're finally getting this implemented. That was pretty cool when we did the math. And some of the numbers we saw: fewer than 4% of patients in the United States with prediabetes are thought to be on metformin, and an even smaller share have been referred to a diabetes prevention program, 17 years after the paper came out.

So what was the process? Before we started, there were patient and provider focus groups. We had a group of patients with prediabetes in whose families everyone had diabetes; they knew the ages at which their relatives developed it, and they were afraid they would develop it too. They wanted to know their actual risk of developing diabetes. And we talked to our doctors: can you tell the risk? "Well, yeah, if their hemoglobin A1c is in the high prediabetes range, they're probably high risk. But I don't feel real good about that. I wish I had a tool that could help me predict it better."
And when they looked at the information from David's work, 25% of the high-risk patients had A1cs in the low prediabetes range, and 15% of the low-risk patients had A1cs in the high range, so you couldn't rely on the A1c result alone. So we put it in our EHR; these are a couple of actual screenshots. Sometimes patients are low risk and you can reassure them: try to be healthy, and you're going to be okay. For those patients you would need to treat 31 people for one to benefit. But when I turn the EHR screen around and tell a high-risk patient, "You have a 58% chance that in the next three years I'll be telling you that you have diabetes," most of them gasp when I give them the number. And then I'll say, "But listen, it's not all gloom and doom. I can refer you to an intensive lifestyle program and reduce that risk by 58%." Metformin gives similarly large reductions. And for every four high-risk patients we treat, we prevent one from developing diabetes. So patients are listening.

Over a 15-month period, three-quarters of the patients with prediabetes in the practice had the calculator run; the other 25% either didn't come in, or were in for an acute visit and someone forgot to run it. I thought that was a pretty good impact in terms of the number of patients who had their score given to them. And our docs did what we hoped: 75% of the patients whose score was listed as high risk had an intervention recommended by the provider. It was interesting, too: oftentimes our nurse navigator in her morning huddle would say, "This patient coming in today has prediabetes, hasn't had a blood sugar check for three years, hasn't had an A1c check for four years," and remind the doctor to do it.
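[Editor's note] The "treat 31" and "treat four" framings are the number needed to treat (NNT), the reciprocal of the absolute risk reduction, conventionally rounded up. The ARR values below are hypothetical, picked only to reproduce the two NNTs quoted above:

```python
import math

def nnt(absolute_risk_reduction):
    # Number needed to treat = 1 / ARR, rounded up by convention.
    return math.ceil(1.0 / absolute_risk_reduction)

# Hypothetical ARRs chosen to reproduce the NNTs quoted in the talk:
low_risk_arr = 0.033    # ~3.3 percentage points -> treat 31 to prevent one case
high_risk_arr = 0.26    # ~26 percentage points  -> treat 4 to prevent one case
```

This is why the same intervention reads so differently across the risk spectrum: the relative risk reduction can be identical while the NNT varies nearly eight-fold.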
We also identified 97 patients as having diabetes thanks to this project, so we could start them on treatment in a more timely fashion. There were 901 high-risk patients identified in the practice. Only 41 of them were on metformin, about 5%; better than the national average, but not by much. Another 150 were started on metformin after May 1st, when we began, bringing it to 16%. None of them had been referred to a DPP before; close to 500 have been referred since. So our doctors took these recommendations and ran with them.

Now, what did the patients do? In our market the YMCA runs the diabetes prevention programs, and we got feedback from them. Of those 497 referred patients, 124 actually called the Y and talked with the gentleman who runs the program about what it would entail, and 64 enrolled and participated, with an average weight loss of over 17 pounds; they hit the 7% weight-loss mark for success. We're circling back on the rest: "Why haven't you called?" "Well, I'm too busy, I'm work..." There were another 33 patients we referred to an online diabetes prevention program. I reached out to ask how many of those had participated, but they aren't yet able to give us a feedback loop; they're working on building one so we can learn how many participated and what their results were.

And our doctors: before we started, a survey of the primary care doctors in the practice found about 41 or 42% felt confident or very confident in their ability to predict whether a person would develop diabetes. After a year of using the tool, 93% said, "I feel very confident that I can tell my patients." So it answered the doctors' request at the beginning of the project for a tool to help predict risk and have that discussion with patients. And then, what can this mean?
Intermountain's health plan, their insurance product, has said that for every year they delay someone with prediabetes from developing diabetes, they save $3,500; over five years, that's a savings of $17,500. CMS's Office of the Actuary says that in the 15 months after a person enrolls in a diabetes prevention program, you save over $2,600, and the cost of the program is around $600, so I think that's a pretty good return on investment. That's been the Premier experience implementing the tool in our offices, and we've been very happy with the results so far. I think John's going to bring us home.
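[Editor's note] The savings claims above are straightforward arithmetic on the quoted figures; a quick sketch (all dollar amounts are the speaker's cited estimates, not independent data):

```python
# Intermountain estimate: savings per year that diabetes onset is delayed.
annual_saving_per_delay_year = 3_500
years_delayed = 5
delay_savings = annual_saving_per_delay_year * years_delayed        # $17,500

# CMS Office of the Actuary: savings in the 15 months after DPP enrollment,
# against the approximate cost of the program itself.
cms_15_month_saving = 2_600
program_cost = 600
net_saving = cms_15_month_saving - program_cost                     # $2,000
return_multiple = cms_15_month_saving / program_cost                # ~4.3x
```

On the quoted numbers, every dollar spent on the program returns a little over four dollars in savings within 15 months, before counting the longer-horizon delay savings.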
- Thanks, Frank. So how can we make this predictive model a little easier to implement? It's really important to splice it into the clinical workflow in the EHR, and there are essentially two ways to do that. All EHRs provide some kind of clinical decision support capability, and that's the route Premier took: they used an add-in to their Allscripts enterprise system, eCalcs from Galen. But that means work at each organization for implementation, testing, and maintenance, and if we revise the model (we've already recalibrated it once during the year we've been doing this), that work has to be repeated at every organization.

A better way, actually suggested to us by Todd Stewart, the CMIO at Mercy: rather than implement the calculator in their Epic system, build a cloud-hosted predictive model using the SMART on FHIR standard, which any of the leading EHRs can then subscribe to. It's not quite as simple as downloading an app on your iPhone, but all of the data elements are exposed as FHIR resources by the EHR, and the idea is that the app retrieves those data elements and presents the result to the clinician, first for validation and perhaps correction of the data. Here's an example; don't look too closely at it. We have a slider for age, and most people can't adjust their age, but this is just the first version for this particular model. We're working with Interopion, one of the leading developers of SMART apps and FHIR standards, and here's an example of a rather more mature display they've developed, for the ASCVD risk calculator. At any rate, this is something you can implement across multiple EHRs in a consistent way, with a good user interface not only for the clinician but also for use with the patient in shared decision-making.
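[Editor's note] For flavor, here is roughly what the FHIR data pull might look like at its simplest: a standard Observation search for the most recent hemoglobin A1c (LOINC code 4548-4) plus a parser for the returned Bundle. The base URL and patient ID are hypothetical, and a real SMART on FHIR app would also handle the OAuth2 launch and authorization flow, which is omitted here.

```python
def a1c_search_url(fhir_base, patient_id):
    # Standard FHIR search: the single most recent HbA1c observation.
    # LOINC 4548-4 = hemoglobin A1c / total hemoglobin.
    return (f"{fhir_base}/Observation?patient={patient_id}"
            f"&code=http://loinc.org|4548-4&_sort=-date&_count=1")

def latest_quantity(bundle):
    # Pull (value, unit) from the first Observation in a searchset Bundle.
    entries = bundle.get("entry", [])
    if not entries:
        return None
    quantity = entries[0]["resource"].get("valueQuantity", {})
    return quantity.get("value"), quantity.get("unit")

# A trimmed-down example of what the server might return.
sample_bundle = {
    "resourceType": "Bundle",
    "type": "searchset",
    "entry": [{"resource": {
        "resourceType": "Observation",
        "code": {"coding": [{"system": "http://loinc.org", "code": "4548-4"}]},
        "valueQuantity": {"value": 6.1, "unit": "%"},
    }}],
}
```

Because every model input arrives this way, the same cloud-hosted app can serve any EHR that exposes the standard resources, which is the scalability argument being made above.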
So that's the next step, and we're working on it currently at Mercy. And of course, here's the reason we're doing this. One of Frank's patients consented to have his story become part of the story PCORI tells about the work David's group has done. John Schultz learned fairly early in Premier's use of the calculator that he personally was at high risk. He joined the DPP lifestyle program, took it very seriously, lost more than 30 pounds, and is feeling a lot better. That's the reason we're doing it, because it really does make a difference. So that's our attempt to take heterogeneity of treatment effect and put it to use in the care of patients and populations.