Blog

    • Improving quality of life after cancer care

      Published Aug 07 2018, 10:39 AM
      • Cancer

      By Henry J Henk, PhD; Darshak Sanghavi, MD; and Samantha Noderer, MA, OptumLabs

      Researchers and clinicians have been working hard for decades to improve the quality of U.S. health care delivered to patients. In particular, our country has made significant improvements in the treatment of cancer. Overall, the death rate from cancer in the U.S. fell by 25 percent from 1990 to 2014. And now, approximately 70 percent of people diagnosed with cancer of any kind may expect to live five years or more.

      But as we get better at saving lives, how can we do more to preserve quality of life after treatment? And how can we accelerate broader use of treatment strategies that have a positive impact over both the short and long term? At OptumLabs®, we’re collaborating with cancer experts from our partner community around a longitudinal data set to understand and act on these important questions — to better understand what helps patients live healthier, higher-quality lives.

      The heart of the matter

      Let’s take pediatric leukemia as an example. With today’s advanced medicine and technology, the vast majority of kids diagnosed with this serious cancer survive. Yet, these young survivors often face health problems in the future caused by the side effects of cancer treatments. For example, a very effective and powerful chemotherapy drug called doxorubicin has been associated with serious heart problems 5, 10 or 20 years down the line.

      A breakthrough occurred in 1995 when the FDA approved a new drug called dexrazoxane, which is designed to protect the heart from damage caused by powerful chemotherapy. The drug was first approved and used in patients with breast cancer, but its use was unproven for other types of cancer. In 2004, a study in the New England Journal of Medicine found pediatric patients with acute lymphoblastic leukemia (ALL) who were given dexrazoxane had less heart cell damage (indicated by an increase in cardiac troponin) in the short term, without compromising the effectiveness of the chemotherapy they were receiving. In 2014, dexrazoxane received FDA-designated orphan drug status for prevention of heart damage in children and adolescents receiving chemotherapy. Over the next few years, more research was published on the favorable effects of this drug.

      Some may have expected this collection of early evidence to help change pediatric cancer care in recent years. But without more long-term evidence about the safety and effectiveness of dexrazoxane, is it enough to change practice? We turned to the OptumLabs data to find out.

      OptumLabs analysis

      Using de-identified OptumLabs claims data, we studied 499 pediatric patients with acute myeloid leukemia (AML) or acute lymphoblastic leukemia (ALL) who received doxorubicin chemotherapy between January 1993 and December 2010. Only 16 (3.2 percent) were treated with the heart-protecting dexrazoxane. A similar study on a different dataset, published in Pediatric Blood & Cancer, found even lower use among patients with AML and ALL between January 1999 and December 2009. This was not surprising, given the lack of favorable evidence during that time period.

      Percentage of pediatric patients with ALL or AML who received dexrazoxane with chemotherapy,
      1993–2010, OLDW

      We looked back at the OptumLabs data for 2014–2016 to see if the FDA designation and favorable research were having an impact, and we found that rates had increased somewhat. Among 299 patients with several types of common pediatric cancers (AML or ALL, as well as other leukemia/lymphoma, CNS/brain, and sarcoma/bone) who received doxorubicin, 28 (9.4 percent) received dexrazoxane.
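
      For readers curious about the mechanics, here is a minimal sketch of this kind of rate calculation. The input file and column names are hypothetical, not the actual OptumLabs Data Warehouse schema.

      # Minimal sketch: share of doxorubicin-treated pediatric leukemia patients
      # who also received dexrazoxane. File and column names are hypothetical.
      import pandas as pd

      claims = pd.read_csv("pediatric_onc_cohort.csv")  # one row per patient (hypothetical extract)

      # Restrict to the cohort: pediatric AML/ALL patients exposed to doxorubicin
      cohort = claims[claims["diagnosis"].isin(["AML", "ALL"]) &
                      (claims["received_doxorubicin"] == 1)]

      n_total = len(cohort)
      n_dex = int(cohort["received_dexrazoxane"].sum())
      print(f"{n_dex} of {n_total} patients ({100 * n_dex / n_total:.1f}%) received dexrazoxane")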

      Our initial findings suggest that early evidence on the short-term effectiveness of a drug is often not enough to get clinicians to use it. We also know that there are no guidelines recommending routine use of dexrazoxane, and guidelines are key drivers of broad uptake. Some treatment protocols do recommend it for pediatric patients receiving a high cumulative dose of chemotherapy (over 200 mg/m2). Nevertheless, this preliminary analysis helps us ask follow-up research questions that can get at the long-term outcomes associated with the drug and help identify the right patients for this treatment.

      That’s why the OptumLabs Data Warehouse — which contains data spanning 25 years and covers more than 160 million de-identified lives across claims and clinical information — can help us explore important questions about cancer care and survivorship.

      The OptumLabs Cancer Research Collaborative

      With this in mind, last year the American Cancer Society and OptumLabs co-founded the OptumLabs Cancer Research Collaborative (CRC) to accelerate cancer research and translation through collaborative studies and research-driven data improvements. We’re convening expert participants from organizations such as Stand Up To Cancer, Harvard Pilgrim Health Care Institute, Mayo Clinic, University of California Health and Yale, along with leadership from Optum Clinical Services and UnitedHealthcare Oncology.

      The CRC is looking not only at cardiac impacts of cancer therapies, but at many other issues in cancer treatment and survivorship. The OptumLabs data contains information on components such as benefit design and prior authorization that are well suited for studying the cost of cancer care — one of the biggest challenges for survivors and the health care system at large — as well as long-term outcomes. Through this, we’re answering questions that can help us determine what quality care looks like, such as:

      • How are cancer care guidelines being followed?
      • How are cancer survivors using the health care system — what does it cost and is it effective?
      • What kind of chemotherapy toxicities are manifesting in survivors long-term?
      • What secondary risk factors are leading to poor health outcomes in cancer survivors — such as obesity and smoking?

      There are roughly 20 cancer-related research projects underway, and several have recently been completed on topics such as breast cancer survivorship care, emergency department use among colorectal cancer patients and cardio-toxicities associated with new cancer treatment drugs.

      With support from AARP, we are also working on developing quality measures with our partners at Tufts Medical Center that help identify cancer care-related adverse events and medical errors. Better measurement in these areas could help clinicians inform patients about treatment options, surface areas for health care delivery improvements and guide researchers and policymakers.

      It is gratifying that more people are surviving cancer than ever before. Just as important, OptumLabs and our partners are working to make sure that our health care system not only knows which treatments work, but also which treatments may lead to a higher-quality life for patients down the road.

      ABOUT THE AUTHORS

      • Henry J Henk, PhD, is vice president of research at OptumLabs
      • Darshak Sanghavi, MD, is chief medical officer at OptumLabs
      • Samantha Noderer, MA, is communications and translation manager at OptumLabs

    • Translating Shades of Gray: How can we accelerate value in health care?

      Published Apr 04 2018, 1:35 PM
      • Health care
      • Value
      • AARP

      By Margot Walthall, MHA, and Darshak Sanghavi, MD, OptumLabs

      Change is hard, particularly as our country focuses more on value — not volume — in health care.

      Getting paid based on number of tests or procedures has led to over-treating patients, exposing them to unnecessary health complications and higher costs. By focusing instead on the most effective treatments based on research, we can deliver the highest quality care, while saving money for the health system — and for patients.

      We know some medical tests and procedures are more harmful than helpful for certain patients. They may create additional health risks, cost a lot, have unpleasant side effects and, in some instances, lead to a cascade of unnecessary or unsafe follow-on services.

      These “low-value” medical services are more common than you might realize. With the input of more than 70 medical specialty societies, the Choosing Wisely campaign has identified hundreds of low-value services (LVS) to help improve care decisions between providers and patients.

      For instance, we think of cancer screenings as generally useful preventive measures (the sooner you catch something, the better) that save many lives. However, these experts found that colonoscopies for patients older than 75 can be more harmful than helpful. In these patients, colonoscopies carry risks of intestinal tears, dehydration and fainting. Combine this with the additional out-of-pocket costs, time and stress associated with the screening and ask: Are these tests worth the costs if the patient does not have specific risk factors for colon cancer?

      It can be challenging to reconcile population-based recommendations when treating an individual person. Moreover, our research with AARP reveals that reducing the use of some of these LVS is easier said than done. This suggests that while our health care system has made some progress in transitioning to high value tests and procedures, we have a long way to go.

      Some low-value services are easier to reduce than others

      The appropriate use of health care services is an important issue for AARP, OptumLabs’ founding consumer advocate partner, and its nearly 38 million members. We wanted to see whether efforts to reduce low-value services were having an impact on the health system.

      Using de-identified OptumLabs claims data, researchers from AARP and OptumLabs analyzed trends in use of 16 low-value services from 2009, before the creation of Choosing Wisely, to 2014 among adults ages 50-plus with commercial insurance and adults ages 65-plus with Medicare Advantage.
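
      As a rough illustration of the underlying arithmetic, here is a minimal sketch of how an annual utilization rate for one low-value service, and its 2009-to-2014 relative change, might be computed. The input file and column names are hypothetical, not the actual study code.

      # Minimal sketch: annual rate of one low-value service per 1,000 eligible
      # enrollees by cohort, plus the 2009-to-2014 relative change.
      # File and column names are hypothetical.
      import pandas as pd

      df = pd.read_csv("lvs_enrollee_years.csv")  # one row per enrollee-year (hypothetical)

      rates = (df.groupby(["cohort", "year"])
                 .agg(eligible=("member_id", "nunique"),
                      events=("received_preop_chest_xray", "sum"))
                 .assign(rate_per_1000=lambda d: 1000 * d["events"] / d["eligible"]))

      # Relative change from 2009 to 2014 for each cohort (e.g., COM 50-64, MA 65+)
      wide = rates["rate_per_1000"].unstack("year")
      relative_change_pct = 100 * (wide[2014] - wide[2009]) / wide[2009]
      print(relative_change_pct.round(1))  # negative values indicate declines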

      Overall, we found that since the Choosing Wisely campaign and other efforts to increase value in health care began, there continues to be a lot of variation — clear successes and additional opportunities — in the use of specific LVS.

      Here are some examples.

      Declines in chest x-rays before surgery

      For decades, it’s been common for doctors to routinely give patients chest x-rays before surgery as a safety measure. Because these x-rays expose patients to radiation, can cause many false alarms and cost extra money, Choosing Wisely recommends that only patients with relevant risks should get them, such as those who:

      • Have signs or symptoms of a heart or lung condition
      • Have heart or lung disease
      • Are over 70 years old and haven’t had a chest X-ray within the last six months
      • Are having surgery on the heart, lungs or any other part of the chest.

      Based on our research, it appears many doctors have gotten the message: chest x-rays before surgery fell considerably across all populations over the six years studied.

      Figure 1. Pre-operative chest x-ray, 2009-2014

      Line graph shows steep declines in pre-op chest x-rays across all commercial and MA enrollees.
      From 2009-2014, pre-operative chest x-rays declined at the following rates: by 26.8% among commercial enrollees aged 50-64 (COM 50-64), by 21.3% among commercial enrollees aged 65+ (COM 65+), and by 13.1% among Medicare Advantage enrollees aged 65+ (MA 65+).

      Declines in cervical cancer screening for women over 65

      According to American Cancer Society Guidelines, women over 65 who have had regular pap smears over the past 10 years should not be screened for cervical cancer unless they have a serious cervical pre-cancer. This recommendation is because the risk of cancer is low at those ages and the testing can lead to unnecessary treatments.

      Reassuringly, researchers found a substantial decline in cervical cancer screening for women over 65 who had commercial insurance. Initially, the rates of over-testing were markedly higher in the Medicare Advantage subgroup than among commercial enrollees, but over time that gap has narrowed dramatically.

      Figure 2. Cervical cancer screening, 2009-2014

      Line graph shows steep decline starting in 2011 for cervical cancer screening among women over 65.
      From 2009-2014, cervical cancer screening in women over 65 declined by 45.2% for commercial enrollees (COM >65) and by 34.5% for Medicare Advantage enrollees over 65 (MA >65).

      Little reduction in MRIs for low back pain

      Use of MRI to help diagnose low back pain, one of the most common conditions among adults, had a much more subtle decline. A recent Health Affairs study showed similar findings. Why is this type of imaging more difficult to reduce, despite evidence that many patients with uncomplicated low back pain do not get better faster with imaging?

      Figure 3. MRI for low back pain, 2009-2014

      Line graph shows little change in rates of MRI for low back pain across commercial and MA enrollees.
      From 2009-2014, MRIs for low back pain declined very little among commercial enrollees aged 65+ (1.8%) and Medicare Advantage enrollees aged 65+ (11.4%). MRIs declined somewhat more among commercial enrollees aged 50-64 (20.6%).

      To determine if an MRI for low back pain is warranted — and it is in certain situations — doctors need to have careful conversations with their patients about all of their symptoms and health history instead of just ordering the imaging “to be safe.” MRIs are expensive and may be associated with higher rates of unnecessary surgery. But, doctors may not have enough time or may feel pressure from their patient to do the imaging. Therefore, reducing MRI scans in this setting appears to be more challenging.

      How can we accelerate change?

      The results of our work with AARP show that curtailing some low-value services is more difficult than curtailing others. Awareness alone is not always enough for change, but there are additional tactics that can help push it forward. These include reducing barriers to patient-provider conversations, rewarding a culture of de-implementing LVS and encouraging bundles of related (and potentially unnecessary) services for common patient situations.

      Reduce conversation barriers

      Thoughtful provider-patient conversations about which services are recommended, and when, are core to ensuring value in health care. However, a recent survey of providers suggests it has become more difficult to have these discussions with patients. Reasons cited include limited time during office visits, a lack of data to make confident choices, patient insistence and a desire to keep patients happy, as well as malpractice concerns and wanting to do certain things “just to be safe.”

      To solve this, we need to start having broader conversations with patients about what treatments are and are not appropriate so that both doctors and patients feel confident about the choices they make together.

      Create a culture for change

      According to the same provider survey, a cultural change within an organization is required to prioritize the de-implementation of wasteful services as much as the implementation of new treatments. Experts highlight four key actions for organizations to reduce LVS:

      • Ensure leaders prioritize the change.
      • Create a culture of trust, innovation and improvement.
      • Establish a shared purpose and language.
      • Commit resources to the measurement of value.

      Take it to the bundle

      Financial incentives have been blamed for a lot of overuse. Under fee-for-service models, the more doctors treat, the more they are paid. Some bundled payment models that focus on a collection of services rather than individual services have been shown to motivate higher quality health care at lower cost. For example, a possible “Low back pain service bundle” could offer the opportunity to address the overuse of imaging, opioids, surgeries and more. This could lead to fewer MRI scans that result in unnecessary surgery, yielding lower costs and better care.

      As a society, we’ve accepted the fact that many patients receive low-value medical services, which continues to take a toll on health care in America. To move forward in studying and translating what works for de-implementation, we have to focus on better methods that measure clinically meaningful outcomes and unintended consequences for patients.

      ABOUT THE AUTHORS

      • Margot Walthall, MHA, is vice president of integrated programs and translation at OptumLabs
      • Darshak Sanghavi, MD, is chief medical officer and senior vice president of translation at OptumLabs

    • To address the opioid crisis, build a comprehensive national framework

      Published Jan 25 2018, 6:51 PM
      • OptumLabs
      • Key performance indicator
      • Quality measures
      • Opioids

      By Darshak Sanghavi, MD; Aylin Altan, PhD; Christopher Hane, PhD; Paul Bleicher, MD, PhD, OptumLabs

      Among the myriad challenges of the U.S. opioid epidemic is the lack of consistent, data-driven ways for the health system to measure and respond to it. That’s why OptumLabs collaborated with a panel of national clinical and public health experts to develop a comprehensive framework of 29 claims-based measures organized into four opioid-related domains: prevention, pain management, opioid use disorder treatment, and maternal and child health. These metrics have been shared with diverse stakeholders in various forms, and have garnered significant interest for their potential to guide a more holistic approach to evaluating and improving efforts to tackle this public health crisis. We are excited to share them with you! – Editor

      The following piece first appeared in the Health Affairs Blog on December 18, 2017.

      The annual rate of opioid-related deaths in the United States will surpass the historic peak annual death rates from motor vehicle accidents, HIV infections, and firearms to become the leading cause of death for people less than 50 years of age. The National Vital Statistics System reported roughly 64,000 deaths from drug overdoses in 2016 and year-to-year relative rate increases of more than 20 percent. In response to this rapidly escalating epidemic, the Department of Health and Human Services declared the national opioid crisis a public health emergency on October 26, 2017, in conjunction with President Donald Trump’s pronouncement on the same day.

      The current opioid crisis came after substantial increases in per-capita frequency and dosages of prescription opioids beginning in the late-1990s. Researchers hypothesize the increases were driven in part by efforts to recognize pain as the fifth vital sign as codified by the Joint Commission in 2001, pharmaceutical marketing practices, and lack of recognition of the risks of dependence and addiction with prolonged use. According to the Centers for Disease Control and Prevention (CDC), when averaged nationally, roughly one prescription for opioids is written for every person in the United States annually.

      Current gaps and opportunities in quality metrics

      The epidemic nature of the opioid crisis and its immense complexity have exposed important gaps in the ability of the health care system to respond in a data-driven manner. Typically, federally endorsed performance measures are developed to track and address key health care processes or outcomes targeted for improvement using blunt payment incentives, which understandably require a high degree of evidence and testing prior to widespread deployment. The development process is not typically geared to rapid-cycle quality improvement programs, which can be deployed via education, quality improvement, comparison reporting, and value-based payment incentives, as has been demonstrated in the private sector. As a result, formal federal measure development has not kept pace with the rapid cadence of the opioid epidemic.

      For example, the Pharmacy Quality Alliance developed three opioid misuse measures in 2015 (targeted towards identifying high frequency/dosage prescribers or towards “doctor-shopping” patients) and these measures did not complete the endorsement process of the National Quality Forum (NQF) until 2017. Broad adoption by federal payment programs may take several more years, during which time contributing prescribing trends could continue. Over a year ago, the CDC released guidelines on the appropriate dosing of opioids for pain, and the former U.S. Surgeon General mailed a letter to 2.3 million clinicians on the same topic, but no endorsed quality measure to assist with this recommendation has been made widely available.

      The need for a comprehensive quality measurement framework examining the impact of the epidemic from many vantage points has particular timeliness, as the White House opioid commission recently recommended that the nation "invest only in those programs that achieve quantifiable goals and metrics."

      OptumLabs, a collaborative research and innovation center, embarked on a five-month program to develop a comprehensive framework of 29 claims-based measures for the opioid crisis. The measure set for this program benefitted from the input of a panel of national clinical and public health experts, including representatives from the CDC and the Substance Abuse and Mental Health Services Administration. Four key domains for the metrics were identified: prevention, appropriate acute and chronic pain treatment, opioid use disorder (OUD) treatment, and maternal/child health. Where possible, measures were abstracted from publicly available specifications, which were refined as needed by expert coding teams. The measures were calculated using the OptumLabs Data Warehouse, which includes de-identified, integrated pharmacy, medical claims, and enrollment data from a geographically diverse population of approximately 150 million United States residents currently or previously enrolled in commercial and Medicare Advantage programs. The measures are described in Exhibit 1 and technical specifications are described in a supplemental index.

      Exhibit 1: Comprehensive opioid use quality measure framework and annual trends, 2016

      Measure (2016 value)

      Enrollees meeting inclusion/exclusion criteria (a): 5,568,625

      Prevention
      Primary outcome measures
      1. New opioid fillers per 1,000 enrollees: 122
      2. Initial opioid prescription compliant with CDC recommendations (composite) (b): 55.4%
      3. New opioid fillers who avoid chronic use: 97.9%
      4. Prevalence of opioid overdose (OD) per 100,000 person-years: 35.9
      Secondary outcome measures
      5. Initial opioid prescription is prescribed while patient is not exposed to benzodiazepines (component of primary measure #2): 91.1%
      6. Initial prescription is not for methadone (component of primary measure #2): 100%
      7. Initial opioid prescription is for a short-acting formulation (component of primary measure #2): 99.6%
      8. Initial prescription is for <50 MME/day (component of primary measure #2): 77.2%
      9. Initial opioid prescription is for ≤7 days' supply (component of primary measure #2): 79.7%
      10. No use of opioids for new low back pain patients: 87.1%
      11. No concurrent opioid and benzodiazepine use: 78.0%
      12. Appropriate contact with provider before second opioid prescription: 54.0%

      Pain management
      Primary outcome measures
      13. Chronic pain treatment with opioids is optimally managed (composite) (c): 9.4%
      14. Avoidance of breakthrough post-surgical pain leading to ED visit and new opioid prescription: 95.3%
      Secondary outcome measures
      15. Appropriate contact with provider among chronic opioid users (component of primary measure #13): 95.1%
      16. No ED visit for breakthrough pain among chronic opioid users (component of primary measure #13): 85.3%
      17. Evidence of non-opioid pharmacological treatment for pain among chronic opioid users (component of primary measure #13): 45.9%
      18. Evidence of non-pharmacological therapy for pain among chronic opioid users (component of primary measure #13): 23.8%

      Opioid use disorder (OUD) treatment
      Primary outcome measures
      19. Evidence of medication-assisted treatment (MAT) among patients with opioid use disorder (OUD) or OD: 27.8%
      20. Prevalence of OUD per 1,000 person-years: 8.0
      Secondary outcome measures
      21. Evidence of MAT following OD: 10.8%
      22. Evidence of naloxone fill among patients with OUD or OD: 0.7%
      23. No opioid prescription following any OUD or OD diagnosis: 41.0%

      Maternal, infant & child health
      Primary outcome measures
      24. Percentage of infants with NAS born to mothers on MAT: 20.6%
      25. Initial opioid prescription compliant with CDC recommendations for patients under 18 years of age (composite): 68.6%
      26. Prevalence of OD per 100,000 person-years, patients under 18 years of age: 7.2
      Secondary outcome measures
      27. Cases per 1,000 live births of infants born with neonatal abstinence syndrome (NAS): 1.2
      28. New opioid fillers per 1,000 enrollees under 18 years of age: 36
      29. Prevalence of OUD per 1,000 person-years, patients under 18 years of age: 0.21

      Source: OptumLabs Data Warehouse. Notes: (a) Includes commercial and Medicare Advantage health plan enrollees with two years of continuous enrollment in medical and pharmacy coverage, no evidence of active cancer treatments, and not in long-term or palliative care. Denominators vary by measure and are noted for composite measures. (b) Number of new opioid users in 2016: 750,594. (c) Number of chronic opioid users in 2016: 311,870.
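
      To make the composite logic concrete, here is a minimal sketch of how a composite such as measure #2 can be assembled from its five components (#5-#9). The record structure and field names are hypothetical, not the actual claim-level specifications.

      # Minimal sketch: CDC-compliance composite (measure #2) for an initial opioid
      # fill, built from its five component measures (#5-#9). The input record and
      # field names are hypothetical, not the actual measure specification.
      def initial_fill_cdc_compliant(rx: dict) -> bool:
          """Return True only if all five components are satisfied."""
          return (not rx["concurrent_benzodiazepine"]      # component #5
                  and rx["drug"] != "methadone"            # component #6
                  and rx["formulation"] == "short_acting"  # component #7
                  and rx["mme_per_day"] < 50               # component #8
                  and rx["days_supply"] <= 7)              # component #9

      example = {"concurrent_benzodiazepine": False, "drug": "oxycodone",
                 "formulation": "short_acting", "mme_per_day": 30, "days_supply": 5}
      print(initial_fill_cdc_compliant(example))  # True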

      Results of comprehensive framework development

      Related to the prevention rubric of Exhibit 1, data from 2016 show that the rate of compliance with CDC-recommended prescribing for a first opioid fill was approximately 55 percent and the rate of new opioid fills was 122 per 1,000 enrollees. To better explore the distribution, the measures were computed at the county level and found to be nearly normally distributed (see Exhibit 2), with substantial and meaningful variation, suggesting that local factors may strongly influence prescribing volume and guideline adherence. (Similar variation was seen among several measures; data not shown.)
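
      As a rough illustration of the county-level roll-up, the sketch below aggregates a per-fill compliance flag by county and summarizes the distribution. The file and column names are hypothetical rather than the actual OptumLabs specifications.

      # Minimal sketch: county-level compliance with the composite measure and a
      # look at its distribution. File and column names are hypothetical.
      import pandas as pd

      fills = pd.read_csv("new_opioid_fills_2016.csv")  # one row per new opioid fill

      county = (fills.groupby("county_fips")["cdc_compliant"]
                     .agg(n="size", pct_compliant="mean"))
      county = county[county["n"] >= 30]  # drop counties with too little data

      print(county["pct_compliant"].describe())  # summary of the county-level distribution
      # county["pct_compliant"].hist(bins=40)    # histogram akin to Exhibit 2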

      Exhibit 2: Distribution of initial opioid prescriptions in compliance with CDC recommendations and rates of new prescriptions at the county level, 2016

      Graph showing percent of new opioid fills that are compliant with CDC guidelines

      Source: OptumLabs Data Warehouse.

      In 2016, the year the CDC issued its prescribing guidance, there was striking geographic variation (see Exhibit 3) in appropriate prescribing, with further variation in the specific component of guideline non-adherence. In some areas physicians tended to prescribe high doses for new prescriptions, while in other areas they prescribed for longer durations, suggesting the need for tailored interventions based on location. For example, in Massachusetts, performance on the composite measure of CDC-compliant prescribing was about average compared with national benchmarks, and the opportunity for improvement appears to relate to excessive opioid dosage rather than prescription duration (see Exhibit 4).

      Exhibit 3: County-based performance variation in selected opioid use quality measures, 2016
      (blue=better performance)

      U.S. map showing the proportion of initial opioid prescriptions compliant with CDC recommendations

      U.S. map showing MAT among patients with OUD

      U.S. map showing prevalence of OUD per 1,000 person-years

      Source: OptumLabs Data Warehouse. Note: Blue indicates better performance. Grey areas indicate counties with insufficient data to calculate measures. Histograms depict performance on x-axis, and number of counties on y-axis.

      Exhibit 4: County-based performance variation in selected opioid use quality measures, Massachusetts, 2016
      (blue=better performance)

       

      Three Massachusetts maps showing county-based performance variation in opioid use quality measures

       

      Source: OptumLabs Data Warehouse.

       

      Encouragingly, in the pain rubric of Exhibit 1, post-operative pain appeared to be well managed, with 95 percent of patients avoiding emergency department visits that resulted in receipt of new opioids. Our claims-based measures for patients on chronic opioids showed moderate use of non-opioid drug treatment (about 45 percent of patients) and low use of non-drug treatments (roughly one in four patients). It should be noted that patients may be accessing some of these therapies outside the benefits of their current health insurance plans.

      In the treatment rubric of Exhibit 1, rates of medication-assisted treatment (MAT) for OUD were approximately 28%, with a particular opportunity in those following overdose where rates of MAT were 11%. This is consistent with reported national trends. Additional data from our analysis suggests the relative prevalence of OUD increased 50 percent over the 2014-2016 time period, potentially due to both higher incidence and improved coding with ICD-10 adoption.  

      Moving forward with comprehensive framework

      While traditionally developed performance metrics are essential, their value in managing acute crises is limited by an extended ratification process, a focus on one measure at a time, and delayed provider feedback. By offering a comprehensive diagnostic snapshot with benchmarked data on populations that can be attributed based on patients’ geography, payer, or provider, “just in time” comprehensive quality frameworks can enable public health agencies, integrated health care providers, and/or health plans to more rapidly benchmark their status and deploy and evaluate the impact of interventions.

      Within our larger organization, for example, initiatives for safer prescribing via pharmacy benefit management, access to MAT via expanded provider networks, clinical and consumer education programs, and dozens of other programs are being pursued in various populations and geographic regions, and such a framework could help assess outcomes over time. To take one example, first-fill drug utilization rules were put in effect in July 2017 by several hundred clients served by our organization, and within two months of implementation, initial review found relative reductions of 82 percent in excessive dosage and 65 percent in excessive duration of new opioid prescriptions, relative to CDC best-practice prescribing guidelines, among this cohort. In this manner, “just in time” frameworks may permit a precision-medicine approach to the opioid epidemic based on a common set of data-driven metrics.

      About the authors:

      • Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      • Aylin Altan, PhD, is senior vice president of research at OptumLabs

      • Christopher Hane, PhD, is vice president of data science at OptumLabs

      • Paul Bleicher, MD, PhD, is chief executive officer at OptumLabs

    • Risk in perspective: A rare heart surgery infection explained with data

      Published Jan 08 2018, 11:59 AM
      • Heart Disease
      • Health care data
      • Bacterial Infection

      By Darshak Sanghavi, MD, and Samantha Noderer, MA, OptumLabs

      Rondi remembers the day she got the letter in the mail from her hospital in central Massachusetts. It was addressed to her, but was about her teenaged son, Cole, who was born with a heart defect and underwent cardiac surgery a few months earlier. Like Rondi, thousands of other patients and families across the country were opening similar letters from their doctors in the fall of 2015.


      We are notifying patients who have had open-heart surgery, about a potential infection risk related to this surgery. We are contacting you today, as you or a member of your family have been identified in clinical records as a patient who might be affected...


      Rondi had many questions.

      “It was definitely nerve-racking,” said Rondi. “I was glad to know about the issue but my biggest concern was: How bad is it? Is this letter telling me everything?”

      Rondi was just one of thousands of people who received this letter because of a spike in reported infections connected to a device used in heart surgeries. But should Rondi have been concerned? Was there a real imminent threat to her son’s health? 

      In this blog post, we will look at how data can provide important context on health risks, assisting us in determining when and how to communicate to patients. 

      A rare bacterial infection sourced from a commonly used device

      The risk of infection was linked to bacteria that contaminated heater-cooler devices used during open chest surgeries. According to tests, contamination may have occurred during manufacturing of the equipment. More than 250,000 heart bypass procedures are performed each year in the U.S. with the help of these heater-cooler devices that regulate body temperatures. This could have been a major public health crisis.

      Graphic representation of heater-cooler circuits tested for transmission of Mycobacterium chimaera.

      The suspect was a type of bacteria known as nontuberculous mycobacterium (NTM). While most people exposed to these bacteria never get an infection, a spike in reports of infections in patients linked to contaminated heater-cooler devices concerned public health officials at the Centers for Disease Control and Prevention (CDC). They asked providers to inform patients of the infection risk, which resulted in the letter Rondi received.

      “The letter explained that the signs of an infection could take several months or years to show. And the list of potential symptoms was very broad, such as night sweats, muscle aches, weight loss, fatigue and unexplained fever,” said Rondi. “I was most nervous to tell Cole about the letter without more information. It wasn’t until we spoke with our cardiologist and he clearly explained the small risk and how it relates to my son specifically, that we felt more at ease.”

      Based on a few published studies, the CDC estimated that the risk of a patient getting an infection was between about 1 in 100 and 1 in 1,000. This is a large range, which can be stressful for patients when the risks are not put into proper perspective.

      We asked ourselves whether it was possible to find a more precise estimate of risk. Knowing the precise risk could better inform public health communications and keep families like Rondi’s at ease in the event of future outbreaks.

      To answer this question, OptumLabs queried our data set of commercial and Medicare Advantage claims for more than 127 million people over 20 years.

      Demonstrating real-world risks with real-world data

      We explored the risk of mycobacterial infection among a group of patients who had claims for open heart bypass surgery between July 1, 2007 and June 30, 2015 and compared it to the risk of infection among patients who had claims for angioplasty — a non-open heart cardiac procedure that does not involve a heater-cooler device — during the same time period. Both groups of patients had very similar health conditions. Because the only major difference between them was the type of surgery they had, we were able to isolate the impact of the heater-cooler device used in open heart surgery. Infection was identified using ICD-9 diagnosis codes and evidence of treatment with rifabutin, a common antibiotic used to fight the infection.

      Looking at patients enrolled in the health plan for four years in a row, the small rate of infection among patients who had bypass surgery with a heater-cooler device was not statistically higher than the rate of infection among patients who had angioplasty without a heater-cooler device. In short, it appears that the actual risks to patients were quite low.

      Chart showing that the difference in infection risk between patients with and without bypass surgery is not significant
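
      For context on how such a comparison can be checked statistically, here is a minimal sketch using a Fisher's exact test on illustrative counts. The numbers below are placeholders, not the study's actual results.

      # Minimal sketch: comparing infection rates between the bypass (heater-cooler)
      # and angioplasty (no heater-cooler) cohorts with a Fisher's exact test.
      # The counts below are illustrative placeholders, not the study's results.
      from scipy.stats import fisher_exact

      #              infected  not infected
      bypass      = [       4,         9996]
      angioplasty = [       3,         9997]

      odds_ratio, p_value = fisher_exact([bypass, angioplasty])
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
      # A large p-value is consistent with no statistically significant difference in risk.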

      This initial analysis isn't definitive by any means, but it shows how health care data can help point us in the right direction to guide patients and support doctors in communicating the risks of one treatment or procedure over another.

      Looking at these results, should a letter have been sent to Rondi? We would argue that the letter should have been written with more precision, informed by data, and ready to answer patients’ second and third questions. Ideally, patient communications should provide relevant information that can help them put complicated risks into perspective. It can reduce confusion and prevent unnecessary worry.

      When it comes to our health or the health of a loved one, it’s often the questions left unanswered that can cause more distress than even the worst news. With the help of data, we can work to get the right answers to the right people to guide important real world decisions and positive outcomes.

       

      About the authors:  

      • Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      • Samantha Noderer, MA, is communications & translation manager at OptumLabs

    • Making diabetes care personal with the right data

      Published Dec 08 2017, 4:20 PM
      • Diabetes
      • Health care data
      • Quality measures
      • Personalized care

      By Darshak Sanghavi, MD, and Samantha Noderer, MA, OptumLabs

      It isn’t easy being the pancreas.

      Nobody understands this better than somebody with diabetes — a chronic disease impacting more than 30 million people in the United States, according to the Centers for Disease Control and Prevention. A healthy pancreas constantly checks the body’s blood glucose (sugar) levels and keeps them from rising too high by releasing tiny bursts of insulin. In people with type 2 diabetes, the body resists that insulin, and in time the pancreas cannot keep up with the demand. When glucose in the blood becomes dangerously high, it can damage vital organs like the kidneys, heart, eyes and brain over many years. That’s a lot of pressure!

      When the pancreas struggles, doctors can step in to monitor a patient’s average glucose levels through a blood test called hemoglobin A1C (HbA1c). For most people with controlled type 2 diabetes, it’s recommended that they be tested 1–2 times per year.

      But it turns out there’s a lot of variation when it comes to how often patients are getting their A1C tested. Some are tested more frequently than necessary. Others are not tested enough. OptumLabs® and our partners are leveraging data to help us to gain insights on how this may impact a patient’s health, and how to better manage their diabetes.

      Your zip code could impact how often you’re tested for diabetes

      Image of U.S. map showing percentage of Medicare enrollees aged 65-75 with diabetes getting recommended yearly test of hemoglobin A1C. States such as Minnesota, Wisconsin, Iowa and Virginia have the highest percentage, while states such as Indiana, West Virginia and Louisiana have the lowest percentage.

       

      This map from Dartmouth researchers shows the striking variations in the proportion of Medicare beneficiaries getting the recommended yearly A1C testing.

      There are large areas of the country where roughly 1 in 4 patients don’t get their A1C monitored once a year. If left untreated, poor blood glucose control can cause serious complications down the line, like kidney failure or blindness.

      Understanding the variation in appropriate blood glucose level testing has led government health care programs and others to emphasize proper monitoring and comprehensive diabetes care through special measures. Today, insurance companies are graded — and rewarded — by Medicare on their performance with several diabetes measures. One such measure is keeping the A1C level for patients with type 2 diabetes under 8 percent. Low A1C levels mean the average blood sugar isn't too high — which is a good thing to a certain extent.

      Guidelines may improve care, but sometimes there are unintended consequences

      Higher-quality care is everybody’s goal, and driving accountability for better quality makes sense to most people. But what happens when providers focus on following these guidelines to test at least once a year and keep A1C levels low? Is it possible that a well-intentioned guideline could have some unanticipated side effects? Endocrinologist Rozalina McCoy, MD, and her research team at Mayo Clinic (OptumLabs’ co-founding partner) recently investigated this question and found some surprising trends.

      Using OptumLabs data, the Mayo team found1 that patients with controlled type 2 diabetes were being tested much more frequently than 1–2 times per year. More than half of patients were getting their A1C checked 3–4 times per year, and 6 percent of patients were getting 5 or more tests per year.
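
      A minimal sketch of the kind of counting involved is below, assuming a hypothetical extract of HbA1c test claims rather than the actual study code.

      # Minimal sketch: HbA1c tests per patient per year, summarized into
      # testing-frequency bands. File and column names are hypothetical.
      import pandas as pd

      a1c = pd.read_csv("a1c_tests.csv", parse_dates=["test_date"])  # one row per test

      tests_per_year = (a1c.assign(year=a1c["test_date"].dt.year)
                           .groupby(["patient_id", "year"])
                           .size())

      bands = pd.cut(tests_per_year, bins=[0, 2, 4, float("inf")],
                     labels=["1-2 per year", "3-4 per year", "5+ per year"])
      print(bands.value_counts(normalize=True).round(3))  # share of patient-years in each band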

      When doctors test more often, they have more information about a patient’s glucose levels. This information could lead to better care, but there’s also potential it may do more harm than good.

      Illustration of five human figures, one orange and four gray, showing that 1 in 5 patients with controlled type 2 diabetes were likely overtreated with glucose-lowering drugs.

      In a follow-up study, the Mayo team discovered2 that more than 1 in 5 patients with controlled type 2 diabetes were likely being overtreated with glucose-lowering drugs, almost doubling their risk of dangerously low blood sugar episodes (“hypoglycemia”). These are severe episodes that can land patients in an emergency room or hospital.

      Making medical care better is a learning process. When a problem comes to light, like inconsistent A1C testing, the system responds to correct the problem by creating measures and incentives. However, unexpected side effects can emerge like over-testing that leads to over-treatment. What should we do to fix this unintended consequence?

       

      Learning from research to improve quality measures

      Upon reflection, the root of this issue is that the diabetes care measure only rewards lowering A1C levels. In collaboration with our founding consumer advocate partner AARP, OptumLabs established a new research project with Mayo Clinic to create a more nuanced performance measure that takes a “Goldilocks approach” to diabetes management.

      Rather than just shooting for a low A1C, a new measure will seek to reward being “in the sweet spot” of “just right” care, so the A1C should be neither too high nor too low.

      Patient’s A1C level (blood test) | Glucose-lowering drug treatment: unsafe (red) | Glucose-lowering drug treatment: safe (green)
      Too low | Any medication | No medication
      Safe range | Target regimens depend on clinical complexity, number and type of glucose-lowering medications prescribed, and other patient risk factors.
      Too high | ≤ 1 medication | High-intensity medication treatment

       

      The actual measure has multiple subcategories to address the nuanced approach required to tailor treatment for patients in different HbA1c ranges with unique risk profiles.
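
      To illustrate the spirit of such a measure, here is a minimal sketch that flags potential over- and under-treatment. The thresholds and categories are illustrative assumptions, not the actual Mayo Clinic/OptumLabs specification.

      # Minimal sketch in the spirit of the "Goldilocks" measure: flag potential
      # over- or under-treatment from an A1C value and medication count.
      # Thresholds and labels are illustrative assumptions, not the real measure.
      def a1c_treatment_flag(a1c_pct: float, n_glucose_meds: int) -> str:
          if a1c_pct < 6.0 and n_glucose_meds >= 1:
              return "potential overtreatment"       # A1C too low while on any medication
          if a1c_pct > 8.0 and n_glucose_meds <= 1:
              return "potential undertreatment"      # A1C too high on <= 1 medication
          return "review against patient-specific targets"

      print(a1c_treatment_flag(5.6, 2))  # potential overtreatment
      print(a1c_treatment_flag(9.1, 1))  # potential undertreatment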

      Mayo Clinic partners will prototype the measure using millions of de-identified patient records available from OptumLabs, and if successful, work to share this knowledge more widely with nationally recognized health organizations such as the National Quality Forum, which OptumLabs has collaborated with3 in the past.

      This is the blessing and the curse in the age of big data: Doctors can get a lot of helpful information to guide treatment, but the data flow can also be overwhelming. A key challenge for researchers, like those at OptumLabs, is shaping how to use the right signals to improve and not complicate treatment.

      It certainly isn't easy being the pancreas, which continuously evaluates and modifies its testing and output to manage someone's blood sugar. No one can reproduce its workings perfectly. But the overall example it sets — always observing, testing, responding, correcting and having a big impact on the overall system — is one that OptumLabs and our partners strive to emulate.

      About the Authors

      • Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      • Samantha Noderer, MA, is communications & translation manager at OptumLabs


      1. BMJ. HbA1c overtesting and overtreatment among US adults with controlled type 2 diabetes, 2001-13: observational population based study. Published Dec. 8, 2015. Accessed Dec. 5, 2017.
      2. JAMA. Intensive Treatment and Severe Hypoglycemia Among Adults With Type 2 Diabetes. Published July 2016. Accessed Dec. 5, 2017.
      3. National Quality Forum. NQF Launches Measure Incubator. Accessed Dec. 5, 2017.

       

    • Uncovering hidden patterns in dementia that might save lives

      Published Nov 03 2017, 11:01 AM
      • Dementia
      • Health care data
      • Natural Language Processing (NLP)
      • Electronic Health Record (EHR)
      • Alzheimer's disease

      By Darshak Sanghavi, MD, and Samantha Noderer, MA

      Decades ago, doctors had to rely on personal experience and luck to make discoveries.

      For example, around World War II, a Dutch pediatrician noticed that some of his patients, who were malnourished despite being well-fed, suddenly improved when a grain shortage hit their community.1 The doctor unknowingly stumbled on celiac disease. Once he realized that gluten in wheat had caused the illnesses, he was able to effectively treat his patients.

      Today, instead of relying on serendipity to understand mysterious conditions, we have the potential to uncover hidden trends in the massive amounts of data created by electronic medical notes and records.  

      One way OptumLabs® is trying to do this is through a collaborative program with the Global CEO Initiative on Alzheimer’s Disease.2 OptumLabs and several research and expert partners are exploring the roots of Alzheimer’s disease and other dementias.

      But instead of relying on chance, OptumLabs and our partners are using advanced data science techniques to organize vast information for research and visualize new, meaningful patterns in dementia.

      Paper trails to computer processing

      Over the past two decades, electronic health records (EHRs) have transformed health care by recording information digitally. Gone are the often illegible handwritten notes that made it impossible to measure or monitor trends across hundreds or thousands of patients.  Digital records now make better presentation and interpretation of large amounts of data possible.

      While we’re now able to understand what the notes say, we’re still challenged in seeing trends across large populations. That’s why we need a way to take these unstructured, narrative texts in electronic notes, and create an easier-to-analyze spreadsheet. That's where “natural language processing” comes in.

      Natural language processing (NLP) uses a combination of linguistic, statistical and exploratory methods to analyze text via computer programs and organize it for research.3 Here’s an example of how thousands of charts turn into spreadsheets via NLP.

      In the OptumLabs clinical data, NLP-derived phrases are separated into tables based on their content. The Signs, Diseases and Symptoms (SDS) table, for example, captures medical concepts related to the patient and provides details such as the location on the patient’s body, attributes such as severity, whether the finding is confirmed or denied, and the section of the note it came from.

      Here we can see the NLP table filtered to fall risks. Location does not apply here.

      SDS_TERM   SDS_LOCATION   SDS_ATTRIBUTE   SDS_SENTIMENT   NOTE_SECTION
      fall risk  (null)         Altered         (null)          psychosocial
      fall risk  (null)         (null)          negative        psychosocial
      fall risk  (null)         (null)          negative        (null)
      fall risk  (null)         Moderate        (null)          (null)
      fall risk  (null)         (null)          negative        social history

      In short, NLP allows researchers to go from unstructured, narrative notes to a structured spreadsheet of consistent data that we can review in a systematic way.
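
      For intuition only, the sketch below turns a snippet of note text into SDS-style rows with a toy keyword matcher. OptumLabs’ actual NLP pipeline uses far more sophisticated linguistic and statistical models, and the terms and fields here are illustrative.

      # Minimal sketch: a toy keyword matcher that turns unstructured note text into
      # SDS-style rows. Real clinical NLP is far more sophisticated; the terms and
      # fields here are illustrative only.
      import re

      NEGATION = re.compile(r"\b(no|denies|without)\b", re.IGNORECASE)
      TERMS = ["fall risk", "memory loss", "confusion"]

      def extract_sds_rows(note_text, note_section):
          rows = []
          for sentence in re.split(r"[.\n]", note_text):
              for term in TERMS:
                  if term in sentence.lower():
                      sentiment = "negative" if NEGATION.search(sentence) else None
                      rows.append({"SDS_TERM": term,
                                   "SDS_SENTIMENT": sentiment,
                                   "NOTE_SECTION": note_section})
          return rows

      print(extract_sds_rows("Patient denies fall risk. Reports mild confusion.",
                             "psychosocial"))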

      It’s still difficult to make discoveries across a massive table with thousands or millions of rows of data. However, there are helpful techniques that build upon NLP, allowing us to visualize big data and help find patterns.

      Mapping patient journeys through clinical note visualizations

      The visual communication of data is an invaluable learning tool. It can illustrate facts and relationships in context, revealing a higher level of understanding than text alone. OptumLabs is incorporating this approach in a variety of projects underway. 

      We are using a combination of NLP and data visualization to find hidden clues in medical notes showing that people were developing signs of dementia before they were diagnosed. This is important because there may be measures doctors can take to prevent or delay this progressive disease if they have warning.

      To start, we used NLP to “read” clinical notes in our de-identified EHR database from more than 40 U.S. provider practice groups or independent delivery networks that serve more than 50 million people.

      We looked back at the medical history of patients with dementia and the related terms that showed up one to four years earlier in their medical records.

      To visualize the changes over time as patients move closer to their date of diagnosis, we organized the data in an “alluvial flow” diagram. This diagram style gets its name from nature’s alluvial fans, which form as sediment carried by flowing water spreads out and builds up over time.4


      Figure. Alluvial flow diagram of dementia-related signs and symptoms mentioned in clinical notes three years prior to an Alzheimer’s disease or dementia diagnosis. Source: OptumLabs EHR Clinical Notes Data via NLP.

      The black bars at each time stamp are arranged from largest to smallest; their size represents the number of patients with each of these issues mentioned in their chart. The colored flows represent how many patients transition from one state to the next.

      This visualization helps us understand the early signals that would be useful in future projects focused on prediction, prevention, and treatment of Alzheimer’s disease and related dementias.
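
      For readers interested in the mechanics, here is a minimal sketch of the aggregation that feeds an alluvial-style diagram, counting how patients move between documented states across time points. The file and column names are hypothetical, not our actual pipeline.

      # Minimal sketch: counting patient transitions between documented states across
      # time points, the aggregation behind an alluvial-style diagram.
      # File and column names are hypothetical.
      import pandas as pd

      # One row per patient per time point, e.g. years_before_diagnosis in {3, 2, 1}
      states = pd.read_csv("dementia_signs_by_year.csv")

      wide = states.pivot(index="patient_id",
                          columns="years_before_diagnosis",
                          values="dominant_sign")

      # Flows from three years before diagnosis to two years before diagnosis
      flows = (wide.groupby([3, 2]).size()
                   .rename("n_patients")
                   .sort_values(ascending=False))
      print(flows.head())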

      We can use this same technique to explore patient journeys in other disease areas and use many types of data such as administrative claims data, longitudinal survey data, or disease registry information.

      As we move from big data to bigger data in health care, OptumLabs continues to explore data visualization opportunities that uncover important patterns that would otherwise go unnoticed. In turn, these patterns just might lead to discoveries that save lives.

      About the authors:

      • Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      • Samantha Noderer, MA, is communications & translation manager at OptumLabs


      1. Slate. Why Do So Many People Think They Need Gluten-Free Foods? Published Feb. 26, 2013. Accessed Oct. 16, 2017.
      2. Global CEO Initiative on Alzheimer’s Disease
      3. Nature Reviews Genetics. Mining electronic health records: towards better research applications and clinical care. Published May 2, 2012. Accessed Oct. 16, 2017.
      4. National Geographic Society. Alluvial fans. Accessed Oct. 16, 2017.
