Blog

    • Risk in perspective: A rare heart surgery infection explained with data

      Published 12 days ago
      • Heart Disease
      • Health care data
      • Bacterial Infection

      Darshak Sanghavi, MD; Samantha Noderer, MA

      Rondi remembers the day she got the letter in the mail from her hospital in central Massachusetts. It was addressed to her, but was about her teenaged son, Cole, who was born with a heart defect and underwent cardiac surgery a few months earlier. Like Rondi, thousands of other patients and families across the country were opening similar letters from their doctors in the fall of 2015.


We are notifying patients who have had open-heart surgery about a potential infection risk related to this surgery. We are contacting you today, as you or a member of your family have been identified in clinical records as a patient who might be affected...


      Rondi had many questions.

      “It was definitely nerve-racking,” said Rondi. “I was glad to know about the issue but my biggest concern was: How bad is it? Is this letter telling me everything?”

      Rondi was just one of thousands of people who received this letter because of a spike in reported infections connected to a device used in heart surgeries. But should Rondi have been concerned? Was there a real imminent threat to her son’s health? 

In this blog post, we will look at how data can provide important context on health risks, helping us determine when and how to communicate risk to patients.

      A rare bacterial infection sourced from a commonly used device

      The risk of infection was linked to bacteria that contaminated heater-cooler devices used during open chest surgeries. According to tests, contamination may have occurred during manufacturing of the equipment. More than 250,000 heart bypass procedures are performed each year in the U.S. with the help of these heater-cooler devices that regulate body temperatures. This could have been a major public health crisis.

Graphic representation of heater-cooler circuits tested for transmission of Mycobacterium chimaera.

The suspect was a type of bacteria known as nontuberculous mycobacteria (NTM). While most people exposed to these bacteria never develop an infection, a spike in reported infections among patients linked to contaminated heater-cooler devices concerned public health officials at the Centers for Disease Control and Prevention (CDC). They asked providers to inform patients of the infection risk, which resulted in the letter Rondi received.

      “The letter explained that the signs of an infection could take several months or years to show. And the list of potential symptoms was very broad, such as night sweats, muscle aches, weight loss, fatigue and unexplained fever,” said Rondi. “I was most nervous to tell Cole about the letter without more information. It wasn’t until we spoke with our cardiologist and he clearly explained the small risk and how it relates to my son specifically, that we felt more at ease.”

      Based on a few published studies, the CDC estimated that the risk of a patient getting an infection was between about 1 in 100 and 1 in 1,000. This is a large range, which can be stressful for patients when the risks are not put into proper perspective.
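To see why such a wide range is hard to act on, here is a rough back-of-envelope sketch (ours, not the CDC's) that translates the per-patient risk range into expected annual case counts, using the 250,000 annual bypass procedures cited above:

```python
# Back-of-envelope translation of the CDC's per-patient risk range
# into expected annual case counts. The 250,000 figure is the bypass
# procedure count cited above; everything else is simple arithmetic.
procedures_per_year = 250_000
risk_low, risk_high = 1 / 1_000, 1 / 100  # CDC's estimated range

def expected_cases(n_exposed, per_patient_risk):
    """Expected number of infections among n_exposed patients."""
    return n_exposed * per_patient_risk

low = expected_cases(procedures_per_year, risk_low)    # about 250
high = expected_cases(procedures_per_year, risk_high)  # about 2,500
print(f"Expected cases per year: {low:.0f} to {high:.0f}")
```

A tenfold spread between roughly 250 and 2,500 potential cases per year is the difference between a rare complication and a major outbreak, which is exactly why a sharper estimate matters.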

      We asked ourselves whether it was possible to find a more precise estimate of risk. Knowing the precise risk could better inform public health communications and keep families like Rondi’s at ease in the event of future outbreaks.

      To answer this question, OptumLabs queried our data set of commercial and Medicare Advantage claims for more than 127 million people over 20 years.

      Demonstrating real-world risks with real-world data

We explored the risk of mycobacterial infection among a group of patients who had claims for open heart bypass surgery between July 1, 2007 and June 30, 2015 and compared it to the risk of infection among patients who had claims for angioplasty — a non-open heart cardiac procedure that does not involve a heater-cooler device — during the same time period. Both groups of patients had very similar health conditions. Because the only major difference between them was the type of surgery they had, we were able to isolate the impact of the heater-cooler device used in open heart surgery. Infection was defined by ICD-9 diagnosis codes or by treatment with rifabutin, an antibiotic commonly used to fight the infection.

      Looking at patients enrolled in the health plan for four years in a row, the small rate of infection among patients who had bypass surgery with a heater-cooler device was not statistically higher than the rate of infection among patients who had angioplasty without a heater-cooler device. In short, it appears that the actual risks to patients were quite low.

Chart showing that the difference in infection risk between patients with and without bypass surgery is not statistically significant.

      This initial analysis isn't definitive by any means, but shows how health care data can help point us in the right direction to guide patients, and support doctors when communicating the risks of one treatment or procedure over another to patients. 
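For readers who want the mechanics, a comparison like the one above often boils down to a two-proportion test. The sketch below uses made-up counts, not the study's actual numbers, to show how such a test decides whether two infection rates differ more than chance would allow:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test: do event rates x1/n1 and x2/n2
    differ more than sampling variation would explain?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 12 infections in 100,000 bypass patients vs.
# 9 in 100,000 angioplasty patients (illustrative only).
z, p = two_proportion_z(12, 100_000, 9, 100_000)
print(f"z = {z:.2f}, p = {p:.2f}")  # small z, large p: not significant
```

With rare events like these, even an apparent difference in raw counts can fall well within the range of chance, which is the statistical sense in which the bypass group's rate was "not statistically higher."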

      Looking at these results, should a letter have been sent to Rondi? We would argue that the letter should have been written with more precision, informed by data, and ready to answer patients’ second and third questions. Ideally, patient communications should provide relevant information that can help them put complicated risks into perspective. It can reduce confusion and prevent unnecessary worry.

      When it comes to our health or the health of a loved one, it’s often the questions left unanswered that can cause more distress than even the worst news. With the help of data, we can work to get the right answers to the right people to guide important real world decisions and positive outcomes.

       

      About the authors:  

      *Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      *Samantha Noderer, MA, is health communications specialist at OptumLabs

    • Making diabetes care personal with the right data

      Published 43 days ago
      • Diabetes
      • Health care data
      • Quality measures
      • Personalized care

      By Darshak Sanghavi, MD, and Samantha Noderer, MA, OptumLabs

      It isn’t easy being the pancreas.

Nobody understands this better than somebody with diabetes — a chronic disease impacting more than 30 million people in the United States, according to the Centers for Disease Control and Prevention. A healthy pancreas constantly checks the body’s blood glucose (sugar) levels and keeps them from rising too high by releasing tiny bursts of insulin. In people with type 2 diabetes, the body resists that insulin, and in time the pancreas cannot keep up with the demand. When glucose in the blood becomes dangerously high, it can damage vital organs like the kidneys, heart, eyes and brain over many years. That’s a lot of pressure!

When the pancreas struggles, doctors can step in to monitor a patient’s average glucose levels through a blood test called hemoglobin A1C (HbA1c). For most people with controlled type 2 diabetes, it’s recommended they be tested 1–2 times per year.

But it turns out there’s a lot of variation in how often patients are getting their A1C tested. Some are tested more frequently than necessary. Others are not tested enough. OptumLabs® and our partners are leveraging data to gain insights on how this may impact a patient’s health and how to better manage their diabetes.

      Your zip code could impact how often you’re tested for diabetes

      Image of U.S. map showing percentage of Medicare enrollees aged 65-75 with diabetes getting recommended yearly test of hemoglobin A1C. States such as Minnesota, Wisconsin, Iowa and Virginia have the highest percentage, while states such as Indiana, West Virginia and Louisiana have the lowest percentage.

       

      This map from Dartmouth researchers shows the striking variations in the proportion of Medicare beneficiaries getting the recommended yearly A1C testing.

      There are large areas of the country where roughly 1 in 4 patients don’t get their A1C monitored once a year. If left untreated, poor blood glucose control can cause serious complications down the line, like kidney failure or blindness.

      Understanding the variation in appropriate blood glucose level testing has led government health care programs and others to emphasize proper monitoring and comprehensive diabetes care through special measures. Today, insurance companies are graded — and rewarded — by Medicare on their performance with several diabetes measures. One such measure is keeping the A1C level for patients with type 2 diabetes under 8 percent. Low A1C levels mean the average blood sugar isn't too high — which is a good thing to a certain extent.

      Guidelines may improve care, but sometimes there are unintended consequences

Higher quality care is everybody’s goal and driving accountability for better quality makes sense to most people. But what happens when providers focus on following these guidelines to test at least once a year and keep A1C levels low? Is it possible that a well-intentioned guideline could have some unanticipated side effects? Endocrinologist Rozalina McCoy, MD, and her research team at Mayo Clinic (OptumLabs’ co-founding partner) recently investigated this question and found some surprising trends.

      Using OptumLabs data, the Mayo team found1 that patients with controlled type 2 diabetes were being tested much more frequently than 1–2 times per year. More than half of patients were getting their A1C checked 3–4 times per year, and 6 percent of patients were getting 5 or more tests per year.

When doctors test more often, they have more information about a patient’s glucose levels. This information could lead to better care, but there’s also potential this may do more harm than good.

Illustration of five human figures, one orange and the others gray, showing that 1 in 5 patients with controlled type 2 diabetes were likely overtreated with glucose-lowering drugs.

In a follow-up study, the Mayo team discovered2 that more than 1 in 5 patients with controlled type 2 diabetes were likely being overtreated with glucose-lowering drugs, almost doubling their risk of dangerously low blood sugar episodes (“hypoglycemia”). These severe episodes can land patients in an emergency room or hospital.

      Making medical care better is a learning process. When a problem comes to light, like inconsistent A1C testing, the system responds to correct the problem by creating measures and incentives. However, unexpected side effects can emerge like over-testing that leads to over-treatment. What should we do to fix this unintended consequence?

       

      Learning from research to improve quality measures

      Upon reflection, the root of this issue is that the diabetes care measure only rewards lowering A1C levels. In collaboration with our founding consumer advocate partner AARP, OptumLabs established a new research project with Mayo Clinic to create a more nuanced performance measure that takes a “Goldilocks approach” to diabetes management.

      Rather than just shooting for a low A1C, a new measure will seek to reward being “in the sweet spot” of “just right” care, so the A1C should be neither too high nor too low.

Patient’s A1C level (blood test) | Unsafe regimen (red) | Safe regimen (green)

Too low | Any medication | No medication
Safe range | Target regimens depend on clinical complexity, number and type of glucose-lowering medications prescribed, and other patient risk factors.
Too high | ≤ 1 medication | High-intensity medication treatment

       

The actual measure has multiple subcategories to address the nuanced approach required to tailor treatment for patients in different HbA1c ranges with unique risk profiles.
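The core logic of the table above can be sketched as a tiny classifier. The thresholds and exact rules here are illustrative placeholders of our own, not the measure Mayo Clinic is prototyping:

```python
def flag_regimen(a1c, num_meds, high_intensity=False,
                 too_low=6.5, too_high=8.0):
    """Classify a treatment regimen against a 'Goldilocks' A1C target.

    Thresholds and rules are illustrative placeholders; the real
    measure has multiple subcategories and patient risk factors.
    """
    if a1c < too_low:
        # Very low A1C on any glucose-lowering drug suggests overtreatment.
        return "unsafe" if num_meds >= 1 else "safe"
    if a1c > too_high:
        # High A1C with minimal treatment and no high-intensity regimen
        # suggests undertreatment.
        return "unsafe" if num_meds <= 1 and not high_intensity else "safe"
    # Safe range: appropriate regimens depend on clinical complexity.
    return "depends"
```

For example, `flag_regimen(6.0, 2)` flags a patient with a very low A1C on two medications as potentially overtreated, while `flag_regimen(7.2, 1)` lands in the "depends on clinical complexity" middle ground.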

      Mayo Clinic partners will prototype the measure using millions of de-identified patient records available from OptumLabs, and if successful, work to share this knowledge more widely with nationally recognized health organizations such as the National Quality Forum, which OptumLabs has collaborated with3 in the past.

      This is the blessing and the curse in the age of big data: Doctors can get a lot of helpful information to guide treatment, but the data flow can also be overwhelming. A key challenge for researchers, like those at OptumLabs, is shaping how to use the right signals to improve and not complicate treatment.

It certainly isn't easy being the pancreas, which continuously evaluates and modifies its testing and output to manage someone's blood sugar. No one can reproduce its workings perfectly. But the overall example it sets — always observing, testing, responding, correcting and having a big impact on the overall system — is one that OptumLabs and our partners strive to emulate.

      About the Authors

      *Darshak Sanghavi, MD, is chief medical officer at OptumLabs

      *Samantha Noderer, MA, is health communications specialist at OptumLabs


      1. BMJ. HbA1c overtesting and overtreatment among US adults with controlled type 2 diabetes, 2001-13: observational population based study. Published Dec. 8, 2015. Accessed Dec. 5, 2017.
      2. JAMA. Intensive Treatment and Severe Hypoglycemia Among Adults With Type 2 Diabetes. Published July 2016. Accessed Dec. 5, 2017.
      3. National Quality Forum. NQF Launches Measure Incubator. Accessed Dec. 5, 2017.

       

    • Uncovering hidden patterns in dementia that might save lives

      Published Nov 03 2017, 11:01 AM
      • Dementia
      • Health care data
      • Natural Language Processing (NLP)
      • Electronic Health Record (EHR)
      • Alzheimer's disease

      By Darshak Sanghavi, MD and Samantha Noderer, MA

      Decades ago, doctors had to rely on personal experience and luck to make discoveries.

      For example, around World War II, a Dutch pediatrician noticed that some of his patients, who were malnourished despite being well-fed, suddenly improved when a grain shortage hit their community.1 The doctor unknowingly stumbled on celiac disease. Once he realized that gluten in wheat had caused the illnesses, he was able to effectively treat his patients.

      Today, instead of relying on serendipity to understand mysterious conditions, we have the potential to uncover hidden trends in the massive amounts of data created by electronic medical notes and records.  

One way OptumLabs® is trying to do this is through a collaborative program with the Global CEO Initiative on Alzheimer’s Disease.2 OptumLabs and several research and expert partners are exploring the roots of Alzheimer’s disease and other dementias.

      But instead of relying on chance, OptumLabs and our partners are using advanced data science techniques to organize vast information for research and visualize new, meaningful patterns in dementia.

      Paper trails to computer processing

      Over the past two decades, electronic health records (EHRs) have transformed health care by recording information digitally. Gone are the often illegible handwritten notes that made it impossible to measure or monitor trends across hundreds or thousands of patients.  Digital records now make better presentation and interpretation of large amounts of data possible.

      While we’re now able to understand what the notes say, we’re still challenged in seeing trends across large populations. That’s why we need a way to take these unstructured, narrative texts in electronic notes, and create an easier-to-analyze spreadsheet. That's where “natural language processing” comes in.

      Natural language processing (NLP) uses a combination of linguistic, statistical and exploratory methods to analyze text via computer programs and organize it for research.3 Here’s an example of how thousands of charts turn into spreadsheets via NLP.

In the OptumLabs clinical data, NLP-derived phrases are separated into tables based on their content. The Signs, Diseases and Symptoms (SDS) table, for example, captures each medical concept that relates to the patient and provides details such as the location on the patient’s body, attributes such as severity, whether the finding is confirmed or denied, and other notes.

      Here we can see the NLP table filtered to fall risks. Location does not apply here.

SDS_TERM   SDS_LOCATION   SDS_ATTRIBUTE   SDS_SENTIMENT   NOTE_SECTION
fall risk  (null)         Altered         (null)          psychosocial
fall risk  (null)         (null)          negative        psychosocial
fall risk  (null)         (null)          negative        (null)
fall risk  (null)         Moderate        (null)          (null)
fall risk  (null)         (null)          negative        social history

       

      In short, NLP allows researchers to go from illegible handwritten notes to a structured spreadsheet of consistent data that we can review in a systematic way.
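As a sketch of what that structure buys researchers, here is how a few toy rows can be filtered programmatically. The field names come from the table above; the rows themselves (and the real OptumLabs schema) are assumptions for illustration:

```python
# Toy rows mimicking the SDS table shown above. Field names come from
# the table header; the row contents are invented for illustration.
sds_rows = [
    {"SDS_TERM": "fall risk", "SDS_LOCATION": None,
     "SDS_ATTRIBUTE": "Altered", "SDS_SENTIMENT": None,
     "NOTE_SECTION": "psychosocial"},
    {"SDS_TERM": "fall risk", "SDS_LOCATION": None,
     "SDS_ATTRIBUTE": None, "SDS_SENTIMENT": "negative",
     "NOTE_SECTION": "psychosocial"},
    {"SDS_TERM": "night sweats", "SDS_LOCATION": None,
     "SDS_ATTRIBUTE": None, "SDS_SENTIMENT": "confirmed",
     "NOTE_SECTION": "review of systems"},
]

def filter_terms(rows, term):
    """Return only rows whose extracted medical concept matches term."""
    return [row for row in rows if row["SDS_TERM"] == term]

fall_risk_mentions = filter_terms(sds_rows, "fall risk")
print(len(fall_risk_mentions))  # 2
```

The same one-line filter that finds two mentions here scales to millions of rows, which is exactly the systematic review that handwritten notes could never support.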

      It’s still difficult to make discoveries across a massive table with thousands or millions of rows of data. However, there are helpful techniques that build upon NLP, allowing us to visualize big data and help find patterns.

      Mapping patient journeys through clinical note visualizations

      The visual communication of data is an invaluable learning tool. It can illustrate facts and relationships in context, revealing a higher level of understanding than text alone. OptumLabs is incorporating this approach in a variety of projects underway. 

We are using a combination of NLP and data visualization to find hidden clues in medical notes showing that people were developing signs of dementia before they were diagnosed. This is important because doctors may be able to take measures to prevent or delay this progressive disease if they have warning.

      To start, we used NLP to “read” clinical notes in our de-identified EHR database from more than 40 U.S. provider practice groups or independent delivery networks that serve more than 50 million people.

      We looked back at the medical history of patients with dementia and the related terms that showed up one to four years earlier in their medical records.

      To visualize the changes over time as patients move closer to their date of diagnosis, we organized the data in an “alluvial flow” diagram. This diagram style gets its name from nature’s alluvial fans that form as sediments carried from a point of flowing water build up over time.4


      Figure. Alluvial flow diagram of dementia-related signs and symptoms mentioned in clinical notes three years prior to an Alzheimer’s disease or dementia diagnosis. Source: OptumLabs EHR Clinical Notes Data via NLP.

      Arranged from largest to smallest, the size of the black bars at each time stamp represents the number of patients with each of these issues mentioned in their chart. The colored flows represent how many patients transition from one state to the next.

      This visualization helps us understand the early signals that would be useful in future projects focused on prediction, prevention, and treatment of Alzheimer’s disease and related dementias.
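For readers curious about the mechanics, the ribbon widths in an alluvial diagram come from counting transitions between consecutive states. A minimal sketch, with invented patient paths and state names:

```python
from collections import Counter

# Invented per-patient state sequences at successive yearly time
# stamps leading up to diagnosis; the states are illustrative only.
patient_paths = [
    ("memory complaint", "confusion", "dementia"),
    ("memory complaint", "memory complaint", "dementia"),
    ("agitation", "confusion", "dementia"),
]

def transition_counts(paths):
    """Count flows between consecutive states. In an alluvial diagram,
    these counts set the widths of the colored ribbons."""
    flows = Counter()
    for path in paths:
        for src, dst in zip(path, path[1:]):
            flows[(src, dst)] += 1
    return flows

flows = transition_counts(patient_paths)
print(flows[("confusion", "dementia")])  # 2
```

Aggregated over millions of de-identified records, these simple counts are what reveal which early states most often flow toward a later dementia diagnosis.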

      We can use this same technique to explore patient journeys in other disease areas and use many types of data such as administrative claims data, longitudinal survey data, or disease registry information.

      As we move from big data to bigger data in health care, OptumLabs continues to explore data visualization opportunities that uncover important patterns that would otherwise go unnoticed. In turn, these patterns just might lead to discoveries that save lives.

      About the authors:

      *Darshak Sanghavi, MD, is chief medical officer at OptumLabs.

      *Samantha Noderer, MA, is health communications specialist at OptumLabs.


      1. Slate. Why Do So Many People Think They Need Gluten-Free Foods? Published Feb. 26, 2013. Accessed Oct. 16, 2017.
      2. Global CEO Initiative on Alzheimer’s Disease
      3. Nature Reviews Genetics. Mining electronic health records: towards better research applications and clinical care. Published May 2, 2012. Accessed Oct. 16, 2017.
      4. National Geographic Society. Alluvial fans. Accessed Oct. 16, 2017.


    • Creating impact: Partnering to answer big questions with health care data

      Published Nov 03 2017, 11:01 AM
      • Health care data
      • Innovation
      • Collaboration
      • Analysis

      By Paul Bleicher, MD, PhD; William Crown, PhD; and Darshak Sanghavi, MD

      Welcome to the OptumLabs® blog. Here, we will take a regular look at some of the most complicated issues in health care, and explain how we’re addressing them through collaborative research and innovation.

      We'll make it accessible. We'll make it interesting. The goal is to bring you into the world of health care data with engaging but scientifically rigorous stories that make an impact on the health care system and on the health of people.

      But first, what does it take to create “impact” in health care using health care data? Impact is one of the key goals of OptumLabs. We think a lot about how to create impact and how to measure it.

      Sometimes, when you have one of the largest databases in health care, it is tempting to study a problem or do an analysis because the results will be “interesting.” But answering “interesting” questions alone doesn’t create a positive impact on people’s lives. 

      Creating impact by solving some of health care’s biggest problems with complex health data depends on a number of factors. First, you need the data and the sound science to use it. Next, you need the right expertise in health care to conceive and implement solutions once you generate important findings. And, often it takes working together with world-class organizations that have the resources and skills to implement these solutions.

      That’s OptumLabs.

With access to health care’s richest data set of more than 200 million de-identified lives, data and analysis — with a focus on health system improvement — reside at the center of how we solve the biggest problems in health care.

OptumLabs staff and partners represent the world’s experts in health care. With a diverse group of partners ranging from Mayo Clinic, AARP, AMGA and University of California Health to many others in the private and public sectors, we work collaboratively with the help of the OptumLabs Data Warehouse — our industry-leading data asset containing de-identified, linked administrative claims, medical records and patient self-reported health information.

      Applying some of the newest artificial intelligence techniques, we look at long-term trends to better understand the health experience of individuals and its impact on the industry by pairing claims data with unique details on patient and health plan costs, demographics, health behaviors and more.

In this blog, we'll show how that happens. Every couple of weeks, OptumLabs experts and partners will touch on some of the biggest health care problems today: how to track and understand the many dimensions of the opioid crisis, whether Alzheimer's disease and dementias can be detected earlier, how to guard against harming patients with diabetes while trying to control their blood sugars, and much more.

      Join us on our journey to improve health and health care. Follow us on Twitter (@OptumLabs) for new post alerts.

      About the authors:

      *Paul Bleicher, MD, PhD, is the chief executive officer at OptumLabs

      *William Crown, PhD, is the chief scientific officer at OptumLabs

      *Darshak Sanghavi, MD, is the chief medical officer at OptumLabs

