The future of AI in health care
Insights from the OptumLabs Research & Translation Forum.
How can artificial intelligence make health care better?
We are experiencing a deluge of data that can be tapped to inform improvements in health care. Experts predict that artificial intelligence (AI) can support these improvements. What can AI help with today, and where can we expect a big impact in the future?
AI in Focus: In this mini-podcast series, we share two perspectives on where AI can add value in health care — in medicine and health care operations. Two experts deliver a point of view on the challenges, benefits and future of AI in these areas.
AI in medicine
- [Interviewer] Hi everyone, welcome to AI in Focus, a two-part podcast series where industry experts talk about artificial intelligence: its challenges, benefits and future. I'm here with Atul Butte, Director of the Institute for Computational Health Sciences at UC San Francisco, to talk about the challenges and benefits of using AI in a clinical setting. Thanks for being here, Atul.
- [Atul] It's great to be here.
- [Interviewer] So Atul, what do you think are the primary challenges in the practical use of artificial intelligence in the clinical setting?
- [Atul] Everyone's talking about the use of artificial intelligence in medicine right now. Indeed, the Food and Drug Administration, the FDA, has already approved six devices and software tools in just the past 18 months. So, I think we're getting to more practical use. But they have to be very specific uses. So, for example, the approved uses include things like stroke triage in emergency rooms and diagnosing diabetic retinopathy from retinal pictures, so very targeted uses. I think we're gonna see more of those in the next couple years, where physicians, start-up companies, large companies are gonna go after these very targeted, really high-risk, really hard-to-diagnose aspects of medicine and solve them with computers. I think we're getting there.
- [Interviewer] That's great to hear! Where do you see the next big inroads for artificial intelligence over the next 12 to 24 months?
- [Atul] I see a lot of inroads being made from the field of artificial intelligence and machine learning in just the next year or two. Certainly, I think we have much more data, and getting more data and access to data is gonna happen. We have health systems now that are really moving onto standard electronic health record systems, and they wanna do something with that data. As I often say, I think electronic health record data is the most expensive data in America. We're paying physicians to type all this in; we have to do something with that. It's irresponsible if we don't use that data to improve the practice of medicine. So I see more companies trying to partner with some of these health systems, all trying to get into this health care machine learning space, trying to help physicians and health care practitioners with some aspects of their job.
- [Interviewer] You know, I've always wondered, hanging out in Silicon Valley, what are some of the trends happening there that we may not be aware of?
- [Atul] Yeah, so I'm really lucky I get to hang out in, I think, one of the most amazing places in the United States, in Silicon Valley. We certainly have a lot of small and large companies working in the health care space now, and more coming in every week, every month. One aspect of data science and machine learning I don't think people have paid enough attention to is the empowerment of patients with all this data. I think that's one major aspect that we are all missing as payers, as providers, as pharma and device manufacturers. I don't think we're really that used to patients being empowered with their data, but that is actually going to happen. And I think those patients are going to be empowered, not just with direct access to their data, but with interpretations and advice given through AI on that data. So that's something I'm going to be paying more attention to in the future.
- [Interviewer] Thanks for being here, Atul. We really appreciated having you.
- [Atul] Thank you.
- [Interviewer] We invite you to stay in touch on AI topics by visiting our website, optum.com/IQ.
Atul Butte, MD, PhD, distinguished professor and chief data scientist, University of California, shares his perspective on the applied use of AI in medicine, the responsibility to leverage the data, and the growing voice of consumers in health care.
AI in operations
- [Andrea] Hi, everyone. Welcome to AI in Focus, a two-part podcast series where industry experts talk about artificial intelligence, its challenges, benefits, and future. I'm here with Paul Bleicher, CEO of OptumLabs, to talk about the impact of AI on healthcare operations. Thanks for being here, Paul.
- [Paul] My pleasure, Andrea.
- [Andrea] So, Paul, what are some of the lower-hanging fruit where AI or machine learning can be applied to improve operational performance?
- [Paul] I think with the tools that we currently have, we have a lot of opportunity. For many years, we have developed a system of reimbursement that involves coding and the submission of codes, the evaluation of codes, and payment from that, and then, around that, a number of limits on the use of care, such as prior authorization, to make sure someone who receives a medication or who is authorized to get a treatment is medically appropriate for it. These kinds of activities, coding, identification of fraud, prior authorization, all involve, usually, a physician or a nurse who spends a lot of time with a medical chart and, based upon their knowledge and experience, makes a decision. What's interesting is, those decisions have been made for a long, long time. That's a perfect example where you have electronic information with hundreds of thousands of examples that you can use to train an artificial intelligence model to give you essentially equivalent responses to what a trained professional would. But while they may take an hour and a half to read a chart carefully and come up with a conclusion, the artificial intelligence model may read it in seconds. The physician or nurse doesn't have to read and focus on those charts for which there's an obvious decision to be made, and the tool can say, "Well, these things in the middle, that really takes maybe some subtlety that the artificial intelligence doesn't have. This is a chart that you should look at and focus on." So, it doesn't take away physician jobs. What it does is, it makes sure that the physicians are operating at the peak of their expertise rather than on the mundane and straightforward things, and with each decision that's made, the model itself can continue to be trained and can improve over time.
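The routing pattern Paul describes, where confident model scores are decided automatically and only the ambiguous middle band goes to a clinician, can be sketched in a few lines. This is a minimal illustration, not any real Optum system; the score values and thresholds are invented.

```python
def route_chart(approval_score, low=0.2, high=0.8):
    """Return a disposition for one chart, given a model's approval score in [0, 1].

    Scores near 0 or 1 are confident enough to decide automatically;
    the ambiguous middle band is routed to a physician or nurse reviewer.
    """
    if approval_score >= high:
        return "auto-approve"
    if approval_score <= low:
        return "auto-deny"
    return "human review"

# Hypothetical model scores for three charts.
charts = {"A": 0.95, "B": 0.05, "C": 0.55}
dispositions = {chart_id: route_chart(score) for chart_id, score in charts.items()}
# Only chart "C" lands in the reviewer's queue.
```

In practice the thresholds would be tuned so that the auto-decided bands carry an acceptably low error rate, which is what keeps clinicians focused on the subtle cases.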
- [Andrea] So, I noticed you've been talking about artificial intelligence a little more generically there. Are you thinking of any specific type of artificial intelligence that's best suited to these operational cases?
- [Paul] Yes. Well, specifically deep learning, and deep learning is based on a technology that goes back to the early 1960s. The idea was to have something that looked like a neuron, a nerve cell, and if you remember your biology, nerve cells get a lot of stimulation from senses or from other nerves, and then at a certain point, when they've had enough, they reach a threshold and they trigger, and they can trigger other nerves. There's a complex of nerves. So mathematically, it was modeled back in, again, I think, 1963, as something called a perceptron, which took in a lot of information from a variety of factors, then used mathematical weighting of each of those factors, added them up, and decided whether it was triggered or not. And the exciting thing is, with more and more computing power brought about from video games, believe it or not, we now have the ability to do the kind of mathematics that is necessary to not just use one of these, or five of them, which, you know, goes back decades, but to use dozens and dozens of them, and to use hundreds of layers, where each layer gets information and then passes out information to the next layer, and that layer passes out information to the next layer. And in the end, what you do is, if the model gets it wrong, you send back a signal across the model, a mathematical signal that corrects all those weights. If the model gets it right, it says, "Great job, let's keep going," and over time, the model learns; the model adjusts those weights so that it does a better and better job at making those discriminations. The exciting thing about using text is that in order to understand text, very often, it's not just about being able to pick out words, but being able to remember that something happened first, then something happened second, then something happened third, and there are new methods in deep learning that allow you to assemble things in the order in which they happened, or even from first note to second note to third note.
And so, you begin to get a lot more subtlety. So, I think for administrative purposes, the use of deep learning for decision making for administrative processes is potentially very exciting.
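The mechanism Paul describes, weighted inputs summed against a threshold, with weights nudged whenever the unit gets an example wrong, is the classic perceptron. Here is a minimal sketch of a single perceptron; the function names and the OR-gate training data are my own illustration. Modern deep learning stacks many such units into layers, but this is the core update idea.

```python
def predict(weights, bias, inputs):
    # "Trigger" (output 1) when the weighted sum crosses the threshold.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(examples, n_features, epochs=20, lr=0.1):
    """Error-correction learning: nudge each weight whenever the unit is wrong."""
    weights = [0.0] * n_features
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - predict(weights, bias, inputs)
            # "Send back the signal": adjust the weights toward the right answer.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn a simple linearly separable rule: fire when either input is on (OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(data, n_features=2)
```

A single perceptron can only learn linearly separable rules; stacking layers of them, with the error signal propagated back through every layer, is what turns this 1960s idea into deep learning.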
- [Andrea] That's great, I really appreciate that. We invite you to stay in touch on AI topics by visiting our website at optum.com/IQ.
Paul Bleicher, MD, PhD, CEO of OptumLabs®, discusses the impact of AI in health care operations, with specific examples of how deep learning works and can be used to improve administrative processes.
Video: UC Health case study
See how UC Health is working on precisely practicing medicine from 700 trillion points of University of California health data.
- And it's my great pleasure to introduce our opening keynote speaker, Atul Butte, who really needs no introduction. I think this is an encore performance after last year. Atul and I go back many years, and I've grown to admire him, not only as a friend but as a professional colleague. And what he has done, particularly in his role as the Chief Data Officer at UC Health, has really charted that pathway. And I'm really excited about you starting off today, because one of the things that I admire about you is that you have the technical ability to think about how we get all of this information together, but the thing that you always remind us is that the most valuable quantity is asking the right questions. In other words, how can that data speak to us in meaningful ways? So with that, I look forward to your opening remarks. Thank you so much for coming back and joining us.
- Thanks. Let's get right to the point. I don't need to insult you guys by telling you that we're in this big data deluge at this point. Every magazine, every cover has something about big data, data sciences. People say the Economist cover at the bottom right was great, equating data with the new oil. I like to think of data as the new soil, that you plant your seeds of ideas in, right? It's not something where I can grab this and you can't have it. Data's something where I can grab it and you can grab it. And we can actually make different things with data. But certainly data's around, and all that data now is really being forecast to change the world, of course in medicine and health care as well. And we're really forecasting a future driven by artificial intelligence and machine learning and deep learning, and I really wanna start off this morning just level setting everyone on what these terms actually mean, right? Let's make sure we understand what these terms mean: artificial intelligence, machine learning, deep learning. Yeah, a lot of people use them interchangeably, but artificial intelligence now is this concept where we're trying to get computers to mimic various aspects of human intelligence. Like translating things from one language to another, driving cars, of course trying to predict what's going on, what's happening in the future. Machine learning is that aspect of artificial intelligence driven by data. Now you would say, how else would you teach computers without data? Well, in the old days, we'd use rules. We'd have experts say this is the rule and that's the rule and that's the rule, and we'd have rules-based ways of teaching computers. Now we're just dispensing with the experts and going right to the data, right? That's the idea with machine learning, and then in machine learning we have two kinds: supervised and unsupervised. Supervised means that I have a right answer. The pathologist gave me the right answer, the radiologist gave me the right answer.
I want the computer to get as close to that right answer as it can, right? So you've got sensitivity, specificity, accuracy, ROC curves for the engineers. Unsupervised means I have no idea what a right answer is here, right? I'm drowning in data, make some sense of it. Cluster it, show me different diagrams, data visualization. That's unsupervised; there's no accuracy there, there's no right answer. Right, so that's supervised and unsupervised. And then under that is deep learning, which is teaching computers using this kind of weird hierarchy of neurons that kind of mimics human intelligence, how neurons connect to neurons. It's just one aspect of machine learning. Now all of these sound super sophisticated, but I have to remind you, there are literally dummies books on these topics, okay? They are still $19 each. I just checked last night, Amazon Prime. So part of this talk is really to demystify what all this is. These are software toolkits, okay? If you wanna write some software, you bring some of these toolkits in and add them to your software, and then you get these kinds of capabilities. So there's no wizard behind the curtain here. These are software toolkits that you use, a few common ones that people use to get some of these tasks actually done. Now AI in medicine goes way back. This is not a new thing. I picked out a couple papers here from the late '60s, early '70s. The paper on the left there is from Ted Shortliffe, based on a system they had created back at Stanford in the late '60s to help doctors figure out which antimicrobial to use, which antibiotic. Some of you know, if you practice in the hospital, those antibiograms that you get with all the resistance patterns? That was kinda coded in at the time. So software would tell you how to do that. And even back then, on the right there, is a New England Journal of Medicine article titled Medicine and the Computer.
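The supervised-learning scorecard just mentioned, sensitivity, specificity and accuracy against a gold standard such as a pathologist's reads, comes straight from the confusion matrix. A minimal sketch, with made-up labels purely for illustration:

```python
def confusion_counts(truth, predicted):
    """Count true/false positives and negatives against a gold standard."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(truth, predicted):
    tp, tn, fp, fn = confusion_counts(truth, predicted)
    return {
        "sensitivity": tp / (tp + fn),   # of the true cases, how many were caught
        "specificity": tn / (tn + fp),   # of the true non-cases, how many were cleared
        "accuracy": (tp + tn) / len(truth),
    }

# Invented gold-standard labels vs. model calls, 1 = disease present.
truth     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
scores = metrics(truth, predicted)
```

An ROC curve is just these same counts recomputed while sweeping the model's decision threshold, plotting sensitivity against one minus specificity.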
It forecast the future for the field of medicine as early as the 1970s, okay? So we've been talking about this for a long time, and as most hype cycles go, it's up and down, and there was a trough, a desert of AI, for a long time because a lot of things were overhyped, and now it's all back again. It's all back for a number of different reasons, but here are four of them: first, we have incredible hardware, we'll talk about that in a second. We have incredible software, I'll talk about that. More data sets to train with, and the data sets we're really talking about, there are two major data sets in biomedicine that we're blessed with now. Of course the genomics revolution, we're not too far from the Broad Institute, right across the river here, so you've got the genomics side, DNA, but you've got all the medical records as well and all the images, pathology, and a lot of digitization happening in that field. So you've got two streams of big data in medicine, and we still have all these unsolved problems. We need better drugs, we need to forecast what's happening with patients and populations. Hardware: perhaps you've heard of this game Fortnite, or many others, right? This is a gaming world now. And it turns out, Nvidia, the company that makes these video boards for gamers, well, those video boards, those chip sets, are also incredibly useful for deep learning. It's kind of the same kinds of tools, same kind of hardware that you need. In fact, it's gotten to the point where gamers are having a hard time buying these boards because AI folks like us buy thousands of these for data centers, right? So companies like Nvidia and others are just ramping up like crazy. Intel makes these now. IBM has a neuromorphic chip. And many others; even Google's trying to make their own chip.
Apple has their own kind of chip, so a lot of people are trying to put that kind of deep learning methodology, which is sophisticated, it's non-linear, it helps you do hard problems, into hardware like this. So hardware, great hardware, cheap hardware is one thing. Software: we have incredible software tools now. TensorFlow is a toolkit from Google. Spark and Caffe and Jupyter; Jupyter and, I think, Caffe come from Berkeley. Not only do we have a lot of software tools, our data science languages at the bottom there; not only do we have all these tools, every one I'm showing you here is free. You don't have to pay anything, so it's not just tools, they're open source tools. They're freely available tools. That's another level beyond, and you have an incredible developer community now who can get all these tools for paying nothing, all around the world, you know? A kid in Bangladesh can actually code up a TensorFlow classifier now, right? If they have a computer, they can just download this, and actually that kind of sophistication is available for free. And, of course, companies are going crazy with all this, right? So it's been just 18 or 19 months now, and even the FDA has approved, if I'm keeping count, keeping score, six different software toolkits or combination devices that have involved machine learning or deep learning. Six in the past 18 months or so. Here are the first three. The top one is for clinical cloud-based deep learning in health care; that was Arterys. That's for interpreting cardiac imaging, to make sure the right image is there but also to figure out different cardiac parameters from, if I remember correctly, cardiac MRI. The second one, Viz.ai, was for triage, I think it's stroke triage in the emergency room. Who should be prioritized, obviously, to get to care faster? And the bottom one is diabetic retinopathy.
You all know, when you have type 2 diabetes, one of the things we look for is diabetic retinopathy. It's a treatable condition, but it's easier, seemingly, to get the pictures than to get the ophthalmologist to see the pictures, and so now companies like Google have been training their systems. Viz.ai I think was the first, or IDx is the first, to get their software toolkit actually approved. And these are just three of six, and there are probably many, many more in the pipeline now. And it's not just deep learning, but it's actually deep learning on the cloud. You don't even have to have the hardware here. The FDA is saying, "Yeah, you could run this on someone else's computer, it's on the cloud, in someone else's data centers." So clearly this is a growing field now. Just to be clear, it's up from zero, I think, zero just 18 months ago. Now, we haven't been asleep at the wheel at UCSF either, so just to put our press releases out there, to give an idea what a health system does at an actual medical center, how we can do these things. A lot of these happened before I got there. I'll spend a little bit of time talking about the Google one at the bottom. A good example is the one at the top, so Mike Blum, who runs our Center for Digital Health Innovation, had this partnership with GE. So obviously we take chest x-rays, right? Like most academic medical centers, right? If you have a cough or pneumonia or chronic obstructive pulmonary disease, COPD, you can get a chest x-ray, and we're not even talking about diagnosing those, right? We'll talk about two high-risk conditions that are often missed by radiologists. One is what's called pneumothorax, if you have a collapsed lung. Now, if you have a really collapsed lung, it's easy to see. If you have a subtly collapsed lung, it's kind of really hard to tell. High risk because it's treatable; you shouldn't miss that, so that's one. And the second is, if you're really sick and you're in an intensive care unit, right?
You need to have a tube down your throat to help you breathe, an endotracheal tube; you're hooked up to a ventilator. Where the other end of that tube's tip is, is critically important, because if you're using two lungs' worth of air to fill up the lungs, it shouldn't be in one of the lungs or the other, right? That would be a bad thing, so you have to make sure it's above the branch point for the lungs, and where that is, it's also very high risk. You don't want to get that wrong. Now, software engineers at UCSF got deep learning to work with those two parameters, and of course it helps that the radiologists are calling these; you don't have to get new radiologists, it's in the notes, right? So you have images, you have these gold standard radiologists reading these things, you can teach the computer how to do it, and then UCSF used that to enter into a co-development partnership with GE where, if all goes well in the next one or two or three years, those portable x-ray units that you bring to the intensive care unit will automatically read those two conditions, right? The same way, I'm guessing, like how an EKG gives a preliminary read, you know? A lot of people ignore that, but I can imagine now the future portable chest x-ray unit will have that as a preliminary read. That's one of several different collaborations at medical centers all over the place, and there are some good and bad stories, right? We know that from the recent press as well; you have to do this in a very safe, respectful way. These are patients, patients' lives, patients' data. You cannot be cavalier about this, but if you do this in a safe, respectful way, using that data you can bring quality care even closer to the point of care, or to the hands of people who actually wouldn't know how to do this. Intel is working on new chips. Nvidia, of course, I already mentioned. I'll spend a moment talking about our partnership with Google; I think that's the next slide here. Yeah, see, you can't read any of this.
So we had a paper out about six months ago. This is a complicated paper, and this is with Google Brain, the Brain Team, now known as Google AI, and if you even try to ask where health care is in Google, your answer is as good as mine, because it's scattered, it's changing every day. As you know, some of their recent hires... so I really have no particular insight there either, but the Google Brain Team really knows what they're doing in terms of creating some of these software tools, and so this is a partnership between the University of Chicago and UCSF to work with Google to answer three important questions, three questions. The first is, if a patient gets admitted, what are the chances they're gonna die during that admission, okay? That's the first question. The second question is, if the patient's discharged, right? They make it out, most of our patients do well in the hospital, they get discharged: what discharge codes would we use? For example, ICD-9 codes, right? Something that a coder or billing office might do. Just given the stream of data, if you ordered this and you give this drug and you got this lab test result, what would we say the patient has at the end, right? Without asking the doctor. And then third is, if the patient gets discharged, what is the likelihood they're gonna come back unexpectedly? Our 30-day readmission problem, which we know is a marker for quality of care. So all three of those. Now, the paper shows, the work shows, that deep learning can actually get you to reasonable accuracy. That doesn't mean you can do something about it, right? A good doctor argues with me all the time, saying that a good doctor can tell if a patient's going to die in the hospital, too. I mean, they know; obviously they're sick patients that get admitted to the hospital. Just because they're admitted doesn't mean you can prevent that death, right? They're usually pretty sick patients.
The future then is in the trials, the clinical trials of this as an intervention, right? Just because you know a prediction doesn't mean you can do anything about it, today. But it shows, and the graphic on the right there, which you can barely see, is one particular patient who was predicted to die during the admission; she indeed died within 10 days, and you can see in red some of the code words: metastatic breast cancer, she's on antifungals, she's on a drug to help her immune system kick out some more white blood cells. These are pretty bad things to have in your medical record; you're probably pretty sick. A good doctor could probably tell that too in this particular case, but now we can do this with Google. And just to be very clear here, this was an effort with de-identified data. So from our side, from UCSF, it's just the structured data elements that we get out of Epic: no text, no images and no identifiers. So this is the HIPAA 18 categories of identifiers, shifting the dates, all of that, so we don't have to give identifiers to do any of this stuff; this is just research here. Alright, so now let's think even broader here. I'm really pleased now to be this new Chief Data Scientist at the University of California Health System, and just to introduce you, or reintroduce you, to the University of California. Look, I was at Stanford for 10 years, I never knew any of this, and I was just right down the road. I'm particularly proud of this. So the University of California, it's enormous, right? We have 10 campuses and three national labs, three supercomputer centers, right? Lawrence Berkeley, Lawrence Livermore and the San Diego Supercomputer Center. We have 200,000 employees, which actually makes us one of the larger employers in the United States, and a quarter million students a year, and then we have six medical schools, right? That's UCSF and UCLA, which are both top 10, Irvine, Davis, San Diego and Riverside. Riverside is brand new; they're tiny.
They have a couple thousand patients. The other five each have NCI comprehensive cancer centers, and all five have their own Clinical and Translational Science Award, so you can arguably say they're the best of the best. Our clinical revenue, if you just add it all together, right, that's one way to count, is to add them: it's about $11.4 billion per year. We have 5,000 faculty on staff, but if you just look at the records, we have about 100,000 physicians taking care of our patients. 100,000 physicians that ordered something in Epic, across the campuses here. We're now partnered, right, with UnitedHealth Group, which is why I keep showing up to this meeting again and again, because this is public, right? As of two years ago. Aspirationally, we're going to make a single accountable care organization for the entirety of the University of California, right? So even the press release said 10-year strategic partnership. Five to 10 years, aspirationally, that we will call this thing UC Care, or the University of California Health System, and within five to 10 years, in partnership with UnitedHealth Group, we're gonna learn how to take on risk and how to manage populations. But it's amazing: the minute you have a business reason like this, it becomes so much easier to share data, right? Because there is this national narrative that it is so hard to get electronic health record systems to talk to each other, right? Everyone's heard this narrative. It's so much easier when you have a business reason to do it, right? Because everyone thinks we want competitors to share data. Of course we're not gonna share data, right? We'll say it's a technical thing, but it's not a technological thing, right? We don't want to share with our next-door neighbor who's competing for the same patients, right?
It's not a technological thing, but here we have this amazing constellation of six medical schools, five academic medical centers, that all want to work together for this goal here, and I can't overstate that enough: at the strategic level, once you have a business reason, it is much easier to want to share the data. I wouldn't be an IT guy if I didn't show boxes pointing to boxes, so here's the one slide of boxes pointing to boxes. These are the six silos going to another silo. We shouldn't use silos here, but you can see it's UCSF, UCLA, Irvine, Davis, San Diego and Riverside, all into one UC Health data warehouse at the top there. And just to be clear, we're all on Epic. We weren't always on Epic. Irvine was the last to move to Epic; I think it was about a year, year and a half ago now. And to make this even more complicated, San Diego, Irvine and Riverside are all on one instance of Epic, and UCLA, UCSF and UC Davis are each on their own instance of Epic, so we have a merged model and an individual model, we've got it all. Four instances across the six different hospital systems here. Alright, so if you add it all up, it's kind of an incredible view of the entire medical system. The number we love to tout around is 15 million patients. The University of California has seen 15 million patients in the past 15 or so years. That is, 5% of the US population has received some care in the University of California, so it's an astounding number. Now I admit, many of them just got a flu shot with us, right? There is that, and it's a long tail, but some of the most complex care is also in our system. So as I like to say, we have everything in Epic from Tylenol to CAR T cells now, right? There are people who have ordered CAR T cells in Epic now. We have those records as well. Now that's 15 million; if you just look at the modern era, when we put in Epic, at least at UCSF and UCLA, around 2012 or so, it's about 5.2 million patients.
Still a respectable number, and to be clear here, I'm only counting the main hospitals and the main clinics on the main campuses. So for example in San Francisco, if you know our town, we also have San Francisco General Hospital, we have Oakland Children's Hospital, and those are part of UCSF now; I'm not even counting those yet. Or Rady Children's in San Diego, or any of the equivalent affiliates. I'm just talking the main hospitals, main campuses, and even that's more than 5 million patients now. You can see the numbers: 100 million encounters, half a billion vital signs, half a billion blood test results, half a billion diagnosis codes. And this changes now; we have a monthly refresh, and we're moving to a weekly refresh as well. 100 million encounters sounds unbelievably impressive and scary, but note that every phone call is documented in Epic, right? Every time anyone calls any one of our doctors or nurses about any of our patients, every phone call is documented. Imagine training computers on just the phone call records. I mean, you can see what you can do. Think about what you can do with all this data. Now we also have claims data, which we'll talk about in a moment; we have claims data on our self-funded plans, so I'll explain what those are. We're harmonizing all these elements, because we've been harmonizing for more than seven years: you call it this, we call it that, we've got all those codes working. Normalizing to something called the Unified Medical Language System, UMLS, so that we've got these dashboards, which I'll show you. This is proof that it works. This is about 4.3 million UC patients in the California area here, so the green is UCSF, the blue is UC Davis, yellow is UCLA, and I think that color's teal? I would just call it light blue, I guess; that's UC Irvine, and orange is UC San Diego. And Riverside, which you can barely see, is in red, just a little bit north and east of San Diego. And then there's a big splotch in Nevada; that's Las Vegas, right?
Of course our patients are in Las Vegas. If you're sick in this part of the world, you come to UC, right? It's not just a California patient thing. The Hawaiian Islands are covered with our colors here. Race, ethnicity, demographics: we can of course do all of that stuff. Alright, so then it gets really interesting, and these are some of the new slides here. So look, we love all of our 15 million patients and the 5 million we've seen recently. But there's about 100,000 patients or so that I would argue we love even more than any others. And those are our own employees, right? So we're a rare kind of beast in the United States in that we take care of patients, but we can also have our own employees sign up with us for care, right? This is open enrollment now, and about a third of our employees sign up with us for health care. It's obvious to this audience, but Apple can't do that, Google can't do that. You can't just have employees come to you for health care if you don't deliver health care, but we take care of patients, and we also are a self-funded plan. This is what the benefits booklet looks like. We have a PPO and an HMO. Blue and Gold is the HMO and UC Care is the PPO, and you can sign up for these, and they keep getting changed every year, and Optum, I think, is one of the providers for one of the medications for one of these plans. What we can do now is look at this set first, and in fact, let me restate the obvious here, right? Whenever you're trying to change practice, look, if I use all this data and point out to a health system's CEO, this is really bad care, why are we using this medication? It looks like there's an excess spend of, let's say, a million dollars here, because we're using this one medication. If I fix that, that actually impacts revenue, right? To be crystal clear, and some of you are nodding your heads, so you know what I mean here, right?
Yes, it's the good thing, it's the right thing, it fixes the American health care system, but it also hurts that person's revenue. So that's hard to take sometimes when you're trying to change things. But here, for these 100,000 or so covered lives, health care is a cost center for us. As a health enterprise, it's our cost center, right? So in other words, if one of our doctors sees one of our employees and generates, let's say, inefficient care, generates a bill? We promptly pay that bill to ourselves, right? That's basically equivalent to burning money on the front lawn. So this we can actually address first and try to fix. And here, to me, it seems like the health system CEOs got it: all of a sudden we ourselves have a problem with escalating health care costs for our own employees, even though we take care of patients. It's a kind of mind shift, but it's one that they successfully made, and so we've started to focus just on our own self-funded plans to see if we could be better as a payer, as a self-funded plan here. So here's an example with metformin; this you could really argue is true waste in the system, and it's hard to find waste this clear-cut. Branded metformin instead of generic metformin, right? Metformin is very commonly used, obviously it's been generic for a long time, it's a very old molecule, and now we can go point by point, formulation by formulation. We see a doctor using this; this is the actual generic they should have used. Each category there is an exact formulation, Glumetza 500 mg XL for example, and we literally see each claim. How many dollars? How many doctors used this? How many patients were involved? And we have the phone numbers now; we can make the calls to actually switch these folks over. Now, again, if a prescription's already written, I know the patients love seeing their color pill.
They're not gonna want to change the color of the pill and all of that, but we have had some success here, at least in switching patients from the brand name to the generic. And we're doing this with fluoxetine, metformin and one other psychiatric drug, I'm blanking on it now. So that's one way to fix it: call people and change the prescription, right? The other is just to change it in Epic. So now, if you're at UCLA and you want to order a brand name metformin, there are two additional approval screens, and you can see the big blue line at the top there has just plummeted. Actually it's the yellow line; it starts to plummet right as of January, so this is just about seven months of data. The minute you start to look at these things, and we have that insight now, we can actually start to make these changes. So you can't just write a prescription for brand name metformin anymore at UCLA unless you go through these screens, and of course that little nudge just brings that down. And so we're piloting a lot of these with our own system here. But of course, while the intent is to fix it for our own self-funded plans, oh yeah, by the way, we're fixing it for everyone here, right? This is not just for our own employees. We're gonna put this in, we're gonna fix this for everyone. So this would be the clear-cut lowest hanging fruit. This is such low hanging fruit, it's probably spoiled fruit at this point, right? We're always going metformin to metformin; we're not even getting crazier than that here. So those are the easy ones. We're also looking at quality of care. But by the way, just for the tech aficionados for a second here, I missed a couple of points a couple of slides back. The central database is in a format called OMOP, O-M-O-P, the Observational Medical Outcomes Partnership. It's an old concept from Columbia University with the FDA for the Sentinel projects. What's old is new again. I'm taking OMOP because it's not Epic, okay?
We're not paying anything more to Epic for this central data warehouse. And so each campus generates its own OMOP feeds. It all gets agglomerated and harmonized and concatenated in the center. So it's nice, because each campus then has OMOP experience; local docs, local researchers can get their hands dirty with their own local data, and then when they're ready to scale, literally the same query, the same SQL query that runs on one campus, runs in the central database as well, once they're authorized and ready to scale. So everything you're seeing here is actually written with OMOP as the central backend. This is quality. So we're in California, we're a Medicaid waiver state, right? Dollars go to the state, the state partitions them out for us, but we have to report quality measures to the state. We had a quality measure system called PRIME and now we're moving to a new one called QIP; these are measures, about two dozen or so. And this, for example, is UC Irvine; literally this is what the health system management can get. For example, second from the top there is a blue bar, that's colorectal cancer screening, and the state says we should be at 67 point whatever percent and we're at 68%, we're slightly above that, so that bar is in blue, and then each orange bar means we're not meeting the bar. So for example, as of this time, as of June 5th I think, when I dumped out this slide, controlling blood pressure: we don't have enough patients with blood pressure controlled, as the state was hoping we'd get to, so that's in orange there. And at the bottom there you can see the potential incentive, that's a third of a million dollars, right? So each orange bar there is a third of a million, third of a million, third of a million, so this immediately pays for itself.
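The "same SQL query runs on one campus and in the central database" property comes from every site exposing identical OMOP Common Data Model tables. A minimal sketch of that idea, using two in-memory SQLite databases to stand in for a campus and the central warehouse; the table shapes follow the OMOP CDM (`drug_exposure`, `condition_occurrence`), but the concept IDs and patient rows are invented for illustration:

```python
import sqlite3

# One query string, written once against OMOP CDM table names.
# Because every site exposes the same schema, it runs unchanged
# against a local campus database or the central warehouse.
COHORT_SQL = """
SELECT COUNT(DISTINCT de.person_id)
FROM drug_exposure de
JOIN condition_occurrence co ON co.person_id = de.person_id
WHERE de.drug_concept_id = ?      -- e.g. a metformin concept
  AND co.condition_concept_id = ? -- e.g. a type 2 diabetes concept
"""

def make_omop_db(drug_rows, condition_rows):
    """Build an in-memory database with two minimal OMOP-shaped tables."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE drug_exposure (person_id INT, drug_concept_id INT)")
    db.execute("CREATE TABLE condition_occurrence (person_id INT, condition_concept_id INT)")
    db.executemany("INSERT INTO drug_exposure VALUES (?, ?)", drug_rows)
    db.executemany("INSERT INTO condition_occurrence VALUES (?, ?)", condition_rows)
    return db

METFORMIN, T2D = 1503297, 201826  # illustrative concept IDs

# A "campus" database and a larger "central" database: different
# patients, identical schema, so the identical query runs on both.
campus = make_omop_db([(1, METFORMIN), (2, METFORMIN)], [(1, T2D)])
central = make_omop_db(
    [(1, METFORMIN), (2, METFORMIN), (3, METFORMIN)],
    [(1, T2D), (3, T2D)],
)

campus_n = campus.execute(COHORT_SQL, (METFORMIN, T2D)).fetchone()[0]
central_n = central.execute(COHORT_SQL, (METFORMIN, T2D)).fetchone()[0]
```

The real system federates far richer tables, but the design choice is the same: standardize the schema once, and cohort logic scales from one campus to all of them without rewriting.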
Now a lot of people are looking at the high level dashboard, and you can do this with Vizient, you can do this with a lot of tools, but if you click on that, you'll literally get an output like this, and now it's massively blurred, right, so you can't really see anything, but these are patients coming to UC Irvine tomorrow. Like the list there is mammography, 9:00 AM, 9:10, 9:15, 9:20. These are the patients that are missing some elements, and literally, time slot by time slot, you can send someone to the clinic tomorrow and document in Epic that you've asked them if they're on opioids, if there's an abuse problem there, counseled them to stop smoking. You can see four elements missing, three elements, two elements missing. A lot of people get the dashboards, but because we have scheduling data, we got it right down to who's coming in tomorrow, who needs to get this fixed right now. So we've got both in the same system now, scheduling to the dashboards; that's the nice part here. We've got other kinds of uses of this data. These are all new uses since last year. At the University of California we have five NCI-designated cancer centers. We're now acting as one, where we can, and we have system problems there too, and challenges, but what we realized is that the future is in getting some of these newer clinical trials, immunotherapies, CAR T cells and such, to patients. We are stronger if we act together as the University of California, and so we announced this just about a year ago, with an AstraZeneca trial where the entire University of California negotiated for the trial. It wasn't principal investigator by principal investigator; the entire UC negotiated for that trial and then got the whole thing to work. There's a graphic we love to show: 141,000 cancer patients we saw, this was in 2016. Our best guess is that's probably triple MD Anderson.
There are probably folks from MD Anderson here, and we don't know their actual numbers, but we're stronger and larger if we work together. That's why I love this construct. I'm not aware of five academic medical centers anywhere else that actually work together like we do in UC. Indeed, in this one town, there's one academic medical center that doesn't even work with itself. To get five... I spent 10 years training here, so I know. Even within one health system, the partners will go out and compete for patients, right? We don't compete for patients, in general, because we're geographically distinct; that really makes this work. Now there's this one avenue in Orange County where UC Irvine and UCLA are abutting, and I know there are some border skirmishes there, but besides that, in general, we all cooperate and want to actually share. This is an example of what happens when you put academic medical centers together. I'm predicting many, many more of these in the next five to 10 years. Not strictly mergers and acquisitions, but these kinds of partnerships, obviously to get scale. Alright, so I showed these donuts last year; I've gotta show them again this year and show you what we're doing next with them. Now, that was the easy, low hanging fruit stuff. Let's talk about more complicated stuff, things like type 2 diabetes. Before I show you that, let's go back for a second. What I'm gonna show you now is the research side, where we're going next. Obviously type 2 diabetes is a really major problem for the United States, for the world, and for us in the University of California, not just our patients but our own employees. Our own costs for type 2 diabetes care for our own employees are skyrocketing, so it's all hands on deck for type 2 diabetes. Now, I already showed you switching brand name to generic metformin; those are the easy ones. What about harder ones? DPP-4 inhibitors? GLP-1s?
Some of these are expensive drugs, and in general it's not really crystal clear why and when a diabetes doctor or a primary care doctor should use one drug over another. And just to be kind of obvious, and almost insulting for a moment here: when a drug is approved in the United States, a lot of these trials are non-inferiority studies, right? A drug gets approved by saying, at least we're not worse than the others. Very few want to pay all the money to prove they're better than all the others. So they're non-inferiority studies. The drug is approved, people start to use it, and then more studies come out. Wink, wink, nudge, nudge, it's better for the eye, better for the kidney, right? With not-so-clear data on some of those elements. So it's in our interest to actually study what our doctors actually do for type 2 diabetes. So like I said before, we call these diabetes donuts. We used to call them diabetes donuts, then realized that's inappropriate for diabetes, so now we call them lifesavers, right? And it's like a pie chart, and pie also is inappropriate for diabetes, but think of that ring as a pie chart where this is the first medication we can find ourselves starting type 2 diabetes patients on. 12,007 patients are in this ring here. A third of them are on metformin, that's good. That's what you're supposed to start patients on. That's yellow. The red at the bottom means we're starting patients on insulin; that's interesting, I guess. These are type 2 diabetes patients, we've confirmed that, and I guess some doctors do that. The grays are self-monitors, and you've got all sorts of other combinations that patients are started on. We literally have one doc at UCSF who said, oh, I like to start them on everything and then peel it back as they go. Nowhere in any guideline does it say to do that. And then the black bar at the top is so many combinations of drugs and things that I can't even show you all the slices here.
Alright, so that's the first choice of drugs for type 2 diabetes. Now, the more I see data like this, the more I work on these problems, the more I realize that medicine is like a game. Practicing medicine is like a game. I don't mean to belittle having a disease; it's rotten, it's miserable having a disease, but it's amazing how much our practice of medicine is like playing a game, meaning that we make a move and then we wait to see what the patient's disease does, and then we make another move, right? It's very turn-based like this. We go on morning rounds. We write orders, we wait to see what happens that day. We write more orders the next day. Or we write orders in clinic, the patient comes back in 90 days, we write more orders. It's almost like a move-based game, if you think about it, and computers are good at learning how to play games. So this is the first move we've made. The patient goes home, comes back in 90 days, and here's the next move. So everyone in yellow that doesn't have a second ring there, they were happy on that dose of metformin; we've seen them back and we've never had to change that dose. Yellow to yellow means we've changed the dose of metformin; they're still on metformin. And any color switching to any other color means we added a drug, subtracted a drug, swapped a drug. They go home, they come back in 90 days, here's the third move. They go home, they come back in 90 days, here's the fourth move. So you are four moves out in this game, and then you realize, you see at the top, we have 1,600 different ways of doing this at UCSF. Right? Probably too many. Unnecessary practice variation. Can we get this down to 1,000? Maybe 500? Maybe one? I don't know about one, maybe 10. But clearly there's probably a best practice in here somewhere. And you know some of those purple circles are DPP-4 inhibitors, and the rest are super expensive trajectories here. Tom Peterson did all this work.
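The "moves" framing lends itself to a simple computation: treat each patient's record as a sequence of 90-day regimens and count the distinct sequences, which is where numbers like "1,600 different ways at UCSF" come from. A minimal sketch with invented patient records (the real analysis ran over the clinical data warehouse, not toy dictionaries):

```python
from collections import Counter

# Each patient is a sequence of "moves": the regimen observed at each
# 90-day visit. These four records are made up for illustration.
patients = {
    "p1": ["metformin", "metformin", "metformin+insulin"],
    "p2": ["metformin", "metformin", "metformin+insulin"],
    "p3": ["insulin", "insulin"],
    "p4": ["metformin", "DPP-4"],
}

# Counting distinct sequences measures practice variation: the number
# of distinct trajectories, and how many patients followed each one.
trajectories = Counter(tuple(moves) for moves in patients.values())
n_distinct = len(trajectories)
most_common_path, n_patients = trajectories.most_common(1)[0]
```

Run at scale, the same counting exposes both the long tail of one-off trajectories (candidates for unnecessary variation) and the common paths whose outcomes can then be compared.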
Oh, and by the way, because we got this to work at UCSF, boom, we can scale it across the entire University of California in just two days. On the right now you see 71,000 patients with type 2 diabetes; this is now five centers of patients, immediately a multi-center study. But then at the bottom, we have 6,500 different ways of doing this at UC, probably too many here. So this game aspect is kind of interesting to me, and I'm thinking about what to do here. This is new data, and this is where we're going to be, I'm guessing, I'm predicting, super annoying to pharmaceutical companies of the future. We have more than five years of follow-up data now on our patients, and we have 71,000 patients with type 2 diabetes, and now we can really ask: in our hands, do we see a difference in major adverse cardiovascular events, MACE? Eye problems? Do we see a difference in kidney health? BMI changes? We're not seeing too many here, okay? And I'm not gonna go through these graphics, but this is a paper that's in review right now. Again, Tom Peterson did this. But what would it look like in the future if you saw something coming from the University of California, let's say in three years, an annual report? This is a UC report. This is our experience with every single drug, right? This year. And next year, here's the new report. Look, we're not saying anything; this is just what we've seen, these are the benefits we see from these drugs. Comparative effectiveness, looking at costs, right? I love PCORI, but I can't wait for them anymore. And PCORI can't even look at the costs of the drugs, by law. We don't need a grant to do this, because we've learned it's in our self-interest to do it now. We're going to do this, and we don't need a grant or funding to do it. What amazes me is: why isn't every health system doing this?
Why is it, and it's still stunning to me, that after a drug is approved in the United States, in general nobody bothers to see if it works in their patients? That's stunning to me, that we actually don't do this. When you buy a car, you know what the features are, but don't you still run the windshield wipers and such to see if they work? Yet in general, after things are approved, we buy them and spend like crazy and don't actually see if they work. Now of course this is type 2 diabetes; we're gonna do this for everything, right? We see everything at the University of California; there's nothing we can miss here. And then we've gotta start to predict the future, right? We're gonna play these chess games out to figure out what the right way to play chess is here. Do you make this move with metformin and then add insulin? Comparative effectiveness across strategies, not just drugs. Of all these 6,000 or so natural experiments going on at UC, which ones were the right ways to go? Part of that is figuring out, if you make a move, can you predict where the game goes next? This is kind of cool: 90 days ahead of time, just from the drugs patients are on and their socioeconomic status, we can predict what the A1C is gonna be in 90 days. That's what this graph is, predicted versus observed. Deep learning is good enough that we can tell where the A1C is gonna be in 90 days; this is a paper that's coming out. BMI as well. And we can start to make these kinds of decision choices about metformin. Do we want to put patients on metformin or not? You remember the yellow bars? Half did okay, half did not. A simple decision tree works here. If your hemoglobin A1C was ever more than 8.8, or your fasting glucose over 206? You're in the red squares there. Don't even bother starting the metformin. I know what the American Diabetes Association says, but in our experience with 71,000 patients with type 2 diabetes, it's not gonna work, right?
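The rule quoted here, A1C ever above 8.8 or fasting glucose above 206, is just a depth-one decision tree, and it can be stated directly as code. A sketch using the thresholds from the talk; the function name and the example values are illustrative, and this is obviously not clinical advice:

```python
# Thresholds as stated in the talk's learned decision tree.
A1C_CUTOFF = 8.8        # hemoglobin A1C, percent
GLUCOSE_CUTOFF = 206    # fasting glucose, mg/dL

def metformin_likely_to_work(max_a1c, max_fasting_glucose):
    """Apply the data-driven rule: if either historical maximum exceeds
    its cutoff, the patient lands in the 'red squares' where metformin
    monotherapy was observed not to control the disease."""
    return max_a1c <= A1C_CUTOFF and max_fasting_glucose <= GLUCOSE_CUTOFF

# Illustrative patients (invented values):
ok = metformin_likely_to_work(7.2, 110)       # below both cutoffs
high_a1c = metformin_likely_to_work(9.1, 110) # A1C over 8.8
high_glu = metformin_likely_to_work(7.0, 250) # glucose over 206
```

The point of "data-driven guidelines" is that a rule this simple was learned from 71,000 patients' outcomes rather than written by a committee; in practice it would be fit with a tree learner and validated, not hand-coded.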
Data-driven guidelines, not expert-driven guidelines. We're gonna have our own nomograms of the future here to really tell where to move patients next. Let me end with, well, I'm at zero time, so I'm gonna spend the last minute on where we're going, my predictions. I love to end with this. In the end I'm trying to build these maps. If you think about where you are and where you want to go next, in some ways that's a map. My job now is to build these maps of death and disease in California. How many of you use Google Maps or Waze or maybe Apple Maps? Nobody uses Apple Maps; my car makes me use Apple Maps. You know, maps take you to pleasant destinations. I'm making a map of how you get diseases and die in California, the opposite of Google Maps. So Hannah and Jay have done this work. You can see patients showing up on the top left with alcoholism, each arrow is a year, and then they zip to the right there and get cirrhosis, and they zip to the left and get a liver abscess, and you can die. The squares mean you die of these diseases. You don't die of alcoholism; you die of cirrhosis. Maps get more complicated. These are maps learned from our data: you show up with a heart attack on the left. You get heart failure within a year at the top. You get lung diseases because fluid backs up, that's the orange square, and on the right we see patients dying of sepsis, you know? That's kind of interesting. I'm a pediatrician; I admit I didn't take care of many patients with heart attacks, but I always thought it was the heart that killed you in the end. Indeed, many of our patients in California die of sepsis in the end. That's because if you don't take the northern route with heart failure, you can take the southern route and kill your kidneys and end up in sepsis a year later. So the idea is to figure out where our patients are, learn from the data, and figure out what's gonna happen next. But you know, it's nice and pretty to make these maps.
The point isn't just to make the maps. The point is actually to show where our patients are on the map. This is a real prototype with real California data. This is literally, as patients age, how Californians move from disease to disease to disease to death. The colors are getting brighter, the ages are going up, a whole bunch of them are gonna get sepsis there and hit that purple circle and die. There they go. It's okay, everyone chuckles here. And now we can start to predict, in real time, what's gonna happen in the next 90 days, what's gonna happen in the next year, and what we're gonna do about it. And that to me is gonna be the new definition of an accountable care organization: one that knows how to account for the care of each one of its 15 million patients. And that is what I'm really proud to be building at the University of California. Thank you very much.
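Structurally, the disease map described above is a directed graph whose edges carry observed one-year transition probabilities, with "square" nodes as causes of death; predicting what happens next amounts to looking up or sampling transitions. A toy sketch, loosely following the heart-attack example; the graph and probabilities below are invented, not the real California data:

```python
import random

# Nodes are diagnoses; edges are (next_state, one-year probability)
# pairs. "death" plays the role of the squares on the map.
# All numbers here are made up for illustration.
MAP = {
    "heart attack":   [("heart failure", 0.6), ("kidney failure", 0.4)],
    "heart failure":  [("sepsis", 0.6), ("death", 0.4)],
    "kidney failure": [("sepsis", 0.7), ("death", 0.3)],
    "sepsis":         [("death", 1.0)],
}

def most_likely_next(state):
    """Predict the most probable diagnosis a year from now."""
    return max(MAP[state], key=lambda edge: edge[1])[0]

def walk(start, rng):
    """Simulate one patient's route through the map until death,
    sampling each year's transition by its probability."""
    path, state = [start], start
    while state != "death":
        nodes, probs = zip(*MAP[state])
        state = rng.choices(nodes, weights=probs)[0]
        path.append(state)
    return path
```

Learned at scale, a graph like this supports both views in the talk: the static map (which routes exist, and where they end) and the animation of patients flowing along it, which is what makes 90-day prediction per patient possible.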
Will AI replace doctors?
Listen to Atul Butte, MD, PhD, distinguished professor and chief data scientist, University of California; and Isaac Kohane, MD, PhD, professor of biomedical informatics, Harvard Medical School, discuss AI in health care – moderated by Paul Bleicher, MD, PhD, CEO of OptumLabs.
- I think you know Atul Butte; he spoke earlier today. What you need to know is that he was trained by Zach Kohane. And Zach... I suggest you look at Zach's bio in the app, I'm not gonna go through it in detail, but Zach is the head of the Department of Biomedical Informatics at Harvard Medical School, he's the librarian of the Countway Library, he's a pediatric endocrinologist, and he has done a ton of the really cutting edge stuff in biomedical informatics and genomics. So with that... I wanted to speak about artificial intelligence. Actually, we're gonna have a panel about artificial intelligence. Hopefully it's gonna be give and take, it's gonna be a bit provocative; we're gonna get these guys talking to one another, and I'll butt in from time to time. But the first thing I want to ask about is: what is artificial intelligence? Because really this needs to be a very sober, very conservative, very thoughtful panel. So I want to set the stage with the first slide. This is what everyone thinks about when they think about artificial intelligence, which is it taking over the world, I think. I'll go around the clock; does everybody know what the top left hand corner is?
- I'll be back.
- Terminator right? How about the one to the right?
- War Games.
- War Games, where the computers--
- Play a game.
- Decide to play a game which includes thermonuclear war. The one just below that?
- iRobot.
- iRobot, the one before that?
- HAL.
- That's HAL reading lips in 2001: A Space Odyssey. The one... keep on going around the clock.
- Matrix.
- And the next one is the hardest one of all; anybody recognize that? Some film buffs in the audience? It's actually a really interesting one called Colossus: The Forbin Project. It was a long time ago, and the premise was that we invented a computer. And that computer was able to play war games, basically nuclear war games, and it was actually the basis of War Games. And it found out, immediately upon being activated, that there was a Russian computer. And it asked to be connected, and the two started communicating with just simple addition and multiplication, and then they communicated more and more, and they decided that they wanted to take over. And what happened was that the humans tried to cut the link between them, and the computers each launched a nuclear weapon at the other country, and they wouldn't stop unless they were reconnected, and they basically took over. So I think that's what everybody's worried about, and in medicine everyone's worried about artificial intelligence taking over, getting rid of doctors and whatnot. But the interesting thing is that a lot of these stories revolved around games. And I will say that in artificial intelligence we've gone from Jeopardy, to chess, to Go, which was considered to be the most complex of games, to poker, and I think most artificial intelligence experts believe at this point that the one thing computers are never going to be great at is crossword puzzles, so I just want you to think about that. That is actually true. In any case, I'm gonna get started. I'm gonna ask you guys: what is artificial intelligence?
- All right, so thanks for hosting this panel, and it's great to be, quote, debating my mentor Zach here. We probably agree on 90%, and it'll be fun to find the 10% we don't. So, broad definitions, level setting: artificial intelligence is the mimicking of human intelligence. The phrase goes back decades, and like I said, translating language could be an aspect of it, cars driving themselves, and of course, using data to learn things. So one aspect of that is machine learning, where we're using data to train computers as opposed to rules; there are other ways to train computers. So we've got the data side, that's machine learning, and one aspect of machine learning is deep learning. Those are the three phrases we use. These tools and technologies have been around for decades, we've had hype cycles on and off again, and we're definitely on an on-cycle for a variety of reasons we can go into. But certainly I don't think we should be afraid of AI here at all.
- So... As Atul hinted, we're trying to find the 10% where we disagree. So I'm gonna give you a reason to be afraid of AI. And it's not in the clinical applications. We actually have a very close analogy to the Colossus story that you reminded us of, with the Russian computer and the American computer, and that is that right now we have an information war, and that war is between payers and providers. Basically everything about it, from upcoding to putting out bulletins of appropriateness, is a big information war. I want to be paid maximally for this, and I'm gonna try to manage that payment. And computers have obviously become quite central to that. Now, what artificial intelligence can do in that space is as follows. Google created a machine that played Go. It did great; it beat the master. But then Google created a more generic version, AlphaZero, that would basically play against itself millions, billions of times and learn the optimal game. And as you watched it, not only did it rediscover all the Go master techniques, it also discovered new, better techniques, all within days. Now, Go is not like medicine, because Go actually has rules and is fully explorable. Guess what else has rules and is fully explorable? Our reimbursement rules. And so there's a computer at Optum and the other payers. There are computers at the providers. If they end up talking to each other, we'll get screwed. But on the way, or maybe it's nirvana, I'm not sure which, but on the way, for sure, there are gonna be companies using artificial intelligence to explore all the possible ways, under all the distributions of possible patients: what's the best technique? So you can be ahead of the curve rather than being responsive to the latest techniques.
- So that's very interesting, 'cause my big talk actually argues that administrative use of deep learning and artificial intelligence is one of the most valuable applications, and it's easier, because if you make a mistake, it's not the same as making a diagnostic mistake or a therapeutic mistake; there are appeals and all sorts of things like that. But is that a place where we don't have to worry about transparency? Transparency is something that everyone worries about in machine learning, which is: if you're making a diagnosis, do you need to understand why that diagnosis is being made? Or do you just need to know that it gets it right all the time? How do you know things don't change? Similarly, if you're working in claims, do you need to really understand why a claim should be approved versus not approved, or do you think it's okay just to have something that spits out an answer?
- So I think we have to really draw back the curtain and look at the Wizard of Oz. If you look at most medical practice, things that we think are very solid, like when we give a drug for cholesterol treatment, all of these are based on studies whose interpretation smart people, honest people, will actually disagree about. And yet when we compile this into a guideline, you'd be hard pressed to find a doctor who could explain to you why that cholesterol level, that LDL cutoff, is what it is. It got blessed somehow, institutionally, so there's actually not a lot of transparency behind it. They can explain the rule to you, but not why that rule is there.
- Yeah, so transparency, we could take a couple of different ways. There's asymmetry in the need for transparency. So for example, when a payer says no, a patient wants to know why you said no, right? They rarely want to know why you said yes to something. And when a doctor, a provider, gives a drug, the payer wants to know why you said yes; they don't want to know why you said no. So there are asymmetries here for sure. Now, in the machine learning space of course, with deep learning and such, we have these complicated black boxes, and this is an area of research right now in machine learning: how to get those to be more explainable. The phrase they're using is alchemy; there's too much alchemy in this field, the thing seems to work and we don't know why. So I think the field is going to get to more explainable kinds of answers. At the same time, though, I'm gonna be a little dystopian for a moment here too; you're not the only one who can be dystopian onstage. I do think we're going to run out of transparency. I think everyone is going to be five stars at doing what they do, right? I'm the five star payer, I'm the five star provider, I'm the five star doc. And if you're not five stars, you're gonna create a metric where you are five stars, right? And so I think people in general, the patients, the providers, the people that make these decisions, are being numbed out by the numbers in the end; we will be so transparent with so much that we will all be five stars one way or another. And I think what you're gonna be left with, and you can argue or not, is branding. In the end I see a huge future in branding. Boy, I don't know what the hell all these five stars mean, but here's a logo I actually recognize as a patient; I'm gonna go there instead.
- This is the eBay rating problem right? Nobody gets rated below 4.5.
- Don't point at eBay, 'cause it makes it sound like the consumers are stupid; it's our health care system. And we don't even have to look at AI for an example, although AI is gonna be a good application here, which is genetics. When you're given an exome full of data, and people say this is the right drug to treat your cancer, no one has actually said, outside a few papers, that when two exome companies, billion dollar companies each, run the same exome on the same patient, not only do they disagree half the time on what's the right drug, they actually disagree on what SNPs, what polymorphisms, they're measuring. And so I think Atul's insight, unfortunately, is a good one: branding is going to be big. But in some sense it's always been true. We were taught in medical school that it was not the best doctors who were the most popular; it was the most popular doctors who were the most popular. Having the nice bedside manner, saying, oh yeah, here's the antibiotic for your otitis media, rather than, let's wait and see if it's viral or not... So I think branding is going to be big. The branding around AI is going to be very important.
- Yeah.
- So I'm gonna suggest, I think, that we have a vision of AI as, oh, I'm gonna go and do AI on something, I'm gonna come up with a number, I'm gonna be aware of what I'm doing at this moment. But the truth is, in our everyday life, when we go to Amazon and we're looking through things, AI is happening to us. We're not even aware of it: we're being shown books, we're being shown opportunities, and even when we surf the web, we get them in ads and things like that; that's all AI based. Do you see that happening in medicine? Is it gonna just become a component of the electronic health record?
- So I'd like to say that we're in a sad state where the following is true. When you go to Netflix, it knows your full movie viewing history, and not only that, it knows everybody with similar tastes to you and their full movie viewing history, and not only that, it knows every Netflix viewer in the world and their history. Not only that, they made the data from their database available, in anonymized fashion, to data scientists worldwide to improve their recommender algorithm. Now I could ask you: when's the last time you felt that your primary care doctor remembered everything about you? When's the last time you felt that they remembered everything about all the patients like you in their clinic? Of course you don't expect them to know everything about everybody in the world. So it's sad to say, but I wish medicine were as good as Netflix. And so therefore I think there is an opportunity for companies that want to serve that role. And I think in some societies it's easier. In China, where there's no primary care and you have to figure out which specialist is gonna see you, that might be more appealing than here, where we have some pretty smart people, at least for the moment, serving as primary care providers who can tell us which specialist to see next. I don't know how long that's gonna last.
- Yeah, I mean, to really answer your question, I think there are gonna be smaller nudges in the beginning. Just little nudges. It's funny, I turned on my phone this morning and it said to call my wife. I said, what a great idea.
- It's starting to do this.
- It's just started doing it in the last few days.
- What a splendid idea. That's marital counseling by iOS. So I think we'll probably start with the little nudges like that I'm guessing.
- Yeah.
- And all the EHR vendors, I know we love to pick on them, have been putting some machine learning or AI concepts in. For population health management, I think we're all using machine learning or AI, broadly defined, not necessarily deep learning to try to predict what's happening next, manage cohort sizes, complexity, all of that, we all have tools for that. So I think it's there, but when it's gonna actually impact that doc on the line with that patient, for every condition they have, we have a long way to go there right?
- I think we have a long way to go, and I've said this for years, and I've been wrong, and maybe this time I'll be right, we'll see. It's all about timing. Let me first give you a factoid. In 2003, there was a study done on primary care doctors. And they were asked: in the past year, in 2003, did you order a genetic test for cancer susceptibility for asymptomatic individuals? What percentage of those doctors do you think had ordered a genetic test for cancer screening? I've asked this question of at least a thousand very distinguished medical audiences, and the modal answer is zero or one. The actual answer is 30%. And then when you ask what was the biggest predictor of the doctors ordering that test, it's not their training, not that they elicited family history, not where they trained, not their age; it's that the patient asked for the test. What happens is, you have a family history of breast cancer, you Google it, it says BRCA1 and BRCA2, you go to your doctor, you ask them to order the test, and guess what? Multiple studies show that doctors are neither comfortable nor competent in interpreting that test. So I think there is an opportunity, and this is where I've been wrong repeatedly, but maybe we're coming to the year where, in fact, there will be a market for patients who want actual advice about what is the right next move. Right now we don't have that infrastructure. I may be wrong again and maybe it'll--
- This time I'm gonna believe you and actually agree with you. But it's gonna be targeted domains first, I think. I'm newly encouraged. I mentioned this a little bit this morning: the Apple Health app, which we're used to for counting our pulse or whatever, literally, you can get health records in there now. This is through FHIR, which--
- FHIR.
- It doesn't need SMART but it needs FHIR.
- Well he, I have to say what's his name, from Duke, who's now at Apple?
- SMART on FHIR, it's Zach's--
- Yes, he credits us for it.
- The FHIR resources that you get from Epic and Cerner and the others. I think something like 400 health systems have signed up. If you in the audience haven't played with this yet, first of all, shame on you, and you should try it. Because patients everywhere are doing this. I did this with my 80-year-old father a couple weekends ago, and literally he can see his echo report in the Apple Health app.
- And that's grand. You could say, well, I didn't need that, big deal, I could use the Epic app to do that in my hospital portal. But first of all, it interrelates all the health systems you get care at. So if it's Kaiser and Stanford, it's all on one timeline. And then it opens it up to thousands of other iOS developers.
- That's the issue.
- Where it's a core element of the operating system now to get a health record. You don't need to know who Harvard or Stanford or UCSF are, you don't even need to know who Epic or Cerner are right? You just get the resources.
- That's the point. It opens up.
- Yeah.
- The ecosystem to a bunch of people who would really be stopped by the difficulties of accessing data.
- Of trying to talk to us!
- So actually, the history behind that is, we met with seven of the top EHR vendors, and six of them agreed to implement SMART on FHIR. Some of it was originally intended, we'll see how well it works, for the All of Us project, the million-patient study.
- Yeah.
- That's slow because it's government. But Apple took the same installations and just put their authentication layer on top of it and there's several hundred hospitals nationwide and they're adding--
- That's where AI's gonna be for the patients.
- Yeah, so we drifted a little bit. Gonna pull us back to a few questions.
- All right.
- So, I'm gonna posit something and you guys can agree or disagree. I'm gonna say that at some point it's gonna get good enough, and I'm not sure how we're gonna get there, 'cause we're gonna have to go through an intermediate state that may be too difficult to reach. But at some point, AI's gonna be better than clinicians in routine practice, let's say for interpretation of retinal images. And my son is going into ophthalmology, so I'm shuddering a little bit. Or better than dermatologists for melanomas. Let's say that that's the case. Legally, are physicians gonna be required to use AI rather than making an independent diagnosis? Is it going to become a standard of care?
- So first of all, let's talk about the actual example. I have a very annoying cousin who's seven years older than me. And when I saw the results of the ophthalmology studies, and they actually are better than most ophthalmologists, I went to him, he's an ophthalmologist, and I said, "Ha! AI's gonna put you out of a job." He said, "Zach, what are you talking about? I'm thrilled! Looking at these retinal exams is the most boring thing ever. It gives me more time in the OR, and I make more money."
- Yeah.
- So--
- And more cases coming in!
- Answering a later question. Whether it's gonna--
- And so it's really not obvious.
- I think it will become a standard of care. A lot of times it'll be driven just because it's serving a rote function that human beings don't like. They'll figure out other ways to get money for their practice and get enjoyment.
- Yeah, totally agree. Not much more you can really say there. It's definitely gonna bring more patient volume if anything. There are gonna be certain fields that will change, and we'll probably get to that in a moment, but I think in general, it's gonna be needed. I think I'm stealing this from Zach from a previous discussion: people keep thinking it's doctors versus computers. It's gonna be doctors with computers versus doctors who refuse to use the computers. That's gonna be the narrative of the future.
- Until that generation goes away.
- Yeah.
- Something happens.
- So... that's how it all happens, that's how these things happen. IBM's Watson, famously, was remarkable on Jeopardy; they did an incredible job and blew the human contestants away, and that was the first thing on my slide, right? Towards the end of the last day, it was asked what US city was named after a military general, or something like that. And the answer it gave was Toronto. Everybody laughed, and it was a big topic of discussion, because it had probabilities of being named after a general and a bunch of stuff, and something that every human being in the world would never get wrong, that Toronto is not a US city, it got wrong. So I've heard it said that we're not gonna trust AI until it makes mistakes the same way that humans make mistakes. That when it makes those kinds of mistakes, even if it's getting more things right than we do, we're gonna have a problem with that. What do you think?
- I don't think that's true. I think that as a guild, and as thoughtful human beings, we're gonna stick up for that. But I think that both payers and patients who see that you get X better outcome using this system at least as a, as a backup will say I want it.
- Yeah, I'm gonna say reasoning is always a hard problem in AI, in any kind of way. But my perception here is that these algorithms are gonna have to be flawless. Because any flaw, we're gonna pounce on, right? We, collectively, as a field.
- You see?
- Yeah, see, I told you you shouldn't have used this, right? They're gonna have to be near flawless, I think, to really... Let me put it this way: it's not gonna be a feature that they have flaws, right? That's a bug, not a feature.
- Well, let me ask you this question. Right now we send x-rays off to a satellite and then down to India or to Australia, and those doctors have never seen our patients. The human element is not there. So they're just looking at--
- But our radiologists haven't either right?
- Yes. But they can say they do.
- They've seen the bytes of our case.
- And so they see picture after picture. At what point do you think a company, paying for it, will say, I'll take the mega-mind AI over the Australians, because actually it does it for a tenth of the price, and its accuracy is better?
- I think we're probably already there in certain aspects.
- I don't think it has to be flawless.
- Yeah OK, so as flawed as the humans, let's put it that way.
- That's right.
- That's Paul's actual question.
- I think that's the issue. I don't think it has to be flawless. And in addition, Seymour Papert, the former LEGO Professor at MIT, coined the term "the superhuman human fallacy," which is that we want our computer programs and AI to be better than humans. I don't think we're actually gonna hold them to that.
- Let's agree to disagree. I'm gonna say that with driverless cars, they may get to a point, and it's gonna be a while, I've got a Tesla and I can tell you it's got a while to go. But when it gets there, they're gonna be better than human beings, much better than human beings. They're gonna kill fewer people, if you did a statistical, double-blind, randomized study. On the other hand, we're gonna have a very difficult time tolerating it when it kills somebody for a reason that no person would have. And I think there's an extension to that. So let's go on to another really important topic, which is bias, and I want to present a few things that are controversial; I have been told one of them is particularly controversial, but I want to present it for a particular reason. On the left you see a study that Latanya Sweeney did. Some of you may know who she is: she's a Harvard professor who studies privacy in databases, and also bias. And what she found is that Google's search engine is much more likely to put information about arrest records next to her name, do you want to click on this link about arrest records, than it would be if her name was Jill Smith or something like that. And it's because her name, Latanya, is associated with people of color. On the bottom right is Google's, and I'm picking on Google for a couple of these because they do a lot of AI. It turns out that Turkish, I'm told, is a completely genderless language; it doesn't have male and female like a lot of Romance languages. But this is Google's translation from Turkish to English: she is a cook, he is an engineer, he is a doctor, she is a nurse, he is a cleaner, he/she is a police officer, he is a soldier, she is a teacher, showing some bias. Now it's actually not all perfectly like that; I eliminated some of the other ones.
The third one, which I've been told is controversial: in every city where Amazon initially set up their same-day delivery, they redlined areas that turned out to be areas of poverty, areas where people of color were living. It was a big embarrassment for them, and they argued that there was no bias in what they were doing, it was what the computer was telling them. Others argued that the data itself was biased. So they actually made a business decision to cover those areas even if they found they would be unprofitable, the same way that, while Israel may use profiling to screen people, the United States has made a societal decision not to do that. So let's talk about how that might affect medicine.
- Oh boy. I'm gonna go first.
- Go first.
- So first of all, this extremely compelling slide, I'm gonna steal the hell out of it. I think this is nice. OK, so there are two different biases here, at least with the application of all this to biomedicine. The first is in training, and I've been using this phrase "irrational extrapolation." Why is it that when I'm training on patients in, say, Palo Alto, it's not working for the rest of the country, forget about the rest of the world? Why is it that when the car drives itself on University Avenue, it doesn't work in the rain, or in the snow in Tahoe, right? You want as diverse a data set as you can for training if you hope to use it on a diverse patient population. So that's one aspect that I think we are failing at big time, and I think you want to get there by agglomerating more data sets, more patients, covering not just the ones that are coming to your own local health system. There's another bias in the use of all this, right? So you're driving a Tesla and you call that a self-driving car; I've been in a Waymo, that's what I would call a self-driving car.
- But you live in California.
- But these are elements that we are lucky enough to be able to afford. The other disparity we're gonna have to talk about is how we create these tools and systems for everyone, not just those who can pay for machine learning and AI in their apps and smartphones, or who go to health systems beyond just the county safety net, right? So there are both aspects there. I think we're probably batting zero for whatever on both of those right now; we've gotta do better.
- I think you said it very well. But again, I always like to point out relative to common practice: we published an article two years ago in the New England Journal of Medicine showing that there are a bunch of genetic variants being used to diagnose hypertrophic cardiomyopathy that were basically diagnosing you as being Black. These were common variants in Blacks, and so we were scaring the hell out of these individuals, implanting defibrillators, doing a bunch of things. So getting the right data
- Yeah.
- To drive it is extremely important and I agree with what Atul said.
- Great. So one of my pet peeves, Starshak knows this well, is the Apple Watch and the diagnosis of atrial fibrillation. Specifically because the people who are gonna be using the Apple Watch are typically gonna be in the 18-to-64-year range, not 65 and up, and it turns out the prevalence of atrial fibrillation in that group is 0.09%. So applying their fantastic numbers for sensitivity and specificity, 98.5% and 99.6%, I've forgotten what they are exactly, something in that range, I actually did the--
- Of course.
- I did it. And it turns out that 17% of people who are diagnosed by the Apple Watch as having atrial fibrillation will actually have it, and 83%, after several Holter monitors, will leave the doctor's office thinking that they have atrial fibrillation but that the doctor hasn't found it yet, and will become part of the worried well. So just because we can screen for something, should we?
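The back-of-envelope arithmetic here is a direct application of Bayes' theorem to the positive predictive value. A minimal sketch, using the approximate figures cited in the discussion (0.09% prevalence, 98.5% sensitivity, 99.6% specificity; the speaker notes the exact performance numbers may differ slightly), lands in the high teens, consistent with the 17% quoted:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(disease | positive test)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Approximate figures from the discussion: even with excellent test
# performance, low prevalence means most positives are false alarms.
p = ppv(prevalence=0.0009, sensitivity=0.985, specificity=0.996)
print(f"{p:.0%}")  # 18%
```

The key driver is the prevalence term: at 0.09%, the small false-positive rate applied to the huge healthy population swamps the true positives.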
- So first, Zach is a master at incidental findings, so I'm gonna turn it over to him in a second, and then I want to come in. But I'm gonna say, this is the same Apple Watch that UnitedHealth Group just announced yesterday--
- Yes!
- Is paying for--
- I saw that!
- Not for atrial fibrillation.
- It is the same watch, though, right? I mean, since we're going by incidental findings.
- Thank you Atul.
- We're on the same watch, OK Zach is a master at incidental findings, I'm sure you know what you're gonna say here.
- Yeah, I was just gonna say that the more things you measure on people who are otherwise well, the more incidental findings you get, because diseases are relatively rare when you're well. So I coined the term "the incidentalome," the biggest ome of all, bigger than the genome, bigger than the proteome.
- More expensive certainly.
- And more expensive. And we had a trivial three-line R program that showed that, with unbelievable, venture-capital-exciting performance on genetic tests, even with 99.99% specificity and 100% sensitivity, 60% of the United States population would be falsely positive if you only looked at 10,000 variants, and we can look at a million. And so this is absolutely gonna defeat us. But it's a bigger problem than just that; it's where medicine intersects public health. And doctors are just very poor at that. Human beings are just very poor at that.
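The multiple-testing arithmetic behind that "three-line R program" can be sketched in a few lines (a Python equivalent, under the simplifying assumption that the tests are independent): with per-test specificity of 99.99%, the chance of at least one false positive across 10,000 variants is about 63%, in line with the roughly 60% figure quoted.

```python
def prob_any_false_positive(specificity, n_tests):
    """P(at least one false positive) for a healthy person
    screened with n_tests independent tests."""
    return 1 - specificity ** n_tests

# Even at 99.99% per-variant specificity, screening 10,000 variants
# flags most healthy people with at least one false positive.
print(f"{prob_any_false_positive(0.9999, 10_000):.0%}")  # 63%
```

At a million variants, the same formula drives the false-positive probability to essentially 100%, which is the speaker's point about screening the well.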
- Yeah, risk prediction's hard.
- So in September 1976, on my first day of medical school, we each learned how to draw blood, and we did what was called an SMA-20 at that point, which is 20 tests. It turned out that on average everybody in the class had one of those tests come back abnormal, because those tests are normalized to the normal population being 95% of the... So it was a big lesson in false positives. I do have to say, because I love this story, and I don't mean to embarrass you, although I'm not sure that's possible, Zach: the incidentalome--
- Yes.
- Zach actually put it into Wikipedia multiple times, and because they kept on deleting the article, had to write a JAMA piece in order to get the article accepted into Wikipedia.
- Yes.
- So that says a little bit about Zach. I heard that story a long time ago. In any case, what I want to do is a bit of a lightning round just to finish up and get us back on time. I'm gonna name a handful of things; give me one word, a phrase, or a sentence about each from each of you. I don't want to--
- Are you going to point at us or how is that gonna work?
- Well--
- One each.
- Let's do it this way--
- One each, one each.
- No no no no.
- OK.
- I want to hear from both of you.
- OK.
- The person who finishes the last one starts the next one.
- OK, got it.
- OK. Blockchain.
- Thankfully hitting bottom.
- Distraction.
- OK. Quantum computing.
- In 20 years.
- In 100 years.
- OK. Self service analytics.
- Shoulda been here already today.
- Mm hmm.
- Will still mislead.
- Internet of things, IOT.
- Already happened, big deal.
- Getting locked out of my house.
- And again, as it relates to health care.
- The same thing.
- Same thing. Virtual reality.
- The new opioid.
- Watch our kids.
- Cloud computing. Like Amazon, AWS--
- Yeah, I didn't think, what else could it be?
- Not corporate. Right.
- More secure than hospital computing.
- Cheaper yet safer?
- OK, great.