Driving High Value Care Through AI-Augmented and Tech-Enabled Approaches
Video Transcription
A lot of the themes that we've talked about before, maybe you'll hear them in a slightly different way or with a slightly different perspective, but it's a really exciting time for us here. And I titled my talk this way because, as Joe says, when you have something that is your driving force, something you can represent that is bigger than yourself, that takes you a long way. And I think this idea of driving high value care, I loved how Ivy brought it to the level of what it is that she does in her practice for her patients and her staff to drive high value care. And notice that the order here is correct. So it's not AI and tech first, right? That's the how. And I love the fact that Joe says, you work with the tech that you have, and that's going to change. But what's not gonna change is that first part of the sentence, the first part of the equation. That's what we're here for. That's what we hold as our values as clinicians and as physicians and as technologists. So here's my disclosure slide, and this is gonna be my first point. These are the ways that I work in this field outside of my clinical and academic realm. And the last point is a really important one. All of this really is in line with this idea of being an evangelist for how we responsibly move our field towards a data-driven and tech-enabled future. And you've heard in all these conversations, it doesn't happen without us driving it at the core. And how do we drive it at the core? Well, we gotta be involved, right? We need to be deeply involved, and not just involved in a way where people come to us and we say, okay, tell me this or show me this. We need to be involved, I would posit, in the earliest aspects of the development. And I already fear that with the generative technologies and LLMs, we're in a similar position where we're being handed things, right? Just think about the models that are being developed.
They're being handed to us with, okay, go play with this, go explore this. The foundational decisions have already been made, right? What the sources are, how that's gonna be done, what the guardrails are. I didn't have a conversation about it. I don't know if any of you did, but I fear that that's gonna keep happening unless we insert ourselves into the process. We've seen this happen with EHRs, right? That's one of the biggest lessons. Never before has a whole new business of scribes sprung up to solve a problem that we created ourselves by not being involved. Let's not do that again. And then if we look societally, this is another version of this, where these social media platforms are handed down to us. And it's like, okay, now go deal with the consequences. Really, I see the advent of the generative tools as something that can be just as threatening, right? We don't have any visibility into the algorithms behind these social media platforms. Similarly, we don't know, as Roxanna was saying, are these being pulled from Reddit? What is the human knowledge that's being curated and fed to us? It's a little bit of a scary question. And imagine how many times Elon Musk has promised self-driving cars. 2015, he said in three years we're gonna have it. 2016, 2017. Do we have it yet? I don't see a fully automated self-driving car around. So this idea of the hype being ahead of the reality, the promise being ahead of the reality, keeps on happening. And that's cars. I agree, that's a high-stakes situation. We also have one here in medicine. And so we also have to be in this realm where, I think Rob had made this prediction back when we were doing that Nature paper, he said it's gonna take upwards of five to seven decades to have tools that are responsibly developed. And that's just about right. And so this is the right way to do it.
I'm gonna say, there are lots of great efforts that support it. Governments, entire continents are taking stances around this. There is a flip side that I do want to add. There is also risk in not doing, right? There is risk in what it is that we do now, which is to perpetuate low value care. There's a lot of low value care happening in the way that we deliver it. When we don't have the innovation imperative, we also suffer from the harms of stagnancy. So what is the right balance? What is the way that we get there? I'm gonna show you how we think about things here. I do some work from an experience, quality, and safety perspective within the institution. So I hold this perspective. And this idea of how we leverage these capabilities and technologies to drive high value care is really top of mind. You've heard this so many times, and I hope to cement it a little bit more. You gotta start with the meaningful problems and the use cases. That's gotta be the nugget. When you do it the other way around, Albert showed such great examples where, when you have that misfit, even in small ways, between the problem you're trying to solve and the tools you're trying to use, things don't work, right? You get really big gaps. And so again, right now, I'm really fearful that we are in a situation where we have solutions searching for problems. Think about what's happening when you have free access to GPT or Gemini: you're actually doing the product development. You're doing the product-solution fit. They're saying, hey, here's this thing, here's an engine, go build a car around it. And I think this idea, again, is backwards. And I think that's why we're in the conundrums that we are, because we're just trying to figure out what the guardrails are. What is it that it's able to do? What can't it do? What are the dangers? What are the risks? We're doing that work, which is not quite right.
So the other idea that Ivy showed so beautifully is: identify those problems. And oftentimes, you're gonna find AI is not gonna be the solution. Simpler is better. Simpler is always better. There are some things that will require AI or generative models, but don't let the technology be the driving factor. And the thing that really feeds into this idea is that we are asking the wrong questions and getting the wrong answers. This stuff happens all the time. This is just a graph, Rob shows this often, but models are able to do well on tests. Who cares? That's not the right way to evaluate them, right? So who cares that these models are able to do well on tests? That's not replicating anything meaningful in the real world. If you were to ask, how would you show the best performance of these kinds of models and capabilities? It's to ask a test question, to regurgitate some information that was seen multiple times in a data set with different variations. These aren't the right questions. So what are the right questions, and how do we evaluate things? There's this idea of getting higher on the ladder. This is a really interesting blog post by a couple of Princeton PhD folks who are saying, look, let's think about how we assess the impact of AI tools. This idea of comparison on benchmark exams, the parallel would be, how does a machine perform versus a human on a classification test? That's way down at the bottom. Interesting, but not meaningful. So what happens if we go up? One step is to make it real-world tasks. And now we're seeing lots of papers demonstrating at least real-world tasks, some retrospective, now some prospective, some still in experimental settings. And then we're starting to see, though fewer of them, quantitative studies of people using AI. What does it look like when people are actually using them in practice? What kind of impact is it having?
And what you guys will find, and it's not a switch here: why is qualitative impact on top? I'll tell you why. We don't often think of qualitative data as being more meaningful than quantitative data, but I don't think we're in a place where we even know the questions to ask and the metrics to look for. So Veronica, I think you mentioned this idea of the in-basket saving time. Time is a quantitative measure. Do we know that's what's most meaningful to people? I don't even think we're there. So if we wanna get the richest data, we need to be able to ask and find out how people want to be using these technologies, how they wanna interact with them, and what's actually gonna be meaningful to them and their patients. So I always keep this paradigm in mind. And as you're reviewing the literature and looking at the evidence and what's out there, I'd ask you to also think about what it is that you're being shown and truly how impressive or meaningful it is. There are so many aspects. Again, this is something that you've heard: so many intervention points in our patient journey map. This is just the care journey. We've got the administrative stuff. We've got the business stuff. We've got so many use cases that fit that low-risk, high-impact matrix. And so I generated a list. It's gonna look very similar, I think, to what you guys talked about, what other speakers have talked about. These are the kinds of things that are low-risk, high-impact, the stuff that's perfect for what we're able to do in this day and age with the kinds of AI that we have. And let me show you, I'm gonna go through here, I'll show you one. Okay, so this is collective work. A lot of the people doing the work are here again in the front couple of rows, my colleagues here. The biggest problem that we had during COVID was we would get photos like this all the time.
And so this is an example of: you can only do what you can with what you have. This was a meaningful problem. And we said, okay, we have some different ways of solving it. You can imagine lots of different ways of solving it. And we said, okay, we're gonna solve it in a traditional machine learning model way and try to aspire to this idea of making it as easy as taking a photo of a check. But the thing that we were trying to achieve was not just the model's grading and assessment of quality; ultimately we wanted to make an intervention. I'll hit on this point a little bit later. So we did that. And then you can see the difference between when you bring the data versus when someone gives you data. We said, it's not traditional metrics of image quality. Actually, there are lots of those out there prebuilt: lighting, blur, all of that stuff. It's actually clinical appropriateness. All of you clinicians know that there's a difference between what the image quality is and whether it's good enough for me to do something with. And so that's how we trained it. That's an innovation that, without clinicians doing this, we're not gonna get to. All right, I already know I'm gonna run out of time. I'm sorry. So it's not just about what the model is outputting. This is so important and, again, touched on by so many other people. The output is just one thing. What we do with it is how we make it meaningful. And this is an area where we can actually introduce a lot of biases ourselves. We can introduce health equity issues ourselves. Let's just imagine you have a tool that says, hey, is a patient gonna no-show or not? And you get a patient with a really high no-show rating, score, whatever it might be. What do you do with that data? You can do a number of things. You can say, okay, that person's not gonna show up. Let me double book. Let me overbook, right? Let me pull someone into that slot.
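To make this point concrete, here is a minimal, purely hypothetical sketch (not anything described in the talk) of how the same predicted no-show score can be routed to very different interventions depending on what the organization chooses to optimize. The function names, thresholds, and action strings are all invented for illustration:

```python
# Hypothetical sketch: the same no-show risk score drives different actions
# depending on the chosen objective. All names and thresholds are illustrative.

def choose_intervention(no_show_score: float, objective: str) -> str:
    """Map a predicted no-show risk to an action under a chosen objective."""
    if no_show_score < 0.5:
        return "no action"                # low risk: leave the slot alone
    if objective == "maximize_utilization":
        return "double-book the slot"     # operational fix: protect throughput
    if objective == "maximize_equity":
        # Same score, different response: find and address the barrier
        return "call patient to ask about barriers (transport, work schedule)"
    raise ValueError(f"unknown objective: {objective}")

print(choose_intervention(0.8, "maximize_utilization"))  # double-book the slot
print(choose_intervention(0.8, "maximize_equity"))
```

The model output is identical in both calls; only the policy wrapped around it changes, which is exactly where value (or harm) gets decided.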
Or you can ask, why isn't this patient gonna show up, and delve into the reasons for that. It's a transportation issue. This person has two jobs. You can think about, okay, how do I solve this issue? I can solve it operationally. I can maximize utility. I can maximize revenue. I can maximize health equity. So that's where I would say, again, it's the way that we interact with the models and the way we interact with the output that really dictates whether or not we drive value in what we do. Okay. It is absolutely true that underlying all of this is the data. And so you heard it from Veronica in terms of the foundational aspect being how we set some standards and some consistency, and you heard it from Roxanna, who says what's in there really matters in terms of driving what comes out. And we know right now what's in there. Yeah, it's a great representation of human knowledge, but human knowledge is not what it's cracked up to be. There's a lot of false stuff. There are a lot of things that have been debunked. There are a lot of things that are perceptions and misperceptions; there are different qualities of sources. So what do we do? Well, we can do something. We can add to the quality and the diversity and the robustness of the data. Lots of efforts are going on. This is just one of them, where we collectively are saying, hey, let's create and augment the data, the human knowledge that we have, with greater diversity. You heard this from Roxanna, the effort that we had within our institution to do this. And then this is just hot off the press. There's also a way to crowdsource this, to engage patients, and people who are not even our patients, to say, can you contribute to this effort and help us increase the diversity of images that exist, to help build good, high-quality training data. So this is the way that I visualize what Albert talked a lot about, which is the idea that you have models that look like this.
We can produce models that look like this all the time. I wish this was me. I would use Ivy as sort of the website to make me into this here. But this is what our models look like. They're nearly perfect. And then what you do is you just take them and say, hey, let's go for a walk. Let's go to another institution. Let's have a slightly different case here. And then this is what happens. This is probably me. So I'm more on the right side here. But you're on the left. I'm with you, Rob. Okay, so you and I are here on the right side. And this is exactly what's happening. So we have to do this work now, where we're going left to right and saying, okay, we know this is gonna happen. But to Albert's point, why? What are the insights in that? How do we mitigate against it? Is it the product-market fit that we have to get tighter? Is it the fact that we have to fine-tune in each individual institution? Is it the fact that we have to create this local benchmarking kind of effort? These are the kinds of questions that I think we're grappling with and tackling these days. It is no longer the questions on the left. And so I completely agree: the models are not the thing. The models are the easy part now. It's the other stuff that's hard. And the last part that I would say, I love this, I show this slide a lot: Google has a near-perfect diabetic retinopathy algorithm. It's so good. They tried to deploy it in Thailand, right? To really introduce some meaningful impact. And it was a disaster. And it was not a disaster related to the model at all. They did a post-mortem, and this is the sentence that I pulled out. This is hilarious, because if they had asked any clinician, you guys all know that when we introduce humans into a system, we get fallibility. So they had issues where they couldn't get the pictures taken right. The lighting in the rooms was wrong. The people weren't trained. The internet didn't work. We could have told you that, right?
Those are the problems and those are the things that we also need to be thinking about and grappling with. Again, it's not the models. It's everything else. It's the humans, it's the systems, it's the processes. All right, let's see. I'm gonna make a shameless plug for a session that's happening tomorrow, one o'clock to three o'clock, where we're gonna double-click and get into real-world use cases and how people are utilizing AI capabilities in their practices. We're gonna be talking about things like scribing and efficiency. We're gonna be talking about DermGPT. We're gonna be talking about lots of examples of that. So that's gonna be a way where we'll really get into the nitty-gritty of this. I wanna touch on one idea that answers the question: if I had an AI model, what would I want it to do? And the way that I would answer that is, I would want it to prompt me. So Daniel, I loved his talk about the prompting concept. That is where the action's gonna be. It's gonna be in how we interact with these systems, what kind of prompts we use. I want a reverse prompt. And what that means is, when I have AI helping me do whatever I do, maybe it's helping me document a note, maybe it's helping me summarize it, maybe it's helping me do all the prior authorizations, all of that stuff, what I want is, the next time I see that patient, it says, hey, don't forget about the horseback riding thing, that trip. Or hey, this is something that you yourself, in your busy practice, wouldn't have thought of. It's a care gap closure. Or, did you notice that your patient always schedules Tuesday afternoons? Maybe when they check out, that's what you should offer them first.
The idea is not just how we ask it to help us, but that there are things it's gonna be able to do, connecting pieces of data into streams, to really tell us and connect that experience, connect that quality, and connect the way that we really enhance the value of the care that we provide. So I'll end there. As you guys know, I could keep going here, but this is where I think our field stands. Again, I love this conversation. It's a very different conversation than we had even a few years back, and I really appreciate all of your interest and your engagement, because that's what we're gonna need as a specialty. Thanks. Thank you.
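The "reverse prompt" examples the speaker gives (a personal reminder, a care-gap flag, a scheduling-pattern nudge) could be sketched along these lines. This is a hedged, hypothetical illustration only: the record fields, rule logic, and thresholds below are invented and not part of any system described in the talk.

```python
# Hypothetical "reverse prompt" sketch: the system scans a patient record
# before a visit and surfaces nudges without being asked. All field names
# and rules are invented for illustration.
from collections import Counter

def reverse_prompts(record: dict) -> list[str]:
    prompts = []
    # Care-gap closure: flag overdue screenings noted in the record
    for screening in record.get("overdue_screenings", []):
        prompts.append(f"Care gap: {screening} is overdue")
    # Personal context the busy clinician might forget
    for note in record.get("personal_notes", []):
        prompts.append(f"Don't forget: {note}")
    # Scheduling pattern: offer the slot the patient actually keeps
    slots = record.get("kept_appointment_slots", [])
    if slots:
        slot, count = Counter(slots).most_common(1)[0]
        if count >= 3:  # arbitrary illustrative cutoff
            prompts.append(f"Patient usually keeps {slot} appointments; offer that first")
    return prompts

example = {
    "overdue_screenings": ["annual skin exam"],
    "personal_notes": ["ask about the horseback riding trip"],
    "kept_appointment_slots": ["Tue PM", "Tue PM", "Tue PM", "Wed AM"],
}
for p in reverse_prompts(example):
    print(p)
```

The point of the sketch is the direction of interaction: the clinician writes no prompt at all; the system volunteers what it noticed across documentation, scheduling, and prior conversations.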
Video Summary
The speaker emphasizes the importance of driving high-value care through technology in the field of medicine. They stress the need for clinicians and technologists to be deeply involved in the development of AI tools to ensure responsible and meaningful use. The discussion touches on the potential biases and ethical implications of AI in healthcare, highlighting the need for human oversight. Real-world testing and patient engagement are proposed as ways to improve the quality and diversity of data used by AI models. The speaker also discusses the challenges of implementing AI solutions in healthcare settings, emphasizing the importance of understanding human factors and system processes. The talk concludes with a call for continued collaboration and exploration of AI applications in healthcare.
Keywords
high-value care
technology in medicine
AI tools development
ethical implications
patient engagement
Legal notice
Copyright © 2024 American Academy of Dermatology. All rights reserved.
Reproduction or republication strictly prohibited without prior written permission.