
Improving Patient Support with Conversational AI


AI is becoming increasingly prevalent in healthcare as providers look to cut costs and speed up patient response times.


However, a study by the Pew Research Center found that 57% of patients say using artificial intelligence to diagnose diseases and recommend treatments would worsen the patient-provider relationship.


So how can AI bridge the gap between effective patient care and streamlined operations?


Join us for this week's CareTalk episode as David E. Williams and John Driscoll meet with Israel Krush, co-founder of Hyro, to discuss how Hyro is revolutionizing conversational AI and improving the AI experience in healthcare.




Episode Transcript:


David Williams: Today's guest is a lifelong lover and solver of puzzles and problems. He loves the complexities of data and real-world impact, but can the AI chatbots from his company, Hyro, solve patient frustrations in interacting with the healthcare system? That's a tall order.


David Williams: Welcome to CareTalk, America's home for incisive debate about healthcare business and policy. I'm David Williams, president of Health Business Group.

John Driscoll: And I'm John Driscoll, senior advisor at Walgreens.

David Williams: Today's guest is Israel Krush, co-founder of Hyro, whose mission is to revolutionize conversational AI.


David Williams: Meanwhile, join the fast-growing CareTalk community on LinkedIn, where you can dig deep into healthcare business and policy topics, access CareTalk content, and interact with the hosts and our guests or their chatbots. And please be sure to leave us a rating on Apple or Spotify while you're at it. 


John Driscoll: So, David, why are we talking about chatbots, and how does this relate to healthcare?


David Williams: You know, I was intrigued when I was reading about Israel's company. We know he's a scholar because he founded a company with hypotheses, which is something we would love to do. And it was really about chatbots and voice assistants being the future of human-computer interaction, and the traditional ways of doing it were no good. That was back in 2018. We're in 2024 now. So I want to know: how's it panned out?


Israel Krush: Yeah, absolutely. So you're right. When we started the company, we had two hypotheses. One is that natural language interfaces, so chatbots and voice AI systems, will be the dominant interfaces when it comes to human-computer interaction.


Israel Krush: I think that today, with the buzz around ChatGPT and large language models, it's a matter of time. Of course we're going to talk with technology, of course we're going to type into technology, and the technology, computers and phones, will be able to understand us in our own human language. So today this hypothesis, I think, is less of a hypothesis.


Israel Krush: This is a reality. The second hypothesis had to do with large enterprises, in the healthcare ecosystem but generally speaking: adopting chatbots and voice AI systems, and being able to deploy and maintain them, was going to be very hard for them to do. And I think you see that even post the buzz. We're, what, 18 months now after the media attention that came with the release of ChatGPT.


Israel Krush: And you still see very few production-ready, patient-facing or end-user-facing chatbots and voice assistants available. So the hurdles to actually deploying and maintaining them are still very high, and that's exactly what we try to simplify, to help with this transition.


John Driscoll: So Israel, if you think about the consumer approach, whether it's claims or answering basic questions, chatbots are not only increasingly pervasive; in some cases they're more sympathetic than the actual people on the phones.


John Driscoll: And yet there have been real hurdles and roadblocks, and a high level of concern about engaging directly with patients around clinical issues. What's the big difference? Language is language. Knowledge is knowledge. What's different?


Israel Krush: So, yeah, absolutely. To your first comment, John, you know, chatbots or voice AI assistants, they never get tired.


Israel Krush: They never get frustrated. They can speak with you 24/7, whenever you call them, they'll have the same tone, and these days, with some new tech, they can even be empathetic. Now, to your question, John: even before getting into clinical use cases, let's just talk about administrative use cases within healthcare.


Israel Krush: I think one of the main things to consider here is the cost of a mistake. So let's think about a chatbot that helps you buy, I don't know, clothes, t-shirts. If you requested the blue...


John Driscoll: David buys a lot of t-shirts. 


Israel Krush: Okay, you see, so here's a user for you. So if you request a blue t-shirt and you get the purple one, you might be upset, but it's not the end of the world.


Israel Krush: The cost of a mistake is not that big.

David Williams: Oh, so Israel, by the way, John is colorblind, so he really doesn't care what shirt shows up.

John Driscoll: I'd be fine.


Israel Krush: I am as well, by the way. One out of seven males is colorblind, so it...


John Driscoll: It tends to be associated with higher intelligence, David. Sorry. Exactly.


Israel Krush: One out of 250 women, by the way. So anyway, yeah.


David Williams: So I'm gonna be upset with my purple shirt, but I'm not gonna be upset if they take out, like, my kidney instead of my liver. Am I?


Israel Krush: Correct. No, absolutely correct. Maybe that's a very extreme example, but even scheduling an appointment in a time slot that doesn't exist, so you get to the clinic to find out that it's closed and the physician you wanted isn't there.


Israel Krush: That's much more frustrating, and it opens up a lot of lawsuits against the healthcare facility. So the cost of a mistake is probably one big factor in why you don't see them a lot in production these days.


David Williams: So, I mean, I get that these voice bots don't get tired, but sometimes they try to be empathetic, and then it's "Could you say it again?" or "Please say this," and "I'm sorry, I can't understand." It's like, shut up.

David Williams: And at the same time, I do find that ChatGPT in particular can be very empathetic. In fact, the bar relative to a healthcare administrator or a doctor isn't that high, so it can easily be more empathetic than them. Does it end up playing a role? Can it play a role in personalizing care and actually improving engagement?


John Driscoll: Well, David, maybe the way to think about it, Israel, to frame it a little bit more broadly: the large language models, which drive ChatGPT, maybe you could explain what they are and how they feed ChatGPT, to contextualize David's question.


Israel Krush: Yeah, absolutely. So large language models, at least today, are very much statistical tools. They don't have deep reasoning the way we humans have. When you ask me a question, I actually think about what I want to answer you, whereas a large language model will think about which next word has the highest probability.


Israel Krush: And all of a sudden, because it was trained on so much data, we got to this magical point in which the sentences make a lot of sense. And then, to your point, we can align them to be sympathetic, and we can align them not to break certain rules. And I think that while reading these answers, or hearing these answers, it all makes sense.


Israel Krush: You know, there's a big hallucination issue with them today. We talked about the cost of a mistake. So the AI will sound smart and sympathetic and will schedule an appointment for you, but with a physician that doesn't even exist. And that's a problem. It just doesn't know.
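
To make the "statistical tool" point concrete, here is a minimal sketch of next-token selection. The vocabulary and probabilities are invented for illustration; they don't come from any particular model:

```python
# Toy view of one language model step: score candidate next tokens,
# then pick (or sample) by probability. Probabilities are made up.
next_token_probs = {
    "appointment": 0.46,
    "physician": 0.31,
    "tomorrow": 0.12,
    "purple": 0.02,
}

# Greedy decoding: take the single most probable next token.
best_token = max(next_token_probs, key=next_token_probs.get)
print(best_token)  # -> appointment

# Note what is missing: no check against ground truth. A fluent,
# high-probability continuation can still name a physician who
# doesn't exist, which is exactly the hallucination problem.
```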


John Driscoll: What's the difference between David's hallucination and ChatGPT's hallucination?


John Driscoll: Like, how do you define it? 


Israel Krush: It's a good question. I don't know about David's hallucinations, but a good analogy with human beings is a six-year-old. When you ask a six-year-old to do something, some of it will be based on ground truth, and sometimes the six-year-old will be too shy to say, "You know what?


Israel Krush: I just don't know." So they'll make something up. And that's how you need to think about it with large language models. They don't know that they're lying. Actually, if you continue the conversation and say, "You know what? This physician doesn't actually exist," or, "This appointment is not in their schedule,"


Israel Krush: they will apologize and say, "You know what? You're actually correct."


John Driscoll: Oh, that's very different than David. David never apologizes.


David Williams: So how do you deal with it? I mean, these hallucinations are a problem, and you talk about Hyro controlling hallucinations. Do you just put the thing in a headlock, or what does it mean to control hallucinations?


Israel Krush: Yeah, absolutely. So in healthcare, we talked about the cost of a mistake. And I think another thing that we didn't talk about is the fact that this is a highly regulated industry, which is part of why the cost of a mistake is so big. So now, I'd say the new buzzword, especially in regulated industries that want to deploy AI, is responsible AI.


Israel Krush: Or, in our area, responsible conversational AI. And you know, I am allergic to buzzwords, so I like to ask: what does it mean for me? When I think about responsible conversational AI, I think about three main pillars, which are explainability, control, and compliance. So, explainability: why did the AI reply with the answer that it replied?


Israel Krush: And again, without getting too geeky, large language models are large machine learning models, which are large black boxes: inputs, outputs, and you don't really understand what's happening inside. So how can we make it more explainable? Why did I recommend this physician? Why did I give you this information about your headache? And so on and so forth.


Israel Krush: That's partially how we deal with the hallucination issues. It's not like you can eliminate them entirely, but you can definitely offer citations and paths showing how the answer was deduced. Control, to your question, is how I balance the generation side of gen AI, generative AI.


Israel Krush: So how do I balance between letting the AI generate an answer and saying: you know what, this is a sensitive subject, I don't want you, dear AI, to generate an answer, I want you to give the same exact response each and every time? The simplest example here is when you are in an emergency. When we identify that you need to get to the ER or call 911, we don't want the AI to offer any type of treatment or diagnosis beyond telling you: it looks like an emergency situation, call 911 or get to the ER.
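
As a rough illustration of that control pillar, a guardrail can route sensitive intents to a fixed, pre-approved response instead of letting the model generate one. This is only a sketch under assumed names; classify_intent and llm_generate are hypothetical stand-ins, not Hyro's actual API:

```python
# Sketch of a "control" guardrail: sensitive intents get a fixed,
# pre-approved response; everything else may fall through to gen AI.
EMERGENCY_RESPONSE = (
    "It looks like this may be an emergency. "
    "Please call 911 or go to the nearest ER."
)

def answer(user_message, classify_intent, llm_generate):
    """classify_intent and llm_generate are hypothetical stand-ins
    for an intent classifier and an LLM call."""
    intent = classify_intent(user_message)
    if intent == "emergency":
        # Deterministic: the same exact response, every time.
        return EMERGENCY_RESPONSE
    # Non-sensitive intents can use the generative model.
    return llm_generate(user_message)
```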


John Driscoll: So Israel, is this the right way to think about it? OpenAI, the organization behind ChatGPT, which is how David and I, and everybody else using these models to answer questions or prepare work, are navigating the world, has been somewhat allergic to actually limiting what is effectively a model that is learning even on the questions that you're asking. Anthropic, a different competitor, has worked to organize and try to make more explainable the logic of how the models are driving answers.


John Driscoll: Those are just two different approaches to a similar problem. ChatGPT is more open-ended and has obviously got more users. I think what I hear you saying is that in healthcare, rather than letting the models try to answer every question in every place, you're actually putting in stops, controls, and effectively guardrails: no, you can't answer certain of these questions, because the range of answers is too risky to risk the wrong answer.


John Driscoll: Is it, is, is that the right way to think about it? 


Israel Krush: Absolutely. We actually talk about guardrails, or safeguards. You know, today there are a lot of AI companies, and some of them basically just call OpenAI or Anthropic or any other large language model. In healthcare, you just cannot do that.


Israel Krush: So the question is, what are your guardrails? For example, we use a knowledge graph. Again, without getting too geeky, what we do is tap into the physician directory, scrape all of the physicians' information, and restructure it in knowledge graph form. Then, when I have questions about a physician, I know that the large language model isn't going to go to the World Wide Web and search for the data; it's going to search through the knowledge graph.


Israel Krush: So the knowledge graph works in tandem with the large language model, and that's how you create...


John Driscoll: Just to be clear, a knowledge graph would be a form of database, only a little bit more elaborate, with a few more pathways. But it's a data structure that you've vetted and manage, so that it's therefore safer.


John Driscoll: Is that the right way to think about it? 


Israel Krush: Absolutely. And we reorganize it in a way that you can think about. So let's take a find-a-physician use case. The main entity in this knowledge graph would be the physician, and some of the attributes of this physician would be their specialty, the insurance plans they accept, and the locations where they see patients, and you can actually visualize all of this information.


Israel Krush: So when I say I'm looking for a cardiologist who speaks Spanish, accepts Aetna, and is on the Upper East Side, I'll see John as a physician who is a cardiologist, accepts Aetna, is on the Upper East Side, and matches whatever else I said. That's how you guard the data, so you won't make up physicians just to satisfy an answer.
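
As a rough sketch of that grounding idea, a find-a-physician query answers only from the vetted directory. The entities, fields, and physicians below are invented for illustration, not Hyro's actual schema:

```python
# Toy "knowledge graph": physician entities with vetted attributes.
# A real system would use a graph database; a list of dicts shows the idea.
PHYSICIANS = [
    {"name": "Dr. John Driscoll", "specialty": "cardiology",
     "languages": ["English", "Spanish"], "insurance": ["Aetna"],
     "location": "Upper East Side"},
    {"name": "Dr. Jane Roe", "specialty": "dermatology",
     "languages": ["English"], "insurance": ["Cigna"],
     "location": "Midtown"},
]

def find_physicians(specialty, language, insurance, location):
    """Answer only from the vetted directory, never from free generation,
    so the system cannot invent a physician to satisfy the query."""
    return [
        p for p in PHYSICIANS
        if p["specialty"] == specialty
        and language in p["languages"]
        and insurance in p["insurance"]
        and p["location"] == location
    ]

print(find_physicians("cardiology", "Spanish", "Aetna", "Upper East Side"))
```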


John Driscoll: There is a John Driscoll who I think is a pediatrician on the Upper East Side, but I never could have gotten into medical school, unlike David.


David Williams: Yeah, I'm sure I could have gotten in. I don't know what I would have done once I got in there, though. Luckily I have a brother who does that. Now, speaking of Johns, and not John Driscoll, there was another John that you spoke with recently, John Brownstein.


David Williams: I saw you had a webinar on responsible AI, and John has actually been a guest on our show. I don't know whether he's responsible or not, but he is a creative thinker.


John Driscoll: He's responsible. We love John. And please don't forget to re-listen to that podcast.


David Williams: That was a good one, John.


David Williams: He always smiles through whatever we say to him. Kind of like you, Israel, in a sense, whatever we throw at him. What was that webinar like? Any takeaways on the responsible AI side from John or others?


Israel Krush: Yeah, I think that John and Boston Children's are very thoughtful and very advanced with everything that they do with large language models, including the collaboration with OpenAI.


Israel Krush: And I definitely don't think that you should take an example from John, because they're very advanced in terms of the resources that they can put in. I think the nice thing about the conversations with him, and obviously that's not our first conversation, is the depth with which they actually started experimenting with large language models for a variety of things.


Israel Krush: Now, I said don't take an example from him because, and this is really where we started, it still is a hassle to deploy and maintain good enough chatbots and voice assistants for various needs, especially patient-facing ones. So unless you have the time and the resources, both from a capital perspective and from a technical perspective, to actually get very much into the weeds,


Israel Krush: you probably want some sort of a partner to help you navigate that. And yeah, I think that Boston Children's is in a very unique place, able both to find the partners and to do a lot of experimentation by themselves.


David Williams: So we're fortunate enough to actually live down the street from Boston Children's and my kids have gone there.


David Williams: And we've done some work there. One of the things that struck me is that AI, but also other sorts of technology, should be able to enable anybody, wherever they are, even if they're not just down the street, to tap into that kind of expertise and to project it further. You see people there from all over the world.


David Williams: There are only a few who can come. It's very expensive, you have to wait, et cetera. But how can we tap into all of that knowledge? Not just what's randomly generated, but what they've actually generated, and bring it forth? It hasn't happened so much to date. That wasn't necessarily the conversation with John, but does AI enable that to a greater extent, or is that a different direction?


Israel Krush: Yeah, I think that as AI matures, it's going to be, you know, the most professional, skilled knowledge worker that we've had. That means it can query various large databases and return coherent answers, and really create efficiencies and more access to care, and access to knowledge generally speaking.


Israel Krush: So healthcare, which is a very complicated area for a lot of us patients to grasp, both from the clinical side and also in terms of how it works, the payers versus the providers versus the pharma companies, is still very problematic to navigate.


John Driscoll: But Israel, I do think that David's asking a slightly different question, and I've actually seen some evidence, David, that there's something there.


John Driscoll: I mean, there's an MD who's a chief data officer in a hospital, who's got a very rare form of cancer, and he's actually using the hallucinations, those sort of probabilistic jumps in the model, to help test whether new forms of and new combinations of cancer treatments might help accelerate his healing.


John Driscoll: Because he's a PhD in data science as well as an MD, he understands that he can, to Israel's point, control it. But I do think that in addition to learning faster, which I think was your question, we may also be able to turn some of what are now hallucinations into insights, potentially controlling them as Israel is suggesting.


John Driscoll: Maybe we'll try to get that doctor on our podcast. But I think it's a really subtle question. Israel, you've got to speak to this, but I think we're still figuring out the models, and there's a lot of runway.


Israel Krush: No, absolutely. I think that if we're looking at it from this perspective: when people thought about the first use cases AI was going to solve, I can tell you that the last use case people thought AI was going to solve is creativity.


Israel Krush: And all of a sudden, most of us, in our personal use, use ChatGPT for creative thinking, right? It helps us brainstorm, it helps us come up with questions, it helps us come up with workshops that we want to have with our leadership teams, and so on and so forth. So to your point, I definitely think that it's not necessarily hallucinations.


Israel Krush: It's like mutation in genetic algorithms. Mutation is part of how we as humans evolved, and there is a family of algorithms, called genetic algorithms, that uses this mechanism to actually come up with novel solutions. So where can it go? I don't know yet, and I don't want to guess, because it seems like our guesses in the past were very wrong, but it can definitely help us achieve,


Israel Krush: I'd say, breakthroughs that we weren't even thinking about.
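
For readers who want the analogy made concrete: in a genetic algorithm, mutation injects random changes into candidate solutions so the search can reach options it would never otherwise try. A minimal sketch, with an arbitrary toy fitness function:

```python
import random

# Toy genetic algorithm: selection plus mutation. The random changes
# (mostly harmful, occasionally useful) are the "productive randomness"
# being compared to model hallucinations above.

def mutate(candidate, rate=0.1):
    """Randomly perturb some genes; most mutations hurt, a few help."""
    return [
        gene + random.gauss(0, 1) if random.random() < rate else gene
        for gene in candidate
    ]

def fitness(candidate):
    # Arbitrary toy objective: prefer genes near 3.0.
    return -sum((g - 3.0) ** 2 for g in candidate)

population = [[random.uniform(0, 6) for _ in range(5)] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                              # selection
    population = survivors + [mutate(s) for s in survivors]  # mutation

print(max(population, key=fitness))  # genes drift toward 3.0
```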


David Williams: So, last question. We've been fairly positive here about AI as a force for good. On the other hand, we've seen that some nurses in particular have been raising the alarm over the use and introduction of AI. We actually did a popular show on it.


David Williams: I think it's one of our highest-rated episodes about that. What is your take Israel on why at least some nurses may be kind of raising the alarm and even taking labor action when they see AI being introduced? 


Israel Krush: Yeah, so it's not only nurses. You heard about the nurses, but we sell voice AI assistants, right?


Israel Krush: What do you think the call center manager thinks about that, or the call center agents? And I think that, generally speaking, every time a big technology wave, like a platform shift, is happening, there's a question of whether it's going to eliminate a lot of jobs and what it's going to do to those populations.


Israel Krush: And I think what we've discovered in the past is that while it eliminates some jobs, it creates a lot of new ones. So I could end on an optimistic note, but because you requested a pessimistic one, I will also share that the change of pace we're experiencing with AI is unlike anything we've seen in the past.


Israel Krush: So the concerning factor is: are we as humans going to be fast enough to adapt to the change that AI brings, given the technological breakthroughs that are happening really every week? In the past we had enough time. With the internet, cloud, and mobile as platform shifts, we were able to adapt because it took several decades.


Israel Krush: Now it's going to take less than half a decade to see a lot of the implications of AI. So that's the question: will the human race be fast enough to adapt to this new pace of technological shift in the era of AI?


David Williams: Excellent question to end on. And I was going to ask another one about whether podcasts can be replaced with AI, but we'll save that for another episode.


David Williams: For podcasters, exactly. We can't do that. Well, that's it for yet another episode of CareTalk. We've been talking today with Israel Krush, co-founder of Hyro, revolutionizing conversational AI. I'm David Williams, president of Health Business Group.


John Driscoll: And I'm John Driscoll, senior advisor at Walgreens.


John Driscoll: If you like what you heard or you didn't, we'd love you to subscribe on your favorite service. And thank you Israel for joining us.




ABOUT CARETALK  


CareTalk is the only healthcare podcast that tells it like it is. Join hosts John Driscoll (Senior Advisor, Walgreens Health) and David Williams (President, Health Business Group) as they provide an incisive, no B.S. view of the US healthcare industry.


