
Is the AI Bubble Ready to Burst? w/ Ed Zitron


As AI becomes more integrated into our lives, many believe it will revolutionize nearly every industry, healthcare among them.


Ed Zitron is not one of those people.


In this episode of CareTalk, David Williams speaks with Ed Zitron, CEO of EZPR, about why he believes AI is a bubble ready to burst and what its ramifications are for everything from patient care to climate change.



This episode is brought to you by BetterHelp. Give online therapy a try at https://betterhelp.com/caretalk and get on your way to being your best self.


As a BetterHelp affiliate, we may receive compensation from BetterHelp if you purchase products or services through the links provided.


Episode Transcript:


David Williams: AI seems like the future of everything. We're told it will transform healthcare by revolutionizing diagnosis and treatment, discovering new cures, and mastering administrative paperwork. But what if it's all a mirage that will end in disappointment, or even disaster?


Welcome to CareTalk, America's home for incisive debate about healthcare business and policy. I'm David Williams, president of Health Business Group. Today's guest, Ed Zitron, is CEO of EZPR, author of Where's Your Ed At?, and host of iHeartMedia's Better Offline podcast. We'll get to Ed in a second. Masks are visible to everyone in October, but many of us have an invisible mask we wear all year long, at work, in social interactions, basically everywhere.


But therapy can help you rediscover your true self so you no longer feel the need to hide behind a mask of any kind. Sure, masks are fun for Halloween, but we really shouldn't have to keep our emotions buried. That's where BetterHelp steps in.


BetterHelp provides online therapy tailored to your life, convenient, flexible, and made to fit your individual needs. With just a few questions, you'll be matched with a licensed therapist who aligns with your preferences. And if you ever feel the need for a change, switching therapists is easy and comes at no extra cost.


So whether you're managing stress, dealing with anxiety, or simply seeking personal growth, BetterHelp connects you with a professional who can guide you on your path to self-discovery and healing. Let BetterHelp help you take off that mask. Visit BetterHelp.com/CareTalk to get 10 percent off your first month.


That's BetterHelp, H-E-L-P, dot com slash CareTalk. Ed Zitron, welcome to CareTalk. 


Ed Zitron: Thanks for having me. 


David Williams: You know, there's tremendous excitement about AI these days, but somehow you seem somewhat less enthusiastic. And I wonder, why is that? 


Ed Zitron: So if you look at what people like Sam Altman and their ilk are promising, they're talking about this autonomous general intelligence, artificial general intelligence, whatever you call it.


This idea that, even to your point, oh, AI will be able to, like, solve diseases and, to quote Sam Altman, solve physics, which is an insane thing unto itself. You have all of these promises. And then the actual reality is pretty much large language models, which have some utility, but burn way more money than they'll ever make.


OpenAI spends $2.35 to make a dollar. And so there are useful things that generative AI does, but they're vastly overwhelmed by both the hype and, well, the massive cost in many cases. 


David Williams: So it's maybe not shocking in and of itself that a technology in its early days is, you know, spending more money than is coming in.


And some would argue that's just, you know, that's a sign of all the investment that's being made and it's going to pan out, but you don't think so. 


Ed Zitron: I understand why people make this comparison, but the last two things people said that about, the metaverse and crypto, they've been completely wrong.


And my favorite reference here is a guy called Jim Covello from Goldman Sachs. He made the point that when people say it's like the early days of the internet, well, when the internet started, you needed these massive $64,000 Sun Microsystems servers to run it, and then the cost came down. The problem is that's not really how it went down.


Yes, you needed the servers, but you didn't need anywhere near as many of them. And also, the cost benefits were obvious immediately. E-commerce immediately reduced the cost of stores. It was just an immediate, obvious thing. And smartphones, for example, people say, well, in the early days of smartphones people didn't think they were a big deal.


Again, not really true with smartphones. Covello covered this in the report, it's called "Gen AI: Too Much Spend for Not Enough Return," I think, and he makes the point that even with smartphones, there was this obvious roadmap going back to the very early 2000s saying, when GPS unit sizes come down, when chip sizes come down, we will be able to create these devices.


He was describing smartphones. He describes thousands of presentations in which this was described. No such roadmap exists for AI. And when I say AI here, I really mean generative AI, because that's the other thing: AI has been around for 10, 20, 30 years, depending on how you look at it. It actually goes back to the seventies, depending on the terminology you use.


Generative AI, which is the latest boom, really is not going to do the kind of artificial intelligence that we're being sold today. If anything, I think the term AI for it is kind of a misnomer. 


David Williams: Why are people so excited about it? I mean, to me, it seems that what happened is you've had AI around for a while.


People have been talking about it, but when someone could go to ChatGPT and just say, you know, give me a good-looking email that I can send to my boss to say he's a jackass, but so I don't look like I'm being bad, then it kind of comes alive for them. Is that why people are so excited about it?


Ed Zitron: I think it's a few things. I think you're right that people are using it and going, wow, it can write an email for me. And then there's a bunch of investor hype around it as well, because right now, if you look around outside of AI, there really aren't any hyper-growth markets left. We're kind of done with smartphones.


We'll keep selling them, but smartphones are kind of plateauing. App stores and all that, we've kind of tapped out on that as well. Software as a service is already kind of reaching its limits. Sure, we'll find new things to sell, but there's not really a new business model waiting. And so you don't really have a new thing to point at.


And what's happened is people like Sam Altman of OpenAI have attached this dream of what AI could be to generative AI, large language models, which kind of resemble something that's sort of smart, but you really have to squint kind of hard. And they generate things probabilistically, meaning that if you ask one to write an email, it will say, based on the training data, the most likely series of words is this.


It doesn't know meaning, it doesn't know any of this, but nevertheless, this is a runaway marketing campaign. And it's also kind of exposing a big, big problem in the economy, bigger than AI, which is that the people running companies do not understand what their businesses do at all, they don't understand the underlying technologies, and they don't understand what their workers do all day.


So to them, wow, something can write emails for me. That's all I do all day. I go to meetings, I write emails. I don't code or do a real job. I just go places, and wow, I no longer have to write the three emails a day I write between the three lunches I take. 
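To make Ed's point about "generating probabilistically" concrete, here is a minimal, illustrative sketch: pick the next word by sampling from a probability table learned from training data, with no notion of meaning. The tiny vocabulary and the probabilities below are invented for illustration; real large language models do the same kind of thing over tens of thousands of tokens, with probabilities produced by a neural network.

```python
import random

# Toy next-word probability table (invented numbers): given the last two
# words, which words tend to follow and how often.
next_word_probs = {
    ("please", "find"): {"attached": 0.7, "below": 0.2, "enclosed": 0.1},
    ("find", "attached"): {"the": 0.6, "my": 0.3, "a": 0.1},
    ("attached", "the"): {"report": 0.5, "invoice": 0.3, "file": 0.2},
}

def next_word(context):
    """Sample the next word from the learned distribution; no 'understanding' involved."""
    probs = next_word_probs.get(tuple(context[-2:]), {})
    if not probs:
        return None
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Generate an "email-like" continuation one probable word at a time.
words = ["please", "find"]
for _ in range(3):
    w = next_word(words)
    if w is None:
        break
    words.append(w)

print(" ".join(words))  # e.g. "please find attached the report"
```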


David Williams: That sounds good. You know, I, I wonder, I love a lot of the things that you write.


One of the terms that you've been using for some time that I love is the rot economy. And I have to say, that sounds bad. What is that? It's not good. 


Ed Zitron: So the rot economy is the sense that most of modern capitalism has been engineered around growth, growth at all costs. It isn't about, say, whether it's a good business, whether it will stand the test of time.


It's about, can it show quarter-by-quarter, year-over-year growth without fail? Because the markets love it. That's how the markets value companies. And it goes all the way back to Jack Welch of GE, who created stack ranking, which is when you chop off the bottom 10 percent, and the idea that layoffs can be something that's done for profit reasons rather than in response to an existential threat to the company.


But what it means is that companies are, in many cases, building things not to solve a need, but to solve the need of the company to sell more stuff. So you look at Google, for example. Google deliberately, and I wrote an article called "The Man Who Killed Google Search" about this, Google deliberately made it worse.


It added more spam back into it. It obfuscated the way you see ads on the platform so that the customer had a worse experience, but Google got to show more ads, which meant Google got to make more money. So Google was very happy with that. You look at Spotify. Spotify is like living in a minefield, except the mines are music you don't want to hear, or buttons you don't want to press, UI, user interface, elements which feel confusing and kind of counterintuitive, like they're there to trick you, because they are.


Look at Facebook, look at Instagram. They are built to get in the way of you seeing your friends, seeing your family, seeing the things you want to see, and that's because they must show quarter-over-quarter growth.


And this hits everything. Once you see it, it's kind of hard to not see it anymore. It's like the arrow in the FedEx logo. 


David Williams: Yeah. One of the things I like to do about Google and Facebook, when people make these sorts of observations that you're making, you know, the user experience is crappy, there's all these ads and things in the way, is to remind the user that they're not really the customer, they're the product, they're the raw material that's kind of being fed into it.


Ed Zitron: And that's the thing. Yeah. You can do a decent business where you are the customer and the product. Facebook used to be a very profitable business where the person was the product. It was fine. It worked pretty well. Facebook, up until about, I'd say, 2012, 2015 maybe, wasn't that bad. It was a positive, positive-ish product. Kind of evil.


Yeah. But still, it didn't feel like it was actively fighting you every time you used it. And that's really the thing. I think something tripped in 2019, 2020. Growth was slowing across the board. The pandemic happened, and all these companies saw these crazy earnings in 2021, and they were like, oh wow, we need this forever.


And they got rot-poisoned. And now we're in this situation where you really can't put the toothpaste back in the tube. 


David Williams: So let's talk about healthcare. I'm almost hesitant to turn the topic to healthcare because healthcare is almost always a bad discussion. You know, it's usually a bad experience for the patient, but at least there's some purpose behind it.


Trying to keep people healthy and then get them to be a little bit better if they're not. So let's talk about AI and healthcare. You wrote something recently about a partnership between Thrive Global and OpenAI. I know Arianna Huffington, who's excited about a lot of things, seems excited about this, and Sam Altman too.


So how do you look at it? 


Ed Zitron: Well, two con artists got together and made a new grift. I think it's nice to see they're still kicking. I'm not even going to discuss the details, because nothing exists. Arianna Huffington is a con artist. What does Thrive do? Thrive has been around forever, and the only thing it's done is spend money.


Or pay her. She sits on boards, she does nothing, she does nothing at all. Sam Altman, same deal. He's a professional con artist. He is good at raising money for things that may or may not exist. Putting that aside, what Thrive is talking about there is basically a chatbot, a chatbot that can look at your data and do stuff.


And they love doing these things. They love talking about them because they seem theoretically reasonable, right? You can just connect a chatbot to some data and then it will work, right? That will be a personalized experience. Amazing. The thing is, if you think about it, like 11 different companies have promised this by now.


You'll notice that none of them have launched it. And that's because of a few things. Chief among them, interoperability, the connection between two datasets. You know, within health, Epic is difficult, to say the least. Can you imagine any kind of health data interoperating with anything else, like a chatbot?


Hell no. With the amount of regulations that are around health data, it's probably not going to happen. But on top of that, there's something very craven about Thrive, and they're not the only people to do it. I don't like this push of trying to replace caregivers with chatbots. And the theory is, well, you get a 24/7 response. You get a 24/7 response from a chatbot, though. It's a chatbot, it doesn't do anything. And they're like, oh, well, doctors are busy. This isn't really going to fix that. All it's going to do is give patients potentially bad data.


David Williams: I definitely understand how, you know, there's no great solution to say you've got 24/7 access, but it's just to a chatbot, not a doctor. If I think about it from the doctor's standpoint, though, they say, you know, I'd love to spend more time with my patients. I can't really do it. Can AI give me that opportunity to be more efficient or, you know, more helpful to my patients?


Ed Zitron: Yeah, but how does putting something else in instead of the doctor fix that? That is the thing. Now, if the answer is, okay, there are things that the doctor doesn't need to do, or there are ways that, I'm even trying now and I can't do it. Because think about it as a patient: you go to the doctor after waiting half an hour past the time your appointment was meant to start.


So half an hour later, the doctor actually arrives. You then go and speak to a nurse, and then the nurse makes you wait another 10 minutes, and then the doctor arrives. And then you speak to the doctor for five minutes. This isn't an issue of doctors not having enough time with patients. This is an issue of administrative burdens never being removed.


It's about overbooking offices. It's about bad administration, which isn't necessarily on the doctor's side. I think that they want to do the AI thing to kind of fob off patients as a means of doctors having to speak to them less. Because right now, and I say this as someone whose father ran part of the NHS in England, I won't pretend like there aren't waits there, but I've never felt rushed like I do in a doctor's office here.


Nor do I, and that's the funny thing. I remember when I moved here in 2008, Americans would say, oh, well, you waited all the time at the doctor's office in England, right? I wait all the time here, every single time, every single time. But the solutions are, oh, well, let's find more ways to get doctors away from patients.


Shall we? So, those menial questions, the doctor doesn't answer menial questions. Most patients can't even talk to their doctor without an appointment. Most patients have to go through intermediary after intermediary. So we want to add another intermediary.


Great. How does that make patients' lives better? It doesn't. Ever. All it does is further distance medicine from the patient. And it's frustrating to me, because people like Arianna Huffington, people like Sam Altman, they're like, oh, well, it'll give you a healthcare assistant, it'll give you all these goals, blah, blah, blah.


No, it won't. Stop pretending. It isn't doing any of this stuff. It's not built to do this stuff. It is a large language model at this point. And this whole idea of the agentic approach, I will have a healthcare agent that will be able to, I don't know, intelligently interface with a patient so the doctor doesn't have to spend as much time.


Doctors are already not spending enough time with patients. That is the fact. I am not a medical expert. I am not a healthcare expert. But as a patient, and as a friend of multiple people with chronic illnesses, I know for a fact that doctors, throughout at least the West Coast, but I remember even when I lived on the East Coast, are already not spending enough time with patients.


The idea of fobbing them off to LLMs is disgusting to me. 


David Williams: So let's take it from the doctor's side. One of the great things about healthcare and problem-solving is you could solve 10 or 20 problems and there's still going to be a huge mess ahead of you. So instead of trying to solve the whole thing, let's say that physician, for whatever reason, is in a situation where they can only have 5 or 10 minutes with a patient, which is typical.


A lot of times as a patient, when I go in, and, you know, I'm not a physician, but I understand healthcare fairly well, I can't completely understand what the doctor told me. Either I can't remember it or I didn't totally understand it. And the physician doesn't always realize that, but they use a lot of terminology.


Is AI useful for them to, say, write a follow-up note, saying, take this as what I was going to say, and put it in language that the patient can understand? 


Ed Zitron: Why even go to med school at this point? You can't write a letter, cupcake? Is it too hard to write a letter? Oh no. I get it, the formula isn't necessary.


And I'm being somewhat glib. Like, there are examples of that, especially when it comes to healthcare plans, where there are buttons you need to pick, if anything. The actual use here, and you kind of see a much more expensive version of this with this private clinic called Forward in San Francisco.


What they have is, during the intake, someone listening to you talk, and they start filling in things, and it pops up on a screen; it's kind of cool. I actually do imagine there would be a use of generative AI within that, for a more intelligent intake, something that asks something customized to the patient, to say, okay, this patient is this many years old.


They have these problems. And they might say something offhandedly that the LLM catches and flags as something to check into, and that is useful. Something that gets the administrative layer out of the way, but also makes it so that the patient is fully heard.


Because the real problem with the five to 10 minutes is that doctors aren't thinking about the patients particularly much. And I'm not saying doctors don't care, I must be clear. They're overloaded. All caregivers are. But at the same time, the solutions being offered are very much, how do we get the doctor away from the patient, and how do we get the patient away from the doctor's office entirely, which isn't good for the doctors either.


No one likes that. It's just frustrating, because I am neither a medical expert nor an AI founder, yet I think within the last five minutes I've come up with a more useful use case than anything Arianna Huffington's pushing. And it does get to the grander point. These people aren't thinking of real problems.


They're thinking of things they can sell, things they can market and ways they can get headlines. And people are falling for it. Every time. They just print whatever they say. 


David Williams: What are some of the big ethical issues in AI? 


Ed Zitron: I mean, in what part specifically? Other than, I mean, the environmental damage, the fact that you are basically boiling lakes to make this stuff so people can generate a picture of Garfield with a gun.


The fact that the training data it's trained on is made up of copyrighted material. People's work is being stolen. The fact that the big dream of all of this is replacing workers, even though it's not really going to happen based on the tech we have today. The fact that they're so excited about it isn't fun to watch.


And quite frankly, when it comes to health data, as I've been saying repeatedly, it seems like the big move that a lot of these companies are making is to push you away, to keep the patient as far away as possible, versus thinking of ways in which you can enrich the patient's relationship with the doctor.


If you really only have 10 minutes, the reason I bring up the intake thing is because that's a way in which you can get more out of the patient. You can then have it digested and brought to the doctor, so the doctor can say, wait, you mentioned you had a pain in your right eyeball, that might mean this, this might be vascular.


Now, again, I'm not a doctor. And really, the other ethical issue is, with people making companies in health connected to generative AI, I really hope they don't train on customer data. The only companies I work with don't do that. I don't like anything related to customer data being touched. Putting aside the legal side, the amount you could reveal about a patient if it's fed into one of these models is really scary.


There was a thing, I think last year, with ChatGPT spitting out people's personal cell numbers. 


David Williams: Yeah. 


Ed Zitron: Because of the way in which it ingested training data. And on top of this, do we really need to burn this much money on nothing? Like, is this really the place we want to put our cash? The planet is burning.


We could be investing in climate tech. The problem is that climate tech isn't going to have a hundred-X multiplier for some VC that's already quite rich. 


David Williams: So you mentioned, you know, boiling lakes in order to power the servers for this. And I know AI uses a lot of electricity, and it seems that Google and Amazon perhaps are going to try to solve that problem by funding nuclear power plants. Should that take care of it?


Ed Zitron: I mean, I'm not necessarily anti-nuclear power, so I don't mind that. What I mind is the fact that there are coal plants being reopened because of this, I think in Virginia. The fact that we don't need this data center sprawl at all.


The fact that we are ruining, I mean, the climate, the emissions. I think Microsoft's are up over 40 percent and going to blow past that, and there was a study that suggested it might be in the hundreds of percent. It's just really frustrating because, on top of this being environmentally destructive, it's totally useless.


This isn't helping society. This isn't the future. Things are not better as a result of this. There's nothing to point at and say, well, I know it burns the environment, but at least this. And there is no "at least" here. It's just nihilistic, honestly. 


David Williams: Now, you're talking about AI maybe as a bubble, and bubbles tend to inflate and then pop.


What would that look like? Would there just be less attention and we go talk about something else, or is there real damage that would be done, or will we just use less electricity and go back to a more mellow standard? 


Ed Zitron: It's less about the AI part and more about what happens next. So right now, I kind of hinted at this earlier.


There are no lands left to conquer. There are no hyper-growth markets left. We had the cloud boom. We had the smartphone boom. They tried with wearables; it didn't really work. They tried with crypto a little bit, not that hard, but it didn't work. They tried with the metaverse a little bit; it didn't work. So they came up with AI.


Generative AI, and this is meant to make them a bunch of money, except it isn't. What will happen when the bubble pops is not just, and it will be horrible for their stocks, don't get me wrong, but the next step is when the markets realize these companies have no other growth vehicle: Microsoft, Google, Apple to an extent as well.


Oracle as well, for data. I mean, there are countless others. And of course Microsoft, if I didn't say them already. When the markets realize these companies can't grow forever, and it's kind of already happening, there's going to be an apocalypse with the tech stocks. They're going to see 30, 40 percent haircuts. On top of that, recovering those losses is going to be really difficult when you don't have a narrative to sell of where those gains are going to come from.


There are only so many people to sell stuff to. There are only so many businesses you can sell to. And at some point, something's got to pop, and it's bad. It will be bad for a lot of people. I think startups are in a good position just because startups are obviously not exposed to the public markets.


But again, investment has been primarily focused on AI when it comes to startups. It's a really rough, rough situation, which is why I think they're keeping this thing inflated, because deflating it requires them to admit that things are bad. And also, no one's making any money off of this. It's such a small industry for the amount of money they're spending.


I estimate, based on numbers I've seen, Microsoft's making two, three billion dollars off of AI. That's like a year. That's not that much. And I worry about it. I'll be here to narrate it, obviously, but I worry about it. 


David Williams: So we've been talking about kind of the, you know, the tech companies, their investors. What about regulators?


Is there a role for regulators with AI and are there any special considerations regulators should give on the healthcare side of AI? 


Ed Zitron: Oh, I think that there should be legislation that says you cannot train on customer data. Like that should be the number one first thing they do. It should be completely off the table and it should be criminal if you do.


I think that there needs to be a big fat red line right at the beginning. Just because if patient data starts getting into these models, it is bad for everybody involved. And there are companies using synthetic data, which works in small amounts. That's great, but patient data needs to not be there, but also, I don't know.


I don't know how you regulate out the idea of replacing doctors with LLMs, but you should. You can use them for intake. I don't think you should be able to use them for any actual medical advice. I don't think that should happen. I think it's immoral, and I think it shows a deep lack of respect for the patient.


David Williams: So we've been talking a lot about this from kind of the expert standpoint: investors, regulators, tech companies, and so on. How should the general public be thinking about AI? What you've been putting forward today is very different from what they'll be hearing, you know, on the radio or seeing in the newspaper or on television or whatever.


Should people be asking questions, pushing back against AI in their workplace, on the internet, when they go to the doctor, what should the general public be doing? 


Ed Zitron: I think the main thing to focus on is utility. Always ask why this is here. What does this do? And when someone gives you something vague, give them a specific question.


Does this touch my data? What does this do? Why is this here? If someone is telling you what it might do, ask them what it does. Never, ever accept what it might do, because you are safe to assume it will never do that. And I think that people in the workplace and in doctors' offices are at the mercy of the people running them.


But I think it is always fair to opt out, and it's always fair to ask as many questions as you can. And if they can't answer them, you should not touch the thing. Unless, I mean, you're at work, then there's nothing you can really do about it. But I think the big thing is just, don't listen to what it might do. Ask what it does.


Fair enough. 


David Williams: Well, that's it for another episode of CareTalk. I've been speaking today with Ed Zitron. He's CEO of EZPR, author of Where's Your Ed At? and host of iHeartMedia's Better Offline podcast. I'm David Williams, president of Health Business Group. If you like what you heard, or even if you don't, please subscribe on your favorite service.


And thank you, Ed. 


Ed Zitron: Thanks so much.



Watch the full episode on YouTube:








 

ABOUT CARETALK  


CareTalk is the only healthcare podcast that tells it like it is. Join hosts John Driscoll (Senior Advisor, Walgreens Health) and David Williams (President, Health Business Group) as they provide an incisive, no B.S. view of the US healthcare industry.


FOLLOW CARETALK


