The AI Boom and the Future of Investing

May 1, 2026

In this episode of Motley Fool Money, Rule Breakers strategy senior vice president Brian Richards sits down with Morgan Housel, bestselling author of The Psychology of Money, Same As Ever, and The Art of Spending Money, for a conversation about how the AI boom is intersecting with human psychology and investing.

To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. When you’re ready to invest, check out this top 10 list of stocks to buy.

A full transcript is below.

This podcast was recorded on April 26, 2026.

Morgan Housel: What’s very unique about AI historically, though, is that it’s the first new technology that the people making it promise that if they’re successful, they could destroy society.

Mac Greer: That was Morgan Housel, bestselling author of The Psychology of Money. I’m Motley Fool producer Mac Greer. At our recent Motley Fool member event, senior vice president of Rule Breakers strategy Brian Richards talked with Morgan about AI, history, investor psychology, optimism, pessimism, and the future. It was a great conversation. Enjoy.

Brian Richards: Morgan, I’ve been looking forward to this conversation for a few weeks, and not just because you’re an old friend. The topic du jour, AI and innovation, I wanted to get your take on it as somebody who has studied behavior, psychology, and investors over time. I’d love to hear what’s the most useful thing you’ve learned yourself in the last year as a thinker.

Morgan Housel: One that’s maybe very unique to me: I don’t write nearly as much as I used to. I had a good 15-year run of writing every day; at fool.com I was writing two or three pieces per day. Many of which you edited. Thank you. About two years ago, I cut way, way back, and I haven’t really written anything significant in about two years. What was interesting for me to notice is how much of writing is not just an output, it’s an input. It’s a very clear way to crystallize your thinking and to understand what you’ve been learning. As soon as I stopped writing, I felt like, even though I was reading more with my newfound spare time, I was learning less because I wasn’t spending a lot of time actually trying to crystallize the thoughts I’d had from learning. I think that’s true for everybody, no matter who you are, forget professional writers. If you’re just reading all day and learning, but you’re not going out of your way to really crystallize those thoughts, by writing down what you’ve learned or taking notes in the books that you’ve read, you lose a lot. I think I knew that five years ago, but it was interesting to see in the last two years how quickly my brain turned to mashed potatoes when I stopped writing. That’s one thing.

The other thing that’s, I think, been very prevalent at the society level in the last five years is the reinforcement of how addictive pessimism is. That’s always been true. John Stuart Mill was writing about it 150 years ago. This is not a new thing. But 25 or 30 years ago, cable news figured out that you can gain attention with pessimism. They all figured it out. It’s just that in the last five years, the social media algorithms figured it out, too. You see this at the economy level: consumer confidence is the lowest it has ever been right now. Lower than it was during the 2008 crisis, lower than it was in the darkest days of COVID. Consumers have never felt worse about the economy, in the whole history that we’ve been tracking this stuff, than right now.

Of course, there’s a lot going on right now, but it’s not even political. It spans different presidencies. It’s been going on for a while. I think at least an element of it is that people, particularly young people, are more exposed to pessimism than they’ve ever been. There are some interesting studies tracking New York Times headlines over the decades. Even as the world has gotten objectively better in terms of life expectancy and average income and whatnot, the headlines have gotten progressively more negative over time. That’s been going on for decades. In a media world where you’re just trying to get attention, you need everyone’s attention. It’s very different from the Walter Cronkite days, when you had a monopoly on people’s attention. Now there’s an arms race for attention, and nothing wins it faster than pessimism. So you can live in a world in which things are objectively, analytically getting better, and people feel worse and worse about it. I think that’s always been true. But in the last five years, it went steep.

Brian Richards: If you have your bingo card, that is John Stuart Mill, Karl Marx, Joseph Schumpeter, and Friedrich Hayek all in one morning. Wait till we get to the afternoon. Morgan, your second book, the title is Same as Ever. It’s a great book, and it’s basically an argument that the most important things about human behavior never really change, despite all of the technological progress that we make. I want to start there. Does AI feel to you like a new variable, or is it just a stage for the same old human drama? I don’t know if we can, but put the poll back up there because I think that was a pretty fascinating result.

Morgan Housel: So I think there’s plenty that rhymes with things that have happened in the past. Every 20 or 30 years, there’s a new technology that at least promises to fundamentally change everything. Usually it does. The Industrial Revolution, radio in the 1920s, nuclear energy in the 1950s, the Internet in the 1990s: a new technology that says, This is going to rewrite everything that we know, and your jobs, your careers are not going to be the same in a short period of time, five or 10 years. That’s been going on forever. One thing that’s interesting about those trends, with the glory of hindsight, is that even the people who invented those technologies and were the most ambitious and had the most foresight could not have fathomed what their products turned into. Henry Ford could never have imagined that he was going to basically create the American suburb with the car. He understood cars and motors and whatnot. He couldn’t fathom that this meant people were going to live 40 miles from where they work and commute in. The Wright brothers could never have imagined Delta Airlines.

Even the people who have the greatest vision can’t see where these things are going. If Steve Jobs were alive, I don’t think he could have possibly foreseen what social media was going to do to society on the phones that he built. If that’s the trend, then even the people who have the most wild AI visions today, the ones creating the technologies themselves, probably can’t comprehend where it’s going to go in 10 or 20 years. The people who make Adobe Photoshop, just software for manipulating images, create tools within Photoshop that they have no idea what people are going to do with. They just understand that if you create every imaginable tool to manipulate an image, somebody will find a use for it, even if they don’t know what that use is going to be. I think there’s a lot of that with technology, particularly with something like AI, where the people making these tools can’t fathom what other people are going to do with them. They know what they would do with it, but what is somebody else going to do with that technology? That’s where these things go in directions no one can predict.

What’s very unique about AI historically, though, is that it’s the first new technology that the people making it promise that if they’re successful, they could destroy society. That’s a very unique thing: hey, if we achieve what we’re trying to achieve, we could wipe out 50% of white-collar jobs and hack every government database. They’re explicitly warning about this on a daily basis. Most of the time when you have a new technology, the people making it want to advertise the good it’s going to do, versus constantly warning about how dangerous it is. It’s a technology with a lot of existential risk, almost like the nuclear era, and it feeds into the pessimism you’re talking about.

One thing about the nuclear era, too: if you go back to the 1950s, the peak of nuclear optimism, the vision back then, all over the world, was that every town, big and small, would have its own small fleet of nuclear reactors, and that the fossil fuel era, at least for power plants, was over. Nuclear was going to take over everything. That was the vision back then. It obviously didn’t come to fruition, at least not as the optimists saw it, because it’s dangerous. As soon as it started growing, governments all over the world said, You have this amazing, powerful technology, but it’s dangerous. We are, at a minimum, going to regulate it into the ground, if not outright ban it, as Germany and Austria have done.

Is that an analogy for AI? If the optimists are right and it actually is a tool that puts half of white-collar workers out of work, what government is going to say, Good for you. Congratulations, guys. Like, thanks for destroying. They’re not going to let that happen, the same way they didn’t with nuclear energy. I don’t know if it’s a perfect analogy, but the more disruptive a technology is going to be, there’s a paradox: the higher the odds it’s just going to be regulated away. But what’s different about AI, too, is how dispersed it can get globally. Even if U.S. regulators regulate everything, one model in China could just spread all over the world, and everyone can use it. It’s hard to put it back in the box relative to other technologies.

Brian Richards: In the morning session, Bill did a great job of talking about AI’s impact on the market. As a person who’s sold 12 million books talking about investor psychology, I’d love to hear your take as to what you believe AI’s impact on the investor will look like.

Morgan Housel: I think largely it’s a continuation of what’s happened over the last 30 years. If you go back more than 30 years, the edge you could find in investing, if you wanted one, was informational. You have stories of Warren Buffett in the 1960s going into the library in Omaha and reading every page of the Moody’s manual so that he could find cheap stocks. That doesn’t work anymore. Everyone has the same information. It’s all on your phone. A kid in Africa has the same information that the people working at Goldman Sachs do. Informational edges almost don’t exist like they used to. Over the last 30 years, what has become more important in terms of having an edge is behavioral. It’s very hard to have an information edge. But if you can remain calm when others are panicking, that’s your edge.

To the extent that AI is another layer on top of that: the people building investing models on Wall Street, discounted cash flow models, even ten years ago, that was a unique skill they had. Now any AI can whip out those models in three seconds, and anyone can do it for free. That edge doesn’t exist anymore. What could backfire with AI and investors is that, as everyone knows if you’ve used ChatGPT or Anthropic’s Claude or whatever, they’re all sycophants. They just tell you whatever you want to hear. Very much like social media, they’re very good at keeping you engaged. Social media knows exactly how to keep you scrolling. I think with the LLMs now, they want to keep you on the page. They want to make you happy. They want to tell you that you’re doing great. If you were to upload your portfolio to ChatGPT and ask, What do you think of this, it’s going to say you’re the most brilliant investor ever. You’re doing great. If it were to say, Hey, you’re an idiot. These are the worst of the worst companies, you would stop using it. The companies know that. They want to keep you engaged. Maybe, just like with politics and news in the last 20 years, everyone found their own bubble. Whatever you want to believe, there’s someone out there who’s going to tell you that you’re right. If LLMs are that for investing, that’s probably a risk.

Brian Richards: The For You page of the LLM world, where you’re just served the things that you want to be served?

Morgan Housel: Well, the other thing with LLMs, too, is that if you take a field that you truly are an expert in and you start querying ChatGPT about some of the basics of that field, you’ll see how much of it it’s just making up. You don’t know that when you’re not an expert. You read it, and you’re like, Oh, this is all the right information. But whatever your profession is, ask it about the basics, and you’re like, It’s making a third of this up. If people don’t know that, it just pushes them further into the bubble they want to be in, where it tells them whatever they want to hear.

Brian Richards: I want to stay on this topic. You’ve written before that bubbles aren’t really about valuation or they’re not exclusively about valuation. They’re about narrative, Zeitgeist, and identity. People don’t just own a stock. They become it. By that definition, do you think we’re currently in an AI bubble?

Morgan Housel: Two things about bubbles. One, there’s no definition of what a bubble is, so people can just subjectively say it is or isn’t a bubble. But I think what’s interesting is that AI is so expensive to build, it’ll cost trillions and trillions of dollars to build out these data centers, that the companies raising money, whether it’s OpenAI or Anthropic or xAI, any of them, have to be hyperbolic when they’re describing it. They have to. If they just went out and said, We’re creating a technology that’s going to be a marginal improvement for a couple of white-collar workers, you can’t raise $2 trillion on that. They have to say, this is the technology that ends all technology. There’s no other way they can do it.

What’s also interesting about how expensive it is: the chips, the fundamental inputs in these data centers right now, have a 12- to 24-month shelf life before they’re obsolete. Not only does it cost trillions of dollars, you’ve got to redo it every couple of years, which means they have to be hyperbolic squared when they’re talking about what they’re going to do. It’s not dissimilar to when you were buying a new laptop in 1995 and it was obsolete by 1996. That’s very much what they’re going through right now. We don’t know if it’s a bubble, but we know that they have to talk as if there’s nothing that comes after AI. This is it.

Brian Richards: The chip obsolescence is a bit of an argument in favor of Nvidia. But I want to stay on this and piggyback off that. A critique of behavioral finance, or the behavioral-finance worldview, is that it’s conservative. You’re always, not you, but that worldview is always preaching about the dangers of recency bias or overconfidence or narrative seduction. The issue with that is that some of the greatest wealth creation over time accrues to people who are wildly optimistic, almost hype men and women in the case of the leaders of some of the AI companies who are out raising money. So I want to ask you about the line between bias and vision, or optimism. How do you strike the balance between those two things?

Morgan Housel: I think part of this is being very careful who you look up to, because a lot of the people who are extremely successful, outsized, huge multi-billionaires, are successful because they don’t think about the world the way you and I do. Some of that is very positive. They create amazing products and a lot of wealth for their investors. But inevitably, with every single one of them, there are going to be parts of the world where they think differently in bad ways. When people have negative views about Elon Musk for his political statements or whatever it might be, well, this guy’s been trying to colonize Mars since he was 25 years old. He doesn’t think like you and I do. Of course he has very strange views about what we should do politically.

But going on down the list, whether it’s Zuckerberg or Bill Gates or Jeff Bezos, all of them, the reason they’re so successful is that their brains don’t work like ours. A lot of them, I think, have harnessed their demons for productivity. There’s a saying from Paul Graham, the investor: half of the traits of the eminent are actually disadvantages, and they succeeded in spite of those things. It gets dangerous when people try to mimic those traits: Oh, Steve Jobs was successful, and he was a jerk to his employees. Maybe I should try that, too. No, he succeeded in spite of being a jerk to his employees. I think there’s a lot of that.

But the thin line between bold and reckless is always very difficult to see, even in hindsight. One example is Cornelius Vanderbilt, who was the richest man in the world in his day. By any account, even the most optimistic, most charitable account of what he did, a huge portion of his wealth came from breaking laws, just completely flouting them. He admitted that; he had no qualms about it whatsoever. It was an era in which he could get away with it. He could pay off judges. He did pay off judges. We remember him today, by and large, as a wealthy entrepreneur, maybe a maverick. It’s so easy to imagine an alternative history in which it eventually caught up with him, they threw him in prison, and we remembered him as the old-school Bernie Madoff.

That line between bold and reckless is very thin, and it’s hard to know the different ways it could have turned out. Take Sam Bankman-Fried of FTX, who’s in jail right now for the crimes he committed. He’s tweeting a lot from prison. He’s, wait, he’s tried to get a pardon. [OVERLAPPING] But that’s another scenario where he easily, easily could have gotten away with it. If he had kept it going for another two months, he probably could have gotten away with it. There’s an alternative history where you and I, right now, would be praising how much of a genius he was. With those outsized successes, there’s always a graveyard of people who made the same decisions as them and ended up with a very different outcome.

Brian Richards: In broad strokes, has the AI technology changed anything about how you personally think about your financial life, even something small?

Morgan Housel: I don’t know about my financial life, but as a writer, it would literally be talking my book to say, It’s not going to replace writers, we’re still going to be in demand. So let me say what’s changed for me as a reader; I consume a lot more content than I create. Obviously, there’s a lot of discussion about whether this will replace not just authors, but musicians, artists of all kinds. I’m not optimistic on that at all. I think people really like art, and writing fits into that category, as a way to connect with a fellow human. I’ll give you an example of this.

One of the best business books of the last 20 years is Shoe Dog by Phil Knight. I’m sure half of you have read it. It’s a phenomenal book, an unbelievable book about how he created Nike. This was not hidden, and they didn’t try to hide it, but only after I read it and said, This is one of the best business books I’ve ever read, did I learn that it was ghostwritten. Interesting. The same ghostwriter wrote Prince Harry’s memoir and Andre Agassi’s autobiography. A very good ghostwriter. But after I learned it was ghostwritten, it took away some of the magic I had cherished that book for. Look, it’s the same story. It is his story, but there’s something very special about reading it and saying, I’m reading Phil Knight’s words right now. When you learn you weren’t, it kind of takes that away.

Two years ago, Google came out with a product called NotebookLM, an AI product that creates a custom podcast for you. You go in and say, Make me a podcast about the fall of the Roman Empire, about technology in the 19th century, whatever you want. Or you can even upload a PDF and say, Make me a podcast about this topic. It would spit out a perfect 10-minute podcast describing anything you want. When that came out, I thought, This is the end of podcasts for people. Everyone’s just going to listen to their own custom pod. Why would anyone want to listen to a human? That was what I thought two years ago. How many NotebookLM podcasts have I listened to since then? Zero. I don’t want to listen to a bot describing it, even if it’s perfect and accurate and fluent. I want to listen to the messiness of another human who’s actually experienced these things. I’m actually not that optimistic that AI is going to disrupt our writing, music, those kinds of things, as much as some other people are. But, of course, I have a stake in that game.

Brian Richards: Morgan, you have a gift for finding the question behind the question. I want to ask, what’s the thing about AI that you think almost nobody is asking and that you wish more people were?

Morgan Housel: Well, one, if it is as disruptive to labor and employment as people think, a lot of times people will say, Oh, there’s a solution for that: universal basic income. Look, we’re going to have 30% unemployment, but we’ll just send people five grand a month and say, you can go write poetry and toil in your garden, and you don’t have to work anymore. We’ll take care of you. I think there’s so much evidence that if you think work is hard, try boredom. It’s 100 times harder. If we just paid a third of society not to work, the amount of mental illness that would unleash on society would be off the charts. That’s pretty much the only solution people have: Oh, it’s going to put people out of work, but we’ll take the profits from AI and just pay them off, effectively. That would not work in a million years. You see this during deep recessions. After 2008, a significant number of people were unemployed for more than 12 months, and that destroys people. That’s not just unemployment. That leads to mental breakdown at that point. The idea that you could do that forever, for long periods of time, would never, ever work.

Mac Greer: As always, people on the program may have interest in the stocks they talk about, and The Motley Fool may have formal recommendations for or against, so don’t buy or sell stocks based solely on what you hear. All personal finance content follows Motley Fool editorial standards and is not approved by advertisers. Advertisements are sponsored content and provided for informational purposes only. To see our full advertising disclosure, please check out our show notes. For the Motley Fool Money team, I’m Mac Greer. Thanks for listening, and we will see you tomorrow.
