AI could solve greatest human challenges, if we let it, Moderna cofounder says

January 14, 2025

There’s potential for ‘a whole new intelligence,’ Noubar Afeyan says

Noubar Afeyan speaks on a panel about health equity at the Clinton Global Initiative on Sept. 24, 2024, in New York City. Alex Kent/Getty

The recent breakthroughs in AI have reignited excitement over “artificial general intelligence,” or the capability of computer programs to someday reason and comprehend the world at the same level as humans.

ChatGPT and other apps created with machine learning seem to be approaching that level of intelligence. Sam Altman, chief executive of ChatGPT developer OpenAI, last week predicted his company is on track to create “AGI” and even “superintelligent” AI that could far surpass humans.

But Noubar Afeyan, the accomplished entrepreneur and founder of startup backer Flagship Pioneering, wants to reframe the challenge.

Human intelligence is only one form of intelligence, and focusing on it alone overlooks the natural world, where plants and viruses adapt to changing conditions and alter themselves to thrive in new environments. The new AI apps built on machine learning could already be considered intelligent in their own right, even if they do not work in the same manner as human thought, Afeyan argued in his annual letter. The missive, released on Monday, is widely read in the biotech community, where he has helped found dozens of companies, including Moderna.

Combining human efforts, nature, and AI in what Afeyan calls “polyintelligence” could solve some of the biggest challenges in the world, like climate change or cancer, he wrote.

“We know most about human intelligence, and for the past 70 years or so we’ve tried to replicate that in the form of computers, which the last few years have shown actually are a different form of intelligence,” Afeyan explained to me in a phone call from San Francisco, where he’s attending the J.P. Morgan Healthcare Conference.

Viewing nature and AI as less than human intelligence may underestimate their significance, he said. And that could lead to ignoring or even outlawing potential breakthroughs that AI could develop in biology and biochemistry, Afeyan’s fields of expertise going all the way back to his doctoral work at MIT in the 1980s.

“Are we as humans going to inhibit the greater understanding and therefore the greater output of new treatments, new cures, new prevention approaches that can come from the interface between machine intelligence and nature?” he asked. “Are we going to deny treatments because we don’t quite understand one or another aspect of how it came about?”

Some efforts to regulate AI have taken that approach, seeking to bar apps unless their decision-making process can be interpreted and understood by humans.

Afeyan isn’t opposed to all AI regulation, but he advocates “carefully balanced regulation” that weighs potential benefits against potential harms. And he thinks the private sector should carefully self-regulate AI projects even before governments act.

“Self-regulation, above all, should be the first step,” he said. “We should not as scientists, as engineers be waiting for some other moral entity to tell us what we should be anticipating.”

Flagship Pioneering, the company Afeyan created in 2000 in Cambridge to nurture more biotech startups, is already backing a host of AI-fueled innovations. One of the latest, at Lila Sciences in Cambridge, is developing AI that can generate its own ideas for scientific experiments, conduct the experiments, and interpret the results, all on its own.

“Humans have long developed tools, microscopes, mass spectrometers, you name it, to help them be able to understand nature better,” he said. “Now one of the tools, in the case of machine [learning], we’re elevating to the level of a whole new intelligence.”

The resulting breakthroughs could be beyond current human understanding, in the same way AI programs developed new strategies for winning at chess and Go that humans had never conceived. The programs played against themselves in billions of simulated games, without human input, to “learn” how to win.

“The machines were benefiting from what would have taken thousands of more years of human chess,” Afeyan said. “Similar things will happen in achieving scientific understanding. That’s the thing that should be, if properly harnessed and directed, generating solutions for climate, solutions for food security.”


Aaron Pressman can be reached at aaron.pressman@globe.com. Follow him @ampressman.
