Meta needs to win over AI developers at its first LlamaCon

April 29, 2025

On Tuesday, Meta is hosting its first-ever LlamaCon AI developer conference at its Menlo Park headquarters, where the company will pitch developers on building applications with its open Llama AI models. Just a year ago, that wasn’t a hard sell.

However, in recent months, Meta has struggled to keep up with both “open” AI labs like DeepSeek and closed commercial competitors such as OpenAI in the rapidly evolving AI race. LlamaCon comes at a critical moment for Meta in its quest to build a sprawling Llama ecosystem.

Winning developers over may be as simple as shipping better open models. But that may be tougher to achieve than it sounds.

Meta’s launch of Llama 4 earlier this month underwhelmed developers, with a number of benchmark scores coming in below models like DeepSeek’s R1 and V3. It was a far cry from what Llama once was: a boundary-pushing model lineup.

When Meta launched its Llama 3.1 405B model last summer, CEO Mark Zuckerberg touted it as a big win. In a blog post, Meta called Llama 3.1 405B the “most capable openly available foundation model,” with performance rivaling OpenAI’s best model at the time, GPT-4o.

It was an impressive model, to be sure — and so were the other models in Meta’s Llama 3 family. Jeremy Nixon, who has hosted hackathons at San Francisco’s AGI House for the last several years, called the Llama 3 launches “historic moments.”

Llama 3 arguably made Meta a darling among AI developers, delivering cutting-edge performance with the freedom to host the models wherever they chose. Today, Meta’s Llama 3.3 model is downloaded more often than Llama 4, said Hugging Face’s head of product and growth, Jeff Boudier, in an interview.

Contrast that with the reception to Meta’s Llama 4 family, and the difference is stark. But Llama 4 was controversial from the start.

Meta optimized a version of one of its Llama 4 models, Llama 4 Maverick, for “conversationality,” which helped it nab a top spot on the crowdsourced benchmark LM Arena. Meta never released this model, however — the version of Maverick that rolled out broadly ended up performing much worse on LM Arena.

The group behind LM Arena said that Meta should have been “clearer” about the discrepancy. Ion Stoica, an LM Arena co-founder and UC Berkeley professor who has also co-founded companies including Anyscale and Databricks, told TechCrunch that the incident harmed the developer community’s trust in Meta.

“[Meta] should have been more explicit that the Maverick model that was on [LM Arena] was different from the model that was released,” Stoica told TechCrunch in an interview. “When this happens, it’s a little bit of a loss of trust with the community. Of course, they can recover that by releasing better models.”

A glaring omission from the Llama 4 family was an AI reasoning model. Reasoning models work through questions step by step before answering, which tends to make them perform better on certain benchmarks, particularly in math and coding. Over the last year, much of the AI industry has released reasoning models of its own.

Meta has teased a Llama 4 reasoning model, but the company hasn’t indicated when to expect it.

Nathan Lambert, a researcher at Ai2, said the fact that Meta didn’t release a reasoning model with Llama 4 suggests the company may have rushed the launch.

“Everyone’s releasing a reasoning model, and it makes their models look so good,” Lambert said. “Why couldn’t [Meta] wait to do that? I don’t have the answer to that question. It seems like normal company weirdness.”

Lambert noted that rival open models are closer to the frontier than ever before, and that they now come in more shapes and sizes, greatly increasing the pressure on Meta. For example, on Monday, Alibaba released Qwen 3, a collection of models that allegedly outperform some of OpenAI’s and Google’s best coding models on Codeforces, a programming benchmark.

To regain the open model lead, Meta simply needs to deliver superior models, according to Ravid Shwartz-Ziv, an AI researcher at NYU’s Center for Data Science. That may involve taking more risks, like employing new techniques, he told TechCrunch.

Whether Meta is in a position to take big risks right now is unclear. Current and former employees previously told Fortune that Meta’s AI research lab is “dying a slow death.” The company’s VP of AI Research, Joelle Pineau, announced this month that she was leaving.

LlamaCon is Meta’s chance to show what it’s been cooking up to compete with upcoming releases from AI labs like OpenAI, Google, xAI, and others. If it fails to deliver, the company could fall even further behind in the ultra-competitive space.