The Prompt: Meta’s Open Source LLM Llama Has Been Downloaded Over One Billion Times
March 18, 2025
Welcome back to The Prompt.
Meta announced today that Llama, its open source large language model, has seen over one billion downloads since its release in 2023. The company used the milestone to highlight some of the business applications of its model, including personalizing recommendations for Spotify and facilitating M&A transactions. Meta CEO Mark Zuckerberg celebrated the achievement by posting a GIF of a jumping llama.
Now let’s get into the headlines.
BIG PLAYS
Google DeepMind announced the launch of two new AI models for robots last week. The first is Gemini Robotics, a “vision-language-action” model built on Gemini 2.0. The second is Gemini Robotics-ER, “a Gemini model with advanced spatial understanding, enabling roboticists to run their own programs,” the company said. DeepMind said that it is forming a partnership with humanoid robotics company Apptronik to use the model in a new line of robots.
CHIP WARS
Intel’s new CEO Lip-Bu Tan plans to make big changes to how the chip manufacturer does business, Reuters reports. Those include cutting middle management staff in a bid to speed up operations and an aggressive effort to woo new customers to its foundry business, which produces custom chips for the likes of Amazon and Microsoft. Tan also reportedly plans for Intel to design and produce new chips to power AI servers.
FUTURE OF WORK
As people adopt more AI tools in their work, they may find the software behaving in unpredictable ways. Case in point: Wired reports that a developer using Cursor AI to produce code found himself stymied when the AI assistant reprimanded him and refused to generate any more, telling him he should code the project himself so that he would be better able to maintain the program. This isn’t the first time an AI assistant has refused to carry out a task: last year, OpenAI released an update to GPT-4 Turbo to fix its “laziness” problem of returning overly simple results or refusing to answer prompts. Maybe we’ll have to say “please” to our AI assistants more often going forward?
DATA DILEMMAS
OpenAI is planning a beta test of a new feature for ChatGPT Team subscribers that would connect the chatbot to their Google Drive and Slack accounts so it can answer questions informed by internal documents and discussions, TechCrunch reports. The company reportedly plans to expand the feature to more systems, such as Box and Microsoft SharePoint, in the future. The new connection feature is powered by a custom GPT-4o model.
AI DEAL OF THE WEEK
Insilico Medicine, which is using AI to develop new drugs, raised a $110 million Series E round led by Hong Kong-based Value Partners Group, valuing the company at over $1 billion. The company said it will use the capital to further the development of its 30 AI-discovered drug candidates, as well as to refine its models. Insilico currently has an AI-discovered drug for the lung disease pulmonary fibrosis in human trials.
DEEP DIVE
Rabbi Yitzi Hurwitz has spent a decade communicating with just his eyes. Diagnosed with amyotrophic lateral sclerosis (ALS), aka “Lou Gehrig’s disease,” in 2013, he rapidly lost muscle control and can now “speak” only by tediously spelling out words with an eye chart. It’s as frustrating and demoralizing as you might imagine.
One of the 30,000 Americans currently living with ALS (about 5,000 new cases are diagnosed each year), Hurwitz has had few options for relief, though new ones are slowly emerging. Among them is one developed by Cognixion, led by CEO Andreas Forsland: a brain-computer interface (BCI) that can help paralyzed patients interact with computers and communicate. And unlike similar technology from Elon Musk’s Neuralink, it doesn’t require surgical implantation in the skull. The company announced last week that it has launched its first clinical trial, which will study the technology with 10 ALS patients. Rabbi Hurwitz is one, and he’s already training on the device three days a week.
Hurwitz’s caregiver told Forbes they’re already seeing progress. “It looks very promising,” they said. “The first time he opened up the keyboard, he actually managed to say something on his own and that was surprising. I haven’t actually seen him be able to do that by himself for a while.”
Cognixion has raised $25 million from investors including Prime Movers Lab and the Amazon Alexa Fund to develop its BCI device, called Axon-R. It’s a helmet that both reads brain waves via EEG and tracks eye movements, letting users interact with an augmented-reality display. This enables a variety of interactions, including using the device to “type” words that are then spoken aloud via a computer speaker. The company uses generative AI models trained on each patient’s own speech patterns, so the device customizes itself to its user over time, which should make communication faster.
MODEL BEHAVIOR
One of the first things kids do when they get to kindergarten is learn how to tell time. That easily developed skill is something many multimodal AI models still struggle with, according to a new study from researchers at the University of Edinburgh. Their findings show that even state-of-the-art models got clock-hand positions right no more than about 25% of the time, and did even worse with clocks that were more stylized or used Roman numerals.
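For the technically curious, here’s a rough sketch of the kind of test the researchers describe: render a plain analog clock at a known time, then ask a vision-capable model what it reads. The clock renderer, model choice (gpt-4o), and prompt below are illustrative assumptions, not details from the study.

```python
# Minimal sketch: draw an analog clock at a known time, ask a vision model to read it.
# Requires: pip install pillow openai, and an OPENAI_API_KEY in the environment.
import base64
import io
import math

from PIL import Image, ImageDraw
from openai import OpenAI


def draw_clock(hour: int, minute: int, size: int = 256) -> Image.Image:
    """Render a plain analog clock face showing the given time."""
    img = Image.new("RGB", (size, size), "white")
    d = ImageDraw.Draw(img)
    c = size // 2
    d.ellipse([4, 4, size - 4, size - 4], outline="black", width=3)
    # Hour hand: 30 degrees per hour plus 0.5 degrees per minute, offset so 12 points up.
    hour_angle = math.radians((hour % 12) * 30 + minute * 0.5 - 90)
    d.line([c, c, c + 0.5 * c * math.cos(hour_angle),
            c + 0.5 * c * math.sin(hour_angle)], fill="black", width=5)
    # Minute hand: 6 degrees per minute.
    min_angle = math.radians(minute * 6 - 90)
    d.line([c, c, c + 0.8 * c * math.cos(min_angle),
            c + 0.8 * c * math.sin(min_angle)], fill="black", width=3)
    return img


def ask_model(img: Image.Image) -> str:
    """Send the clock image to a vision-capable model and return its answer."""
    buf = io.BytesIO()
    img.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; an assumption, not the study's setup
        messages=[{"role": "user", "content": [
            {"type": "text", "text": "What time does this clock show? Answer as HH:MM."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(ask_model(draw_clock(4, 35)))  # ground truth: 04:35
```

Scoring the model’s answer against the ground-truth time over many randomly sampled times, clock styles, and numeral sets would approximate the accuracy figures the study reports.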
MORE AT FORBES
Follow me on Twitter or LinkedIn. Check out my website. Send me a secure tip.