Meta is the latest AI company to debut its own AI chip
March 11, 2025
49m
Another day, another AI company rolls out its own AI chip. Today, it’s news of Meta testing its first AI chip, a move that would reduce its reliance on Nvidia and lower its mammoth AI infrastructure costs. Depending on how tests go, the company could ramp up production for wide-scale use. The other day it was Alibaba, which launched its own open-source AI chip, a move that could help the company and its clients bypass US trade restrictions and provide an alternative to chips made by companies like Intel.
17h
Today was Tesla’s worst day since 2020
It’s been a bad day for many major companies but, hey, at least they’re not Tesla. The electric vehicle company saw its biggest daily decline — more than 15% as of market close — since 2020, the year that was plagued by a global pandemic and ensuing supply chain chaos.
Back in September 2020, Tesla saw its biggest decline ever, 21%, after Standard & Poor’s didn’t add the company to its index of the 500 biggest stocks. Notably, Tesla, which has since joined that index, is now the worst-performing stock on the S&P 500 for the year.
Tesla has been facing declining sales, lowered analyst estimates, growing competition, shrinking popularity, a rash of protests against the company and its CEO Elon Musk, and tariffs on Mexico and Canada, where many of its parts are manufactured.
20h
Quit the yapping: New AI technique could cut costs 90% by saying less
A consensus is emerging in AI circles that the way forward involves models that use “chain of thought” reasoning to get better performance, at the expense of costlier computing resources. This process involves instructing the model to break a problem down into detailed, step-by-step reasoning before answering. The problem is that these steps can be pretty verbose, and when it comes to AI, more words = more cost.
A new paper from researchers at Zoom introduces a technique dubbed “chain of draft”: if you tell a model to limit those reasoning steps to succinct “drafts” of only five words or so, rather than wordy sentences, you can still achieve high performance on responses while cutting computing costs by up to 90%.
AI models are priced by the number of “tokens” — or portions of words — that are input and output by the model. For example: OpenAI’s o3-mini “reasoning” model costs $1.10 per million tokens input, and $4.40 per million tokens of output. That may seem cheap, but when you’re processing millions of queries, this can really add up.
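The per-token pricing above makes the savings easy to quantify. A minimal sketch of the arithmetic, using the o3-mini rates quoted above; the per-query token counts are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope cost math for token-based pricing, at o3-mini's
# quoted rates: $1.10 per million input tokens, $4.40 per million output.
INPUT_RATE = 1.10 / 1_000_000   # dollars per input token
OUTPUT_RATE = 4.40 / 1_000_000  # dollars per output token

def query_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single model call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# One query with a verbose chain-of-thought answer (assumed token counts):
verbose = query_cost(input_tokens=500, output_tokens=2_000)

# The same workload across a million queries:
at_scale = verbose * 1_000_000

# If terse "drafts" shrink the output to 7.6% of the verbose token count:
terse = query_cost(input_tokens=500, output_tokens=int(2_000 * 0.076))

print(f"per query: ${verbose:.5f} -> ${terse:.5f}; 1M queries: ${at_scale:,.0f}")
```

Under these assumed token counts, a million verbose queries run to about $9,350, which is how "cheap" per-token rates really add up.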
“By reducing verbosity and focusing on critical insights, CoD matches or surpasses CoT in accuracy while using as little as only 7.6% of the tokens, significantly reducing cost and latency across various reasoning tasks,” the paper reports.
Translation: it’s faster, cheaper, and sometimes better than chain of thought.
This approach is also notable for its ease of use. You can simply change the prompts you enter to get this benefit. That said, most of the gains were found using larger models like OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet, while using smaller models resulted in poorer performance.
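Since the technique is purely a prompt change, it can be sketched in a few lines. The instruction wording below is an illustrative paraphrase of the idea, not the exact prompt from the Zoom paper:

```python
# A minimal sketch of "chain of draft" prompting: the only difference from
# standard chain of thought is the instruction text, which caps each
# reasoning step at roughly five words. Wording is a hypothetical paraphrase.

COT_INSTRUCTION = (
    "Think step by step, explaining each step in full sentences, "
    "then give the final answer after '####'."
)

COD_INSTRUCTION = (
    "Think step by step, but keep each step to a draft of at most "
    "five words, then give the final answer after '####'."
)

def build_prompt(question: str, style: str = "draft") -> str:
    """Prepend the chosen reasoning instruction to a user question."""
    instruction = COD_INSTRUCTION if style == "draft" else COT_INSTRUCTION
    return f"{instruction}\n\nQ: {question}\nA:"

print(build_prompt("A jug holds 3L and a mug 0.5L; how many mugs fill the jug?"))
```

The resulting string would be sent to any chat model as-is; no model, API, or infrastructure changes are needed, which is exactly why the approach is easy to adopt.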
Go deeper: Here are OpenAI’s 50 Laws of Robotics
21h
Tesla is the worst-performing stock in the S&P 500 this year
It’s been a bad day for stocks in general. It’s been a worse day for Tesla.
As of about 12:30 p.m. ET today, Tesla, which has lost all of its election gains, is officially the worst-performing stock in the S&P 500 this year. The EV stock is facing numerous headwinds: declining sales, lowered analyst estimates, growing competition, shrinking popularity, and new tariffs on Mexico and Canada, where many of its parts are manufactured, all of which are weighing on its stock price.
22h
Companies probably won’t switch to DeepSeek but they do think it should make AI cheaper
After DeepSeek’s sudden arrival on the AI scene in January, it upended a lot of preexisting assumptions about AI. Namely it subverted the idea that to get better models, companies would have to spend more.
To get an idea of what DeepSeek means for enterprise spending on AI — one of AI’s more promising revenue sources — Enterprise Technology Research surveyed more than 100 business leaders who are “very” or “extremely” familiar with their organization’s usage of large language models. Their companies either used paid subscriptions to tools like ChatGPT or Microsoft Copilot or otherwise integrate LLMs (outside ones or their own) into their businesses.
While more than half of respondents said they believed DeepSeek-R1 offers comparable performance to better-known models from OpenAI, Meta, Google, and Alibaba and that they had a strong interest in “evaluating” DeepSeek in the next six months, few said they trusted its data privacy measures. Partly as a result, most said DeepSeek wouldn’t influence their AI spending plans.
They did think, however, that the advent of DeepSeek should make their AI business expenses cheaper. Some 65% of respondents said DeepSeek will substantially reduce the costs of integrating LLMs into their applications and workflows.
For what it’s worth, those surveyed also seemed to subscribe to Jevons Paradox, with 68% saying that if AI tools were less expensive, their organizations would be investing “much more.”
23h
With Tesla sliding 9% today, its Trump bump is solidly gone
Tesla’s election stock price rocket ride is officially over. On Election Day, the stock closed at $251.44. As of 9:55 a.m. ET, it’s at $244.49, or down about 3% from then, thanks to a sharp drop in the stock price Monday morning.
It’s been a pretty epic fall: at one point the stock had risen more than 90% since Election Day. Tesla’s stock price briefly went below that election threshold on Friday but recovered.
This is happening as Tesla sales decline around the world, as do analysts’ delivery estimates, throwing cold water on the EV maker’s aim to “return to growth” in 2025.
Sherwood Media, LLC produces fresh and unique perspectives on topical financial news and is a fully owned subsidiary of Robinhood Markets, Inc., and any views expressed here do not necessarily reflect the views of any other Robinhood affiliate, including Robinhood Markets, Inc., Robinhood Financial LLC, Robinhood Securities, LLC, Robinhood Crypto, LLC, or Robinhood Money, LLC.
©2025 Sherwood Media, LLC