Meta Has Kicked Off Minor Deployment Of Its In-House AI Chip As Company Aims To Reduce Its Massive Infrastructure Cost; First Tape-Out Successful Using TSMC’s Technology
March 11, 2025
Meta's AI infrastructure costs alone are said to be rising to $65 billion, with total expenditure forecast to land between $114 billion and $119 billion. To curb this growing sum, the social media giant started developing its first in-house AI chip, and the company is showing real progress in this area, according to the latest report. A small deployment of the silicon is apparently already underway, which would allow Meta to reduce its reliance on NVIDIA and its pricey GPUs for AI training.
After a rocky start that included shuttering the endeavor at one point, Meta executives hope that the in-house AI chip can start being used for training purposes by 2026
The small deployment could expand into full-scale use if all tests go well. Citing unnamed sources, Reuters reports that Meta's new AI chip is a dedicated accelerator, meaning its sole purpose is to handle artificial intelligence workloads. In addition to trimming a bill currently inflated by purchases of ludicrously expensive graphics processors from NVIDIA, Meta could substantially reduce its infrastructure's power consumption, as a chip designed for specific tasks should be more power efficient.
TSMC is expected to handle production of the custom silicon, although the report does not specify which of the Taiwanese semiconductor firm's manufacturing processes will be used. It does state that Meta has successfully finished the first tape-out of the AI chip, a step that can cost millions of dollars and take up to six months to complete. Even then, there is no guarantee that the chip will work according to the company's requirements; if it does not, Meta would have to isolate and diagnose the problem and repeat the tape-out, adding further to development costs.
There was a time when Meta decided not to pursue its custom AI chip, likely due to development complications, but the company appears to have cleared those hurdles. Executives hope to start leveraging the silicon by 2026, first to train Meta's own systems and later to power generative AI products such as its AI chatbot. In the meantime, NVIDIA continues to benefit from increased GPU sales, with Meta remaining one of its most lucrative customers.
That said, experts have raised doubts about how much further progress can be made in scaling up LLMs simply by adding raw GPU power. The transition to custom AI chips could also shrink the space needed to house and cool this hardware, so let us wait and see how long it takes Meta to bring the first units online.