Meta (META) Tests AI Chip Production with TSMC to Reduce Reliance on NVIDIA

March 11, 2025

Meta Platforms (META) is testing an in-house AI training chip developed in collaboration with Taiwan Semiconductor Manufacturing Company (TSMC). The initiative aims to reduce Meta's dependence on NVIDIA (NVDA) and rein in infrastructure costs. If the test succeeds, Meta plans to move to mass production and use the chips to further advance its AI capabilities. People familiar with the matter say Meta has begun a small-scale deployment and intends to ramp up production if testing goes well. Both Meta and TSMC have declined to comment on the matter.

The primary goal behind Meta's chip development is to rein in its substantial infrastructure expenses, especially as the company invests heavily in AI tools to drive growth. Meta has forecast total expenses of $114 billion to $119 billion for 2025, including up to $65 billion in capital expenditure, much of it for AI infrastructure.

The new training chip is a dedicated accelerator: it is designed to handle only AI-specific workloads, which can make it more power-efficient than the general-purpose GPUs typically used for AI. The current deployment follows the completion of the chip's "tape-out," a significant milestone in semiconductor development in which the finished design is sent to the fab for manufacturing.

Meta's MTIA (Meta Training and Inference Accelerator) series got off to a rough start, including the abandonment of an earlier chip at a similar stage of development. Since then, however, Meta has begun using MTIA chips for inference, notably in the recommendation systems that power Facebook and Instagram. Leadership aims to have the in-house chips in full-scale use by 2026 and to gradually extend them to generative AI products such as Meta's AI chatbot.

Despite that progress on inference, the in-house effort has stumbled before: Meta scrapped a custom chip project in 2022 after small-scale testing failed, then placed multi-billion-dollar GPU orders with NVIDIA instead. The company remains one of NVIDIA's largest customers, using those GPUs across applications that serve more than 3 billion users daily.

Some AI researchers have raised doubts about the industry's growing reliance on ever more data and computing power to advance large language models. Those doubts intensified after DeepSeek launched a cost-effective model that prioritizes inference over traditional scaling, a release that rattled global AI stocks and triggered a temporary drop in NVIDIA's share price.