Meta To Deploy Four New MTIA Accelerators Through 2027
March 12, 2026
Meta Platforms announced plans to deploy four new generations of its in-house artificial intelligence accelerators as the company expands its custom silicon strategy to power AI workloads across its global platforms.
The chips, designated MTIA 300, MTIA 400, MTIA 450, and MTIA 500, are part of the Meta Training and Inference Accelerator (MTIA) family, which the company is developing to support rapidly growing demand for AI services ranging from recommendation systems to generative AI models.
Meta said the new chips will be deployed between 2026 and 2027. The MTIA 300 chip is already in production and is optimized for ranking and recommendation training workloads. Subsequent generations expand support for generative AI and inference workloads across Meta's infrastructure.
The MTIA 400 chip builds on the MTIA 300 architecture with significantly higher compute performance and memory bandwidth to support both recommendation models and generative AI workloads. Meta said the chip has completed testing and is expected to be deployed in its data centers.
MTIA 450 is designed primarily for generative AI inference, featuring twice the high-bandwidth memory (HBM) bandwidth of the previous generation and new low-precision data types to accelerate inference workloads. The company plans to begin mass deployment of MTIA 450 in early 2027.
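Meta did not specify which data types MTIA 450 adds, but 8-bit floating-point formats of the kind recent PyTorch exposes (for example, torch.float8_e4m3fn) illustrate why low precision helps inference: each weight moves half the bytes of FP16, easing bandwidth pressure. A minimal sketch of the general technique, not of MTIA's actual formats:

```python
import torch

# Illustrates the general benefit of low-precision data types for
# inference; the specific formats MTIA 450 supports are not public.
w16 = torch.randn(4096, 4096, dtype=torch.float16)
w8 = w16.to(torch.float8_e4m3fn)  # lossy cast to an 8-bit float format

mib = lambda t: t.numel() * t.element_size() / 2**20
print(f"fp16: {mib(w16):.0f} MiB, fp8: {mib(w8):.0f} MiB")  # 32 vs 16 MiB
```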
The MTIA 500 chip further advances the architecture with additional memory bandwidth, higher memory capacity, and increased compute throughput. Meta said the chip is scheduled for deployment later in 2027 as part of its continued focus on scaling generative AI inference efficiently.
Across the four generations, Meta said HBM bandwidth increases by about 4.5 times while compute performance improves roughly 25 times, gains the company attributes to its modular, chiplet-based design strategy.
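Taken at face value, and assuming the gains compound evenly across the three generational steps from MTIA 300 to MTIA 500 (an assumption; Meta quoted only the end-to-end multipliers), those figures work out to roughly 2.9x compute and 1.65x HBM bandwidth per generation:

```python
# Back-of-the-envelope math on Meta's stated multipliers, assuming
# even compounding across the three steps (300 -> 400 -> 450 -> 500).
compute_total = 25.0  # ~25x compute gain across the family
hbm_total = 4.5       # ~4.5x HBM bandwidth gain
steps = 3

print(f"~{compute_total ** (1 / steps):.2f}x compute per generation")    # ~2.92x
print(f"~{hbm_total ** (1 / steps):.2f}x HBM bandwidth per generation")  # ~1.65x
```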
The company said it aims to release new MTIA chip generations roughly every six months through an iterative development model. By using modular chiplets and standardized rack-level infrastructure, Meta can upgrade compute, networking, and memory components independently while maintaining compatibility across data center systems.
Meta said its custom AI chips are designed to complement rather than replace commercial GPUs, allowing the company to diversify hardware sources while optimizing specific workloads for lower cost and higher efficiency.
The MTIA platform is built around industry-standard AI software frameworks, including PyTorch, vLLM, and Triton, enabling developers to run models across both GPUs and MTIA hardware with minimal changes.
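In practice, that portability typically means model code targets a device abstraction rather than a specific chip. The sketch below assumes a PyTorch build with the MTIA backend (torch.mtia); on installs without it, the same code falls back to a GPU or CPU:

```python
import torch
import torch.nn as nn

# Device selection: "mtia" assumes a PyTorch build with MTIA support;
# "cuda" covers commercial GPUs; CPU is the fallback.
if hasattr(torch, "mtia") and torch.mtia.is_available():
    device = torch.device("mtia")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# The model definition is identical regardless of the backend.
model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
x = torch.randn(32, 256, device=device)
with torch.no_grad():
    logits = model(x)  # same code path on MTIA, GPU, or CPU
print(logits.shape, device)
```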
Meta said the strategy focuses primarily on generative AI inference, which it expects to grow rapidly as AI assistants and generative features expand across its services used by billions of people worldwide.