Meta is reportedly testing its first in-house AI training chip, designed to train Meta’s AI systems and reduce the company’s dependence on third-party suppliers like Nvidia.
The initiative is also expected to reduce the company’s huge infrastructure bill, with Meta aiming to use its own in-house chips by 2026, according to sources cited by Reuters.
Meta is working with TSMC on this project
The company is said to be already using a previous-generation chip to train its ranking and recommendation algorithms, but this would be its first use of in-house silicon to train generative tools like Meta AI.
Reuters reported that Meta has begun a small test deployment of the in-house chip, with plans to ramp up production for wide-scale use if the test goes well.
The reports indicate that the chip is a dedicated AI accelerator, designed to handle only AI-specific tasks. It has reportedly been manufactured by TSMC, with the test deployment following a successful tape-out, the final step before a semiconductor design goes to manufacturing.
Meta began testing its first “tape-out” of the chip, a significant milestone in silicon development that involves sending an initial design to a chip factory.
According to Reuters, a typical tape-out costs tens of millions of dollars and takes roughly a quarter to half a year to complete, with no guarantee the test will succeed. A failure would require a company to diagnose the problem and repeat the tape-out step.
Meta has been looking at developing its own chips in order to reduce its reliance on Nvidia hardware. Meta has remained one of Nvidia’s biggest customers and has amassed a large stockpile of GPUs to train its models, including its Llama series of foundation models.
AI analysts have expressed concerns and doubts about the progress that can be achieved by continuously scaling up LLMs by adding more data and computing power, doubts which were reinforced when DeepSeek launched its models at a fraction of the cost incurred by its peers.
First reported to be in development in 2023, Meta’s in-house chips, dubbed the Meta Training and Inference Accelerator (MTIA), are built on a 7nm process and deliver 102.4 TOPS of INT8 (8-bit integer) compute or 51.2 TFLOPS of FP16 compute.
The chips run at 800 MHz and measure about 370 mm².
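The clock speed and throughput figures above imply how much work the chip must complete per clock cycle. A rough, illustrative calculation (assuming the commonly reported 102.4 TOPS INT8 and 51.2 TFLOPS FP16 peak figures; the derived ops-per-cycle numbers are not official Meta specifications):

```python
# Back-of-envelope: peak throughput = clock frequency x operations per cycle,
# so ops-per-cycle = peak throughput / clock frequency.
CLOCK_HZ = 800e6        # 800 MHz clock, per the reported spec
INT8_OPS_PER_SEC = 102.4e12   # 102.4 TOPS peak INT8
FP16_OPS_PER_SEC = 51.2e12    # 51.2 TFLOPS peak FP16

int8_ops_per_cycle = INT8_OPS_PER_SEC / CLOCK_HZ
fp16_ops_per_cycle = FP16_OPS_PER_SEC / CLOCK_HZ

print(f"INT8 ops per cycle: {int8_ops_per_cycle:,.0f}")  # 128,000
print(f"FP16 ops per cycle: {fp16_ops_per_cycle:,.0f}")  # 64,000
```

The 2:1 ratio between INT8 and FP16 throughput is typical of AI accelerators, which pack two 8-bit operations into the hardware path of one 16-bit operation.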
This chip is the latest in the company’s MTIA series, a program that has had a shaky start over the years; at one point Meta scrapped a chip at a similar phase of development.
Meta missed initial targets, delaying the rollout of the chip
Meta was originally expected to roll out its chips in 2022 but scrapped the plan after they failed to meet internal targets, with the shift from CPUs to GPUs for AI training forcing the company to redesign its data centers and cancel multiple projects.
However, Meta last year started using an MTIA chip to perform inference, or the process involved in running an AI system as users interact with it, for the recommendation systems that determine which content shows up on Facebook and Instagram news feeds.
In February 2024, according to the report, the company was planning to deploy the second generation of the MTIA chip.
The company, which also owns Instagram and WhatsApp, has forecast total 2025 expenses of $114 billion to $119 billion, including up to $65 billion in capital expenditure largely driven by spending on AI infrastructure.
Meta executives reportedly said they want to begin using their own chips by 2026 for training, the compute-intensive process of feeding an AI system reams of data to teach it how to perform.
As with the inference chip, the goal for the training chip is to start with recommendation systems and later use it for generative AI products like chatbot Meta AI, the executives said.
At the Morgan Stanley technology, media, and telecom conference last week, Meta’s chief product officer Chris Cox said, “We are working on how would we do training for recommender systems and then eventually how do we think about training and inference for gen AI.”
Commenting on Meta’s chip development efforts, Cox described them as “kind of a walk, crawl, run situation” so far. However, he said executives considered the first-generation inference chip for recommendations to be a “huge success.”