Cryptopolitan 2026-03-11 21:15:26

Meta said it is building four new MTIA chip generations within the next two years

Meta is moving harder into custom AI chips as big tech races to cut its reliance on Nvidia. The company said it launched the Meta Training and Inference Accelerator, or MTIA, program in 2023, and it is now preparing four new chip generations over the next two years. Per Meta, these chips are being built for ranking, recommendations, and GenAI jobs across its products.

The company also said it is not betting on one supplier or one piece of hardware. Meta plans to keep buying silicon from several industry players while keeping MTIA at the center of its own AI infrastructure plan. In the company’s words, it is taking a “portfolio approach” as its AI workloads grow and change: mixing outside chips with in-house silicon instead of handing the whole stack to one vendor.

Meta rolls out four MTIA chip generations on a faster schedule

Meta said it already uses hundreds of thousands of MTIA chips for inference work tied to both organic content and ads across its apps. The chips are built for the company’s own jobs, not for general use. That matters because the hardware is part of a custom full-stack solution, which gives Meta a system tuned for the work it runs every day. The company said that setup delivers better compute efficiency for its specific use cases and lowers cost compared with broader-purpose chips.

The next phase is a larger rollout. Meta said it is building MTIA 300, 400, 450, and 500, with each generation bringing gains in compute, memory bandwidth, and efficiency. MTIA 300 is already in production and will handle ranking and recommendations training. MTIA 400, 450, and 500 can run all workloads, but Meta said those chips will mainly be used for GenAI inference in production in the near term and through 2027. The company also said the silicon is modular, which lets new chips slide into existing rack infrastructure, cutting the wait between design and deployment.
On release speed, Meta said the industry usually launches a new AI chip every one to two years, but by reusing modular designs it now has the capacity to release its own chips every six months or less.

Meta builds its AI chip strategy around inference and open standards

The company said its MTIA strategy rests on three parts: fast iteration, an inference-first design, and easy adoption through common standards.

On the first point, Meta said the shorter release cycle helps it adjust faster as AI techniques change, bring in newer hardware technology, and reduce the cost of developing and deploying fresh chip versions.

On the second point, Meta drew a line between its plan and the usual market model. The company said most mainstream chips are built first for large GenAI pre-training jobs and then repurposed for other work, often at a worse cost level. Meta said it is doing the opposite: MTIA 450 and 500 are being tuned first for GenAI inference, then used for ranking, recommendations training and inference, and GenAI training when needed.

On the third point, the company said MTIA is built from the start on standard tools and systems, including PyTorch, vLLM, Triton, and the Open Compute Project. Its system and rack designs also follow OCP standards for use in data centers.

Meta added that no single chip can cover every demand it has, which is why it plans to deploy different chips for different workloads while pushing toward what it called “personal superintelligence for all.”
