Meta's infrastructure focus is key to its AI and metaverse ambitions
Social media and tech giant Meta is looking to build what it believes to be solid hardware and software infrastructure to thrive over the long term as it pushes further into AI and the metaverse.
In the last two weeks Meta and its Meta AI division unveiled a variety of plans for its products and internal infrastructure, including its AI supercomputer, data center, and AI coding assistant platform.
The Facebook parent company also announced for the first time that it has developed an AI chip: the Meta Training and Inference Accelerator (MTIA).
“Meta’s focus on its infrastructure bodes well for the company’s growth and longevity,” said R “Ray” Wang, founder and analyst at Constellation Research.
“Meta is in the right place now,” he said.
The investment allows the company to focus more on AI and move beyond its intense focus of recent years on the metaverse world of virtual and augmented reality applications, Wang said.
This change of direction comes after Meta spent roughly $36 billion building the metaverse through its Reality Labs division, with little return so far on that investment.
However, Meta’s renewed focus on AI technology is not an entirely new course. It has been using AI recommenders and other AI systems for almost two decades.
For example, Facebook’s news feed, which has long been based on AI, launched in 2006. Meta also created PyTorch in 2016, an open source machine learning framework for deep neural networks and deep learning research that underlies all of Facebook’s AI workloads. Last December, Meta announced PyTorch 2.0.
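The headline feature of PyTorch 2.0 is `torch.compile`, which speeds up models while keeping the familiar eager-mode programming style. The sketch below is illustrative only; the tiny model and data are not Meta's, and the `"eager"` backend is chosen here just to keep the example free of compiler-toolchain dependencies (the default `"inductor"` backend is what generates optimized kernels).

```python
import torch
import torch.nn as nn

# A tiny feed-forward network standing in for a real recommendation model.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

# torch.compile (new in PyTorch 2.0) wraps the model for compilation;
# backend="eager" runs it unoptimized, which keeps this sketch portable.
compiled_model = torch.compile(model, backend="eager")

x = torch.randn(8, 16)
out = compiled_model(x)
print(out.shape)  # torch.Size([8, 1])
```

The key design point is that compilation is opt-in: the same model object runs unchanged in eager mode, so existing PyTorch workloads can adopt 2.0 incrementally.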
“This is really a progression for us,” said Meta Vice President of Engineering Aparna Ramani during a streamed panel discussion at Meta’s At Scale conference on May 18. “What’s changing now is that the pace of innovation is picking up really fast.”
Meta’s current use of automation and AI to create efficiencies is “smart for their future,” Wang said.
Even Meta’s recent layoffs are the right move for the company’s future, Wang said, adding that the company had grown somewhat bloated and can now focus on attracting the right talent.
CEO Mark Zuckerberg said in March that the company planned to cut about 10,000 jobs. Some of those positions were eliminated in April, and more layoffs are expected in the coming weeks.
“Now they have to prioritize what they want to do with their network,” Wang said.
While Meta focuses on building a solid infrastructure that can support both its AI and metaverse initiatives, the company can continue working on the metaverse quietly, away from the public eye.
“You can do both at the same time because AI is fundamental to the metaverse,” Wang said. “You must strengthen the infrastructure for the metaverse.”
A custom AI chip
The first step in building that foundation is producing custom silicon.
MTIA is Meta’s in-house custom accelerator chip, which the company says will deliver better performance and efficiency than GPUs for its internal workloads.
With MTIA, Meta aims to improve the user experience in Meta’s Facebook, Instagram and WhatsApp applications.
According to Meta, the accelerator will deliver more accurate and interesting predictions, longer watch times and higher click-through rates.
MTIA fills a need for developer workloads that neither CPUs nor GPUs can meet, Meta said. In addition, its software stack is integrated with PyTorch.
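In PyTorch, integrating a new accelerator typically means it surfaces as another device back end that model code can target through the usual device-selection pattern. The sketch below shows that general pattern only; the article does not disclose how MTIA is named in PyTorch, so the code falls back through the standard publicly documented back ends (CUDA, then CPU).

```python
import torch

def pick_device() -> torch.device:
    # Fall back through the back ends available on this machine.
    # A custom accelerator like MTIA would slot into a check like this;
    # its actual PyTorch device name is not public in the article.
    if torch.cuda.is_available():       # GPU accelerator
        return torch.device("cuda")
    return torch.device("cpu")          # portable default

device = pick_device()
x = torch.ones(4, 4, device=device)
print(x.sum().item())  # 16.0
```

Because tensors and models are moved with the same `.to(device)` idiom regardless of back end, workload code stays portable as new accelerators are added underneath.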
MTIA is a way for Meta to move into the next era of specialization, Gartner analyst Chirag Dekate said.
While GPUs are flexible, ever more computing power is needed to run the large language models behind the latest generative AI techniques. As a result, tech giants such as Meta, and Google with its TPU, have begun developing newer chips to handle these much larger models.
“They take some of these neural networks, identify commonalities in their workload mix and create purpose-specific designs,” Dekate added.
Meta’s new AI silicon chip is also about being more AI-native, he said.
“It’s not yesterday’s technology,” Dekate continued. “It’s about innovating the model platforms, model products and model ecosystem of tomorrow.”
For example, Meta’s metaverse strategy involves a highly immersive experience and ecosystem. That will likely mean not only VR/AR headsets, but also virtual worlds with avatars that have more and better voice options and more realistic movements. With its current infrastructure, however, it will be difficult to fit advertising platforms into a metaverse ecosystem.
As such, Meta will likely evolve its hardware strategy to develop different chip families that will enable training and inference acceleration of generative AI models and multimodal AI, and will help Meta create a better metaverse experience, Dekate said.
“These experiences require the merging of vision models, language models and natural language processing (NLP) techniques,” he said.
“It’s not just about solving generative AI techniques,” Dekate added. “It’s about using a lot of those techniques as building blocks and building larger AI-native ecosystems that Meta specializes in, especially in terms of its vision for the metaverse.”
Looking to the future
However, building custom chips is an expensive endeavor that only companies like Meta, Google, and AWS can undertake due to their financial resources.
“The scope of AI in their organization is so vast, and more importantly, they have a deep understanding of the issues they need to address, not only today but in a future where AI is paramount,” Dekate said.
Among those issues is research into how Meta’s language models and platforms, including Facebook, Instagram and WhatsApp, can be optimized for targeted advertising. As a tech company with such broad social reach, Meta faces the challenge of ensuring its language models span multiple world languages, using video, audio and images to deliver the right ads to the right audiences.
Meta uses what it learns from those platforms to create future immersive platforms at scale, including those for the metaverse, Dekate said.
Part of this strategy is a next-generation data center. According to Meta, the new data center will have an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network.
Meta also announced that it has completed the second phase of building its AI supercomputer, the Research SuperCluster, which has enabled the company to train large AI models such as its own large language model, LLaMA.
Earlier this year, Meta made LLaMA available as an open source model, a direction that Microsoft, Google and ChatGPT creator OpenAI have shied away from, citing the risks of the models being misused.
“By open sourcing LLaMA, Meta hopes to accelerate innovation,” said Karl Freund, AI analyst at Cambrian AI Research.
Despite criticism of the decision to open source the technology, the move with LLaMA shows how Meta aims to push to the forefront of the AI industry.
“Meta wants to use AI in all of its products and be at the forefront of developing new LLMs,” Freund said, adding that in addition to building models for internal use, Meta plans to develop large-scale AI models and release them as open source to enable widespread adoption of its technology across the industry.
“We’ve been building advanced infrastructure for AI for years, and this work reflects long-term efforts that will enable even more advances and better uses of this technology in everything we do,” Zuckerberg said in a statement to the media.
Esther Ajao is a news writer covering artificial intelligence software and systems.