Delivering new AI technologies at scale also means rethinking every layer of our infrastructure, from silicon and software systems all the way to our data center designs.
For the second year in a row, Meta’s engineering and infrastructure teams returned for the AI Infra @ Scale conference, where they discussed the challenges of scaling up infrastructure for AI, as well as the work being done on our large-scale GPU clusters, open hardware designs for next-generation data center hardware, and how Meta is building custom silicon like the Meta Training and Inference Accelerator (MTIA) to handle some of our AI training workloads.
Aparna Ramani, VP of Engineering at Meta, responsible for AI infrastructure, data infrastructure, and developer infrastructure, delivered the opening keynote at AI Infra @ Scale 2024, covering the AI landscape to date, the technical challenges ahead, and how solutions like open models and open hardware can push AI to new frontiers.
Watch the full keynote presentation below: