Scaling Large Language Model Serving Infrastructure at Meta

Running LLMs requires significant computational power, which scales with model size and context length. We will discuss strategies for fitting models to various hardware configurations and share techniques for optimizing inference latency and throughput at Meta.
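The scaling claim above can be illustrated with a back-of-envelope memory estimate: serving memory is roughly model weights plus the KV cache, and only the KV-cache term grows with context length and batch size. The model shape below is an assumed Llama-3-70B-like configuration with grouped-query attention, chosen for illustration; it is not a disclosed Meta serving setup.

```python
# Back-of-envelope GPU memory estimate for LLM serving.
# All numbers are illustrative assumptions, not actual production configs.

def serving_memory_gib(params_b, layers, kv_heads, head_dim,
                       context_len, batch, bytes_per_param=2):
    """Approximate weights + KV cache, in GiB (fp16/bf16 by default)."""
    # Model weights: parameter count (billions) x bytes per parameter.
    weights_bytes = params_b * 1e9 * bytes_per_param
    # KV cache: 2 tensors (K and V) per layer, per token, per request.
    kv_bytes = (2 * layers * kv_heads * head_dim * bytes_per_param
                * context_len * batch)
    return (weights_bytes + kv_bytes) / 2**30

# Assumed 70B-parameter model: 80 layers, 8 KV heads (GQA), head dim 128,
# serving a batch of 16 requests at 8K context each.
print(round(serving_memory_gib(70, 80, 8, 128, 8192, 16), 1))  # prints 170.4
```

Doubling the context length or batch size in this sketch grows only the KV-cache term, which is why long-context serving pressures memory capacity even when the model itself fits on the hardware.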

As we transition from stand-alone LLMs to production-grade systems that serve LLMs at a global scale, we will delve into our approach to building systems that accommodate dynamic user requests and widespread product adoption. This includes implementing caching strategies and addressing infrastructure latency, efficiency, and reliability issues within real data centers across a heterogeneous hardware fleet.

Finally, we will present case studies that demonstrate our methods for achieving a balance between model quality, latency, throughput, reliability, and cost in a complex and demanding environment.


Speaker

Charlotte Qi

Senior Staff Engineer @Meta

Ye (Charlotte) Qi is a production engineer on the AI inference team at Meta. She is one of the inference technical leads behind Meta's initial Meta AI product launch and Llama 3 development.

With over six years of experience at Meta, she has run large-scale online inference systems for both RecSys and LLM models across various organizations. Charlotte enjoys working at the multidisciplinary intersection of infrastructure, machine learning, product development and DevOps, advancing end-to-end development from research to production. Her background spans the entire software stack, including hardware productionization, inference runtime optimizations, distributed system reliability, experiment management, and service operations.

Prior to joining Meta, Charlotte earned her Master's degree from Carnegie Mellon University, specializing in large-scale machine learning systems and neural machine translation.


Date

Tuesday Nov 19 / 10:35AM PST (50 minutes)

Location

Ballroom BC


From the same track

Session AI/ML

LLM Powered Search Recommendations and Growth Strategy

Tuesday Nov 19 / 02:45PM PST

This talk is a technical deep dive into employing Large Language Models (LLMs) to enhance search recommendation systems, covering the integral aspects of developing, fine-tuning, and deploying these models.


Faye Zhang

Staff Software Engineer @Pinterest, Lead on GenAI Search Traffic Projects, Speaker, Expert in AI/ML with a Strong Background in Full-Stack Development

Session LLMOps

Navigating LLM Deployment: Tips, Tricks, and Techniques

Tuesday Nov 19 / 01:35PM PST

Self-hosted Language Models are going to power the next generation of applications in critical industries like financial services, healthcare, and defense.


Meryem Arik

Co-Founder @TitanML, Recognized as a Technology Leader in Forbes 30 Under 30, Recovering Physicist

Session Generative AI

GenAI for Productivity

Tuesday Nov 19 / 11:45AM PST

At Wealthsimple, we leverage Generative AI internally to improve operational efficiency and streamline monotonous tasks. Our GenAI stack is a blend of tools we developed in-house and third-party solutions.


Mandy Gu

Senior Software Development Manager @Wealthsimple

Session AI/ML

10 Reasons Your Multi-Agent Workflows Fail and What You Can Do About It

Tuesday Nov 19 / 03:55PM PST

Multi-agent systems – a setup where multiple agents (generative AI models with access to tools) collaborate to solve complex tasks – are an emerging paradigm for building applications.


Victor Dibia

Principal Research Software Engineer @Microsoft Research, Core Contributor to AutoGen, Author of "Multi-Agent Systems with AutoGen" book. Previously @Cloudera, @IBMResearch

Session Machine Learning

A Framework for Building Micro Metrics for LLM System Evaluation

Tuesday Nov 19 / 05:05PM PST

LLM accuracy is a challenging topic to address and is far more multi-dimensional than a simple accuracy score. In this talk we'll dive deeper into how to measure LLM-related metrics, going through examples, case studies, and techniques beyond a single accuracy score.


Denys Linkov

Head of ML @Voiceflow, LinkedIn Learning Instructor, ML Advisor and Instructor, Previously @LinkedIn