Thinking Outside the GPU: Systems for Scalable Machine Learning Pipelines
Scalable and efficient machine learning (ML) systems have been instrumental in fueling recent advancements in ML capabilities. However, further scaling these systems requires more than simply increasing the performance and quantity of accelerators such as GPUs. Modern ML deployments rely on complex pipelines composed of diverse, interconnected systems that extend well beyond accelerators.
In this talk, I will emphasize the importance of building scalable systems across the entire ML pipeline. I will first explore how to build scalable data storage and ingestion systems that manage massive datasets for large-scale ML training pipelines, including those at Meta. To meet growing ML data demands, these data systems must be optimized for both performance and efficiency. I will next illustrate how to leverage synergistic optimizations across the training data pipeline to unlock performance and efficiency gains beyond what isolated system optimizations can achieve. Effectively deploying these optimizations, however, requires navigating a complex system design space. To address this, I will introduce cedar, a framework that automates these optimizations and orchestrates ML data processing for diverse training workloads. Finally, I will discuss key opportunities to further advance the scalability, security, and capabilities of the systems that will drive the next generation of ML training and inference pipelines.