Serving LLMs on Kubernetes
Language: English
Track 4 - Ops & Tech
Time: 16:45 - 17:30
Abstract
What are the key hurdles in running Large Language Models (LLMs) efficiently on Kubernetes? This session is crafted for MLOps and Platform Engineers seeking effective strategies for LLM integration. It will survey the current landscape of LLM deployment options and discuss how well Kubernetes suits these models.
The talk will dissect the complexities associated with the size, tuning, and scaling of LLMs, and explore technologies such as KServe, vLLM, the Kubeflow Model Registry, and Ray.
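To give a flavor of the stack the talk covers, the sketch below shows what a minimal KServe `InferenceService` manifest for serving a Hugging Face model on Kubernetes might look like (recent KServe releases can back the `huggingface` model format with vLLM on supported GPUs). The service name, model ID, and resource limits here are illustrative assumptions, not details from the session:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: llm-demo                  # hypothetical service name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface         # KServe's Hugging Face serving runtime
      args:
        - --model_name=llm-demo
        - --model_id=facebook/opt-125m   # small model chosen for illustration
      resources:
        limits:
          nvidia.com/gpu: "1"     # LLM inference typically needs GPU capacity
```

Applying a manifest like this with `kubectl apply -f` would let KServe handle pod provisioning, autoscaling, and exposing an inference endpoint, which is the kind of operational concern the session addresses.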