Large Language Models (LLMs) have shown remarkable capabilities across a variety of tasks, including chat, programming, and search. However, the high cost of training and serving LLMs prevents these models from being deployed in the vast majority of applications. In this dissertation, we focus on building efficient and automated systems that reduce these costs and democratize access to large language models.
We first introduce systems that optimize computational efficiency and reduce the engineering overhead of distributed LLM training. We develop TeraPipe, which exploits a new dimension for pipeline-parallel training of LLMs by partitioning the computation along the token dimension within a training sequence, and Alpa, the world’s first compiler that automatically distributes arbitrary neural networks using all existing parallelization methods.
While training is typically a one-time cost, deploying an LLM requires running inference continuously, and the resulting inference cost is the top blocker for real-world deployment. We improve serving scalability with AlpaServe, which uses model parallelism to multiplex GPUs across multiple models, and increase memory utilization and inference throughput with PagedAttention, a new attention algorithm, and vLLM, an end-to-end serving system built on top of it.
Together, these systems substantially improve both the training and inference efficiency of large language models, lowering their costs and democratizing their deployment across real-world applications.