Towards Efficient Federated Learning: Overcoming Challenges in Communication, Heterogeneity, and Data Scarcity

Abstract

The rapid growth of edge computing, 5G, and IoT technologies has led to a significant increase in distributed data, creating both opportunities and challenges for machine learning. Federated learning has emerged as a promising approach for collaborative model training across decentralized devices without sharing raw user data. However, implementing federated learning effectively requires addressing several key challenges: communication efficiency, data heterogeneity, and data scarcity.

This dissertation addresses these challenges through three key contributions. First, to enhance communication efficiency, we develop compressed stochastic gradient descent (SGD) algorithms that incorporate techniques such as event-triggered communication and local iterations. These mechanisms reduce both the frequency and the size of data exchanges, minimizing communication overhead in decentralized training environments. We provide rigorous theoretical analysis establishing the convergence rates and efficiency gains of these methods.
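To make the scheme concrete, the following is a minimal sketch of how event-triggered communication, local iterations, and compression can fit together. It assumes a top-k sparsifier and, for simplicity, a server-averaged aggregation step rather than a fully decentralized topology; all function names and parameters are illustrative, not the dissertation's implementation.

```python
import numpy as np

def top_k_compress(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def event_triggered_local_sgd(grad_fn, x0, n_workers, rounds,
                              local_steps, lr, threshold, k):
    """Illustrative compressed SGD with local iterations and event triggers.

    Each worker takes `local_steps` local SGD updates, then transmits a
    top-k compressed model delta only when the change since its last
    transmission exceeds `threshold` (the event trigger)."""
    x_global = x0.copy()
    last_sent = [x0.copy() for _ in range(n_workers)]
    for _ in range(rounds):
        deltas = []
        for w in range(n_workers):
            x = x_global.copy()
            for _ in range(local_steps):           # local iterations
                x -= lr * grad_fn(x, w)            # worker-local stochastic gradient
            delta = x - last_sent[w]
            if np.linalg.norm(delta) > threshold:  # event-triggered communication
                deltas.append(top_k_compress(delta, k))
                last_sent[w] = x
        if deltas:                                 # aggregate only the updates received
            x_global = x_global + sum(deltas) / n_workers
    return x_global
```

Skipping a transmission when the local change is small is what saves communication: quiet workers stay silent, and the compressed deltas bound the size of each message that is sent.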

Second, to address data heterogeneity, we develop communication-efficient multi-task learning for decentralized topologies. These techniques optimize multiple related tasks simultaneously, producing personalized models tailored to the unique data distributions and objectives of individual devices while keeping the cost of model exchanges low. We formulate the multi-task learning problem in decentralized settings, provide convergence bounds for gradient descent, and discuss performance gains over traditional methods without compression.
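As an illustration, the sketch below performs one gradient-descent step under the hedged assumption that task relatedness is modeled by a quadratic coupling between neighboring models and that nodes exchange randomly sparsified (unbiased, compressed) copies of their parameters; the adjacency structure, sparsifier, and penalty weight are hypothetical choices, not necessarily those analyzed in the dissertation.

```python
import numpy as np

_rng = np.random.default_rng(0)

def rand_sparsify(v, p=0.5):
    """Unbiased random sparsification: keep each entry w.p. p, rescale by 1/p."""
    mask = _rng.random(v.shape) < p
    return np.where(mask, v / p, 0.0)

def decentralized_mtl_step(models, grad_fns, adjacency, lr, lam):
    """One gradient step of an illustrative decentralized multi-task objective:
        min  sum_i f_i(x_i) + (lam / 2) * sum_{i,j} A_ij * ||x_i - x_j||^2,
    where each node i keeps its own personalized model x_i and receives only
    compressed copies of its neighbors' models."""
    n = len(models)
    updated = []
    for i in range(n):
        # Task-relatedness penalty pulls x_i toward compressed neighbor models.
        coupling = sum(adjacency[i][j] * (models[i] - rand_sparsify(models[j]))
                       for j in range(n) if j != i and adjacency[i][j] > 0)
        updated.append(models[i] - lr * (grad_fns[i](models[i]) + lam * coupling))
    return updated
```

Because every node retains its own x_i, the coupling term shares statistical strength across related tasks without forcing a single global model, which is the personalization behavior described above.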

Third, to tackle data scarcity, we leverage transfer learning methods with a focus on linear models. Pre-trained regression models from diverse source domains provide a robust starting point for target models, which are then fine-tuned on limited local data. This improves model performance and adaptability in data-scarce environments. We derive excess-risk bounds for these transfer learning approaches, establishing their reliability and effectiveness.
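One simple instantiation of this idea for linear regression, offered as a hedged sketch rather than the estimator actually analyzed, is biased ridge regression, which shrinks the target solution toward the pre-trained source weights instead of toward zero:

```python
import numpy as np

def transfer_ridge(X, y, w_source, lam):
    """Illustrative linear transfer learning via biased ridge regression:
        min_w  ||X w - y||^2 + lam * ||w - w_source||^2.
    The penalty anchors the target model to the pre-trained source weights,
    which dominates when the local target sample (X, y) is small.
    Closed form: w = (X^T X + lam I)^{-1} (X^T y + lam * w_source)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * w_source)
```

With scarce target data or a large penalty, the estimate stays close to the source model; as local data accumulates, the data-fit term takes over, matching the intended fine-tuning behavior.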

Overall, these contributions seek to enhance the robustness, scalability, and practicality of federated learning, enabling effective collaborative learning in diverse and distributed environments. This work lays the groundwork for advanced federated learning applications, addressing critical challenges and charting a path for future research and development.
