Energy Saving in Data Centers Through Traffic and Application Scheduling
- Zhou, Liang
- Advisor(s): Bhuyan, Laxmi N.
Abstract
Energy saving in data centers is challenging due to workload fluctuation and strict application latency constraints. In this dissertation, we develop a suite of practical techniques that improve the energy efficiency of data centers while remaining effective, scalable, and responsive. We study this problem from two perspectives: server and network load consolidation, and scheduling of latency-sensitive applications. All of our techniques have been shown experimentally to produce greater energy savings than state-of-the-art techniques.
Servers consume the largest share of energy in a data center. We propose Goldilocks, a resource provisioning system that optimizes both power and task completion time by allocating tasks to servers in groups. Tasks hosted in containers are grouped by running a graph partitioning algorithm that takes into account the frequency of communication between containers. Each group is allocated to a subset of servers located close to one another, and all idle servers are turned off to save power. As a next step, we make the data center network energy proportional with a distributed traffic consolidation scheme named DREAM. DREAM splits a TCP flow across multiple paths by adapting the probability with which a path is selected for sending a flowlet. The unused switches and links are then disabled to save power.
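To make the container-grouping idea concrete, the following Python sketch partitions a container communication graph and packs the resulting groups onto a minimal set of servers. It is only an illustration of the approach described above, not the Goldilocks implementation: the use of networkx's Kernighan-Lin bisection, the capacity-based packing heuristic, and all names in the example are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def group_containers(comm_freq):
    """comm_freq: dict mapping (container_a, container_b) -> messages per second."""
    g = nx.Graph()
    for (a, b), freq in comm_freq.items():
        g.add_edge(a, b, weight=freq)
    # Bisection minimizes the weight of edges cut between the two halves,
    # so heavily communicating containers tend to stay in the same group.
    return kernighan_lin_bisection(g, weight="weight")

def place_groups(groups, servers, capacity):
    """Pack each group onto the fewest servers; the remaining servers stay idle."""
    placement, used = {}, 0
    for group in groups:
        members = sorted(group)
        while members:
            placement[servers[used]] = members[:capacity]
            members = members[capacity:]
            used += 1
    return placement, servers[used:]  # second element: servers that can be turned off

# Toy example: two chatty container pairs end up in separate groups,
# each packed onto a single server; the other two servers can be powered down.
freq = {("c1", "c2"): 900, ("c3", "c4"): 800, ("c2", "c3"): 10}
placement, idle = place_groups(group_containers(freq), ["s1", "s2", "s3", "s4"], capacity=2)
```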
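The flowlet-based splitting in DREAM can likewise be sketched as weighted random path selection: packets separated by only a short gap stay on the same path, and the selection probabilities are periodically re-weighted so that traffic consolidates onto fewer links. The gap threshold, the utilization-proportional re-weighting rule, and the class interface below are illustrative assumptions rather than DREAM's actual algorithm.

```python
import random
import time

class FlowletSplitter:
    """Pick a path per flowlet; packets within a flowlet stay on one path."""
    def __init__(self, paths, gap_s=0.0005):
        self.paths = list(paths)
        self.prob = {p: 1.0 / len(self.paths) for p in self.paths}
        self.gap_s = gap_s          # idle gap that marks the start of a new flowlet
        self.last_pkt = 0.0
        self.current_path = None

    def update_probabilities(self, utilization):
        # Illustrative consolidation rule: bias future flowlets toward paths that
        # already carry traffic, so lightly used links drain and can be disabled.
        total = sum(utilization.get(p, 0.0) + 1e-6 for p in self.paths)
        self.prob = {p: (utilization.get(p, 0.0) + 1e-6) / total for p in self.paths}

    def choose_path(self, now=None):
        now = time.monotonic() if now is None else now
        if self.current_path is None or now - self.last_pkt > self.gap_s:
            # A long enough gap starts a new flowlet, so a new path may be chosen
            # without reordering packets of the previous burst.
            r, acc = random.random(), 0.0
            self.current_path = self.paths[-1]   # fallback for rounding at the tail
            for p in self.paths:
                acc += self.prob[p]
                if r <= acc:
                    self.current_path = p
                    break
        self.last_pkt = now
        return self.current_path

# Usage: re-weight occasionally from measured link utilization, then route packets.
splitter = FlowletSplitter(paths=["p1", "p2", "p3"])
splitter.update_probabilities({"p1": 0.7, "p2": 0.2, "p3": 0.0})
path = splitter.choose_path()
```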
Several techniques have been proposed in the literature to save server energy through Dynamic Voltage and Frequency Scaling (DVFS) and sleep-based schemes. We develop EPRONS to minimize the data center's overall power consumption by trading off network slack to provide additional slack for DVFS on the servers. We design a two-step DVFS scheme that saves more power for latency-sensitive applications. Since accurate latency estimation is difficult, we extend the two-step DVFS with a neural-network-based predictor that judiciously boosts the CPU frequency at the right time to catch up to the deadline. Finally, in addition to optimizations at the Index Serving Node (ISN), we design a coordinated time budget assignment framework between the aggregator and the ISNs of a search engine to improve search quality and latency.
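As a rough illustration of the two-step DVFS idea, the sketch below runs a request at a low frequency as long as a latency predictor expects the deadline to be met, and boosts the frequency otherwise. The two frequency levels, pick_frequency, and the toy cycle-count predictor are hypothetical stand-ins; EPRONS's actual policy and its neural-network predictor are more involved.

```python
F_LOW, F_HIGH = 1.2e9, 2.6e9    # two hypothetical CPU frequency levels, in Hz

def pick_frequency(elapsed_ms, deadline_ms, predict_remaining_ms):
    """Choose the frequency for the rest of a latency-sensitive request.

    predict_remaining_ms(freq) estimates the time left if we run at `freq`;
    in the dissertation this role is played by a neural-network predictor.
    """
    slack_ms = deadline_ms - elapsed_ms
    if predict_remaining_ms(F_LOW) <= slack_ms:
        return F_LOW             # step 1: stay slow and finish just in time
    return F_HIGH                # step 2: boost to catch up to the deadline

# Toy predictor assuming the remaining work (in cycles) scales inversely with frequency.
remaining_cycles = 3.6e6
toy_predictor = lambda f: remaining_cycles / f * 1e3    # milliseconds
freq = pick_frequency(elapsed_ms=4.0, deadline_ms=6.0, predict_remaining_ms=toy_predictor)
# -> F_HIGH here: at 1.2 GHz the request would need ~3 ms but only ~2 ms of slack remain.
```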