A Proactive Top-Down Approach to Dynamic Allocation of Resources in Data Centers
- Author(s): Seracini, Filippo; et al.
Over the last couple of decades, data centers have seen a substantial increase in number, size, and use. This massive growth started with the emergence of the World Wide Web and accelerated with the advent of social media (e.g., Facebook, Twitter), content-sharing platforms (e.g., Instagram, Flickr), and cloud technologies (e.g., Amazon EC2, Microsoft Azure and OneDrive). Nowadays, large enterprises, service providers, and public cloud providers operate anywhere from a few thousand to hundreds of thousands of servers in their data centers, which consume enormous amounts of electricity. However, studies have shown that the average utilization of these data centers is extremely low, resulting in a waste of computing resources and energy. This is because resources are typically allocated so that, if a spike of requests occurs in a data center, the Internet applications running inside it do not violate service level agreements (SLAs). As a result, resources are over-allocated, and the overall utilization of the data center remains very low. This dissertation improves upon the state of the art in resource allocation techniques by taking an integrated, holistic, top-down approach that proactively adapts the amount of allocated resources based on predicted future workloads. This approach can be applied to workloads in which different requests are correlated, such as service-oriented Internet applications. To enable proactive adaptation, this research introduces two contributions: (1) an approach to predict future workloads by leveraging correlations between requests sent to an application, and (2) a technique to estimate, with high accuracy, the impact that those workloads have on the performance of the currently allocated infrastructure.
Thanks to these contributions, the work presented here can predict an incoming workload, calculate the computing resources needed to run it under a given SLA, and proactively adapt the infrastructure whenever needed. The contributions presented in this dissertation are validated using a well-known benchmark and an implementation of a web service based on a public API; results are compared against existing solutions from the scientific literature. Results show double-digit improvements in both energy savings and in the reduction of allocated resources, with no additional SLA violations.
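The predict-then-adapt loop described above can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual method: the names (`predict_workload`, `servers_needed`), the linear-extrapolation predictor, the per-server capacity figure, and the 10% headroom are all hypothetical stand-ins chosen to show where workload prediction and resource estimation plug into proactive allocation.

```python
import math
from dataclasses import dataclass


@dataclass
class Server:
    # Requests/s one server can handle while staying within the SLA latency
    # (hypothetical figure for illustration).
    capacity_rps: float


def predict_workload(recent_rps: list[float]) -> float:
    """Toy predictor: linear extrapolation from the last two observations.

    The dissertation's predictor exploits correlations between requests;
    this stand-in only marks where prediction enters the control loop.
    """
    if len(recent_rps) < 2:
        return recent_rps[-1]
    return max(0.0, 2 * recent_rps[-1] - recent_rps[-2])


def servers_needed(predicted_rps: float, server: Server, headroom: float = 0.1) -> int:
    """Smallest allocation whose capacity covers the prediction plus headroom."""
    return max(1, math.ceil(predicted_rps * (1 + headroom) / server.capacity_rps))


# Usage: the observed workload ramps from 300 to 700 req/s; the loop
# extrapolates the incoming spike and scales out *before* it arrives,
# rather than reacting after SLA violations have already occurred.
server = Server(capacity_rps=250.0)
history = [300.0, 500.0, 700.0]
predicted = predict_workload(history)           # extrapolates to 900.0 req/s
allocation = servers_needed(predicted, server)  # 4 servers
print(predicted, allocation)
```

The key contrast with reactive autoscaling is that `allocation` is computed from the *predicted* workload, so spare capacity is provisioned ahead of the spike instead of after queues build up; conversely, when the prediction falls, the same loop releases servers and recovers the over-allocated capacity the abstract describes.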