Date: April 28, 2016
Location: Meeting room Ofek - Polo scientifico e tecnologico "Fabio Ferrari" (Building Povo 1, via Sommarive 5 – Povo, Trento)
- Alexandre Kandalintsev, University of Trento
Clouds are an irreplaceable part of many business applications. They provide tremendous flexibility and gave birth to many related technologies, such as Software as a Service (SaaS). One of the greatest strengths of clouds is load redistribution for scaling up and down on demand. This helps deal with varying loads, increase resource utilization and cut electricity bills while maintaining reasonable performance isolation. The last point is of particular interest to us.
Most cloud systems are accounted and billed not by useful throughput but by resource usage. For example, a cloud provider may charge according to cumulative CPU time and/or average memory footprint. This, however, does not guarantee that an application realizes its full performance potential, because CPU and memory are shared resources. If many other applications are running, it may experience frequent execution stalls due to contention on the memory bus or cache pressure. The problem is increasingly pronounced because modern hardware rapidly grows in density, leading to more applications being co-located. The performance degradation caused by co-location of applications is called application interference.
In this work we study the causes of interference in depth, as well as ways to mitigate it. The first part of the work is devoted to interference analysis and introduces a simple yet powerful empirical model of CPU performance that takes interference into account. The model is based on empirical observations and is built up by extrapolating from the trivial two-task case.
In the following part we present a method for ranking virtual machines by the average interference they cause. The method is based on the analysis of performance counters. We first launch a set of highly diverse benchmark programs (chosen to be representative of a wide range of workloads) one at a time, collecting a broad set of performance counters. This gives us their “ideal” (isolated) performance. We then run them in pairs to measure the interference they inflict on each other. Once this is done, we calculate the average interference for each benchmark. Finally, we compute the correlation between the average interference and the performance counters. The counters with the highest correlation are used as interference estimators.
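The ranking step described above can be sketched as follows. This is a minimal illustration, not the actual tooling from the work: the counter names and all numeric values are made up, and real counter readings would come from hardware performance-monitoring units.

```python
import numpy as np

# Hypothetical per-benchmark data (illustrative values only):
# the average slowdown each benchmark experienced when co-run in pairs,
# and two isolated-run performance counter readings per benchmark.
avg_interference = np.array([0.05, 0.30, 0.12, 0.45, 0.08])
counters = {
    "llc_misses_per_kinstr": np.array([0.4, 5.1, 1.2, 7.8, 0.6]),
    "branch_mispredict_rate": np.array([0.02, 0.01, 0.03, 0.02, 0.04]),
}

# Rank counters by the absolute Pearson correlation with average
# interference; the highest-correlated counters would serve as
# interference estimators.
ranking = sorted(
    ((name, abs(np.corrcoef(vals, avg_interference)[0, 1]))
     for name, vals in counters.items()),
    key=lambda kv: kv[1], reverse=True,
)
for name, corr in ranking:
    print(f"{name}: |r| = {corr:.2f}")
```

With these invented numbers the cache-miss counter tracks interference closely, so it would be selected as the estimator.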
The final part deals with measuring interference in a production environment at affordable overhead. The technique is based on short (on the order of milliseconds) freezes of virtual machines to see how they affect other VMs (hence the method’s name, Freeze’nSense). By comparing a VM’s performance when the other VMs are active with its performance when they are frozen, it is possible to determine how much speed it loses by sharing hardware with other applications.
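The comparison at the heart of this measurement reduces to a simple ratio. A minimal sketch with made-up throughput numbers (the real method measures these during brief freeze windows on a live host):

```python
# Illustrative measurements for one VM (values are invented):
# its throughput while neighbouring VMs run normally, versus during
# a short window in which the neighbours are frozen.
throughput_neighbours_active = 820.0   # e.g. requests/s
throughput_neighbours_frozen = 1000.0  # same VM, neighbours paused

# Relative speed lost to sharing hardware with the other VMs.
slowdown = 1.0 - throughput_neighbours_active / throughput_neighbours_frozen
print(f"interference slowdown: {slowdown:.0%}")  # → 18%
```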
Contact: Alexandre Kandalintsev, a.kandalintsev [at] unitn.it