Search-Based Scheduling of Experiments in Continuous Deployment

Gerald’s research talk at ICSME’18 in Madrid

Paper reference:
Gerald Schermann, Philipp Leitner, “Search-Based Scheduling of Experiments in Continuous Deployment”, in Proceedings of the 34th IEEE International Conference on Software Maintenance and Evolution, Madrid, Spain, 2018

Abstract:
Continuous experimentation involves practices for testing new functionality on a small fraction of the user base in production environments. Running multiple experiments in parallel requires handling user assignments (i.e., which users are part of which experiments) carefully, as experiments might overlap and influence each other. Furthermore, experiments are prone to change, get canceled, or are adjusted and restarted, and new ones are added regularly. We formulate this as an optimization problem that fosters the parallel execution of experiments while making sure that enough data is collected for every experiment and that overlapping experiments are avoided. We propose a genetic algorithm that is capable of (re-)scheduling experiments and compare it with other search-based approaches (random sampling, local search, and simulated annealing). Our evaluation shows that our genetic algorithm outperforms the other approaches by up to 19% regarding the fitness of the identified solutions and by up to a factor of three in execution time across our evaluation scenarios.
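
To give a flavor of the kind of search the paper describes, below is a minimal sketch (not the paper's implementation): a genetic algorithm that evolves start slots for experiments so that concurrently running experiments never need more than the full user base, while all experiments finish as early as possible. The experiment list, the traffic-share/duration encoding, the fitness weights, and the operators are all illustrative assumptions.

import random

# Hypothetical experiments: (name, share of user traffic needed, duration in slots).
EXPERIMENTS = [("A", 0.4, 3), ("B", 0.3, 2), ("C", 0.5, 2), ("D", 0.2, 4)]
HORIZON = 8          # number of time slots in which an experiment may start
POP_SIZE = 40
GENERATIONS = 200
MUTATION_RATE = 0.2

def random_schedule():
    # A candidate schedule assigns one start slot per experiment.
    return [random.randrange(HORIZON) for _ in EXPERIMENTS]

def fitness(schedule):
    # Penalize slots where running experiments together need more than 100% of
    # the user base (overlapping experiments would influence each other), and
    # reward schedules in which all experiments finish early.
    penalty = 0.0
    for slot in range(HORIZON + max(d for _, _, d in EXPERIMENTS)):
        load = sum(share for start, (_, share, dur) in zip(schedule, EXPERIMENTS)
                   if start <= slot < start + dur)
        penalty += max(0.0, load - 1.0)
    makespan = max(start + dur for start, (_, _, dur) in zip(schedule, EXPERIMENTS))
    return -(10.0 * penalty + makespan)

def crossover(a, b):
    # One-point crossover of two parent schedules.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(schedule):
    # Randomly reassign start slots with a small probability.
    return [random.randrange(HORIZON) if random.random() < MUTATION_RATE else s
            for s in schedule]

def evolve():
    population = [random_schedule() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP_SIZE // 4]          # keep the best quarter
        offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                     for _ in range(POP_SIZE - len(elite))]
        population = elite + offspring
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    for (name, share, dur), start in zip(EXPERIMENTS, best):
        print(f"experiment {name}: slots {start}-{start + dur - 1}, {share:.0%} of users")

In this simplified setting, re-scheduling after experiments are canceled, adjusted, or added would amount to re-running the search on the updated experiment list, possibly seeding the initial population with the previously best schedule.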

Estimating Cloud Application Performance Based on Micro-Benchmark Profiling

Joel’s research talk at IEEE CLOUD 2018 in San Francisco

Paper reference:

Joel Scheuner, Philipp Leitner, “Estimating Cloud Application Performance Based on Micro-Benchmark Profiling”, in Proceedings of the 11th IEEE International Conference on Cloud Computing (CLOUD’18), San Francisco, USA, 2018.

Cloud WorkBench: https://github.com/sealuzh/cloud-workbench

Abstract:

The continuing growth of the cloud computing market has led to an unprecedented diversity of cloud services. To support service selection, micro-benchmarks are commonly used to identify the best-performing cloud service. However, it remains unclear how relevant these synthetic micro-benchmarks are for gaining insights into the performance of real-world applications.
Therefore, this paper develops a cloud benchmarking methodology that uses micro-benchmarks to profile applications and subsequently predicts how an application performs on a wide range of cloud services. A study with a real cloud provider (Amazon EC2) was conducted to quantitatively evaluate the estimation model, using 38 metrics from 23 micro-benchmarks and 2 applications from different domains. The results reveal remarkably low variability in cloud service performance and show that selected micro-benchmarks can estimate the duration of a scientific computing application with a relative error of less than 10% and the response time of a Web serving application with a relative error between 10% and 20%. In conclusion, this paper emphasizes the importance of cloud benchmarking by substantiating the suitability of micro-benchmarks for estimating application performance in comparison to common baselines, but it also highlights that only selected micro-benchmarks are relevant for estimating the performance of a particular application.
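
The abstract does not spell out the estimation model itself. As a rough illustration of the profiling idea, the sketch below assumes a simple linear model fitted with ordinary least squares: micro-benchmark metrics measured on already-profiled cloud configurations are used to estimate an application-level metric on an unseen service. The benchmark names and all numbers are made up for illustration and are not taken from the paper.

import numpy as np

# Rows: cloud service configurations we already profiled.
# Columns: selected micro-benchmark metrics (e.g., CPU events/s, disk MB/s).
micro_benchmarks = np.array([
    [1200.0,  95.0],
    [1800.0, 140.0],
    [2400.0, 180.0],
    [3000.0, 230.0],
])
# Application response time (ms) measured on those same configurations.
app_response_time = np.array([410.0, 305.0, 240.0, 205.0])

# Fit a linear estimation model: response_time ~ X @ weights (with intercept).
X = np.column_stack([micro_benchmarks, np.ones(len(micro_benchmarks))])
weights, *_ = np.linalg.lstsq(X, app_response_time, rcond=None)

# Estimate the application's response time on an unseen service using only
# its (cheap) micro-benchmark profile.
new_service = np.array([2100.0, 160.0, 1.0])
estimate = new_service @ weights
print(f"estimated response time: {estimate:.0f} ms")

# The relative error reported in the abstract would then be
# |estimate - measured| / measured, computed once the application has
# actually been deployed and measured on that service.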