Distributed Computing Environments (DICE) Team
We are a group of computer scientists and IT experts from the Department of Computer Science AGH and ACC Cyfronet AGH. We are a curiosity- and research-driven team, specializing in large-scale distributed computing, HPC, Web and Cloud technologies. We develop new methods, tools and environments and we apply these solutions in e-Science, healthcare and industrial domains.



Performance evaluation of cloud functions

Posted by Maciej Malawski at Dec 21, 2017 04:39 PM

Cloud Functions, often called Function-as-a-Service (FaaS) and pioneered by AWS Lambda, are an increasingly popular method of running distributed applications. We developed a cloud function benchmarking framework based on the Serverless Framework and evaluated all the major cloud function providers: AWS Lambda, Azure Functions, Google Cloud Functions and IBM OpenWhisk. Our results are available online and continuously updated at http://cloud-functions.icsr.agh.edu.pl/

This service provides continuous performance monitoring of cloud functions on:
  • AWS Lambda
  • Google Cloud Functions
  • Azure Functions
  • IBM Bluemix OpenWhisk
The data are available for browsing via the dashboard and may be used for further analysis, with proper acknowledgment of the authors.

CPU Performance

In this experiment we use a random number generator as an example of an integer-based, CPU-intensive benchmark. Such generators are key components of many scientific applications, such as Monte Carlo methods, which are good potential candidates for running as cloud functions.

Specifically, the cloud function is a JavaScript wrapper around the binary benchmark, a program written in C. We used the popular Mersenne Twister (MT19937) random number generator algorithm. The benchmark runs approximately 16.7 million iterations of the algorithm with a fixed seed in each run, providing a reproducible load.
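
For illustration, the core of such a benchmark can be sketched in a few lines of JavaScript. This is not the original C code: a simple xorshift32 generator stands in for MT19937, and the iteration count (2^24, approximately 16.7 million) and seed value are illustrative assumptions.

    // Sketch of the CPU-intensive benchmark loop (illustrative, not the
    // original C benchmark). A simple xorshift32 PRNG stands in for MT19937;
    // the fixed seed makes every run a reproducible load.
    const ITERATIONS = 1 << 24; // approx. 16.7 million iterations
    const SEED = 12345;         // fixed seed -> deterministic sequence

    function xorshift32(state) {
      // One step of the xorshift32 generator, kept in unsigned 32-bit range.
      state ^= state << 13; state >>>= 0;
      state ^= state >>> 17;
      state ^= state << 5;  state >>>= 0;
      return state;
    }

    let state = SEED;
    for (let i = 0; i < ITERATIONS; i++) {
      state = xorshift32(state);
    }
    console.log('final value:', state); // same output for the same seed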

We measure the execution time t_b of the binary benchmark from within the JavaScript wrapper running on the serverless infrastructure, and the total request processing time t_r on the client side.
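
A minimal sketch of such a wrapper, assuming an AWS Lambda-style handler and a hypothetical binary name (./mt_benchmark):

    // Sketch of the JavaScript wrapper measuring t_b (handler shape and
    // binary name are assumptions, not the original code).
    const { execFileSync } = require('child_process');

    exports.handler = async () => {
      const start = process.hrtime();          // high-resolution timer
      execFileSync('./mt_benchmark');          // run the C binary (MT19937 loop)
      const [s, ns] = process.hrtime(start);   // elapsed [seconds, nanoseconds]
      const tb = s * 1e3 + ns / 1e6;           // t_b in milliseconds

      return { statusCode: 200, body: JSON.stringify({ tb }) };
    };

The client-side time t_r is then simply the wall-clock time around the HTTP request that invokes the function, so t_r - t_b approximates the platform and network overhead.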

Instance lifetime

We observed that providers reuse the same execution environment, i.e., the same Node.js process, to process subsequent requests. To investigate this reuse behavior, we measure t_l, the maximum recorded lifetime of each execution environment process. To do this, we assign the timer value to a global variable when the execution environment starts, and we return the time elapsed since then with the response to every request. To distinguish one execution environment from another, we either use the MAC address of the virtualized network adapter (AWS) or a random identifier assigned to another global variable (other providers).
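
A minimal sketch of this probe, assuming an AWS Lambda-style handler and showing the random-identifier variant (on AWS, the MAC address from os.networkInterfaces() can serve as the identifier instead):

    // Sketch of the instance-lifetime probe (illustrative handler shape).
    // Module-level state survives across requests whenever the provider
    // reuses the same Node.js process.
    const crypto = require('crypto');

    const startedAt = Date.now();                              // set once per process
    const instanceId = crypto.randomBytes(8).toString('hex');  // random id variant

    exports.handler = async () => ({
      statusCode: 200,
      body: JSON.stringify({
        instanceId,                         // identifies this execution environment
        lifetimeMs: Date.now() - startedAt  // elapsed since the process started
      })
    });

On the client side, t_l for a given instanceId is simply the largest lifetime value ever reported for it.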

Data transfer benchmark

We have also deployed a second set of functions for AWS and GCF, in which we replaced the CPU-intensive benchmark with a procedure that measures the time required to download and upload a 64 MB file from and to object storage. We chose this file size so that the transfer time falls between 1 and 30 seconds, a range in which we can expect the transfer rate to dominate the latency. As object storage we used Amazon S3 for AWS Lambda and Google Cloud Storage for GCF.
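
For the AWS Lambda variant, the measurement can be sketched as follows (bucket and object names are made up for illustration; the aws-sdk v2 S3 client is assumed):

    // Sketch of the data transfer benchmark (illustrative names and shape).
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    const BUCKET = 'transfer-benchmark-bucket'; // hypothetical bucket
    const KEY = 'payload-64mb.bin';             // hypothetical 64 MB object

    exports.handler = async () => {
      // Download: time getObject for the 64 MB payload.
      let start = Date.now();
      const obj = await s3.getObject({ Bucket: BUCKET, Key: KEY }).promise();
      const downloadMs = Date.now() - start;

      // Upload: time putObject writing the payload back under a new key.
      start = Date.now();
      await s3.putObject({ Bucket: BUCKET, Key: 'copy-' + KEY, Body: obj.Body }).promise();
      const uploadMs = Date.now() - start;

      return { statusCode: 200, body: JSON.stringify({ downloadMs, uploadMs }) };
    };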

Results

Our measurements revealed interesting observations on how differently Amazon and Google interpret their resource allocation policies. In summary, the execution performance of AWS Lambda functions is proportional to the allocated memory, though sometimes slightly slower than that proportion would suggest, while the performance of Google Cloud Functions is also proportional to the allocated memory but often much faster than expected. This behavior is also confirmed by the data transfer times, which we measured for GCF and AWS Lambda.

More information: Maciej Malawski, Kamil Figiela, Adam Gajek, Adam Zima: "Benchmarking Heterogeneous Cloud Functions", in Proceedings of the HeteroPar Workshop, Santiago de Compostela, Spain, August 2017 (preprint)
