VI. RELATED WORK
In this section, we give an overview of related work on
SELENIUM-based test automation and load testing.
Several prior studies discussed automated test generation
methodologies in SELENIUM, using a combination of human-written
scripts and crawlers to explore the dynamic states of
the application [8, 9, 10]. The performance issues of SELENIUM
were discussed by Vila et al. [12], who highlighted
that the SELENIUM WebDriver consumes a large amount of
resources, as the whole application needs to be loaded in
the browser (including all images, CSS and JavaScript
files). Our experimental results confirm that SELENIUM-based
testing is resource-intensive. Therefore, we proposed sharing
browsers between user instances to improve the efficiency of
SELENIUM-based load testing.
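The idea of sharing browser instances between simulated users can be sketched as a pool from which each user borrows a browser for the duration of a single action. The sketch below is a minimal, self-contained illustration of this idea, not our actual implementation; `StubBrowser` and all other names are hypothetical stand-ins for a real SELENIUM WebDriver:

```python
import queue
import threading

class StubBrowser:
    """Hypothetical stand-in for a real SELENIUM WebDriver instance."""
    def get(self, url):
        return f"loaded {url}"

class BrowserPool:
    """A fixed pool of browsers shared by many simulated users."""
    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(StubBrowser())

    def run_user_action(self, url):
        browser = self._pool.get()       # block until a browser is free
        try:
            return browser.get(url)      # perform one simulated user step
        finally:
            self._pool.put(browser)      # hand the browser to the next user

pool = BrowserPool(size=2)               # 2 browsers serve 8 users below
results, lock = [], threading.Lock()

def user(i):
    page = pool.run_user_action(f"https://example.test/page/{i}")
    with lock:
        results.append(page)

threads = [threading.Thread(target=user, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # prints 8: all actions completed with only 2 browsers
```

Because a browser is held only for the duration of one action, far fewer browser instances than user instances are needed, which is what reduces the resource usage of the load driver.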
There exists a large body of prior work on load testing,
which was summarized by Jiang and Hassan [7]. However,
that body of work focuses on testing how the AUT
responds to various levels of load. Our focus is
different: we study how to improve the
efficiency of the load driver, the component that generates the load
for the AUT.
To the best of our knowledge, we are the first to system-
atically study how SELENIUM can be used for load testing.
Dowling and McGrath [4] suggested that SELENIUM can be
used alongside a request-based load testing framework, such as
JMeter. However, we are the first to propose a load testing
framework that relies solely on SELENIUM.
VII. CONCLUSION
Request-based load testing frameworks, such as JMeter,
are the de facto standard for executing load tests. However,
browser-based load tests (e.g., using SELENIUM) have several
advantages over request-based ones. For example,
browser-based load tests can simulate complex user interactions
within a real browser. Unfortunately, browser-based load
testing is resource-heavy, which limits its applicability.
In this paper, we studied the resource usage of SELENIUM-
based load tests in different configurations for executing the
load test. Our most important findings are:
• Headless browsers consume considerably fewer resources
than other types of browser instances.
• The capacity of a load driver (in terms of the number
of users that it can simulate) can be increased by at least
20% by sharing browser instances between user instances.
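As an illustration of the first finding: with SELENIUM, a headless browser is typically enabled through the browser's options before the driver is started. The configuration sketch below assumes the standard `selenium` Python bindings and a local Chrome/chromedriver installation; the AUT URL is hypothetical.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")       # run Chrome without a visible UI

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.test/")  # hypothetical AUT
finally:
    driver.quit()
```

The same test script runs unchanged; only the browser configuration differs between headless and regular instances.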
We took a first important step towards more efficient load
testing in SELENIUM. Practitioners can use our approach as a
foundation to improve the capacity of the load drivers of their
own browser-based load tests.
ACKNOWLEDGMENT
We are grateful to BlackBerry for providing valuable support
and suggestions for our study. The findings and opinions
expressed in this paper are those of the authors and do not
necessarily represent or reflect those of BlackBerry and/or
its subsidiaries and affiliates. Our results do not in any way
reflect the quality of BlackBerry's products.
REFERENCES
[1] BlazeMeter (2016). Headless Execution of Selenium
Tests in Jenkins. https://www.blazemeter.com/blog/
headless-execution-selenium-tests-jenkins. (Accessed on
02/01/2019).
[2] Census (2016). Census 2016: IT experts say Bureau
of Statistics should have expected website crash.
https://www.smh.com.au/national/census-2016-it-experts-
say-bureau-of-statistics-should-have-expected-website-
crash-20160809-gqosj7.html. (Accessed on 02/01/2019).
[3] Debroy, V., Brimble, L., Yost, M., and Erry, A. (2018).
Automating web application testing from the ground up:
Experiences and lessons learned in an industrial setting.
In 2018 IEEE 11th International Conference on Software
Testing, Verification and Validation (ICST), pages 354–362.
[4] Dowling, P. and McGrath, K. (2015). Using free and
open source tools to manage software quality. Queue,
13(4):20:20–20:27.
[5] Exchange (2005). Exchange performance result.
https://www.dell.com/downloads/global/solutions/
poweredge6850_05_31_2005.pdf. (Accessed on
02/01/2019).
[6] Gojare, S., Joshi, R., and Gaigaware, D. (2015). Analysis
and design of Selenium WebDriver automation testing
framework. Procedia Computer Science, 50:341–346. Big
Data, Cloud and Computing Challenges.
[7] Jiang, Z. M. and Hassan, A. E. (2015). A survey on load
testing of large-scale software systems. IEEE Transactions
on Software Engineering, 41(11):1091–1118.
[8] Milani Fard, A., Mirzaaghaei, M., and Mesbah, A. (2014).
Leveraging existing tests in automated test generation for
web applications. In Proceedings of the 29th ACM/IEEE
International Conference on Automated Software Engineer-
ing, pages 67–78. ACM.
[9] Mirshokraie, S., Mesbah, A., and Pattabiraman, K. (2013).
Pythia: Generating test cases with oracles for JavaScript
applications. In 2013 28th IEEE/ACM International Con-
ference on Automated Software Engineering (ASE), pages
610–615.
[10] Stocco, A., Leotta, M., Ricca, F., and Tonella, P. (2015).
Why creating web page objects manually if it can be
done automatically? In 2015 IEEE/ACM 10th International
Workshop on Automation of Software Test, pages 70–74.
[11] The Exchange Team (2007). MAPI Messaging
Benchmark Being Retired. https://blogs.technet.microsoft.
com/exchange/2007/11/06/mapi-messaging-benchmark-
being-retired/. (Accessed on 02/01/2019).
[12] Vila, E., Novakova, G., and Todorova, D. (2017). Automation
testing framework for web applications with Selenium
WebDriver: Opportunities and threats. In Proceedings of the
International Conference on Advances in Image Processing,
ICAIP 2017, pages 144–150. ACM.