
Selenium Grid with Docker

Selenium WebDriver on its own, or through an implementation built on top of it such as Geb, is arguably the most popular solution for testing web-based applications. For all its strengths, it has some flaws: Selenium tests are slow, and their maintenance cost is high. The answer to the first issue is distributed testing with Selenium Grid, which I described previously.

From the DevOps perspective though, setting up a Selenium Grid configuration like that is highly expensive and hard to scale. The answer to this can be Docker with its docker-compose tool. In this post we will create a VM provisioned by docker-compose and set up a scalable Selenium Grid. All of this will be run with one command.

What is Docker 

In simple words, Docker – with the use of Linux containers – allows you to pack all your application dependencies, like a database, system libraries and so on, into standardised and portable units called containers. The main difference from virtualization tools like Vagrant is that you don't need to ship an entire OS to your CI or production server; instead, you manage containers as independent units. This is just the big picture of the motivation behind Docker. For detailed documentation and installation instructions, please visit the official Docker site.

Since there are plenty of Docker installation instructions on the web, we'll assume that you've already installed it (you can refer to the Get Started section of the official documentation) and that you have the default machine up and running. To test your installation, type:

$ docker info

…and you should see output similar to this:
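The exact output depends on your Docker version and host, but an abbreviated example could look roughly like the listing below (all values here are purely illustrative):

$ docker info
Containers: 3
Images: 24
Server Version: 1.10.3
Storage Driver: aufs
Operating System: Boot2Docker 1.10.3
CPUs: 1
Total Memory: 995.8 MiB
Name: default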


Setting Grid configuration 

A little reminder of the Selenium Grid architecture: the entry point of our Grid is the Selenium Hub. It's the place (a VM or bare-metal machine) where we point our test execution. The next elements are nodes – machines that, once registered to the hub, can execute our Selenium tests.

In order to create the hub on our localhost, we need to pull and run the Selenium Hub container from the Docker repository:

$ docker run -d --name selenium-hub -p 4444:4444 selenium/hub

This command will download and run the hub container on our localhost. When the container's download is complete, visit http://localhost:4444/grid/console and you should see an empty grid console (if you created your docker machine with an address different from the default, change localhost to the chosen IP).
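If you are using Docker Machine, you can check that IP with the command below (assuming your machine is the default one, named default):

$ docker-machine ip default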

Now we will create two nodes, one with Firefox and a second with Chrome. To download and run the Firefox container:

$ docker run -d -P --link selenium-hub:hub selenium/node-firefox

…and for Chrome:

$ docker run -d -P --link selenium-hub:hub selenium/node-chrome

We should now have three Docker containers running on our local Docker machine. We can check that with:

$ docker ps

The output should be similar to this:
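The columns below are trimmed and the IDs and auto-generated names are placeholders, but the three images and the hub's port mapping should roughly match what you see:

$ docker ps
CONTAINER ID   IMAGE                   ...   PORTS                    NAMES
<id>           selenium/node-chrome    ...   ...                      <generated-name>
<id>           selenium/node-firefox   ...   ...                      <generated-name>
<id>           selenium/hub            ...   0.0.0.0:4444->4444/tcp   selenium-hub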


We've created Firefox and Chrome nodes, so the web console of our grid should now display both of them.


Provisioning with docker-compose 

Everything is great, but what about that one-line command to start the whole thing up? Here comes the docker-compose tool. Docker-compose reads a definition file that describes multi-container Docker set-ups. First of all, let's stop all running containers. You can do it with:

$ docker stop $(docker ps -a -q)

We have to create a docker-compose.yml file describing our set-up.
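A minimal sketch of such a file, assuming the legacy Compose v1 syntax that was current at the time and the entity names used in the description below (seleniumhub, chromenode, firefoxnode), could look like this:

seleniumhub:
  image: selenium/hub          # grid entry point
  ports:
    - "4444:4444"              # expose the hub on the standard Selenium port

chromenode:
  image: selenium/node-chrome
  links:
    - seleniumhub:hub          # register this node to the hub

firefoxnode:
  image: selenium/node-firefox
  links:
    - seleniumhub:hub          # register this node to the hub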


The file structure is rather simple. We've defined a seleniumhub entity, pointed it at an image name (images can be found in the Docker Hub repository) and assigned ports. Then we've defined two node entities: chromenode and firefoxnode. The important thing here is that we have to link them to the seleniumhub container. Since it's a YAML file, you should be aware of proper indentation. When our file is ready, run:

$ docker-compose up -d

If everything went smoothly, you can check docker ps or point your browser directly at http://localhost:4444/grid/console. As a result, just like previously, there is a hub with two nodes, but this time the configuration is defined in one file and can be started with one command. The docker-compose file can now be added to your repository and reused.
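To double-check what Compose started, you can also run the command below from the directory containing docker-compose.yml; it lists only the containers defined in that file:

$ docker-compose ps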

Scaling 

When our test base grows, two nodes may be far from enough. Luckily, docker-compose comes with a great feature which allows you to scale the number of similar containers on the fly. If your two-node grid is running and you want to increase the number of Chrome nodes to three, enter the command:

$ docker-compose scale chromenode=3

Now you have two more containers with a Chrome node, both registered to your hub.
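The same works for the Firefox nodes, and several services can be scaled in a single call; for example (using the service names defined in the compose file above):

$ docker-compose scale chromenode=3 firefoxnode=2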


Continue reading 

If you want to continue reading and expand your knowledge in the area of Docker and Selenium Grid, I recommend the following books:


Summary 

If you are running a Selenium Grid configuration, Docker can be a great way to boost your productivity and help you manage your stack. In a future post I will describe some more advanced configurations with Docker. If you have any questions, please leave a comment.

