Selenium Grid with Docker

Selenium WebDriver on its own, or through implementations built on top of it like Geb, is arguably the most popular solution for testing web-based applications. For all its greatness, it has some flaws: Selenium tests are slow, and their maintenance cost is high. The answer to the first issue is distributed testing with Selenium Grid, which I described previously.

From the DevOps perspective though, setting up a Selenium Grid that way is expensive to maintain and hard to scale. The answer to this can be Docker with its docker-compose tool. In this post we will provision a machine with docker-compose and set up a scalable Selenium Grid on it, all started with a single command.

What is Docker 

In simple words, Docker – with the use of Linux containers – allows you to pack all your application dependencies, like the database, system libraries and so on, into standardised and portable units called containers. The main difference from virtualization tools like Vagrant is that you don’t need to ship an entire OS to your CI or production server. Instead, you manage containers as independent units. This is just a big picture of the motivation behind Docker. For detailed documentation and installation instructions, please visit the official Docker site.

Since there are plenty of Docker installation instructions on the web, we’ll assume that you’ve already done it (you can refer to the Get Started section of the official documentation) and that you have the default machine up and running. To test your installation, type:

$ docker info

…and you should see output describing your Docker installation: the server version, the number of containers and images, the storage driver, and so on.


Setting Grid configuration 

A little reminder of the Selenium Grid architecture: the entry point of our grid is the Selenium Hub. It’s the place (a VM or bare-metal machine) where we point our test execution. The next elements are nodes – machines that, once registered to the hub, can execute our Selenium tests.
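
To picture where the tests fit in: once the grid we set up below is running, a test only needs the hub’s URL. A minimal Java sketch could look like the one below – the class name and the visited page are made up for illustration, and the hub address assumes the localhost setup used in this post:

import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridSmokeTest {

    public static void main(String[] args) throws Exception {
        // Point the driver at the hub; the hub forwards the session
        // to a matching node (here: any node offering Chrome).
        WebDriver driver = new RemoteWebDriver(
                new URL("http://localhost:4444/wd/hub"),
                DesiredCapabilities.chrome());
        try {
            driver.get("https://example.org");
            System.out.println("Page title: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}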

In order to create the hub on our localhost, we need to pull the Selenium Hub image from the Docker Hub repository and run it as a container:

$ docker run -d --name selenium-hub -p 4444:4444 selenium/hub

This command will download and run the hub container on our localhost. When the download is complete, visit http://localhost:4444/grid/console, and you should see an empty grid console (if you created your docker machine with a non-default address, replace localhost with the chosen IP).
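
If you prefer to check from the command line instead of the browser, you can peek at the hub container’s logs (the exact messages depend on the image version):

$ docker logs selenium-hub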

Now we will create two nodes, one with Firefox and a second with Chrome. To download and run the Firefox node container:

$ docker run -d -P --link selenium-hub:hub selenium/node-firefox

…and for Chrome:

$ docker run -d -P --link selenium-hub:hub selenium/node-chrome

We should now have three Docker containers running on our local Docker machine. We can check that with:

$ docker ps

The output should list all three running containers: the hub and the two nodes.


We’ve created Firefox and Chrome nodes, so the grid’s web console should now display both of them.


Provisioning with docker-compose 

Everything is great, but what about that one-line command to start the whole thing up? Here comes the docker-compose tool. docker-compose reads a definition file describing a multi-container Docker setup. First of all, let’s stop all running containers. You can do it with:

$ docker stop $(docker ps -a -q)
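
If you also want to get rid of the stopped containers entirely (optional – not required for the rest of this post), you can follow up with:

$ docker rm $(docker ps -a -q)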

We have to create a docker-compose.yml file with the following content:


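Below is a minimal sketch of such a file, assuming the images we pulled earlier and the entity names described in the next paragraph; your exact values (for example the exposed port) may differ:

seleniumhub:
  image: selenium/hub
  ports:
    - "4444:4444"

chromenode:
  image: selenium/node-chrome
  links:
    - seleniumhub:hub

firefoxnode:
  image: selenium/node-firefox
  links:
    - seleniumhub:hub
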
The file structure is rather simple. We’ve defined a seleniumhub entity, pointed it at an image name (images can be found in the Docker Hub repository) and assigned ports. Then we’ve defined two node entities: chromenode and firefoxnode. The important thing here is that we have to link them to the seleniumhub container. Since it’s a YAML file, you should be aware of proper indentation. When our file is ready, run:

$ docker-compose up -d

If everything went smoothly, you can check docker ps or point your browser directly at http://localhost:4444/grid/console. As a result, just like previously, there is a hub with two nodes, but this time the configuration is defined in one file and can be started with one command. The docker-compose file can now be added to your repository and reused.
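
Since the stack is now managed by docker-compose, you can also inspect it and tear it down through compose itself, for example:

$ docker-compose ps
$ docker-compose stop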

Scaling 

When our test base grows, two nodes can be far from enough. Luckily, docker-compose comes with a great feature which allows you to scale the number of similar containers on the fly. If your two-node grid is running and you want to increase the number of Chrome nodes to three, enter the command:

$ docker-compose scale chromenode=3

Now you have two more Chrome node containers registered to your hub.
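
The scale command accepts several services at once, so you can resize the whole grid in a single call – for example (the numbers here are arbitrary):

$ docker-compose scale chromenode=3 firefoxnode=2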


Summary 

If you are running a Selenium Grid configuration, Docker can be a great way to boost your productivity and to help you manage your stack. In a future post I will describe some more advanced configurations with Docker. If you have any questions, please leave a comment.

