
Dockerize your Tests and Selenium Test Environment

Bakir Jusufbegovic, Zlatan Cilic, Atlantbh, https://www.atlantbh.com/

If you work as a software test engineer, you are well aware of the struggles of test automation: testing against different browsers, installing test environments from scratch, incompatibilities between Selenium, browser drivers and browser versions, and so on. Test environments are hard to replicate, tedious to install from scratch and full of browser dependencies that are easy to forget.

Luckily, with emerging container-based technologies like Docker, many struggles in development and operations have been eliminated. But did you ever wonder how Docker can contribute to the field of test automation?

This is the question we posed to ourselves in order to eliminate the problems described above, and this article presents how we did it.

Our initial problem

We started with the following requirements: we want an isolated environment based on the specific technology used for running tests (for example, an isolated Ruby or Java environment with the tools needed for running tests, such as RSpec or TestNG, preinstalled). This environment should be easy to install and temporary: it exists only while tests are running, and teardown is done afterwards. We should be able to test against different browser versions and change those versions very easily. Finally, we wanted to integrate this solution with Owl, our company product for test reporting.

A very big help in this effort came from https://github.com/SeleniumHQ/docker-selenium, the Selenium project's repository of dockerized Selenium environments. More precisely, you can run standalone Firefox or Chrome browsers with the appropriate driver in Docker, or you can use Selenium Grid and run the hub and specific browsers in a dockerized environment. We thought the Selenium Grid approach was very interesting and could provide us the most benefits, so we started to build our solution around it.
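
For example, the standalone flavor bundles a browser and its matching driver into one self-contained Selenium server; a single command is enough to try it out (the image below is one of those published in that repository):

docker run -d -p 4444:4444 selenium/standalone-chrome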

For the purposes of this POC, we used our test automation suite for the Atlantbh homepage, which is implemented in Ruby/RSpec/Capybara, but the solution was built so that it can be used with any Ruby/RSpec project. For the purposes of this article, we will stick to the Atlantbh homepage project, which is available in the GitHub repo: https://github.com/ATLANTBH/abhhomepage-automation

Our approach

Here is what the workflow should look like:

  1. Deploy Selenium Grid by running the Selenium Hub and then separate nodes for Chrome/Firefox combinations. These nodes are connected to the Selenium Grid
  2. Create a Docker image which contains everything necessary for running tests (rvm, ruby)
  3. Run that image as a Docker container and copy over the content of our tests
  4. Run tests against the Selenium Grid which, by reading the capabilities, knows which browser is the target
  5. After the tests have been executed, the temporary container in which they ran is destroyed, since it is no longer needed

We thought this approach fulfilled most of the requirements above and gives each test engineer a very easy setup: all s/he needs is the repo cloned and Docker installed.

The first thing to do was to create a Dockerfile for our rvm/ruby environment, which we can use to build a new container in which the tests are executed. This is how it looks:

FROM ubuntu:xenial
LABEL maintainer="bakir@atlantbh.com"
 
# Defaults
ARG RUBY_VERSION="2.3.3"
 
# Install rvm
RUN gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
RUN apt-get update && apt-get install -y \
    curl \
    git \
    libpq-dev
RUN \curl -sSL https://get.rvm.io | bash -s stable
 
# Install ruby version
RUN /bin/bash -l -c "rvm install ${RUBY_VERSION}"
RUN /bin/bash -l -c "gem install bundler --no-rdoc --no-ri"
RUN /bin/bash -l -c "source /etc/profile.d/rvm.sh"
 
# Copy test scripts
RUN mkdir /tests
COPY . /tests
RUN cd /tests && chmod +x spec/* && /bin/bash -l -c "bundle install"
 
# Set working directory and pass tests that you want to execute
WORKDIR /tests
ENTRYPOINT ["/bin/bash", "-l", "-c"]
CMD ["bundle exec rspec spec/${TESTS_TO_RUN}"]

Even if you are not familiar with the structure of a Dockerfile, this one should be fairly easy to follow. In a nutshell, here is an explanation by section:

# Defaults - we can pass which Ruby version we want to install in rvm

# Install rvm - as the name suggests, this section installs rvm

# Install ruby version - we install the chosen Ruby version in rvm, together with the bundler gem

# Copy test scripts - we create the /tests directory, copy our tests from the local machine into the container and run bundle install. The advantage of running bundle install here is that it is executed only while building the Docker image, so we don't need to run it again every time we create a container to execute tests. This saves a considerable amount of time in the container run phase. We also assume that Gemfile.lock is already up to date; the user will check out the repository and then execute the tests located in it.

# Set working directory and pass tests that you want to execute - as the name suggests, we set the working directory in the container to /tests and execute tests from that location (the tests were copied there in the previous step). An important thing to note is the TESTS_TO_RUN variable, which should be populated when we run the container. Here we can pass exactly which tests we want to execute, so we are not limited to running the complete suite only. This works the same way as test filtering in RSpec: we can provide * to execute everything, or a substring or the complete name of a script to execute part of the test suite or one specific test script.
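
Since the ENTRYPOINT is a login shell invoked with -c, the CMD string is evaluated by bash when the container starts, which is what makes the ${TESTS_TO_RUN} expansion work at run time. A couple of illustrative invocations (other options, such as the Selenium Grid address, are omitted here and shown in full in the next section):

# Run the complete suite
docker run -e TESTS_TO_RUN="*" atlantbh/ruby

# Run a single spec file by setting the same variable
docker run -e TESTS_TO_RUN="check_about_links.rb" atlantbh/ruby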

How it all works together

Now that we know how our Dockerfile works under the hood, it is time to put all the pieces together into a deployable workflow:

1. Run Selenium Hub:

docker run -d -P -p 4444:4444 -e GRID_BROWSER_TIMEOUT=60 --name selenium-hub selenium/hub

This command downloads the selenium/hub image from Docker Hub, runs the container and exposes port 4444 so that it is accessible from the outside (this is needed since our tests will communicate with the Selenium Hub through that port).

We also set GRID_BROWSER_TIMEOUT because it defaults to 0, and we don't want our tests to fail because of potential timeout issues on the Selenium Hub. To make sure the Selenium Hub is running, you can access http://<SELENIUM_HUB_ADDRESS>:4444 in your browser.
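
If you prefer the command line, the Grid 3 hub also exposes a small status API; a quick check (it should return a short JSON status) looks like this:

curl http://<SELENIUM_HUB_ADDRESS>:4444/grid/api/hub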

2. Run Nodes:

docker run -d -P --link selenium-hub:hub selenium/node-firefox-debug:3.4.0-einsteinium
docker run -d -P --link selenium-hub:hub selenium/node-chrome-debug:3.4.0-einsteinium

These two commands run separate containers for the Firefox and Chrome browsers with the appropriate drivers. The two nodes connect to the previously started Selenium Hub. Also notice that we use the selenium/node-firefox-debug and selenium/node-chrome-debug Docker images. We don't have to use the "debug" images, but they expose one interesting detail: a VNC server. By using these images, we can connect to the two nodes with any VNC client and watch the tests execute live. Inside the container, VNC runs on port 5900. To access the VNC servers from the outside, you need to know which ports are exposed outside the container. To find out, use the following command:

docker ps -a

CONTAINER ID  IMAGE                                          COMMAND                CREATED         STATUS        PORTS                   NAMES
adc7496479e8  selenium/node-chrome-debug:3.4.0-einsteinium   "/opt/bin/entry_po..." 8 minutes ago   Up 8 minutes  0.0.0.0:32769->5900/tcp fervent_hypatia
ddea6a689c6d  selenium/node-firefox-debug:3.4.0-einsteinium  "/opt/bin/entry_po..." 8 minutes ago   Up 8 minutes  0.0.0.0:32768->5900/tcp zen_goodall

You can see that port 32768 is exposed for the Firefox node, while port 32769 is exposed for the Chrome node. Use your VNC client (Mac users can use the Screen Sharing tool for this purpose) to watch live execution on these nodes.
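
One detail worth knowing: the debug images start their VNC server with the default password "secret" (documented in the docker-selenium project). On a Mac you can also open a session straight from the terminal, using the mapped ports from the docker ps output above:

open vnc://localhost:32769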

You can also verify that the Chrome and Firefox nodes are attached to the Selenium Hub in your browser: http://<SELENIUM_HUB_ADDRESS>:4444/grid/console

3. Create image from Dockerfile:

~/abhhomepage-automation$ docker build -t atlantbh/ruby .

This command assumes that you are located in your project's root directory and that it contains the Dockerfile (for this purpose, we use the abhhomepage-automation project). It can take a couple of minutes to build the Docker image locally, and it will be available under the name atlantbh/ruby (any other name can be used as well).

4. Run container from the newly created image:

Now that we have the Docker image ready, the only thing left to do is to run your test suite (or part of it) in a dockerized environment. You can do that using the following command:

docker run -e SELENIUM_GRID_URL="${SELENIUM_HUB_ADDRESS}:4444" -e TESTS_TO_RUN="*" -v /home/ubuntu/abhhomepage-automation:/tests atlantbh/ruby

This command is pretty much self-explanatory. It runs a temporary Docker container with: the SELENIUM_GRID_URL environment variable, which is picked up in the tests (for more info see https://github.com/ATLANTBH/abhhomepage-automation/blob/master/setup_browser.rb); the TESTS_TO_RUN environment variable, which selects the tests you want to execute; a mounted volume, which makes sure that the content of /home/ubuntu/abhhomepage-automation is copied over to /tests (it overwrites the /tests content baked in when the image was built, so a change in your test scripts does not require rebuilding the image: just run the container again and it picks up the changes); and the image name (atlantbh/ruby).
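
For illustration, here is a minimal, hypothetical sketch of how a helper like setup_browser.rb can consume SELENIUM_GRID_URL with Capybara. It is not the actual file (see the repository link above for that), just the general shape such a setup takes with the selenium-webdriver gem of that era:

require 'capybara/rspec'
require 'selenium-webdriver'

# Register a driver that sends WebDriver commands to the Selenium Grid hub
Capybara.register_driver :remote_browser do |app|
  Capybara::Selenium::Driver.new(
    app,
    browser: :remote,
    url: "http://#{ENV['SELENIUM_GRID_URL']}/wd/hub",
    desired_capabilities: :chrome # the grid routes the session to a matching node
  )
end

Capybara.default_driver = :remote_browser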

Conclusion and next steps

We hope this short introduction to the containerized world in the context of test automation helps you realize all the benefits this approach can bring. Using containers in software development is becoming more and more mainstream, and tests are no exception. The benefits of containerized micro-services can easily be seen in the setup of testing workflows. Here are some of the problems we solved with this approach:

  1. We can easily share our tests and test environment with each other, with minimal or zero configuration needed before running tests (we can even hand the tests to developers who want to run them against their local development environments).
  2. We don't have to worry about compatibility issues between various Selenium and browser driver versions. It is very easy to change the versions and test them out. Previously, it was very time consuming to install browsers, browser drivers and the corresponding Selenium WebDriver gems and make sure they all worked together.
  3. You can set up a cluster of nodes for a specific browser and run your tests against different browser versions to know which ones your tested application covers (see the example after this list). Previously, it was nearly impossible to maintain multiple browser versions, browser drivers and Selenium WebDriver gems on one machine. It was compatibility/dependency hell.
  4. Very easy setup in CI environments.
  5. Last but not least, there is zero footprint on the environment where you ran this setup. When you are finished with your work, just stop/remove the containers and all of the setup is removed as well, so you can always start from scratch.
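
As an illustration of point 3, pinning browser versions is just a matter of starting nodes from different image tags (both tags below appear elsewhere in this article):

docker run -d -P --link selenium-hub:hub selenium/node-chrome-debug:3.0.1-aluminum
docker run -d -P --link selenium-hub:hub selenium/node-chrome-debug:3.4.0-einsteinium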

Of course, once we went in this direction, more and more ideas arose, and the next one seemed natural: how can we fit this whole setup into one configuration file that is executed with one command, bringing everything up and running the tests?

Enter Docker Compose. But it doesn't stop there. Owl (our in-house test reporting tool) also supports Docker and Docker Compose, so we wanted to create an integrated solution using Docker Compose that gives us the ability to easily configure and manage the complete end-to-end test environment (from tests up to the reports).

Going forward, we wanted to integrate our solution with the Owl test reporting tool, thus completing the test execution lifecycle. The other, more challenging part of the process was simplification of use, mainly through changes in how the tests are run and how configuration files are maintained. An abstract diagram of the containerized environment we are trying to build is shown below.

[Figure: Docker software testing environment]

But can you really simplify running multiple Docker containers and attaching them to external networks? Can you really convert it to a one-liner? That was the challenge before us, and the solution is presented below.

Complicated vs Complex

Our solution at this point was good and functioned as it should, but it was cumbersome to use and required a lot of manual steps to get the tests up and running. We wanted to make it simpler, easier to use and easier to maintain. This meant making the solution less complicated but more complex. That was our first goal, and it went hand in hand with our second goal: integrating this solution with Owl, our company product for test reporting.

To help us achieve this goal, we used Docker Compose. Compose is a tool for defining and running Docker setups with multiple interacting containers. To use Docker Compose, we need one configuration file: docker-compose.yml. In this file you define which services (containers) make up your setup and how they behave and interact. These services can use any public Docker image, as well as your own images defined in local Dockerfiles. After defining the configuration file, the only thing left to do is run the configuration, which starts all your containers, taking care of all dependencies, when you want and in the order you want. Tearing the setup down is also really simple, and we will talk more about that later.

Configuring Docker Compose

There are a couple of key elements that our docker-compose.yml file needs to handle:

  • running a Selenium Grid that consists of a hub and Firefox and Chrome nodes
  • opening ports on the Selenium Grid nodes for VNC access
  • running the test execution in a Docker container
  • resolving timing between the containers stated above
  • connecting to the Owl reporting tool

The structure of the Compose file containing these elements is shown below:

version: "3"
services:
  selenium-hub:
    image: selenium/hub:3.4.0-einsteinium
    container_name: selenium-hub
    ports:
      - "4444:4444"
    environment:
      - GRID_BROWSER_TIMEOUT=60
  chrome:
    image: selenium/node-chrome-debug:3.0.1-aluminum
    ports:
      - "5900:5900"
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
  firefox:
    image: selenium/node-firefox-debug:3.4.0-einsteinium
    ports:
      - "5901:5900"
    depends_on:
      - selenium-hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=selenium-hub
      - HUB_PORT_4444_TCP_PORT=4444
    volumes:
      - /dev/shm:/dev/shm
  abhtests:
    image: atlantbh/ruby
    build: .
    depends_on:
      - firefox
      - chrome
    environment:
      - SELENIUM_GRID_URL=selenium-hub:4444
      - DOCKER_COMPOSE_WAIT=30
      - TESTS_TO_RUN=${TESTS_TO_RUN}
    volumes:
      - .:/tests
networks:
  default:
    external:
      name: ${OWL_NETWORK}

The Docker Compose file itself is a YAML file structured around a services object, whose child nodes define the particular services with all their parameters and configurable options. Besides that, there is a version parameter, which always lies at the root of the document and defines how the document is parsed, the minimum version of the Docker engine needed to run the Compose file, and some networking dependencies. At the end of the file there is a networks object, which will be explained later.

Now that we understand the basic structure of the Compose file, we can break down the services that are defined:

  • The selenium-hub service defines the Selenium Grid hub container. As with every service, the image used for the container is defined at the beginning. After that, we define the service name and the port(s) that we want to map from the container to the host. Finally, we define any environment variables that are needed; in this particular case, GRID_BROWSER_TIMEOUT, which gives the hub 60 seconds to return a response to the test once a command is sent for execution. If we didn't define this, tests would crash whenever a response from the hub was not immediate.
  • The chrome and firefox services both have the same layout, with a few interesting parameters. The first is the depends_on parameter, which lists the services that need to be up and running before this service attempts to run; for both firefox and chrome, that is the selenium-hub service. The exposed port is used for the VNC connection to the container, which makes test debugging a lot easier. The environment variables the nodes need are the address of the hub and the port for the connection. Closing the node service configuration is the volumes parameter, which in this case mounts the host's /dev/shm into the container, increasing the shared memory available to the browser processes.
  • The abhtests service defines the container that executes the tests themselves. The image used for this container is explained above; in the Compose file we added a few more variables to adjust test execution, a build parameter that points to the directory containing the Dockerfile, and an image parameter that tags the built image. In the environment object, besides the address of the Selenium Hub, we specify a period that the test container waits before running the tests (see the sketch after this list). This is needed so that the tests wait for the hub service to be ready, not merely running. Finally, we specify the tests we want to run and a folder to mount, where the tests are located.
  • If the tests are executed on the same node where the Owl containers are running, then integrating Owl into this solution is only a matter of naming the Docker network that the Owl containers use, so that our docker-compose containers can join it and pass on the test results.
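
A minimal sketch of how such a wait can be honored when the test container starts is shown below; this is hypothetical, and the actual mechanism in the test image may differ:

# Sleep for the configured number of seconds, then start the suite
sleep "${DOCKER_COMPOSE_WAIT:-0}"
bundle exec rspec spec/${TESTS_TO_RUN}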

How it all works together

After understanding how docker-compose works and how we configured the compose file, it is time to run the containers and execute the tests.

Pre-requisites for installation on host:

  • docker
  • docker-compose
  • Owl

While the first two are rather simple, the Owl installation needs a bit more detail. The Owl source code can be cloned from its GitHub repository. After cloning it, all you need to do is start the application using the following command:

docker-compose up -d

This builds both the Owl and Postgres images and starts the application, which is available on the default port 8090.

As for the tests themselves, by default, when running docker-compose with the provided configuration, all the tests will be started. This behaviour can be changed by setting the value of TESTS_TO_RUN in the .env file to a specific test name. For example:

TESTS_TO_RUN=check_about_links.rb

As explained earlier, the OWL_NETWORK parameter is used to join the Owl reporting tool's network in order to pass on the test results. If you use Owl, just set this parameter's value in the .env file to the network name:

OWL_NETWORK=owl_default
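
Both variables live in the same .env file, which docker-compose reads automatically from the directory it is run in. A complete example would simply be:

TESTS_TO_RUN=check_about_links.rb
OWL_NETWORK=owl_default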

Setting up the tests to run against Chrome or Firefox is done through the config/environment.yaml file, in which the platform parameter has to be set to web, and the driver parameter is then used to select a browser. Owl reads test results from the database, so you also need to set up the rspec2db.yml file to point to the correct database (the current configuration of the ABH tests uses the default database parameters of the Owl application, so no additional configuration is needed).
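
Based on the description above, the relevant part of config/environment.yaml would look roughly like this (the exact key layout is our assumption; consult the repository for the authoritative format):

# config/environment.yaml (hypothetical layout)
platform: web
driver: chrome # or firefox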

After setup is done, the complete testing environment, along with the tests, can be run using the same command that we used to start the Owl application:

docker-compose up -d

This runs the Selenium hub, two nodes (node-firefox-debug and node-chrome-debug) and the container that executes the tests. If you already have all the images downloaded, the output should be similar to this:

Starting selenium-hub ... done
Starting abhhomepageautomation_chrome_1 ...
Starting abhhomepageautomation_firefox_1 ... done
Starting abhhomepageautomation_abhtests_1 ... done

In order to access the logs from the container that executes the tests, run this command:

docker logs -f abhhomepageautomation_abhtests_1

As mentioned before, to make debugging easier, VNC can be used through the ports specified in the configuration for both the Firefox and Chrome containers.

After the tests have finished executing, the environment can be torn down using the following command:

docker-compose down

Output of the command should be similar to this:

Stopping abhhomepageautomation_abhtests_1 ... done
Stopping abhhomepageautomation_chrome_1   ... done
Stopping abhhomepageautomation_firefox_1  ... done
Stopping selenium-hub                     ... done
Removing abhhomepageautomation_abhtests_1 ... done
Removing abhhomepageautomation_chrome_1   ... done
Removing abhhomepageautomation_firefox_1  ... done
Removing selenium-hub                     ... done
Network owl_default is external, skipping

Conclusion

In our case, using docker-compose turned out to be a really good choice. Running everything with one command makes your life a lot easier, and centralized configuration makes this a rather easy solution to maintain. Although joining the Owl reporting tool's network is a rather easy setup step, it is a huge step forward when it comes to completing the process of daily test execution. With only two "docker-compose up"s, you have a test environment running along with the tests and a reporting tool where the test results are persisted. After reading these two articles, we really hope you see the benefits of this approach and that it helps you in your daily test environment setup as well.

This article is based on blog posts that were originally published on https://www.atlantbh.com/qatest-automation/dockerize-tests-test-environment-part-1/ and https://www.atlantbh.com/qatest-automation/dockerize-your-tests-and-test-environment-part2/.



This article was originally published in November 2018
