It has probably happened to you already. You've been working on a project with your team for some time and there is a new functionality to add. You write the code to implement it and test it. It seems to work well, so you merge your code into the main branch. Everybody is happy until the customer calls to tell you that the login page is broken and nobody can log in anymore. After some research you find out that your last update broke the login page, and you must fix it ASAP.
All developers know this situation. You test your new functionality, but you can't cover every case when you update your code, or you simply forget to test something and boom! There is your bug… This is bad because it could have been avoided, and what is worse, it is the customer who gave you the feedback!
The risks are even greater when you work in a team. You are not always 100% aware of the code produced by the other developers, and it can be easy to introduce a bug in code written by someone else.
It happened to us in one of our projects. We lost a lot of time fixing the same things again and again. During the project retrospective meeting, it was decided that this situation must not happen again and that a solution had to be found.
Code Review & Merge Request
The first mandatory step in our quest towards a successful, bug-free release is code review.
At Buzzwoo, we manage our code using Git and Gitlab, and we follow two different workflows: Git Flow and Github Flow. Which flow is used for a given project depends on its type, its size, and other factors.
Using either flow, combined with Gitlab, makes it very easy for us to do code review through branches and merge requests.
Basically, before any code is added to the project, it must be reviewed by other developers. The reviewers get to see your code and to be aware of what you are working on. If they agree on the code being submitted, you can accept your merge request, which merges your code back into the main branch.
This flow helps prevent the introduction of bugs because other people are reviewing your code. It also benefits the team: members have to interact with each other, which improves communication, and as a developer you may learn a lot from your teammates' comments.
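In plain Git terms, the flow boils down to the sketch below (the repository path and branch names are made up for the example); the final merge is what Gitlab performs for you when a merge request is accepted:

```shell
# Sketch of the branch + merge request flow
set -e
rm -rf /tmp/mr-demo && mkdir -p /tmp/mr-demo && cd /tmp/mr-demo
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"

git checkout -qb feature/login-page   # start work on a feature branch
echo "v2" > app.txt
git commit -qam "rework login page"   # push this branch and open a merge request

git checkout -q -                     # back to the main branch
# accepting the merge request = merging the feature branch back
git merge -q --no-ff feature/login-page -m "Merge feature/login-page"
```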
All this is very good, but it doesn't bring any real solution to our main problem: how do we prevent new code from breaking existing functionalities? This is where testing frameworks and automated testing come into play.
Codeception, a testing framework
The first step was to decide which testing framework to use. Because we write most of our backend projects with Laravel, we decided to use Codeception. Codeception integrates perfectly with Laravel and makes it easy to write different types of tests: unit, functional, and acceptance tests. Depending on the stack, we use:
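For reference, a functional suite configuration for a Laravel project can look like the sketch below (module names follow Codeception 2.x with the Laravel5 module; adjust to your versions, this is not our exact setup):

```yaml
# tests/functional.suite.yml (sketch)
class_name: FunctionalTester
modules:
    enabled:
        # the Laravel5 module boots the framework, so tests can use
        # routes, Eloquent models, authentication helpers, etc.
        - Laravel5
        - \Helper\Functional
```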
- Laravel with Codeception
- Drupal with Behat
- Angular 2 with Jasmine, Karma and Protractor
TDD, or Test Driven Development
Writing tests is not trivial. Writing a test in itself is not really complicated; the difficulty lies in writing tests that are worth the time spent writing them.
Test Driven Development is an evolutionary approach that combines refactoring with test-first development, where you write a test before you write just enough production code to fulfill that test. It is a programming technique whose goal is to write clean code that works (definition taken from http://agiledata.org/essays/tdd.html).
This small diagram shows how TDD works:
Writing tests is a real improvement, but it involves more work. The idea is that the extra time spent on tests is paid back by the time saved on bug fixing.
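As a tiny, language-agnostic illustration of the cycle (a made-up slugify helper, not code from one of our projects): write the failing assertion first, then just enough code to make it pass, then refactor while it stays green.

```shell
# Step 1 (red): the assertion at the bottom fails while slugify() does not exist.
# Step 2 (green): write the minimum implementation that satisfies the test.
slugify() {
  # lowercase the input and replace spaces with dashes
  echo "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}
# Step 3 (refactor): clean up the implementation; the assertion must keep passing.
[ "$(slugify 'Hello World')" = "hello-world" ] && echo "test passed"
```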
Now that we have code review and TDD, we need a way to automate these tests so that we don't have to run them manually every time (running a task manually means that one day someone will make a mistake or forget to run it, which can lead to trouble).
Automate your tests
Knowing that we are using Git and Gitlab to manage our code, it would be nice to find a way to integrate the automation of the tests with our existing flow.
Luckily, Gitlab has developed its own continuous integration tool, Gitlab CI, which integrates seamlessly with our Gitlab installation. We will not cover the installation of Gitlab CI here, but it is fairly easy to set up.
Once everything is in place, you will find more icons and screens inside Gitlab showing your build results (a build is a run of your test suite).
^ Here you can see the green mark next to the status “Merged”. This means the build has been successfully run.
^ List of builds and their results.
Configure your project in Gitlab
The project settings screen now holds additional options to configure Gitlab CI.
You must also configure which runners are allowed to run your tests for the current project.
Configure your Gitlab CI in your project
The configuration file
The next step is to create a configuration file at the root of your project. It will be read and executed automatically by Gitlab CI whenever you push code to the repository.
The file is named .gitlab-ci.yml and its syntax is documented in the Gitlab CI documentation. This file tells Gitlab CI which commands to run and when. For example, you may want your Selenium tests to run only when a merge is made onto your master branch.
Here is a small example of what a .gitlab-ci.yml file looks like:
daily_build:
  only:
    - /^(feature|hotfix)\/.*$/
  script:
    - cp .env.testing .env
    - touch database/database.sqlite
    - php artisan migrate:refresh --seed --database=sqlite
    - php codecept run functional --no-colors
    - phpcs app --standard=psr2 --warning-severity=0 --report=full
Let's analyze what is done here :
We have named the job daily_build. This name will be used by Gitlab CI when it displays the build results.
The tests will run only if some code is being pushed to a branch whose name starts with feature/ or hotfix/.
script:
  - cp .env.testing .env
  - touch database/database.sqlite
  - php artisan migrate:refresh --seed --database=sqlite
  - php codecept run functional --no-colors
  - phpcs app --standard=psr2 --warning-severity=0 --report=full
Each line in this section is executed by Gitlab CI. First, it copies the file .env.testing to .env, then it creates the file database/database.sqlite and runs the migrations and seeders. Finally, it executes the tests with Codeception and runs PHPCS (PHPCS is a separate tool that must also be installed on the server).
Configure different types of builds
The longer a project lasts, the more tests you have to write. As a result, your builds become slower and slower over time, so you will want to optimize them a bit.
We usually run three types of tests: unit, functional, and acceptance tests. The acceptance tests are very slow because they use Selenium to simulate a browser. Therefore, at Buzzwoo we decided to separate the tests into two sets:
- The daily builds which are run every time we push code to the repository
- The nightly builds which are run every night and prepare a report for the developers and project managers to read in the morning.
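In .gitlab-ci.yml, this split can be expressed as two jobs with different `only` conditions. How the nightly job is fired (a pipeline schedule, or a cron job calling a trigger) depends on your Gitlab version, so treat this as a sketch rather than a drop-in configuration:

```yaml
daily_build:
  only:
    - /^(feature|hotfix)\/.*$/   # every push to a feature or hotfix branch
  script:
    - php codecept run unit,functional

nightly_build:
  only:
    - triggers                   # fired once a night, e.g. by cron + a trigger token
  script:
    - php codecept run           # full suite, including the slow acceptance tests
```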
The Daily Build
The daily build runs only the unit and functional tests. No report is created (except the result in Gitlab, of course) and there is no code coverage, linting, or anything like that.
The daily build runs every time we push code to a branch whose name starts with feature/ or hotfix/ (these are the conventions we use for our branch names). These builds are fast, so we know the result quickly (a few minutes at most).
Combining the automated tests with merge requests makes the flow safer and more efficient than before. Every time a merge request is created, or code is pushed to a branch that is part of a merge request, the daily build runs. If the tests fail, Gitlab does not allow the merge request to be accepted.
The Nightly Build
We run our nightly build every night, when nobody is supposed to be working. The nightly build takes a lot longer to execute, for several reasons:
- The acceptance tests run, and they are very slow as they rely on Selenium and a real browser
- The other tests run with the code coverage option on. This option is very powerful because it lets you see which code is covered by the tests.
- We use PHPCS to make sure the code written by all the developers follows the PSR-2 standard.
Because of all this, it is not possible to run these builds while developing. The upside is that all the tests run while the developers are (hopefully) asleep, and they get the reports first thing in the morning when they start working.
This flow is pretty basic, but it works well for us for the moment. One improvement we are considering is the use of groups, or sets of tests.
There is no need to run all the tests every time we work on a new functionality. Instead, during the day we could run only the tests related to our functionality, plus a small set of tests that should always run. The rest of the tests would run only at night.
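Codeception supports this through groups: tests annotated with `@group` can be selected with the `-g` option. The group name `smoke` below is just an example:

```yaml
# Sketch: restrict the daytime job to one group of fast, critical tests
daily_build:
  script:
    - php codecept run functional -g smoke   # only tests annotated @group smoke
```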
You can find more info about this functionality on the Codeception website, in the Advanced Section.
Almost a year passed between the moment we started to look for a solution to automate our tests (or even to write tests at all, in our case) and what we have now.
This may seem like a very long time, but that is because we tested different solutions (PHPCI, Jenkins) and different frameworks (Codeception being our final choice for all our Laravel projects, Behat for Drupal, and Jasmine with Karma for our Angular projects).
We also had to deal with new releases along the way (Drupal 7 to 8, Angular 2, Laravel 4 then 5), and it took us some time to adapt our flow to these major updates.
All the software we've talked about in this post is open source and/or free of charge.
If you think this is too overwhelming, or that you don't have enough time to cover all this by yourself, you can also opt for a paid solution; there are plenty of them available on the net.
In our case, we decided to run the test tools on our own servers. For the Gitlab runners and the Selenium agents, we have configured a few droplets on Digital Ocean. The cost depends on how you configure them: it starts at 5 USD/month and can go up to 640 USD/month!