In this article I will show how to optimize integration testing by running tests incrementally. My example uses .Net Core with Bazel, but the concepts discussed in this article are language agnostic.

Integration Testing

Integration tests are meant to be comprehensive tests that exercise complete systems end to end. This is different from unit tests, where small units of code are tested in isolation with mocked out integration points. Integration tests are very important in complex applications since they ensure that the whole system hangs together when all the different pieces are integrated. An example of this could be testing an api that makes a series of internal api calls.

Integration tests can be relatively expensive in terms of runtime since the entire system is exercised. Once you have added enough tests to feel confident in the quality of your testing, it usually doesn't take long before test runs start taking several minutes, if not hours. Sadly, the consequence is often that developers start taking shortcuts by reducing the number of tests, or by skipping parts of the test suite on regular check-ins.

Part of the problem is that most CI builds are set up using non incremental build systems, which means an all or nothing approach to building and testing. Basically you are forced to run all tests regardless of the scope of your code change. This is wasteful since your commit may be small enough to only require running a single test. In the next section I will show how an incremental build system like Bazel can help you limit test runs to only the tests that are relevant for a given code change.

Bazel

One of the key goals of Bazel is to avoid unnecessary work in your build pipeline. In the context of testing this means only running tests that are relevant to a particular code change. As a result, the time it takes to run your tests is roughly proportional to the size of the change. A large change may exercise many or all of your tests, but a smaller change may run just a single test, or no tests at all.

How is this possible?

It’s logical to assume that the output of an operation will only change if the inputs to the operation change. This principle applies to Bazel tasks in general, but for testing it means a test only needs to be rerun if the files under test, or their dependencies, have changed. If there were no relevant code changes, Bazel can simply play back the cached result of the previous run. As a result, Bazel CI builds will only run the tests that are relevant for the current change. Performance is an obvious benefit since fewer tests are usually run, but it also improves the stability of the test suite: if there are flaky tests in the suite, only running them in response to relevant code changes reduces the number of builds that fail for spurious reasons.

Demo

Bazel is language agnostic, but in my sample application I will show how to set up an incremental test suite in .Net Core. I have published the repo on GitHub if you want to try it out yourself.

My demo application is a car api for retrieving information about luxury cars. The .Net Core controller is included below:

[Route("api/[controller]")] public class CarController : Controller { private readonly IHttpClientFactory clientFactory; public CarController(IHttpClientFactory clientFactory) { this.clientFactory = clientFactory; } [Route("")] public async Task<Car> Get() { var client = clientFactory.CreateClient(); var topSpeed = await client.GetStringAsync ("http://localhost:5004/api/speed"); var car = new Car() { Name = "Lamborghini Aventador", TopSpeed = $"{topSpeed} km/h" }; return car; } }

Internally a call is made to a different api to get the top speed of the car.
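For context, here is a minimal sketch of what that top speed api might look like. This controller is my own assumption for illustration; the real implementation lives in its own project in the repo and may differ:

using Microsoft.AspNetCore.Mvc;

// Hypothetical sketch of the top speed api listening on port 5004.
[Route("api/[controller]")]
public class SpeedController : Controller
{
    [Route("")]
    public string Get()
    {
        // Returned as a plain string; the car api appends " km/h".
        return "350";
    }
}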

In the following sections I will discuss how Bazel determines whether the integration test for this api should run or be skipped in a given build.

Source Changes to Api Under Test

The first trigger for running the test is code changes to the api controller source code. Bazel requires us to map out the relationship between the test and the relevant source code in a Bazel rule:

core_xunit_test(
    name = "CarApiTests.dll",
    srcs = [
        "CarApiTests.cs",
    ],
    data = [
        ":api-version.txt",
    ],
    deps = [
        "@xunit.assert//:netcoreapp3.1_core",
        "@xunit.extensibility.core//:netcoreapp3.1_core",
        "@xunit.extensibility.execution//:netcoreapp3.1_core",
        "//Api/Car:CarApi.exe",
    ],
)

As you can see I have defined a dependency on //Api/Car:CarApi.exe, which is the Bazel rule that builds the source code for the api. This means the test will be triggered whenever there are code changes in the car api source code, or one of its dependencies. Bazel will skip over this test if there are no relevant code changes.

The source code for the xUnit test can be found below:

public class CarApiTests
{
    [Fact]
    public async Task Car_Api_Returns_Car()
    {
        HttpClient client = new HttpClient();
        var content = await client.GetStringAsync("http://localhost:5002/api/car");

        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        var car = JsonSerializer.Deserialize<Car>(content, options);

        Assert.Equal("350 km/h", car.TopSpeed);
        Assert.Equal("Lamborghini Aventador", car.Name);
    }
}

External Api Changes

In many cases it may be enough to rely only on code changes in the api under test to decide if a test should be run or not. It is, however, worth noting the dependency on the top speed api in this particular example. In a true integration test it may make sense to include changes to the top speed api as a trigger for running the test. One challenge, though, is that there is no direct code dependency on the top speed api in this case. In a mono repo we would be able to declare a code dependency on the top speed api from the test, but what if you don’t have access to the source?

Bazel requires some sort of file artifact to determine if a test should be run, but in this case we only have a URL to work with.

I think you can handle this scenario in a few different ways, but in my case I decided to go with a versioning scheme for the api. Before running my test suite I make an http request to a version endpoint in the api to capture the current version and write it to a file in the test suite. This gives me a secondary test trigger, since Bazel will pick up on changes in the version file and run the test if a new version of the speed api is detected.
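The version endpoint itself is not shown in this post. Extending the hypothetical SpeedController sketch from earlier, it might look something like this (again an assumption for illustration, not the actual code in the repo):

using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class SpeedController : Controller
{
    // Returns the top speed, as before.
    [Route("")]
    public string Get() => "350";

    // Hypothetical version endpoint polled before the test run.
    // Bumping this value changes api-version.txt, which re-triggers the test.
    [Route("version")]
    public string Version() => "1.0.3";
}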

I have wrapped this up in a bash script as you can see below:

curl http://localhost:5004/api/speed/version -o Integration-Tests/Api/Car/api-version.txt
bazel test --test_output=errors //integration-tests/...

The first command is a curl that writes the version to a file called api-version.txt. If you take a look at my Bazel rule from the previous section you will see a dependency on api-version.txt. To Bazel a file is a file: it doesn’t care whether it is a text file or a source file, the hashing algorithm will flag it as a changed input either way.

An improvement on this would be to wrap the curl in a Bazel rule.