Reproducible testing with Docker

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately, Linux containers have given us a lightweight virtualization technology by now, and Docker makes them fairly trivial to use.

Docker

Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content and a process running in it. Let that process be bash and you have pretty much a fully functional Linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I’m doing in the container is not affected by what happens on the host system. So, for example, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.
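
To illustrate the basics, here is a minimal sketch in Python (in the spirit of the wrapper scripts described further below) that drops you into a shell in an Ubuntu 12.04 container, regardless of the host distribution:

    import subprocess

    # Start an interactive bash in an Ubuntu 12.04 container; "--rm" throws
    # the container away again as soon as bash exits.
    subprocess.run(["docker", "run", "--rm", "-ti", "ubuntu:12.04", "bash"])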

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose kdesrc-build, which makes building all the necessary repositories as little effort as possible.

Because I’m still editing the code outside of the Docker container (where my editor runs), I simply mount the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further, I also mount the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while my builds stay fast and incremental. This is not about packaging, after all.
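
As a sketch of such a build invocation (the image name kdebuild and the host paths are placeholders, not my actual setup):

    import subprocess

    # Build a module with kdesrc-build inside a throwaway container. Sources,
    # build and install directories are mounted from the host, so the container
    # itself stays completely non-persistent while builds remain incremental.
    subprocess.run([
        "docker", "run", "--rm",
        "-v", "/home/user/src:/src",          # sources, edited on the host
        "-v", "/home/user/build:/build",      # build dir keeps builds incremental
        "-v", "/home/user/install:/install",  # install dir, reused at runtime
        "kdebuild",                           # image with the build dependencies
        "kdesrc-build", "kdepim",
    ], check=True)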

Reproducible testing

Now we have a set of binaries that we compiled in a Docker container using certain dependencies, so all we need to run those binaries is a Docker container with the necessary runtime dependencies installed.

After a bit of hackery to reuse the host’s X11 socket, it’s possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can be reproduced very reliably across different machines, as the only thing that differs between two setups is the hardware used (which is largely irrelevant for Kontact).
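
A rough sketch of such a container start; the X11 socket sharing is the essential bit, while image name and paths are again placeholders (depending on the host setup, X access may additionally have to be granted, e.g. via xhost):

    import os
    import subprocess

    # Run Kontact from the mounted install directory, displaying on the
    # host's X server by reusing its X11 socket.
    subprocess.run([
        "docker", "run", "--rm",
        "-e", "DISPLAY=" + os.environ["DISPLAY"],
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix",   # the host's X11 socket
        "-v", "/home/user/install:/install:ro",  # the binaries built above
        "kontact-test",                          # runtime deps + seeded config
        "/install/bin/kontact",
    ], check=True)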

..with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a Docker container running Kolab. I can again seed the image with various settings (for instance, a John Doe account is already set up, with the account and credentials preconfigured in the client container), and the server is completely fresh on every start.
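
Starting the server is then just another container start (a sketch; the actual images and scripts are in the repository linked at the end):

    import subprocess

    # Start a fresh, seeded Kolab server container in the background. Since
    # nothing is persisted, every start yields exactly the same state.
    subprocess.run([
        "docker", "run", "-d",
        "--name", "kolabserver",   # fixed name, so the client can find it
        "kolab-test",              # seeded image; name is illustrative
    ], check=True)
    # ...and "docker rm -f kolabserver" afterwards restores the clean slate.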

Wrapping it all up

Because a bunch of commands are involved, it’s worthwhile to write a couple of scripts that make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment, the first parameter defines the dataset used by the server, and the second specifies which client to start, so I can have two Kontact instances with different users for testing invitation handling and the like.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.
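
The wrapper itself can stay a thin dispatch layer around docker. A hypothetical skeleton (the real script in the repository below is more complete):

    #!/usr/bin/env python
    # Hypothetical skeleton of such a wrapper: each subcommand maps onto a
    # docker invocation; image names and layout are illustrative.
    import subprocess
    import sys

    def srcbuild(action, module):
        # "install" builds the module, "shell" drops you into the container.
        cmd = ["kdesrc-build", module] if action == "install" else ["bash"]
        subprocess.run(["docker", "run", "--rm", "-ti",
                        "-w", "/src/" + module,  # start in the module's sources
                        "kdebuild"] + cmd)

    def start(dataset, user):
        # Bring up the seeded server with the requested dataset, then a client.
        subprocess.run(["docker", "run", "-d", "--name", "kolabserver",
                        "kolab-test", dataset])
        subprocess.run(["docker", "run", "--rm", "-ti", "kontact-test", user])

    if __name__ == "__main__":
        if sys.argv[1] == "srcbuild":
            srcbuild(sys.argv[2], sys.argv[3])
        elif sys.argv[1] == "start":
            start(sys.argv[2], sys.argv[3])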

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of Docker containers, and I doubt we could have done the work we did without that setup. At least not without a bunch of dedicated machines just for that. I’m likely to invest more in that setup.

In any case, sources can be found here:
https://github.com/cmollekopf/docker.git


A new folder subscription system

Wouldn’t it be great if Kontact allowed you to select a set of folders you’re interested in, automatically respected that setting on all your devices, and still let you control for each individual folder whether it should be visible and available offline?

I’ll outline a system that allows you to achieve just that in a groupware environment. I’ll take Kolab and calendar folders as an example, but the concept applies to all groupware systems and is just as applicable to email or other groupware content.

User Scenarios

  • Anna has access to hundreds of shared calendars, but she usually only uses a few selected ones. She therefore has only a subset of the available calendars enabled; those are shown to her in the calendar selection dialog, available for offline usage, and also synchronized to her mobile phone. If she realizes she no longer requires a calendar, she simply disables it and it disappears from Kontact, the web client and her phone.
  • Joe works with a small team that shares their calendars with him. Usually he only uses the shared team calendar, but sometimes he wants to quickly check if his colleagues are in the office before calling them, and he’s often doing this on the train with an unreliable internet connection. He therefore disables the team members’ calendars but still enables synchronization for them. This hides the calendars on all his devices, but he can still quickly bring them up on his laptop while offline.
  • Fred has a mailing list folder that he always reads on his mobile, but never on his laptop. He keeps the folder enabled, but hides it on his laptop so his folder list isn’t unnecessarily cluttered.

What these scenarios tell us is that we need a flexible mechanism to specify the folders we want to see and the folders we want synchronized. Additionally, in today’s world of multiple devices, we want the selection of folders that are important to us to be synchronized across those devices: it is likely that I’d want to see the calendar I just enabled in Kontact on my phone as well. However, we always want to keep the possibility of altering that default setting on specific devices.

Current State

If you’re using a Kolab Server, you can use IMAP subscriptions to control what folders you want to see on your devices. Kontact currently respects that setting in that it makes all subscribed folders visible and available for offline usage. Additionally, you have local subscriptions to disable certain folders (so they are not downloaded or displayed) on a specific device. That is not very flexible though, and personally I ended up having pretty much all folders enabled that I ever used, leading to cluttered folder selections and lots of bandwidth and storage space used to keep everything available offline.

To change the subscription state, KMail offers the IMAP subscription dialog, which allows toggling the subscription state of individual folders. This works, but it is not well integrated (it’s a separate dialog), and it is hard to integrate any better since it’s IMAP-specific.

Because the solution is not well integrated, it tends to be rather static in my experience: I tend to subscribe to all folders that I ever use, which results in a very long and cluttered folder list.

A new integrated subscription system

It would be much better if the back-end could provide a default setting that is synchronized to the server, and we could quickly enable or disable folders as we require them. Additionally, we could override the default settings for each individual folder to optimize our setup as required.

To make the system more flexible, while not unnecessarily complex, we need a per-folder setting that allows overriding a backend-provided default value. Additionally, we need an interface for applications to alter the subscription state through Akonadi (instead of bypassing it). This allows for a well-integrated solution that doesn’t rely on a separate, IMAP-specific dialog.

Each folder requires the following settings:

  • An enabled/disabled state that provides the default value for synchronizing and displaying a folder.
  • An explicit preference to synchronize a folder.
  • An explicit preference to make a folder visible.

A folder is visible if:

  • There is an explicit preference that the folder is visible.
  • There is no explicit preference on visibility and the folder is enabled.

A folder is synchronized if:

  • There is an explicit preference that the folder is synchronized.
  • There is no explicit preference on synchronization and the folder is enabled.

The resource backend can synchronize the enabled/disabled state, which should give the expected default experience. Additionally, it is possible to override that default state using the explicit preferences on a per-folder level.
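
In code, the whole mechanism boils down to a tri-state lookup; a minimal sketch (the names are illustrative, not actual Akonadi API):

    # An explicit local preference (True/False) wins; None means "no explicit
    # preference", in which case the synchronized enabled state is the default.
    def is_visible(enabled, visibility_pref=None):
        return enabled if visibility_pref is None else visibility_pref

    def is_synchronized(enabled, sync_pref=None):
        return enabled if sync_pref is None else sync_pref

    # Joe's team calendars: disabled (hidden everywhere), but explicitly
    # synchronized so the data is available offline.
    assert not is_visible(enabled=False)
    assert is_synchronized(enabled=False, sync_pref=True)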

User Interaction

By default you would be working with the enabled/disabled state that is synchronized by the resource backend. If you enable a folder it becomes visible and synchronized; if you disable it, it becomes invisible and not synchronized. For the enabled/disabled state we can build a very simple user interface, as it is a single boolean state that we can integrate into the primary UI.

Because the enabled/disabled state is synchronized, an enabled calendar will automatically appear on your MyKolab.com web interface and your mobile. One click, and you’re all set.

[Image: Example mockup of folder sync properties]

In the advanced settings, you can then override visibility and synchronization preference at will as a local-only setting, giving you full flexibility. This can be hidden in a properties dialog, so it doesn’t clutter the primary UI.

This makes the default use case very simple (you either want a folder or you don’t), while we keep full flexibility in overriding the default behaviour.

IMAP Synchronization

The IMAP resource will synchronize the enabled/disabled state with IMAP subscriptions if you have subscriptions enabled in the resource. This way we can use the enabled/disabled state as the interface to change subscriptions, and don’t need a separate dialog to toggle that state.
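
Conceptually that mapping is a one-liner per folder; an illustrative sketch using plain imaplib (not the actual IMAP resource code):

    import imaplib

    # Propagate the enabled/disabled state to the server as an IMAP
    # subscription.
    def apply_enabled_state(conn, mailbox, enabled):
        if enabled:
            conn.subscribe(mailbox)
        else:
            conn.unsubscribe(mailbox)

    # conn = imaplib.IMAP4_SSL("imap.example.com"); conn.login(user, password)
    # apply_enabled_state(conn, "Calendar/Team", True)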

Interaction with existing mechanisms

This mechanism can probably replace local subscriptions eventually. However, in order not to break existing setups I plan to leave local subscriptions working as they currently are.

Conclusion

By implementing this proposal we get the required flexibility to make sure the resources of our machine are optimally used, while different clients still interact with each other as expected. Additionally we gain a uniform interface to enable/disable a collection that can be synchronized by backends (e.g. using the IMAP subscription state). This will allow applications to nicely integrate this setting, and should therefore make this feature a lot easier to use and overall more agile.

New doors are opened as well: this will enable on-demand loading of folders. By having the complete folder list available locally (but disabled by default and thus hidden), we can use the collections to load their content temporarily and on demand. Want to quickly look at that shared calendar you don’t have enabled? Simply search for it and have a quick look; the data is synchronized on demand, and the folder is gone as quickly as you found it once it’s no longer required. This further diminishes the need to have folders constantly cluttering your folder list.

So, what do you think?