Kube: Accounts

Kube is a next-generation communication and collaboration client, built with QtQuick on top of a high-performance, low-resource core called Sink.
It provides online and offline access to all your mail, contacts, calendars, notes, todos, etc.
Kube has a strong focus on usability, and the team works with designers and UX experts from the ground up to build a product that is not only visually appealing but also a joy to use.

To learn more about Kube, please see here.

Kube’s Account System

Data ownership

Kube is a network application at its core. That doesn’t mean you can’t use it without a network connection (even permanently), but you’d severely limit its capabilities, given that it’s meant to be a communication and collaboration tool.

Since network communication typically happens over a variety of services where you have a personal account, an account provides a good starting point for our domain model. If you have a system with large amounts of constantly changing data, it’s vital to have a clear understanding of data ownership within the system. In Kube, the owner is always an account.

By putting the account front and center we ensure that we don’t have any data that just belongs to the system as a whole. This is important because data that “belongs to everyone” becomes very complex to work with once we try to synchronize it with various backends. If we modify a dataset, should that replicate to all copies of it? What if one backend already deleted that record? Would that mean we also have to remove it from the other services? And what if a second client has a different set of accounts connected? If we ensure that every piece of data has exactly one owner, we avoid all those issues and build a more reliable and predictable system.

The various views can of course still correlate data across accounts where useful, e.g. to show a single person entry instead of one contact per addressbook, but they then also have to make clear what happens if you modify e.g. the address of that person (Do we modify all copies in all accounts? What happens if one copy goes out of sync again because you used the web interface?).

Last but not least, this way we ensure that we have a clear path to eventually synchronize all data to a backend, even if we can’t do so immediately, e.g. because the backend in use does not support that data type yet.

The only data stored outside of an account is data specific to the device in use, such as configuration data for the application itself. Such data isn’t hard to recreate, is easy to migrate and back up, and is very little data in the first place.

Account backends

Most services provide you with a variety of data for an individual account. Whether you use Kolabnow, Google or a set of local Maildirs and iCal files, you typically have access to contacts, mails, events, todos and more. Fortunately most services provide access to most of this data through open protocols, but unfortunately we often end up in a situation where we need a variety of protocols to get to all of it.

Within Sink we call each backend a “Resource”. A resource typically has a process to synchronize data to an offline cache, and then makes that data accessible through a standardized interface. This ensures that even if one resource synchronizes email over IMAP and another just gathers it from a local Maildir, the data is accessible to the application through the same interface.
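To make that concrete, here is a minimal C++ sketch of the idea; the class and function names are invented for this example and are not the actual Sink API:

    // A minimal sketch (not the actual Sink API) of how two different
    // backends can expose the same standardized read interface.
    #include <QDebug>
    #include <QList>
    #include <QString>
    #include <memory>

    struct Mail { QString subject; };

    // The standardized interface every resource implements.
    class Resource {
    public:
        virtual ~Resource() = default;
        virtual QList<Mail> fetchMails() const = 0;
    };

    // One resource might fill its offline cache over IMAP...
    class ImapResource : public Resource {
    public:
        QList<Mail> fetchMails() const override {
            return { {QStringLiteral("Synced over IMAP")} };
        }
    };

    // ...another just gathers mail from a local Maildir...
    class MaildirResource : public Resource {
    public:
        QList<Mail> fetchMails() const override {
            return { {QStringLiteral("Read from a local Maildir")} };
        }
    };

    // ...but the application consumes both through the same interface.
    int main() {
        std::unique_ptr<Resource> resources[] = {
            std::make_unique<ImapResource>(),
            std::make_unique<MaildirResource>(),
        };
        for (const auto &resource : resources)
            for (const auto &mail : resource->fetchMails())
                qDebug() << mail.subject;
        return 0;
    }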

Because various accounts use various combinations of protocols, an account can mix and match resources to provide access to all the data it has. A Kolab account, for instance, could combine an IMAP resource for email, a CalDAV resource for calendars and a CardDAV resource for contacts, plus additional resources for instant messaging, notes, … you get the idea. Alternatively, we could decide to get to all data over JMAP (a potential IMAP successor with support for more datatypes than just email) and thus implement a JMAP resource instead (which could again be reused by other accounts with the same requirements).



Specialized accounts

While accounts within Sink are mostly an assembly of resources with some extra configuration, on the Kube side a QML plugin (we’re using KPackage for that) defines the configuration UI for the account. Because an account is ideally just an assembly of a couple of existing Sink resources plus a QML file for the configuration UI, it becomes very cheap to create account plugins specific to a service. So while a generic IMAP account settings page could look like this:


… a Kolabnow setup page could look like this (and this already includes the setup of all resources, including IMAP, CalDAV, CardDAV, etc.):


Because we can build everything we know about the service directly into that UI, the user is optimally supported, and ideally all that is left to enter are the credentials.


In the end the aim of this setup is that a user starting Kube for the first time selects the services they use, enters their credentials, and is good to go.
In a corporate setup, login and services can of course be preconfigured, so all that is left is whatever is used for authentication (such as a password).

By ensuring that all data lives under an account, no data ends up in limbo with unclear ownership, all your devices have the same dataset available, and connecting a new device is a matter of entering credentials.

This also helps simplify backup, migration and various deployment scenarios.

So what is Kube? (and who is Sink?)

Michael first blogged about Kube, but we apparently missed properly introducing the project. Let me fix that for you 😉

Kube is a modern groupware client, built to be effective and efficient on a variety of platforms and form-factors. It is built on top of a high-performance data access layer and Qt Quick to provide an exceptional user experience with minimal resource usage. Kube is based on the lessons learned from KDE Kontact and Akonadi, building on the strengths and replacing the weak points.

Kube is further developed in coordination with Roundcube Next, to achieve a consistent user experience across the two interfaces and to ensure that we can collaborate while building the UX.

A roadmap for the first release has been available here for some time, but in the long run we of course want to go beyond a simple email application. The central aspects of the problem space that we want to address are communication and collaboration as well as organization. I know this is still a bit fuzzy, but there is a lot of work to be done before we can specify it clearly.

To ensure that we can move fast once the basic framework is ready, the architecture is very modular to enable component reuse and make it as easy as possible to create new ones. This way we can shift our focus over time from building the technology stack to evolving the UX.


Sink is a high-performance data access layer that provides a plugin mechanism for various backends (remote servers, e.g. IMAP, or local sources such as maildir, …), an editable offline cache that can replay changes to the server, a query system for efficient data access, and a unified API for groupware types such as events, mails, todos, etc.

It is built on top of LMDB (a key-value store) and Qt to be fast and efficient.

Sink is built for reliability, speed and maintainability.

What Kube & Sink aren’t

It is not a rename of Kontact and Akonadi.
Kontact and Akonadi will continue to be maintained by the KDEPIM team, and Kube is a separate project (although we share bits and pieces under the hood).
It is not a rewrite of Kontact.
There is no intention of replicating Kontact. We’re not interested in providing every feature that Kontact has, but rather focus on a set that is useful for the use cases we try to solve (which is a work in progress).


Development planning happens on Phabricator and the kdepim mailing list. Our next sprint is in Toulouse, together with the rest of the KDEPIM team.

We also have a weekly meeting on Wednesday at 16:00 CET, with notes sent to the ML. If you would like to participate in those meetings, just let me know; you’re more than welcome.

Current state

Kube is under heavy development and in an early stage, but we’re making good progress and starting to see the first results (you can read mail from maildir and even reply to mails). However, it is not yet ready for general consumption (though it is installable).

If you want to follow the development closely, it is also possible to build Kube inside a docker container, or to just use the container that contains a built version of Kube (it’s not yet updated automatically, so let me know if you want further information on that).

I hope that makes it a bit clearer what Kube and Sink are and aren’t, and where we’re going with them. If something is still unclear, please let me know in the comments section, and if you want to participate, by all means, join us =)

Kube Architecture – A Primer

Kube’s architecture is starting to emerge, so it is time that I give an overview on the current plans.

But to understand why we’re going where we’re going, it is useful to consider the assumptions we work with, so let’s start there:

Kube is a networked application.
While Kube can certainly be used on a machine that has never seen a network connection, that is not where it shines. Kube is built to interact with various services and to work well with multiple devices. This is the reality we live in and that we’re building for.
Kube is scalable.
Kube not only scales from small datasets that are quick to synchronize to large datasets that we can’t simply load into memory all at once, it also scales to different form factors. Kube is usable on devices with small and large screens, with touch or mouse input, etc.
Kube is cross platform.
Kube should run just as well on your laptop (be it Linux, OS X or Windows) as it does on your mobile (be it Plasma Mobile or Android).
Kube is a platform for rapid development.
We’re not interested in rebuilding mail and calendar and stopping there. Groupware needs to evolve and we want to facilitate communication and collaboration, not email and events. This requires that the user experience can continue to evolve and that we can experiment with new ideas quickly, without having to do large-scale changes to the codebase.
Groupware types are overlapping.
Traditionally, PIM/groupware applications are split up by formats and protocols, such as IMAP, MIME and iCal, but that’s not how typical workflows work. Just because the transport chosen by iTip for an invitation happens to be a MIME message transported over IMAP to my machine doesn’t mean that’s necessarily how I want to view it. I may want to start a communication with a person from my addressbook, calendar or email composer. A note may eventually turn into a set of todos. …

A lot of pondering over these points has led to a set of concepts that I’d like to quickly introduce:


Kube is built from different components. Each component is a KPackage that provides a QML UI backed by various C++ elements from the Kube framework. By building reusable components we ensure that e.g. the email application can show the very same contact view as the addressbook, with all the actions you’d expect available. This not only allows us to mix UI elements freely while building the user experience, it also ensures consistency across the board with little effort. The components load their data themselves by instantiating the appropriate models and are thus fully self-contained.

Components will come in various granularities, from simple widgets suitable for popup display up to a full email application.

The components concept will also be interesting for integration. A Plasma clock plasmoid could for instance detect that the Kube calendar package is available and show it instead of its native one. That way the integration is little effort, the user experience is well integrated (you get the exact same UX as in the regular application), and the full set of functionality is directly available (unlike when only the data is shared).


Kube is reactive. Models provide the data that the UI is built upon, so the UI only has to render whatever the model provides. This avoids complex stateful UIs and ensures a proper separation of business logic and UI. The UI directly instantiates and configures the models it requires.
The models feed on the data they get from Sink or other sources, and are as such often thin wrappers around other APIs. The dynamic nature of models allows more data to be loaded on demand, keeping the system efficient.
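As a rough sketch of that pattern with plain Qt classes (Kube’s real models wrap Sink queries instead of the fake folder used here), incremental loading looks like this:

    // A minimal sketch of a reactive, incrementally loading list model.
    #include <QAbstractListModel>

    class MailListModel : public QAbstractListModel {
        Q_OBJECT
    public:
        int rowCount(const QModelIndex &parent = QModelIndex()) const override {
            return parent.isValid() ? 0 : m_loaded;
        }
        QVariant data(const QModelIndex &index, int role) const override {
            if (role == Qt::DisplayRole)
                return QStringLiteral("Mail #%1").arg(index.row());
            return {};
        }
        // The view asks for more rows only as the user scrolls, so a large
        // folder is never loaded into memory all at once.
        bool canFetchMore(const QModelIndex &) const override {
            return m_loaded < m_total;
        }
        void fetchMore(const QModelIndex &) override {
            const int batch = qMin(100, m_total - m_loaded);
            beginInsertRows(QModelIndex(), m_loaded, m_loaded + batch - 1);
            m_loaded += batch; // a real model would pull the next batch from Sink here
            endInsertRows();
        }
    private:
        int m_loaded = 0;
        int m_total = 100000; // pretend the folder holds 100k mails
    };

The UI never holds state of its own; it just renders the rows the model currently provides.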


In the other direction, “actions” provide the interaction with the rest of the system. An action can be “mark as read”, “send mail”, or any other interaction with the system that is suitable for reuse. The action system is a publisher-subscriber system where various parts can execute actions that are handled by one of the registered action handlers.

This loose coupling between action and handler allows actions to be handled dynamically by different parts of the system, e.g. based on the currently active account when sending an email. It also ensures that action handlers remain small functional components that can be invoked from the various parts of the system that require similar functionality.

Pre-handlers allow preparatory steps to be injected into the action execution, such as retrieving configuration, requesting authentication, or resolving some identifier over a remote service; really anything that is required to have all input data available before the action handler executes.
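A compact sketch of such a publisher-subscriber action system (the names are invented for this example, not Kube’s actual API):

    // Illustrative action broker: callers execute actions by id, registered
    // pre-handlers enrich the context, then the registered handlers run.
    #include <QDebug>
    #include <QHash>
    #include <QString>
    #include <QVariant>
    #include <QVector>
    #include <functional>

    using Context = QVariantMap; // the action's input data
    using Handler = std::function<void(Context &)>;

    class ActionBroker {
    public:
        void registerPreHandler(const QString &actionId, Handler handler) {
            m_preHandlers[actionId].append(std::move(handler));
        }
        void registerHandler(const QString &actionId, Handler handler) {
            m_handlers[actionId].append(std::move(handler));
        }
        // The caller is decoupled from whoever ends up handling the action.
        void execute(const QString &actionId, Context context) {
            // Pre-handlers run first to complete the input data
            // (configuration, credentials, resolved identifiers, ...).
            for (const auto &pre : m_preHandlers.value(actionId))
                pre(context);
            for (const auto &handler : m_handlers.value(actionId))
                handler(context);
        }
    private:
        QHash<QString, QVector<Handler>> m_preHandlers;
        QHash<QString, QVector<Handler>> m_handlers;
    };

    int main() {
        ActionBroker broker;
        // A pre-handler injects e.g. the current account's SMTP config...
        broker.registerPreHandler(QStringLiteral("sendMail"), [](Context &ctx) {
            ctx[QStringLiteral("smtpServer")] = QStringLiteral("smtp.example.org"); // hypothetical value
        });
        // ...so the handler can rely on a complete context.
        broker.registerHandler(QStringLiteral("sendMail"), [](Context &ctx) {
            qDebug() << "sending via" << ctx[QStringLiteral("smtpServer")].toString();
        });
        broker.execute(QStringLiteral("sendMail"), {});
        return 0;
    }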


Controllers are C++ components that expose properties to the QML UI. They are useful for preparing data for the UI where a simple model is not sufficient, and can include additional UI helpers such as validators or autocompletion for input fields.
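A minimal sketch of what such a controller could look like (the class and property names are made up for this example):

    // A controller exposes properties a QML input field can bind to,
    // keeping validation logic in C++ rather than in the UI.
    #include <QObject>
    #include <QString>

    class ComposerController : public QObject {
        Q_OBJECT
        Q_PROPERTY(QString recipient READ recipient WRITE setRecipient NOTIFY recipientChanged)
        Q_PROPERTY(bool valid READ isValid NOTIFY recipientChanged)
    public:
        QString recipient() const { return m_recipient; }
        void setRecipient(const QString &recipient) {
            if (recipient == m_recipient)
                return;
            m_recipient = recipient;
            emit recipientChanged();
        }
        // The QML side simply binds to "valid"; naive check for the example.
        bool isValid() const { return m_recipient.contains(QLatin1Char('@')); }
    signals:
        void recipientChanged();
    private:
        QString m_recipient;
    };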


Accounts are the attempt to account for (pun intended) the networked nature of the environment we’re working in. Most information we’re working with in Kube is, or should be, synchronized over one account or another, and very little remains that is specific to the local machine (besides application state). This means most data and configuration is always tied to an account to ensure clear ownership.

However, accounts not only manifest in where data is put, they also manifest as “plugins” for various backends. They tie together a QML configuration UI, an underlying configuration controller (for validation, autocompletion, etc.), a Sink resource to access data e.g. over IMAP, a set of action handlers e.g. to send mail over SMTP, and potentially various defaults for identity etc.

In case you’re internally already shouting “KAccounts! KAccounts!”: we’re aware of the overlap, but I don’t see how we can solve all our problems using it, and there is definitely an argument for an integrated solution with regard to portability to other platforms. However, I do think there are opportunities in terms of platform integration.

And that’s it!

Further information can be found in the Kube Documentation.

Bringing Akonadi Next up to speed

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that implementing a simple resource only takes a couple of template instantiations, apart from the code that interacts with the data source (e.g. your IMAP server), which obviously can’t be provided generically.

Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50’000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10s, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster; but that command queue afterwards needs to be processed for the data to become available to clients, and all of that together gives the actual write speed.
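To illustrate why the batching matters: with one transaction per value you pay the commit overhead (including the sync to disk) for every single write, while a batched transaction amortizes it over the whole batch. A minimal LMDB sketch of the batched variant (error handling omitted; this is not the actual storage layer code):

    #include <lmdb.h>
    #include <string>

    int main() {
        MDB_env *env;
        mdb_env_create(&env);
        mdb_env_set_mapsize(env, size_t(1) << 30); // 1 GiB map
        mdb_env_open(env, "./testdb", 0, 0664);    // directory must exist

        MDB_txn *txn;
        mdb_txn_begin(env, nullptr, 0, &txn);
        MDB_dbi dbi;
        mdb_dbi_open(txn, nullptr, 0, &dbi);

        for (int i = 0; i < 50000; ++i) {
            std::string key = "key" + std::to_string(i);
            std::string value = "value" + std::to_string(i);
            MDB_val k{key.size(), &key[0]};
            MDB_val v{value.size(), &value[0]};
            mdb_put(txn, dbi, &k, &v, 0);
        }
        // One commit for the whole batch instead of 50'000 commits.
        mdb_txn_commit(txn);
        mdb_env_close(env);
        return 0;
    }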

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again this is far from the ideal, which is around 400’000 values per second for a single db (excluding index lookups), but still good enough to load large email folders in a matter of a second.

I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned store, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side effect; simplicity and ease of use are the goal.


Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that, you can help us with some funding. Much appreciated!

Kontact on Windows

I recently had the dubious pleasure of getting Kontact to work on Windows, and after two weeks of agony it also yielded some results =)

Not only did I get Kontact to build on Windows (sadly still something to be proud of), it is also largely functional. Even timezones now work in a way that lets you collaborate with non-Windows users, although that required a patch or two to kdelibs.

To make the whole exercise as reproducible as possible I collected my complete setup in a git repository [0]. Note that these builds are from the Kolab stable branches, and not all of the Windows-specific fixes have made it back upstream yet. That will follow as soon as the waters calm a bit.

If you want to try it yourself you can download an installer here [1], and if you don’t (I won’t judge you for not using Windows) you can look at the pretty pictures.

[0] https://github.com/cmollekopf/kdepimwindows
[1] http://mirror.kolabsys.com/pub/upload/windows/Kontact-E5-2015-06-30-19-41.exe


Reproducible testing with docker

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we’re unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately, by now we have a lightweight virtualization technology available in Linux containers, and docker makes them fairly trivial to use.


Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image’s content and a process running in it. Let that process be bash and you have a pretty much fully functional Linux system.

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I do in the container is not affected by what happens on the host system. So, for example, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose kdesrc-build, so building all the necessary repositories takes the least amount of effort.

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Furthermore, I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while my builds stay fast and incremental. This is not about packaging, after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host’s X11 socket, it’s possible to run graphical applications inside a properly set up container.

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

… with a server

Because I’m typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, with the account and credentials already configured in the client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it’s worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment, the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for testing invitation handling and such.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I’m very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup, at least not without a bunch of dedicated machines just for that. I’m likely to invest more in it.

In any case, sources can be found here:

Progress on the prototype for a possible next version of Akonadi

Ever since we introduced our ideas for the next version of Akonadi, we’ve been working on a proof-of-concept implementation, but we haven’t talked a lot about it. I’d therefore like to give a short progress report.

By choosing decentralized storage and a key-value store as the underlying technology, we first need to prove that this approach can deliver the desired performance with all pieces of the infrastructure in place. I think we have mostly reached that milestone by now. The new architecture is very flexible and looks promising so far. In my opinion we managed quite well to keep the levels of abstraction to a necessary minimum, which results in a system that is easily adjusted as new problems need to be solved and feels very controllable from a developer perspective.

We started off by implementing the full stack for a single resource and a single domain type. For this we developed a simple dummy resource that currently has an in-memory hash map as its backend and can only store events. This is a sufficient first step, as turning it into the full solution is a matter of adding further flatbuffer schemas for other types and defining the indexes necessary to query what we want to query. By only working on a single type we can first carve out the necessary interfaces and make sure that the effort required to add new types is minimal, thus maximizing code reuse.
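For illustration, such a flatbuffers schema for an event type could look roughly like this (a hypothetical sketch, not the schema actually used by the prototype):

    // event.fbs: hypothetical schema for the event domain type
    table Event {
        uid: string;
        summary: string;
        description: string;
        starttime: long;      // e.g. seconds since epoch
        attachment: [ubyte];  // opaque binary payload
    }
    root_type Event;

Adding a new domain type then mostly means writing such a schema and declaring the indexes it should be queryable by.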

The design we’re pursuing, as presented during the PIM sprint, consists of:

  • A set of resource processes
  • A store per resource, maintained by the individual resources (there is no central store)
  • A set of indexes maintained by the individual resources
  • A client API that knows how to access the store and how to talk to the resources through a plugin provided by the resource implementation.

By now we can write to the dummy resource through the client API; the resource internally queues the new entity, updates its indexes and writes the entity to storage. On the reading side we can execute simple queries against the indexes and retrieve the found entities. The synchronizer process can meanwhile also generate new entities, so client and synchronizer can write concurrently to the store. We can therefore do the full write/read roundtrip, meaning we have the most fundamental requirements covered. Still missing are operations other than creating new entities (removals and modifications), and the writeback to the source by the synchronizer. But that’s just a matter of completing the implementation (we have the design).
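Conceptually, the write path looks roughly like this (a simplified, single-process sketch with invented names; the real prototype uses separate processes and persistent storage):

    // Clients and the synchronizer enqueue commands; processing the queue
    // updates the store and its indexes, making the data visible to readers.
    #include <QByteArray>
    #include <QHash>
    #include <QList>
    #include <QQueue>
    #include <QString>

    struct CreateCommand {
        QString uid;
        QString date;       // used to maintain a secondary index
        QByteArray entity;  // serialized entity (a flatbuffer in the prototype)
    };

    class DummyResource {
    public:
        // Writers only enqueue, which is why pulling data offline is fast.
        void enqueue(const CreateCommand &command) { m_queue.enqueue(command); }

        // Processing the queue updates the store and the indexes together.
        void processQueue() {
            while (!m_queue.isEmpty()) {
                const auto command = m_queue.dequeue();
                m_store.insert(command.uid, command.entity);
                m_indexByDate.insert(command.date, command.uid);
            }
        }

        // Reads go through the indexes, then fetch entities from the store.
        QList<QString> findByDate(const QString &date) const {
            return m_indexByDate.values(date);
        }
        QByteArray read(const QString &uid) const { return m_store.value(uid); }

    private:
        QQueue<CreateCommand> m_queue;
        QHash<QString, QByteArray> m_store;         // stand-in for the db
        QMultiHash<QString, QString> m_indexByDate; // date -> uid index
    };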

To the numbers: writing from the client is currently implemented in a very inefficient way and it’s trivial to drastically improve it, but in my latest test I could already write ~240 (small) entities per second. Reading works at around 40k entities per second (in a single query), including the lookup on the secondary index. The upper limit of what the storage itself can achieve (on my laptop) is 30k entities per second for writing and 250k entities per second for reading, so there is room for improvement =)

Given that design and performance look promising so far, the next milestone will be to refactor the codebase sufficiently to ensure new resources can be added with ease, and to make sure all the necessary facilities (such as a proper logging system), or at least stubs thereof, are available.

I’m writing this on a plane to Singapore, which we’re using as a gateway to Indonesia to chase after waves and volcanoes for the next few weeks, but after that I’m looking forward to going full steam ahead with what we started here. I think it’s going to be something cool =)