Kube: Accounts

Kube is a next generation communication and collaboration client, built with QtQuick on top of a high performance, low resource usage core called Sink.
It provides online and offline access to all your mail, contacts, calendars, notes, todos, etc.
Kube has a strong focus on usability, and the team works with designers and UX experts from the ground up to build a product that is not only visually appealing but also a joy to use.

To learn more about Kube, please see here.

Kube’s Account System

Data ownership

Kube is a network application at its core. That doesn’t mean you can’t use it without a network connection (even permanently), but you’d severely limit its capabilities, given that it’s meant to be a communication and collaboration tool.

Since network communication typically happens over a variety of services where you have a personal account, an account provides a good starting point for our domain model. If you have a system with large amounts of data that are constantly changing it’s vital to have a clear understanding of data ownership within the system. In Kube, this is always an account.

By putting the account front and center we ensure that we don’t have any data that just belongs to the system as a whole. This is important because it becomes very complex to work with data that “belongs to everyone” once we try to synchronize that data with various backends. If we modify a dataset should that replicate to all copies of it? What if one backend already deleted that record? Would that mean we also have to remove it from the other services?
And what if a second client has a different set of accounts connected?
If we ensure that we always only have a single owner, we can avoid all those issues and build a more reliable and predictable system.

The various views can of course still correlate data across accounts where useful, e.g. to show a single person entry instead of one contact per addressbook, but they then also have to make sure that it is clear what happens if you go and modify e.g. the address of that person (do we modify all copies in all accounts? What happens if one copy goes out of sync again because you used the web interface?).

Last but not least, this way we ensure that we have a clear path to eventually synchronize all data to a backend, even if we can’t do so immediately, e.g. because the backend in use does not support that data type yet.

The only bit of data that is stored outside of the account is data specific to the device in use, such as configuration data for the application itself: data that isn’t hard to recreate, is easy to migrate and back up, and amounts to very little in the first place.

Account backends

Most services provide you with a variety of data for an individual account. Whether you use Kolabnow, Google or a set of local Maildirs and iCal files,
you typically have access to contacts, mails, events, todos and many more. Fortunately, most services provide access to most of that data through open protocols,
but unfortunately we often end up in a situation where we need a variety of protocols to get to all of it.

Within Sink we call each backend a “Resource”. A resource typically has a process to synchronize data to an offline cache, and then makes that data accessible through a standardized interface. This ensures that even if one resource synchronizes email over IMAP and another just gathers it from a local Maildir,
the data is accessible to the application through the same interface.

Because various accounts use various combinations of protocols, an account can mix and match resources to provide access to all the data it has.
A Kolab account, for instance, could combine an IMAP resource for email, a CalDAV resource for calendars and a CardDAV resource for contacts, plus any additional resources for instant messaging, notes, … you get the idea. Alternatively, we could decide to get to all data over JMAP (a potential IMAP successor with support for more datatypes than just email) and thus implement a JMAP resource instead (which could in turn be reused by other accounts with the same requirements).
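To make that concrete, here is a minimal sketch of the idea. The structs and the non-maildir resource identifiers below are purely illustrative and not actual Sink or Kube API; only “org.kde.maildir” appears later in the akonadi_cmd examples.

    // Illustrative sketch only: these structs and resource identifiers are not
    // actual Sink/Kube API, they just spell out the "account = bundle of
    // resources" idea described above.
    #include <QString>
    #include <QVariantMap>
    #include <QVector>

    struct ResourceDescriptor {
        QString type;          // e.g. "org.kde.maildir" (see the akonadi_cmd examples later on)
        QVariantMap settings;  // resource-specific configuration
    };

    struct AccountDescriptor {
        QString name;
        QVector<ResourceDescriptor> resources;
    };

    // A Kolab-style account simply bundles the protocols it needs.
    AccountDescriptor makeKolabAccount()
    {
        AccountDescriptor account;
        account.name = QStringLiteral("Kolab");
        account.resources.append({QStringLiteral("org.kde.imap"),
                                  {{QStringLiteral("server"), QStringLiteral("imap.example.com")}}});
        account.resources.append({QStringLiteral("org.kde.caldav"),
                                  {{QStringLiteral("server"), QStringLiteral("caldav.example.com")}}});
        account.resources.append({QStringLiteral("org.kde.carddav"),
                                  {{QStringLiteral("server"), QStringLiteral("carddav.example.com")}}});
        return account;
    }

Adding support for a new service is then mostly a matter of picking the right set of existing resources and filling in their configuration.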



Specialized accounts

While accounts within Sink are mostly an assembly of resources with some extra configuration, on the Kube side a QML plugin is used (we’re using KPackage for that) to define the configuration UI for the account. Because accounts are ideally just an assembly of a couple of existing Sink resources with a QML file to define the configuration UI, it becomes very cheap to create account plugins specific to a service.

So while a generic IMAP account settings page has to expose all the individual server settings, a Kolabnow setup page can be reduced to the bare minimum, even though it already includes the setup of all resources (IMAP, CALDAV, CARDDAV, etc.).

Because we can build everything we know about the service directly into that UI, the user is optimally supported, and ideally all that is left to enter are the credentials.


In the end, the aim of this setup is that a user first starting Kube selects the service(s) they use, enters their credentials, and is good to go.
In a corporate setup, login and service can of course be preconfigured, so all that is left to enter is whatever is used for authentication (such as a password).

By ensuring all data lives under an account, we ensure no data ends up in limbo with unclear ownership, all your devices have the same dataset available, and connecting a new device is a matter of entering credentials.

This also simplifies backup, migration and various deployment scenarios.

Kube going cross-platform in Randa

I’m on my way back home from the cross-platform sprint in Randa. The four days of hacking, discussing and hiking that I spent there gave me a much clearer picture of how the cross-platform story for Kube can work, and what effort we will require to get there.

We intend to eventually ship Kube not only on Linux, Windows and Mac, but also on mobile platforms like Android, so it is vital that we figure out blockers as early as possible and keep the whole stack portable.


The first experiment was a new distribution mechanism rather than a full new platform. Fortunately Aleix Pol had already learned the ropes and quickly whipped up a Flatpak definition file that resulted in a self-contained Kube distribution that actually worked.

Given that we already use homegrown docker containers to achieve similar results, we will likely switch to Flatpak to build git snapshots for early adopters and people participating in the development process (such as designers).


Andreas Cord-Landwehr prepared a Docker image that brings the complete cross-compiler toolchain with it, which makes for a much smoother setup process than doing everything manually. I mostly just followed the KDE Android documentation.

After resolving some initial issues with the Qt installer together with Andreas (Qt has to be installed using the GUI installer even from the console, using some arcane configuration script that changes variables with every release… WTF Qt), I was quickly set up to compile the first dependencies.

Thanks to the work of Andreas, most frameworks already compile flawlessly; some other dependencies like LMDB, flatbuffers and KIMAP (which currently still depends on KIO) will require some more work though. However, I now have a pretty good idea of what will be required to get everything to build on Android, which was the point of the exercise.


I postponed the actual building of anything on Windows until I get back to my workstation, but I’ve had some good discussions about the various possibilities that we have to build for Windows.

While I was initially intrigued by using MXE to cross-compile the whole stack (I’d love not having to keep a Windows VM around to build packages), the simplicity of the external CMake project that Kåre Särs set up for Kate is tempting as well. The downside of MXE is of course that we don’t get to use the native compiler, which may or may not have a performance impact, but it definitely doesn’t allow developers on Windows to work with MSVC (should we get any at some point…).

I guess some experimentation will be in order to see what works best.


Lacking an OS X machine, this is also still in the theoretical realm, but we discussed the different build techniques and how the aim must be to produce end-user installable application bundles.


As you can see there is still plenty to do, so if you feel like trying to build the stack on your favorite platform, your help would be greatly appreciated! Feel free to contact me directly, grab a task on Phabricator, or join our weekly meetings on meet.jit.si/kube (currently every Wednesday at 12:00).

The time I spent in Randa showed once more how tremendously useful these sprints are for exchanging knowledge. I would have had a much harder time figuring out all the various processes and issues without the help of the various experts at the sprint, so this was a nice kick-start of the cross-platform effort for Kube. Thank you Mario and team for organizing this excellent event once more, and if you can, please help keep these sprints happening.

So what is Kube? (and who is Sink?)

Michael first blogged about Kube, but we apparently failed to properly introduce the project. Let me fix that for you 😉

Kube is a modern groupware client, built to be effective and efficient on a variety of platforms and form-factors. It is built on top of a high-performance data access layer and Qt Quick to provide an exceptional user experience with minimal resource usage. Kube is based on the lessons learned from KDE Kontact and Akonadi, building on the strengths and replacing the weak points.

Kube is further developed in coordination with Roundcube Next, to achieve a consistent user experience across the two interfaces and to ensure that we can collaborate while building the UX.

A roadmap for the first release has been available here for some time, but in the long run we of course want to go beyond a simple email application. The central aspects of the problem space that we want to address are communication and collaboration as well as organization. I know this is still a bit fuzzy, but there is a lot of work to be done before we can specify it more clearly.

To ensure that we can move fast once the basic framework is ready, the architecture is very modular, to enable component reuse and to make it as easy as possible to create new components. This way we can shift our focus over time from building the technology stack to evolving the UX.


Sink is a high-performance data access layer that provides a plugin mechanism for various backends (remote servers, e.g. IMAP, or a local maildir, …), an editable offline cache that can replay changes to the server, a query system for efficient data access, and a unified API for groupware types such as events, mails, todos, etc.

It is built on top of LMDB (a key-value store) and Qt to be fast and efficient.

Sink is built for reliability, speed and maintainability.
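To give a rough feeling for what that unified API looks like from a client’s perspective, here is a minimal sketch modeled on the akonadi_cmd snippet further down. The exact namespaces, types and flags (Akonadi2::Query, Akonadi2::Store::loadModel, ApplicationDomain::Mail, liveQuery) are assumptions about an API that was still in flux at the time:

    // Sketch only: query all mails of one resource through the model API.
    #include <QAbstractItemModel>
    #include <QSharedPointer>
    // ... plus the corresponding Akonadi2/Sink headers

    QSharedPointer<QAbstractItemModel> loadMails(const QByteArray &resourceId)
    {
        Akonadi2::Query query;
        query.resources << resourceId; // restrict the query to a single resource
        query.liveQuery = true;        // assumed flag: keep the result set updated as changes arrive

        // The returned QAbstractItemModel fills up asynchronously, so a UI
        // (or a command line tool) can start consuming results immediately.
        return Akonadi2::Store::loadModel<Akonadi2::ApplicationDomain::Mail>(query);
    }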

What Kube & Sink aren’t

It is not a rename of Kontact and Akonadi.
Kontact and Akonadi will continue to be maintained by the KDEPIM team, and Kube is a separate project (although we share bits and pieces under the hood).
It is not a rewrite of Kontact.
There is no intention of replicating Kontact. We’re not interested in providing every feature that Kontact has, but rather focus on a set that is useful for the use cases we try to solve (which is still a work in progress).


Development planning happens on Phabricator and the kdepim mailing list. Our next sprint is in Toulouse, together with the rest of the KDEPIM team.

We also have a weekly meeting on Wednesday at 16:00 CET, with notes sent to the ML. If you would like to participate in those meetings just let me know, you’re more than welcome.

Current state

Kube is under heavy development and in an early stage, but we’re making good progress and starting to see the first results (you can read mail from maildir and even reply to mails). However, it is not yet ready for general consumption (though it is installable).

If you want to follow the development closely it is also possible to build Kube inside a docker container, or just use the container that contains a built version of Kube (it’s not yet updated automatically, so let me know if you want further information on that).

I hope that makes it a bit clearer what Kube and Sink are and aren’t, and where we’re going with them. If something is still unclear, please let me know in the comments section, and if you want to participate, by all means, join us =)

Kube Architecture – A Primer

Kube’s architecture is starting to emerge, so it is time that I give an overview on the current plans.

But to understand why we’re going where we’re going it is useful to consider the assumptions we work with, so let’s start there:

Kube is a networked application.
While Kube can certainly be used on a machine that has never seen a network connection, that is not where it shines. Kube is built to interact with various services and to work well with multiple devices. This is the reality we live in and that we’re building for.
Kube is scalable.
Kube not only scales from small datasets that are quick to synchronize to large datasets that we can’t simply load into memory all at once. It also scales to different form factors: Kube is usable on devices with small and large screens, with touch or mouse input, etc.
Kube is cross platform.
Kube should run just as well on your laptop (be it Linux, OS X or Windows) as it does on your mobile (be it Plasma Mobile or Android).
Kube is a platform for rapid development.
We’re not interested in rebuilding mail and calendar and stopping there. Groupware needs to evolve and we want to facilitate communication and collaboration, not email and events. This requires that the user experience can continue to evolve and that we can experiment with new ideas quickly, without having to do large-scale changes to the codebase.
Groupware types are overlapping.
Traditionally, PIM/groupware applications are split up by formats and protocols, such as IMAP, MIME and iCal, but that’s not how typical workflows work. Just because the transport chosen by iTip for an invitation happens to be a MIME message transported over IMAP to my machine doesn’t mean that’s necessarily how I want to view it. I may want to start a communication with a person from my addressbook, calendar or email composer. A note may turn into a set of todos eventually. …

A lot of pondering over these points has led to a set of concepts that I’d like to quickly introduce:


Kube is built from different components. Each component is a KPackage that provides a QML UI backed by various C++ elements from the Kube framework. By building reusable components we ensure that e.g. the email application can show the very same contact view as the addressbook, with all the actions you’d expect available. This not only allows us to mix various UI elements freely while building the user experience, it also ensures consistency across the board with little effort. The components load their data themselves by instantiating the appropriate models and are thus fully self-contained.

Components will come in various granularities, from simple widgets suitable for popup display to e.g. a full email application.

The components concept will also be interesting for integration. A Plasma clock plasmoid could for instance detect that the Kube calendar package is available and show it instead of its native one. That way the integration is little effort, the user experience is well integrated (you get the exact same UX as in the regular application), and the full set of functionality is directly available (unlike when only the data is shared).
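For illustration, a host application could load such a component roughly like this; “org.kube.components.mail” is a made-up package id, and the package structure Kube actually uses may differ:

    // Sketch: loading a QML component that ships as a KPackage.
    #include <KPackage/Package>
    #include <KPackage/PackageLoader>
    #include <QGuiApplication>
    #include <QQmlApplicationEngine>
    #include <QUrl>

    int main(int argc, char **argv)
    {
        QGuiApplication app(argc, argv);

        // Look up the package, which bundles the QML UI and its metadata.
        KPackage::Package package = KPackage::PackageLoader::self()->loadPackage(
            QStringLiteral("KPackage/GenericQML"),
            QStringLiteral("org.kube.components.mail"));

        // The component's entry point instantiates its own models, so the
        // host doesn't have to wire up any data for it.
        QQmlApplicationEngine engine;
        engine.load(QUrl::fromLocalFile(package.filePath("mainscript")));

        return app.exec();
    }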


Kube is reactive. Models provide the data that the UI is built upon, so the UI only has to render whatever the model provides. This avoids complex stateful UIs and ensures a proper separation of business logic and UI. The UI directly instantiates and configures the models it requires.
The models feed on the data they get from Sink or other sources, and are as such often thin wrappers around other APIs. The dynamic nature of models allows loading more data on demand, which keeps the system efficient.


In the other direction, “Actions” provide the interaction with the rest of the system. An action can be “mark as read”, or “send mail”, or any other interaction with the system that is suitable for reuse. The action system is a publisher-subscriber system where various parts can execute actions that are handled by one of the registered action handlers.

This loose coupling between action and handler allows actions to be dynamically handled by different parts of the system, e.g. based on the currently active account when sending an email. It also ensures that action handlers are nice and small functional components that can be invoked from various parts of the system that require similar functionality.

Pre-handlers allow preparatory steps to be injected into the action execution, such as retrieving configuration, requesting authentication, or resolving some identifier over a remote service; anything that is really required to have all input data available before the action handler runs.
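A minimal sketch of that publisher-subscriber idea could look like the following. This is not Kube’s actual action API, just an illustration of the pattern; all names are made up.

    // Illustrative sketch of the action concept (not Kube's real API):
    // handlers and pre-handlers are registered against an action id, and any
    // part of the application can execute the action by id.
    #include <QByteArray>
    #include <QHash>
    #include <QVariantMap>
    #include <QVector>
    #include <functional>

    class ActionBroker
    {
    public:
        using Handler = std::function<void(QVariantMap &)>;

        static ActionBroker &instance()
        {
            static ActionBroker broker;
            return broker;
        }

        // Pre-handlers enrich the context first (fetch configuration,
        // request authentication, resolve identifiers, ...).
        void registerPreHandler(const QByteArray &actionId, Handler handler)
        {
            mPreHandlers[actionId].append(std::move(handler));
        }

        void registerHandler(const QByteArray &actionId, Handler handler)
        {
            mHandlers[actionId].append(std::move(handler));
        }

        // Publisher side: execute an action without knowing who handles it.
        void execute(const QByteArray &actionId, QVariantMap context = {})
        {
            for (const auto &preHandler : mPreHandlers.value(actionId)) {
                preHandler(context);
            }
            for (const auto &handler : mHandlers.value(actionId)) {
                handler(context);
            }
        }

    private:
        QHash<QByteArray, QVector<Handler>> mPreHandlers;
        QHash<QByteArray, QVector<Handler>> mHandlers;
    };

An account plugin could then register a handler for a hypothetical “sendMail” action id, while the composer simply executes that id without knowing which account ends up handling it.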


Controllers are C++ components that expose properties to the QML UI. These are useful to prepare data for the UI where a simple model is not sufficient, and can include additional UI helpers such as validators or autocompletion for input fields.
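As a sketch of the pattern (the controller and its properties are made up for illustration), such a controller is essentially a QObject exposing properties that the QML UI can bind to, with the validation logic living on the C++ side:

    // Hypothetical composer controller: not actual Kube code, just the shape
    // of a controller that backs an input field in QML.
    #include <QObject>
    #include <QString>

    class ComposerController : public QObject
    {
        Q_OBJECT
        Q_PROPERTY(QString recipient READ recipient WRITE setRecipient NOTIFY recipientChanged)
        Q_PROPERTY(bool recipientValid READ recipientValid NOTIFY recipientChanged)

    public:
        explicit ComposerController(QObject *parent = nullptr) : QObject(parent) {}

        QString recipient() const { return mRecipient; }
        void setRecipient(const QString &recipient)
        {
            if (mRecipient == recipient) {
                return;
            }
            mRecipient = recipient;
            emit recipientChanged();
        }

        // UI helper: the QML side only binds to this; the (naive) validation
        // logic stays out of the UI.
        bool recipientValid() const { return mRecipient.contains(QLatin1Char('@')); }

    signals:
        void recipientChanged();

    private:
        QString mRecipient;
    };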


Accounts are the attempt to account for (pun intended) the networked nature of the environment we’re working in. Most information we’re working with in Kube is, or should be, synchronized over one account or another, and there remains very little that is specific to the local machine (besides application state). This means most data and configuration is always tied to an account to ensure clear ownership.

However, accounts not only manifest in where data is put, they also manifest as “plugins” for various backends. They tie together a QML configuration UI, an underlying configuration controller (for validation, autocompletion, etc.), a Sink resource to access data e.g. over IMAP, a set of action handlers e.g. to send mail over SMTP, and potentially various defaults for identity, etc.

In case you’re internally already shouting “KAccounts!, KAccounts!”: we’re aware of the overlap, but I don’t see how we can solve all our problems using it, and there is definitely an argument for an integrated solution with regards to portability to other platforms. However, I do think there are opportunities in terms of platform integration.

And that’s it!

Further information can be found in the Kube Documentation.

The year of Kube

After having reached the first milestone of a read-only prototype of Kube, it’s time to provide an outlook of what we plan to achieve in 2016.
I have put together a roadmap of what I think are realistic goals for 2016. Obviously this will evolve over time and we’ll keep adjusting it as we advance faster or slower, or simply move in other directions.

Since we’re building a completely new technology stack, a lot of the roadmap revolves around ensuring that we can create what we envision technology-wise,
and that we have the necessary infrastructure to move fast while having confidence in the quality. It’s important that we do this before growing the codebase too much, so we can still make the necessary adjustments without having too much code to adjust.

On the UX side we’ll want to work on concepts and prototypes, although we’ll probably keep the first implemented UIs fairly simple and standard.
Over time we have to build a vision of where we want to go in the long run, so it can steer the development. This will be a long and ongoing process involving not only wire-frames and mockups, but hopefully also user research and analysis of our problem space (how do we communicate, rather than how does GMail work).

However, since we can’t just stamp that grander vision out of the ground, the primary goal for us this year is a simple email client that doesn’t do much, but does what it does well. Hopefully we can go beyond that with some other components available (calendar, addressbook, …), or perhaps something simple available on mobile already, but we’ll have to see how fast it goes first. Overall we want to focus on quality rather than quantity, to prove what quality level we’re able to reach and to ensure we’re well lined up to move fast in the following year(s).

The Roadmap

I split the roadmap into four quarters, each with its own focus. Note that Akonadi Next has been renamed to Sink to avoid confusion (now that Akonadi 5 is released and we had planned for Akonadi 2…).

1. Quarter

– Read-only Kube Mail prototype.
– Fully functional Kube Mail prototype but with very limited functionality set (read and compose mail).
– Test environment that is also usable by designers.
– Logging mechanism in Sink and potentially Kube so we can produce comprehensive logs.
– Automatic gathering of performance statistics so we can benchmark and prove progress over time.
– The code inventory [1] is completed and we know what features we used to have in Kontact.
– Sink Maildir resource.
– Start of gathering of requirements for Kube Mail (features, ….).
– Start of UX design work.

We focus on pushing forward functionality-wise, and on refactoring the codebase every now and then to get a feeling for how we can build applications with the new framework.
The UI is not a major focus, but we may start doing some preparatory work on how things should eventually be. Not much attention is paid to usability etc.
Once we have the Kube Mail prototype ready, with a minimum set of features but a reasonable codebase and stability (so it becomes somewhat useful for those who want to give it a try), we will start communicating about it more with regular blog posts etc.

2. Quarter

– Build on Windows.
– Build on Mac.
– Comprehensive automated testing of the full application.
– First prototype on Android.
– First prototype on Plasma Mobile?
– Sink IMAP resource.
– Sink Kolab resource.
– Sink ICal resource.
– Start of gathering of performance requirements for Kube Mail (responsiveness, disk-usage, ….)
– Define target feature set to reach by the end of the year.

We ensure the codebase builds on all major platforms and keeps building and working everywhere. We ensure we can test everything we need, and work out what we want to test (e.g. including the UI or not). Kube is extended with further functionality and we develop the means to access a Kolab/IMAP server (perhaps with mail only).

3. Quarter

– Prototype for Kube Shell.
– Prototype for Kube Calendar.
– Potentially prototype for other Kube applications.
– Rough UX Design for most applications that are part of Kube.
– Implementation of further features in Kube Mail according to the defined feature set.

We start working on prototypes with other datatypes, which includes data access as well as UI. The implemented UIs are not final, but we end up with a usable calendar. We keep working on the concepts and designs, and we approximately know what we want to end up with.

4. Quarter

– Implementation of the final UI for the Kube Mail release.
– Potentially also implementation of a final UI for other components already.
– UX Design for all applications “completed” (it’s never complete but we have a version that we want to implement).
– Tests with users.

We polish Kube Mail, ensure it’s easy to install and setup on all platforms and that all the implemented features work flawlessly.

Progress so far

Currently we have a prototype that has:
– A read-only maildir resource.
– HTML rendering of emails.
– Basic actions such as deleting a mail.

My plan is to hook the Maildir resource up with offlineimap, so I can start reading my mail in Kube within the next weeks😉

Next to this we’re working on infrastructure, documentation, planning, UI Design…
Current progress can be followed in our Phabricator projects [2][3], and the documentation, while still lagging behind, is starting to take shape in the “docs/” subdirectory of the respective repositories [4][5].

There’s meanwhile also a prototype of a docker container available to experiment with [6], and the Sink documentation explains how we currently build Sink and Kube inside a docker container with kdesrc-build.

Join the Fun

We have weekly hangouts that you are welcome to join (just contact me directly or write to the kde-pim mailing list). The notes are on notes.kde.org and are regularly sent to the kdepim mailing list as well.
As you can guess, the project is in a very early state, so we’re still mostly trying to get the whole framework into shape, and not so much writing the actual application. However, if you’re interested in trying to build the system on other platforms, working on UI concepts or generally tinkering with the codebase we have and helping shape what it should become, you’re more than welcome to join =)

  1. git://anongit.kde.org/scratch/aseigo/KontactCodebaseInventory.git 
  2. https://phabricator.kde.org/project/profile/5/ 
  3. https://phabricator.kde.org/project/profile/43/ 
  4. git://anongit.kde.org/akonadi-next 
  5. git://anongit.kde.org/kontact-quick 
  6. https://github.com/cmollekopf/docker/blob/master/kubestandalone/run.sh 

Akonadi Next Cmd

For Akonadi Next I built a little utility that I intend to call “akonadi_cmd” and it’s slowly becoming useful.

It started as the first Akonadi Next client, for me to experiment a bit with the API, but it recently gained a bunch of commands and can now be used for various tasks.

The syntax is the following:
akonadi_cmd COMMAND TYPE ...

The Akonadi Next API always works on a single type, so you can e.g. query for folders, or mails, but not for folders and mails. Instead you query for the mails with a folder filter, if that’s what you’re looking for. akonadi_cmd’s syntax reflects that.


The following commands are available:

list
The list command allows executing queries and retrieving the results in the form of lists.
Eventually you will be able to specify which properties should be retrieved; for now it’s a hardcoded list for each type. It’s generally useful to check what the database contains and whether queries work.

count
Like list, but only outputs the result count.

stat
Some statistics on how large the database is, how the size is distributed across indexes, etc.

create/modify/delete
Allows creating/modifying/deleting entities. Currently this is only of limited use, but it already works nicely with resources. Eventually it will allow creating/modifying/deleting all kinds of entities such as events/mails/folders/….

clear
Drops all caches of a resource but leaves the config intact. This is useful while developing, because it e.g. allows retrying a sync without having to configure the resource again.

synchronize
Allows synchronizing a resource. For an IMAP resource that means the remote server is contacted and the local dataset is brought up to date;
for a maildir resource it simply means all data is indexed and becomes queryable by akonadi.

Eventually this will also allow specifying a query, to e.g. only synchronize a specific folder.

show
Provides the same contents as “list” but in a graphical tree view. This was really just a way for me to test whether I can actually get data into a view, so I’m not sure if it will survive as a command. For the time being it’s nice to compare its performance to the QML counterpart.

Setting up a new resource instance

akonadi_cmd is already the primary way to create resource instances:

akonadi_cmd create resource org.kde.maildir path /home/developer/maildir1

This creates a resource of type “org.kde.maildir” with a configuration entry “path” set to “/home/developer/maildir1”. Resources are stored in configuration files, so all this does is write to some config files.

akonadi_cmd list resource

By listing all available resources we can find the identifier that was automatically assigned to the resource.

akonadi_cmd synchronize org.kde.maildir.instance1

This triggers the actual synchronization in the resource, and from there on the data is available.

akonadi_cmd list folder org.kde.maildir.instance1

This will get you all folders that are in the resource.

akonadi_cmd remove resource org.kde.maildir.instance1

And this will finally remove all traces of the resource instance.


What’s perhaps interesting from the implementation side is that the command line tool uses exactly the same models that we also use in Kube.

    Akonadi2::Query query;
    query.resources << res.toLatin1();

    auto model = loadModel(type, query);
    QObject::connect(model.data(), &QAbstractItemModel::rowsInserted, [model](const QModelIndex &index, int start, int end) {
        for (int i = start; i <= end; i++) {
            std::cout << "\tRow " << model->rowCount() << ":\t ";
            std::cout << "\t" << model->data(model->index(i, 0, index), Akonadi2::Store::DomainObjectBaseRole).value<Akonadi2::ApplicationDomain::ApplicationDomainType::Ptr>()->identifier().toStdString() << "\t";
            for (int col = 0; col < model->columnCount(QModelIndex()); col++) {
                std::cout << "\t|" << model->data(model->index(i, col, index)).toString().toStdString();
            }
            std::cout << std::endl;
        }
    });
    QObject::connect(model.data(), &QAbstractItemModel::dataChanged, [model, &app](const QModelIndex &, const QModelIndex &, const QVector<int> &roles) {
        // Quit the event loop once the model signals that the initial result set is complete.
        if (roles.contains(Akonadi2::Store::ChildrenFetchedRole)) {
            app.quit();
        }
    });
    // If the results aren't complete yet, spin the event loop until they are.
    if (!model->data(QModelIndex(), Akonadi2::Store::ChildrenFetchedRole).toBool()) {
        return app.exec();
    }

This is possible because we’re using QAbstractItemModel as an asynchronous result set. While one could argue whether that is the best API for an application that is essentially synchronous, it still shows that the API is useful for a variety of applications.

And last but not least, since I figured out how to record animated gifs, the above procedure in a live demo😉


Bringing Akonadi Next up to speed

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further,
and refactoring it again, to make sure the codebase remains as clean as possible. The result is that an implementation of a simple resource only takes a couple of template instantiations, apart from the code that interacts with the data source (e.g. your IMAP server), which obviously can’t be provided generically.

Once I was happy with that, I looked a bit into performance to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50’000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10s, which is not bad already. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster; but that command queue then needs to be processed for the data to become available to clients, and all of that together gives the actual write speed.
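To illustrate why the batching matters, here is a generic LMDB sketch (not Sink’s actual write path): the expensive, durably synced step is the commit, so doing many puts per transaction amortizes that cost instead of paying it for every single value.

    // Generic LMDB sketch: many puts batched into a single transaction.
    // Error handling is omitted for brevity.
    #include <lmdb.h>
    #include <cstdio>
    #include <cstring>

    void writeBatch(MDB_env *env, int count)
    {
        MDB_txn *txn = nullptr;
        MDB_dbi dbi;
        mdb_txn_begin(env, nullptr, 0, &txn);   // one transaction for the whole batch
        mdb_dbi_open(txn, nullptr, 0, &dbi);

        char keyBuffer[32];
        for (int i = 0; i < count; i++) {
            std::snprintf(keyBuffer, sizeof(keyBuffer), "key%d", i);
            MDB_val key = { std::strlen(keyBuffer), keyBuffer };
            MDB_val value = { 5, const_cast<char *>("value") };
            mdb_put(txn, dbi, &key, &value, 0); // cheap: the write stays in memory for now
        }

        mdb_txn_commit(txn);                    // the durable (and costly) step, paid once per batch
    }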

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again far from ideal (which would be around 400’000 values per second for a single db, excluding index lookups), but still good enough to load large email folders in a matter of a second.

I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned store, which is a prerequisite for the resource writeback to the source. After all, performance is supposed to be a desirable side effect, and simplicity and ease of use the goal.


Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that, you can help us with some funding. Much appreciated!