Putting the code where it belongs

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob: how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting, a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}
struct DoSomething : public KJob {
    Q_OBJECT
public:
    DoSomething(int argument) : mArgument(argument) {}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        // Assuming the concrete job type returned by getNumberAsync()
        // exposes the result; KJob itself does not.
        mResult = static_cast<GetNumberJob*>(job)->result();
        emitResult();
    }
};
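
Even consuming the result is asynchronous at the call site; a minimal usage sketch (the receiver object and its slot are hypothetical):

// Instead of simply using a return value, the caller has to connect
// to the result signal and wait for it to be emitted.
auto job = new DoSomething(42);
QObject::connect(job, SIGNAL(result(KJob*)), receiver, SLOT(onDone(KJob*)));
job->start();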

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace variables on the stack, which are not available across the steps of an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that makes them reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function requires a class when the same code is written asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers) that we are now forced to use do not help the program structure in any way. This also manifests itself in the rather useless function names that typically follow a pattern such as on$OperationDone() or similar. Further, because the code is scattered over functions, values that are available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of this higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Due to this we also often end up with large and complex job classes.

Last but not least, we lose all available control structures through the inversion of control. If you write asynchronous code you don’t have the ifs, fors and whiles available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, these are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check if all jobs have been executed (sketched below). These state machines greatly increase the complexity of the code, are highly error-prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
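
Here is a minimal sketch of that counter anti-pattern (createJob(), mItems and mPendingJobs are hypothetical names):

void start() {
    mPendingJobs = mItems.size(); // member counter emulating the loop state
    for (const auto &item : mItems) {
        KJob *job = createJob(item);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }
}

void onJobDone(KJob*) {
    // The "loop" is only complete once the counter reaches zero.
    if (--mPendingJobs == 0) {
        emitResult();
    }
}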

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb, as pretty much every backtrace comes straight from the eventloop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that do not help the structure
  • composing functions is no longer possible, since what would normally be written as a function has to be written as a class
  • control structures are not usable; a state machine is required to emulate them
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 loc!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately we received a new tool with C++11: lambda functions.
Lambda functions allow us to write functions inline with minimal syntactical overhead.

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler as a lambda function, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. What makes this solution non-composable is that the lambda function we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step in a lambda function that issues the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job to extract results from.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString();
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages of using this over writing traditional asynchronous code using only KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer.
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job that is written this way to get an idea of what is going on, instead of the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow us to get useful information about the state of the program even though we’re in the eventloop.

This is of course only a rough prototype, and I’m sure we can craft something better, but at least in my experiments it proved to work very nicely.
What I think would also be useful are a couple of helper jobs that replace the missing control structures, such as a ForeachJob which triggers a continuation for each result, or a job to execute tasks in parallel (instead of serially, as JobComposer does); a sketch of the latter follows.
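
How such a parallel helper could slot into JobComposer, as a rough sketch (ParallelCompositeJob, its addSubjob() and anotherOperation() are hypothetical; none of this exists yet):

auto parallel = new ParallelCompositeJob;
parallel->addSubjob(operation1());
parallel->addSubjob(anotherOperation()); // any independent job

auto task = new JobComposer;
task->add([parallel](JobComposer &t, KJob*) {
    // Both subjobs run concurrently; the composite finishes once all are done.
    t.run(parallel);
});
task->add([](JobComposer &t, KJob*) {
    kDebug() << "All parallel operations are done.";
});
task->start();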

As a little showcase I rewrote a job of the IMAP resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code making it easier to write, read, and debug.
And I think it’s past time that we build proper tools.


A new folder subscription system

Wouldn’t it be great if Kontact allowed you to select a set of folders you’re interested in, if that setting were automatically respected by all your devices, and if you were still able to control for each individual folder whether it should be visible and available offline?

I’ll outline a system that allows you to achieve just that in a groupware environment. I’ll take Kolab and calendar folders as an example, but the concept applies to all groupware systems and is just as applicable to email or other groupware content.

User Scenarios

  • Anna has access to hundreds of shared calendars, but she usually only uses a few selected ones. She therefore has only a subset of the available calendars enabled; these are shown in the calendar selection dialog, available for offline usage, and synchronized to her mobile phone. If she realizes she no longer requires a calendar, she simply disables it and it disappears from Kontact, the web client and her phone.
  • Joe works with a small team that shares their calendars with him. Usually he only uses the shared team calendar, but sometimes he wants to quickly check if his teammates are in the office before calling them, and he’s often doing this on the train with an unreliable internet connection. He therefore disables the team members’ calendars but still enables synchronization for them. This hides the calendars from all his devices, but he can still quickly enable them on his laptop while being offline.
  • Fred has a mailing list folder that he always reads on his mobile, but never on his laptop. He keeps the folder enabled, but hides it on his laptop so his folder list isn’t unnecessarily cluttered.

What these scenarios tell us is that we need a flexible mechanism to specify the folders we want to see and the folders we want synchronized. Additionally, in today’s world where we have multiple devices, we want to synchronize the selection of folders that are important to us: if I have just enabled a calendar in Kontact, it is likely that I’d like to see it on my phone as well. However, we always want to keep the possibility to alter that default setting on specific devices.

Current State

If you’re using a Kolab server, you can use IMAP subscriptions to control what folders you want to see on your devices. Kontact currently respects that setting in that it makes all subscribed folders visible and available for offline usage. Additionally you have local subscriptions to disable certain folders (so they are not downloaded or displayed) on a specific device. That is not very flexible though, and personally I ended up having pretty much all folders enabled that I ever used, leading to cluttered folder selections and lots of bandwidth and storage space used to keep everything available offline.

To change the subscription state, KMail offers to open the IMAP subscription dialog, which allows you to toggle the subscription state of individual folders. This works, but it is not well integrated (it’s a separate dialog), and it is also hard to integrate elsewhere since it’s IMAP-specific.

Because the solution is not well integrated, it tends to be rather static in my experience. I tend to subscribe to all folders that I ever use, which results in a very long and cluttered folder list.

A new integrated subscription system

What would be much better is if the back-end could provide a default setting that is synchronized to the server, and we could quickly enable or disable folders as we require them. Additionally we could override the default settings for each individual folder to optimize our setup as required.

To make the system more flexible, while not unnecessarily complex, we need a per-folder setting that allows overriding a backend-provided default value. Additionally we need an interface for applications to alter the subscription state through Akonadi (instead of bypassing it). This allows for a well-integrated solution that doesn’t rely on a separate, IMAP-specific dialog.

Each folder requires the following settings:

  • An enabled/disabled state that provides the default value for synchronizing and displaying a folder.
  • An explicit preference to synchronize a folder.
  • An explicit preference to make a folder visible.

A folder is visible if either:

  • There is an explicit preference that the folder is visible.
  • There is no explicit preference on visibility and the folder is enabled.

A folder is synchronized if either:

  • There is an explicit preference that the folder is synchronized.
  • There is no explicit preference on synchronization and the folder is enabled.
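
Expressed as code, the resolution of these two rules could look like the following minimal sketch (the type and names are made up for illustration; this is not an existing Akonadi API):

enum class Preference { NoPreference, Yes, No };

struct FolderSettings {
    bool enabled;             // default state, synchronized by the backend
    Preference visible;       // explicit, local-only override
    Preference synchronized;  // explicit, local-only override
};

bool isVisible(const FolderSettings &f) {
    if (f.visible != Preference::NoPreference)
        return f.visible == Preference::Yes;   // explicit preference wins
    return f.enabled; // otherwise fall back to the enabled/disabled state
}

bool isSynchronized(const FolderSettings &f) {
    if (f.synchronized != Preference::NoPreference)
        return f.synchronized == Preference::Yes;
    return f.enabled;
}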

The resource backend can synchronize the enabled/disabled state, which should give the default experience one would expect. Additionally it is possible to override that default state using the explicit preferences on a per-folder level.

User Interaction

By default you would be working with the enabled/disabled state that is synchronized by the resource backend. If you enable a folder it becomes visible and synchronized; if you disable it, it becomes invisible and not synchronized. For the enabled/disabled state we can build a very simple user interface, as it is a single boolean state that we can integrate into the primary UI.

Because the enabled/disabled state is synchronized, an enabled calendar will automatically appear on your MyKolab.com web interface and your mobile. One click, and you’re all set.

Example mockup of folder sync properties

In the advanced settings, you can then override visibility and synchronization preference at will as a local-only setting, giving you full flexibility. This can be hidden in a properties dialog, so it doesn’t clutter the primary UI.

This makes the default use case very simple (you either want a folder or you don’t), while we keep full flexibility in overriding the default behaviour.

IMAP Synchronization

The IMAP resource will synchronize the enabled/disabled state with IMAP subscriptions if you have subscriptions enabled in the resource. This way we can use the enabled/disabled state as the interface to change subscriptions, and don’t have to use a separate dialog to toggle that state.

Interaction with existing mechanisms

This mechanism can probably replace local subscriptions eventually. However, in order not to break existing setups, I plan to leave local subscriptions working as they currently do.

Conclusion

By implementing this proposal we get the required flexibility to make sure the resources of our machine are optimally used, while different clients still interact with each other as expected. Additionally we gain a uniform interface to enable/disable a collection that can be synchronized by backends (e.g. using the IMAP subscription state). This will allow applications to nicely integrate this setting, and should therefore make this feature a lot easier to use and overall more agile.

This also opens new doors, as it will enable us to do on-demand loading of folders. By having the complete folder list available locally (but disabled by default and thus hidden), we can use the collections to load their content temporarily and on demand. Want to quickly look at that shared calendar you don’t have enabled? Simply search for it and have a quick look; the data is synchronized on demand, and the folder is gone as quickly as you found it once it is no longer required. This further diminishes the need to have folders constantly cluttering your folder list.

So, what do you think?


Kontact-Nepomuk Integration: Why data from Akonadi is indexed in Nepomuk

So Akonadi is already a “cache” for your PIM data, and now we’re trying hard to feed all that data into a second “cache” called Nepomuk, just for some searching? We clearly must be crazy.

The process of keeping these two caches in sync is not entirely trivial, storing the data in Nepomuk is rather expensive, and obviously we’re duplicating all data. Rest assured we have our reasons though.

  • Akonadi handles the payload of items stored in it transparently, meaning it has no idea what it is actually caching (apart from some hints such as mimetypes). While that is a very good design decision (great flexibility), it has the drawback that we can’t really search for anything inside the payload (because we don’t know what we’re searching through, where to look, etc.).
  • The solution to the searching problem is of course building an index, which is a cache of all data optimized for searching. It essentially structures the data in a way that content->item lookups become fast (while normal usage does this the other way round); a tiny sketch of the idea follows below. That already means duplicating all your data (more or less), because we’re trading disk space and memory for searching speed. And Nepomuk is what we’re using as the index for that.
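
As a tiny illustration of that inversion (plain standard containers; this is not how Nepomuk actually stores the data):

#include <map>
#include <set>
#include <string>
#include <vector>

using ItemId = int;

// content -> items: the inverted direction that makes searching fast
std::map<std::string, std::set<ItemId>> invertedIndex;

void indexItem(ItemId id, const std::vector<std::string> &words) {
    for (const std::string &word : words)
        invertedIndex[word].insert(id); // duplicates the content, keyed by word
}

std::set<ItemId> search(const std::string &word) {
    const auto it = invertedIndex.find(word);
    return it != invertedIndex.end() ? it->second : std::set<ItemId>();
}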

Now, there would of course be simpler ways to build an index for searching than using Nepomuk, but Nepomuk provides far more opportunities than just a simple, text-based index, allowing us to build awesome features on top of it, while the latter would essentially be a dead end.

To build that cache we’re doing the following:

  • analyze all items in Akonadi
  • split them up into individual parts such as (for an email example): subject, plaintext content, email addresses, flags
  • store that separated data in Nepomuk in a structured way

This results in networks of data stored in Nepomuk:

PersonA [hasEMailAddress] addressA
PersonA [hasEMailAddress] addressB
emailA [hasSender] addressA
emailB [hasSender] addressB

So this “network” relates emails to email addresses, email addresses to contacts, and contacts to actual persons, and suddenly you can ask the system for all emails from a person, no matter which of the person’s email addresses was used in the mails. Of course we can add to that IM conversations with the same person, or documents you exchanged during that conversation, … the possibilities are almost endless.
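
To make that concrete, here is how such a join answers “all emails from PersonA” (an illustration with plain containers, not the actual Nepomuk query API):

#include <iostream>
#include <map>
#include <set>
#include <string>

int main()
{
    // PersonA [hasEMailAddress] addressA / addressB
    const std::multimap<std::string, std::string> hasEMailAddress = {
        {"PersonA", "addressA"}, {"PersonA", "addressB"}};
    // emailA [hasSender] addressA, emailB [hasSender] addressB
    const std::multimap<std::string, std::string> hasSender = {
        {"emailA", "addressA"}, {"emailB", "addressB"}};

    // First collect all addresses of PersonA...
    std::set<std::string> addresses;
    const auto range = hasEMailAddress.equal_range("PersonA");
    for (auto it = range.first; it != range.second; ++it)
        addresses.insert(it->second);

    // ...then find every email sent from any of those addresses.
    for (const auto &mail : hasSender)
        if (addresses.count(mail.second))
            std::cout << mail.first << "\n"; // emailA, emailB
}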

Based on that information, much more powerful interfaces can be written. For instance, one could write a communication tool which no longer really cares which communication channel you’re using and dynamically mixes IM and email, depending on whether/where the other person is currently available for a chat or would rather receive a mail that can be read later on, and does so without splitting the conversation across various mail/chat interfaces.
This is of course just one example of many (nor am I claiming the idea; it’s just a nice example of what is possible).

So that’s basically why we took the difficult route for searching (at least, that is why I am working on this).

Now, we’re not quite there yet, but we already start to see the first fruits of our labor:

  • KMail can now automatically complete addresses from all emails you have ever received
  • Filtering in KMail does fulltext searching, making it a lot easier to find old conversations
  • The KPeople library already uses this data for contact merging, which will result in a much nicer addressbook
  • And of course having the data available in Nepomuk enables other developers to start working with it

I’ll follow up on this post with some more technical background on how the feeders work, and possibly some information on the problematic areas from a client perspective (such as the address auto-completion in KMail).


On minimalistic text editors

I think I already mentioned that I’m quite fond of minimalistic UIs and text editors with an unobtrusive interface.
Recently I stumbled upon FocusWriter (http://gottcode.org/focuswriter/), which is now by far my favorite app for writing. It’s awesome how nice it is to work with a tool that eliminates all distraction.
I only wish all KDE applications had such a mode, where fullscreen really means fullscreen. Imagine how awesome this would be in KMail or, even better, KDevelop. KDevelop especially is, while an awesome IDE, just way too cluttered so far. Maybe the Kate devs eventually get around to implementing a real fullscreen mode =)

Until then I’ll stick to copy-pasting from FocusWriter, or to what I created for Zanshin.


MindMirror/Zanshin

You might have noticed that not a lot has happened recently regarding MindMirror, my little notetaking/todo-management application. This was partially due to the fact that I was occupied with things like the Akonadi-Nepomuk feeders (after two months of holidays and exams), on which I rely in MindMirror, but also because I started some lengthy discussions with Kevin Ottens from the Zanshin team about a cooperation between MindMirror and Zanshin.

Fortunately he was also attending the PIM sprint, which allowed for another brainstorming session, and it turns out that our ideas align so well that I decided to stop working on MindMirror and focus my development time on Zanshin instead. Of course that was a difficult decision, since it is quite a bit of work to integrate my work from MindMirror into Zanshin, and because I no longer have full control over the project. On the other hand they have done really good work on Zanshin so far, and it only makes sense to work together since we’re trying to build essentially the same application. This way I want to ensure that the project is a bit more future-proof (with a community instead of a single developer), and that no (scarce) development time goes to waste.

Most of my work on MindMirror can be reused in Zanshin, and in any case I needed this project to develop the idea of the application. Kind of a hands-on brainstorming, so no regrets here =)
As a first step I’m going to integrate the notetaking into Zanshin, so I hope to release the notetaking part with the next Zanshin release. Today I hacked together a first crude version, which is already functional, but as you can see there’s still some work to do.

I’m really looking forward to finally getting a releasable version of what I started with project MindMirror, and hope I made the right call in killing my project before the first release in favor of another one ;-)


PIM developer sprint

Last weekend we had a PIM developer sprint in Berlin, with the aim of giving KMail a little boost. The sprint was hosted by KDAB, and my employer (Kolab Systems) took care of my expenses, so there was nothing keeping me from joining =)

While many concentrated on squashing KMail/Akonadi bugs, I focused on yet another rewrite of the Akonadi-Nepomuk feeders, which are supposed to make the information stored in Akonadi available in Nepomuk.

Previously we had an agent for each mimetype, which made it difficult to control the resources used by the agents to index the data. To improve this situation we decided on a plugin-based architecture, where we can write plugins to index a certain mimetype, but the indexing is done by a single agent. This gives us much better control of the resources used, so the indexing of emails doesn’t bring down your whole system. It also allows us to index various items at different priorities. For instance, the initial indexing of all your email (which can take rather long) has a lower priority than an item you just changed. This is important so that applications can rely on the feeder to bring changed items into Nepomuk within reasonable time, so they are, for example, retrieved by full-text search.
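
The scheduling idea in a nutshell, as a sketch with hypothetical names (this is not the actual feeder code):

#include <iostream>
#include <queue>
#include <string>
#include <vector>

enum Priority { InitialIndexing = 0, ChangedItem = 1 };

struct IndexTask {
    Priority priority;
    std::string itemId;
};

struct ByPriority {
    bool operator()(const IndexTask &a, const IndexTask &b) const {
        return a.priority < b.priority; // highest priority comes out first
    }
};

int main()
{
    std::priority_queue<IndexTask, std::vector<IndexTask>, ByPriority> queue;
    queue.push({InitialIndexing, "mail-1"}); // backlog of initial indexing
    queue.push({ChangedItem, "mail-42"});    // item the user just changed
    queue.push({InitialIndexing, "mail-2"});

    // The changed item jumps ahead of the initial-indexing backlog.
    while (!queue.empty()) {
        std::cout << "indexing " << queue.top().itemId << "\n";
        queue.pop();
    }
}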

The first version has already landed in master, but some features of the old agent are still missing, such as the indexing of email attachments. However, I think the new architecture is a real improvement over the old one and should give us a much better situation than before.

Just need to work out the last few kinks…


Updates from MindMirror

From time to time I need a little feature for my own motivation, especially after spending quite some time on the somewhat boring implementation of data models.

So this time I chose the fullscreen editor as my little feature, which allows you to use the whole screen to write some text. I used it, for example, to draft this blog post, and it gives me exactly what I need (a text editor) and no distractions.

The mode can easily be toggled with a shortcut, which allows going back and forth in a snap.
Just after I implemented this, I stumbled upon this neat little tool (http://www.golem.de/1105/83651.html, or just google iA Writer), which is unfortunately Mac-only.
However, I like the approach of the minimalistic UI and the focus mode, which always highlights the latest sentence. Also the auto-markup looks like a good way of writing structured text without spending too much time on the layout.
Overall I believe they did a very good job of stripping down an application to the essentials for a use case, and I think this would make for some nice additions to the KDE text editor components, which are also used by MindMirror.
Limiting the text to an area in the middle of the screen is also something I want to add to MindMirror; otherwise the lines get very long in fullscreen mode, and/or you’re stuck on the left half of your monitor.

Also in MindMirror I tried to strip down the UI a bit:
It is now possible to hide the toolbar, which clutters the UI quite a bit and is not essential for everyone.
In fullscreen mode, where the toolbar is normally shown on top, you can now get a completely white screen, which I really like for writing.
Further, I replaced the toolbox at the bottom of the editor component with a custom one, which allows collapsing all boxes instead of one always being open. As a side effect, the resizing of the toolbox now works properly, meaning no space is wasted anymore.
I’m now relatively happy with the editor part of the UI (except for the edit buttons next to title and due date, etc.), but I’m sure there is still a lot to improve.


The control pane on the left, on the other hand, is nowhere near where I’d like to have it, and really bad looking. I find it somewhat difficult to get into shape though.
One thing that really bugs me is the greyish look of almost all UIs. I’m not aware of a remedy though that doesn’t break with the KDE style or use lots of white boxes, which doesn’t look much better either.
If you have some ideas for the current UI, or know of techniques to alter the look of KDE applications, please tell me.

Apart from the UI bits, the next steps on the way to a first releasable version are a rewrite of the Akonadi Nepomuk feeders and fixing the KReparentingProxyModel so the todo hierarchy works. A search view which uses the relevance of the matches to sort the items is also in the works.

I’m on holiday for the next few weeks, and afterwards I’ll have my exams, so don’t expect too much activity from my side. But after that I will get all parts into releasable shape, to make sure there is a decent release ready for KDE 4.8.

On a side note: I just took up part-time employment (next to my studies) with Kolab Systems, which means I will earn my money with open source software from now on!
About as awesome as it gets =)
