Progress on the prototype for a possible next version of akonadi

Ever since we introduced our ideas for the next version of akonadi, we’ve been working on a proof of concept implementation, but we haven’t talked a lot about it. I’d therefore like to give a short progress report.

By choosing decentralized storage and a key-value store as the underlying technology, we first need to prove that this approach can deliver the desired performance with all pieces of the infrastructure in place. I think we have mostly reached that milestone by now. The new architecture is very flexible and looks promising so far. IMO we managed quite well to keep the levels of abstraction to the necessary minimum, which results in a system that is easily adjusted as new problems need to be solved and feels very controllable from a developer perspective.

We started off by implementing the full stack for a single resource and a single domain type. For this we developed a simple dummy resource that currently uses an in-memory hash map as backend and can only store events. This is a sufficient first step, as turning it into the full solution is a matter of adding further flatbuffer schemas for other types and defining the indexes necessary to query what we want to query. By only working on a single type we can first carve out the necessary interfaces and make sure that the effort required to add new types is minimal, thus maximizing code reuse.
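
To make that more concrete, here is a minimal sketch of what such an in-memory backend with a secondary index could look like (purely illustrative, not the actual prototype code; all names are made up):

// Entities are stored as opaque byte buffers (e.g. flatbuffers) keyed by id,
// with a secondary index maintained alongside them for queries.
#include <QByteArray>
#include <QHash>
#include <QMultiHash>
#include <QVector>

class DummyStore
{
public:
    void write(const QByteArray &id, const QByteArray &serializedEntity,
               const QByteArray &indexKey)
    {
        mEntities.insert(id, serializedEntity);
        mIndex.insert(indexKey, id); // keep the secondary index up to date
    }

    // Return all entities matching an index key (e.g. a date bucket).
    QVector<QByteArray> query(const QByteArray &indexKey) const
    {
        QVector<QByteArray> result;
        foreach (const QByteArray &id, mIndex.values(indexKey)) {
            result.append(mEntities.value(id));
        }
        return result;
    }

private:
    QHash<QByteArray, QByteArray> mEntities;   // id -> serialized entity
    QMultiHash<QByteArray, QByteArray> mIndex; // index key -> entity id
};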

The design we’re pursuing, as presented during the pim sprint, consists of:

  • A set of resource processes
  • A store per resource, maintained by the individual resources (there is no central store)
  • A set of indexes maintained by the individual resources
  • A client API that knows how to access the store and how to talk to the resources through a plugin provided by the resource implementation.

By now we can write to the dummy resource through the client API: the resource internally queues the new entity, updates its indexes and writes the entity to storage. On the reading side we can execute simple queries against the indexes and retrieve the found entities. The synchronizer process can meanwhile also generate new entities, so client and synchronizer can write concurrently to the store. We can therefore do the full write/read roundtrip, meaning we have the most fundamental requirements covered. Still missing are operations other than creating new entities (removal and modification), and the writeback to the source by the synchronizer. But that’s just a matter of completing the implementation (we have the design).

To the numbers: Writing from the client is currently implemented in a very inefficient way and it’s trivial to improve this drastically, but in my latest test I could already write ~240 (small) entities per second. Reading works at around 40k entities per second (in a single query), including the lookup on the secondary index. The upper limit of what the storage itself can achieve (on my laptop) is 30k entities per second for writing and 250k entities per second for reading, so there is room for improvement =)

Given that design and performance look promising so far, the next milestone will be to refactor the codebase so that new resources can be added with ease, and to make sure all the necessary facilities (such as a proper logging system), or at least stubs thereof, are available.

I’m writing this on a plane to Singapore, which we’re using as a gateway to Indonesia to chase waves and volcanoes for the next few weeks, but after that I’m looking forward to going full steam ahead with what we started here. I think it’s going to be something cool =)


On Domain Models and Layers in kdepim

In our current kdepim code we use some classes throughout the codebase. I’m going to outline the problems with that and propose how we can do better.

The Application Domain

Each application has a “domain” it was created for. KOrganizer, for instance, has the calendar domain, and KMail the email domain, and each of those domains can be described with domain objects, which make up the domain model. The domain model of an application is essential, because it defines how we can represent the problems of that domain. If KOrganizer didn’t have a domain model with attendees for events, we wouldn’t have any way to represent attendees internally, and thus couldn’t develop a feature based on that.

The logic implementing the functionality on top of those domain objects is the domain logic. It implements for instance what has to happen if we remove an event from a calendar, or how we can calculate all occurrences of a recurring event.

In the calendaring domain we use KCalCore to provide many of those domain objects and a large part of the domain logic. KCalCore::Event, for instance, represents an event, can hold all the necessary data of that event, and has the domain logic built in to calculate recurrences.
Since it is a public library, it provides domain objects and domain logic for all calendaring applications, which is awesome, right? Only if you use it right.
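
To illustrate what “domain logic built into the domain object” means, here is a small example of expanding a recurrence with KCalCore (a simplified illustration; exact API details may differ between KCalCore versions):

#include <KCalCore/Event>
#include <KDateTime>

// The same object is the data container and carries the calendaring logic.
KCalCore::Event::Ptr event(new KCalCore::Event());
event->setSummary(QLatin1String("Weekly meeting"));
event->setDtStart(KDateTime(QDate(2014, 1, 6), QTime(10, 0)));
event->recurrence()->setWeekly(1); // repeat every week

// Ask the domain object itself for all occurrences in January.
const KCalCore::DateTimeList occurrences =
    event->recurrence()->timesInInterval(KDateTime(QDate(2014, 1, 1)),
                                         KDateTime(QDate(2014, 2, 1)));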

KCalCore

In addition to the containers and the calendaring logic, KCalCore also provides serialization to the iCalendar format, which is why it more or less tries to adhere to the iCalendar RFC, for both representation and interpretation of calendaring data. This is of course very useful for applications that deal with that format, and there’s nothing particularly wrong with it. One could argue that serialization and interpretation of calendaring data should be split up, but since both are described by the same RFC, I think it makes a lot of sense to keep the implementations together.

Coupling

A problem arises when classes like KCalCore::Event are used as domain objects, as the interface to the storage layer, and as the actual storage format, which is precisely what we do in kdepim.

The problem is that we introduce very high coupling between those components/layers, and by choosing a library that adheres to an RFC, the whole system is locked down by a fully grown specification. I suppose that would be fine if only one application were using the storage layer, and that application’s sole purpose were to implement exactly that RFC and nothing else, ever. In all other cases I think it is a mistake.

Domain Logic

The domain logic of an application has to evolve with the application by definition. The domain objects used for that are supposed to model the problem at hand precisely, in a way that allows domain logic to be built that is easy to understand and to evolve as requirements change. Properties that are not used by an application only hide the important bits of a domain object, and if a new feature is added it must be possible to adjust the domain object to reflect that. By using a class like KCalCore::Event as the domain object, these adjustments become largely impossible.

The consequence is that we employ workarounds everywhere. KCalCore doesn’t provide what you need? Simply store it as a “custom property”. We don’t have a class for calendars? Let’s use Akonadi::Collection with some custom attributes. Mechanisms have been designed to extend these rigid structures so we can at least work with them, but that has only led to more complex code that is ever harder to understand.

Instead we could write domain logic that expresses precisely what we need, and is easier to understand and maintain.

Zanshin, for instance, took the calendaring domain and applied the Getting Things Done (GTD) methodology to it. It takes a rather simple approach to todos, initially requiring only a description, a due date and a state. However, it introduced the notion that only “projects” can have sub-todos. This restriction needs to be reflected in the domain model and implemented in the domain logic.
Because there are no projects in KCalCore, it was simply decided that todos with a magic property “X-Project” are treated as projects. There’s nothing wrong with that in itself, but you don’t want to litter your code with “if (todo->hasProperty(X-Project))”. So what do you do? You create a wrapper. And that wrapper is already your new domain object with a nice interface. Kevin fortunately realized that we can do better, and rewrote Zanshin with its own custom domain objects that simply interface with the KCalCore containers in a thin translation layer to akonadi. This made the code much clearer, and keeps those “x-property” workarounds in one place only.
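
A sketch of what such a wrapper might look like (the names are made up; this is not the actual Zanshin code):

#include <KCalCore/Todo>
#include <QString>

// A thin domain object that hides the "X-Project" workaround in one place.
class Project
{
public:
    explicit Project(const KCalCore::Todo::Ptr &todo) : mTodo(todo) {}

    // The magic-property check lives here and nowhere else.
    static bool isProject(const KCalCore::Todo::Ptr &todo)
    {
        return !todo->nonKDECustomProperty("X-Project").isEmpty();
    }

    QString name() const { return mTodo->summary(); }

private:
    KCalCore::Todo::Ptr mTodo;
};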

Layering

A useful way to think about application architecture is, IMO, layers. It’s not a silver bullet, and shouldn’t be applied too excessively I think, but in some cases layers do make a lot of sense. I suggest thinking about the following layers:

  • The presentation layer: Displays stuff and nothing else. This is where you expose your domain model to the UI, and where your QML sits.
  • The domain layer: The core of the application. This is where all the useful magic happens.
  • The data access layer: A thin translation layer between domain and storage. It makes it possible to use the same storage layer from multiple domains and to replace the storage layer without replacing all the rest.
  • The storage layer: The layer that persists the domain model. Akonadi.

By keeping these layers in mind we can do a better job at keeping the coupling at a reasonable level, allowing individual components to change as required.

The presentation layer is required in any case if we want to move to QML. With QML we can no longer have half of the domain logic in the UI code, and most of the domain model should probably be exposed as a model that is directly accessible by QML.

The data access layer is where akonadi provides a standardized interface for all data, so multiple applications can share the same storage layer. This is currently made up of e.g. KCalCore for calendars, the akonadi client API, and a couple of akonadi objects, such as Akonadi::Item and Akonadi::Collection. As this layer defines what data can be accessed by all applications, it needs to be flexible and will likely have to evolve frequently.

The way forward

For akonadi’s client API, aka the data access layer, I plan on defining a set of interfaces for things like calendars, events, mail folders, emails, etc. This should eventually replace KCalCore, KContacts and friends as the canonical interface to the data.

Applications should eventually move to their own domain logic implementation. For reading and structuring data, models are IMO a suitable tool, and if we design them right this will also pave the way for QML interfaces. Of course, e.g. KCalCore still has its uses for its calendaring routines, or as a serialization library to create iTip messages, but we should IMO stop using it for everything. The same of course applies to KContacts.

What we still could do, IMO, is share some domain logic between applications, including some domain objects. A KDEPIM::Domain::Contact could be used across applications, just like KContacts::Addressee was. This keeps different applications from implementing the same logic, but of course also reintroduces coupling between them.

What IMO has to stay separate is the data access layer, which implements an interface to the storage layer, and that doesn’t necessarily conform to the domain layer (you could e.g. store “blog posts” as notes in storage). This separation is IMO useful, as I expect the application domain to evolve separately from what actual storage backends provide (see Zanshin).

This is of course quite a chunk of work that won’t happen at once. But we need to know now where we want to end up in a couple of years, if we intend to ever get there.


Putting the code where it belongs

I have been working on better ways to write asynchronous code. In this post I’m going to analyze one of our current tools, KJob, looking at how it helps us write asynchronous code and what is missing. I’m then going to present my prototype solution to address these problems.

KJob

In KDE we have the KJob class to wrap asynchronous operations. KJob gives us a framework for progress and error reporting, a uniform start method, and by subclassing it we can easily write our own reusable asynchronous operations. Such an asynchronous operation typically takes a couple of arguments and returns a result.

A KJob, in its simplest form, is the asynchronous equivalent of a function call:

int doSomething(int argument) {
    return getNumber(argument);
}

struct DoSomething : public KJob {
    Q_OBJECT
public:
    DoSomething(int argument) : mArgument(argument) {}

    void start() {
        KJob *job = getNumberAsync(mArgument);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onJobDone(KJob*)));
        job->start();
    }

    int mResult;
    int mArgument;

private slots:
    void onJobDone(KJob *job) {
        // Extract the result from the concrete job type
        // (a hypothetical GetNumberJob returned by getNumberAsync()).
        mResult = static_cast<GetNumberJob*>(job)->result();
        emitResult();
    }
};

What you’ll notice immediately is that this involves a lot of boilerplate code. It also introduces a lot of complexity into a seemingly trivial task. This is partially because we have to create a class when we actually wanted a function, and partially because we have to use class members to replace variables on the stack, which we don’t have available during an asynchronous operation.

So while KJob gives us a tool to wrap asynchronous operations in a way that makes them reusable, it comes at the cost of quite a bit of boilerplate code. It also means that what can be written synchronously in a simple function requires a class when the same code is written asynchronously.

Inversion of Control

A typical operation is of course slightly more complex than doSomething, and often consists of several (asynchronous) operations itself.

What in imperative code looks like this:

int doSomethingComplex(int argument) {
    return operation2(operation1(argument));
}

…results in an asynchronous operation that is scattered over multiple result handlers somewhat like this:

...
void start() {
    KJob *job = operation1(mArgument);
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation1Done(KJob*)));
    job->start();
}

void onOperation1Done(KJob *operation1Job) {
    KJob *job = operation2(operation1Job->result());
    connect(job, SIGNAL(result(KJob*)), this, SLOT(onOperation2Done(KJob*)));
    job->start();
}

void onOperation2Done(KJob *operation2Job) {
    mResult = operation2Job->result();
    emitResult();
}
...

We are forced to split the code over several functions due to the inversion of control introduced by handler-based asynchronous programming. Unfortunately these additional functions (the handlers) that we are now forced to use don’t help the program structure in any way. This also manifests itself in the rather useless function names, which typically follow a pattern such as on”$Operation”Done() or similar. Further, because the code is scattered over functions, values that would be available on the stack in a synchronous function have to be stored explicitly as class members, so they are available in the handler where they are required for a further step.

The traditional way to make code easy to comprehend is to split it up into functions that are then called by a higher-level function. This kind of function composition is no longer possible with asynchronous programs using our current tools. All we can do is chain handler after handler. Due to the lack of this higher-level function that composes the functionality, a reader is also forced to read every single line of the code, instead of simply skimming the function names and only drilling deeper if more detailed information about the inner workings is required.
Since we are no longer able to structure the code in a useful way using functions, only classes, and in our case KJobs, are left to structure the code. However, creating subjobs is a lot of work when all you need is a function, and while it helps the structure, it scatters the code even more, making it potentially harder to read and understand. Because of this we also often end up with large and complex job classes.

Last but not least, we lose all the usual control structures to the inversion of control. If you write asynchronous code you don’t have the ifs, fors and whiles available that are fundamental to writing code. Well, obviously they are still there, but you can’t use them as usual because you can’t plug a complete asynchronous operation inside an if{}-block. The best you can do is initiate the operation inside the imperative control structures and deal with the results later on in handlers. Because we need control structures to build useful programs, these are usually emulated by building complex state machines where each function depends on the current class state. A typical (anti)pattern of that kind is a for loop creating jobs, with a decreasing counter in the handler to check whether all jobs have been executed. These state machines greatly increase the complexity of the code, are highly error prone, and make larger classes incomprehensible without drawing complex state diagrams (or simply staring at the screen long enough while tearing your hair out).
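
To make that concrete, here is a sketch of that counter-based pattern (the job and member names are made up):

...
void start() {
    // Emulate a "for" loop over asynchronous operations.
    mPendingJobs = mItems.size();
    for (const Item &item : mItems) {
        KJob *job = processItem(item);
        connect(job, SIGNAL(result(KJob*)), this, SLOT(onItemProcessed(KJob*)));
        job->start();
    }
}

void onItemProcessed(KJob*) {
    // Class state replaces the loop variable: count down until all jobs are done.
    if (--mPendingJobs == 0) {
        emitResult();
    }
}
...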

Oh, and before I forget, of course we also no longer get any useful backtraces from gdb as pretty much every backtrace comes straight from the eventloop and we have no clue what was happening before.

As a summary, inversion of control causes:

  • code is scattered over functions that are not helpful to the structure
  • composing functions is no longer possible, since what would normally be written in a function is written as a class.
  • control structures are not usable; a state machine is required to emulate them.
  • backtraces become mostly useless

As an analogy, your typical asynchronous class is the functional equivalent of a single synchronous function (often over 1000 LOC!) that uses gotos and some local variables to build control structures. I think it’s obvious that this is a pretty bad way to write code, to say the least.

JobComposer

Fortunately, with C++11 we received a new tool: lambda functions.
Lambda functions allow us to write functions inline with minimal syntactical overhead.

Armed with this I set out to find a better way to write asynchronous code.

A first obvious solution is to simply write the result handler as a lambda function instead of a slot, which would allow us to write code like this:

make_async(operation1(), [] (KJob *job) {
    //Do something after operation1()
    make_async(operation2(job->result()), [] (KJob *job) {
        //Do something after operation2()
        ...
    });
});

It’s a simple and concise solution; however, you can’t really build reusable building blocks (like functions) with it. You’ll get one nested tree of lambdas that depend on each other by accessing the results of the previous jobs. What makes this solution non-composable is that the lambda function we pass to make_async starts the asynchronous task, but also extracts results from the previous job. Therefore you couldn’t, for instance, return an async task containing operation2 from a function (because in the same line we extract the result of the previous job).

What we require instead is a way of chaining asynchronous operations together, while keeping the glue code separated from the reusable bits.

JobComposer is my proof of concept to help with this:

class JobComposer : public KJob
{
    Q_OBJECT
public:
    //KJob start function
    void start();

    //This adds a new continuation to the queue
    void add(const std::function<void(JobComposer&, KJob*)> &jobContinuation);

    //This starts the job, and connects to the result signal. Call from continuation.
    void run(KJob*);

    //This starts the job, and connects to the result signal. Call from continuation.
    //Additionally an error case continuation can be provided that is called in case of error, and that can be used to determine whether further continuations should be executed or not.
    void run(KJob*, const std::function<bool(JobComposer&, KJob*)> &errorHandler);

    //...
};

The basic idea is to wrap each step in a lambda function that issues the asynchronous operation. Each such continuation (the lambda function) receives a pointer to the previous job, from which it can extract results.

Here’s an example how this could be used:

auto task = new JobComposer;
task->add([](JobComposer &t, KJob*){
    KJob *op1Job = operation1();
    t.run(op1Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString()
    });
});
task->add([](JobComposer &t, KJob *job){
    KJob *op2Job = operation2(static_cast<Operation1*>(job)->result());
    t.run(op2Job, [](JobComposer &t, KJob *job) {
        kWarning() << "An error occurred: " << job->errorString()
    });
});
task->add([](JobComposer &t, KJob *job){
    kDebug() << "Result: " << static_cast<Operation2*>(job)->result();
});
task->start();

What you see here is the equivalent of:

int tmp = operation1();
int res = operation2(tmp);
kDebug() << res;

There are several important advantages of using this over writing traditional asynchronous code using only KJob:

  • The code above, which would normally be spread over several functions, can be written within a single function.
  • Since we can write all code within a single function, we can compose functions again. The JobComposer above could be returned from another function and integrated into another JobComposer (see the sketch after this list).
  • Values that are required for a certain step can either be extracted from the previous job, or simply captured in the lambda functions (no more passing of values as members).
  • You only have to read the start() function of a job that is written this way to get an idea what is going on. Not the complete class.
  • A “backtrace” functionality could be built into JobComposer that would allow getting useful information about the state of the program even though we’re in the eventloop.
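
As a sketch of the composition mentioned above (reusing the hypothetical operation1/operation2 jobs from the earlier examples), a JobComposer can be returned from a function and run as a single step of another JobComposer:

JobComposer *fetchAndProcess(int argument)
{
    auto task = new JobComposer;
    task->add([argument](JobComposer &t, KJob*) {
        t.run(operation1(argument));
    });
    task->add([](JobComposer &t, KJob *job) {
        t.run(operation2(static_cast<Operation1*>(job)->result()));
    });
    return task;
}

// Since a JobComposer is itself a KJob, the returned composer is just
// another job that can become a step of a larger composition:
auto outer = new JobComposer;
outer->add([](JobComposer &t, KJob*) {
    t.run(fetchAndProcess(42));
});
outer->start();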

This is of course only a rough prototype, and I’m sure we can craft something better. But at least in my experiments it proved to work very nicely.
What I think would be useful as well are a couple of helper jobs that replace the missing control structures, such as a ForeachJob that triggers a continuation for each result, or a job that executes tasks in parallel (instead of serially, as JobComposer does).

As a little showcase I rewrote a job of the imap resource.
You’ll see a bit of function composition, a ParallelCompositeJob that executes jobs in parallel, and you’ll notice that only relevant functions are left and all class members are gone. I find the result a lot better than the original, and the refactoring was trivial and quick.

I’m quite certain that if we build these tools, we can vastly improve our asynchronous code, making it easier to write, read, and debug.
And I think it’s past time that we build proper tools.


A new folder subscription system

Wouldn’t it be great if Kontact allowed you to select a set of folders you’re interested in, if that setting were automatically respected by all your devices, and if you were still able to control for each individual folder whether it should be visible and available offline?

I’ll outline a system that allows you to achieve just that in a groupware environment. I’ll take Kolab and calendar folders as an example, but the concept applies to all groupware systems and is just as applicable to email or other groupware content.

User Scenarios

  • Anna has access to hundreds of shared calendars, but she usually only uses a few selected ones. She therefore only has a subset of the available calendars enabled; those are shown to her in the calendar selection dialog, available for offline usage, and also get synchronized to her mobile phone. If she realizes she no longer requires a calendar, she simply disables it and it disappears from Kontact, the web client and her phone.
  • Joe works with a small team that shares their calendars with him. Usually he only uses the shared team calendar, but sometimes he wants to quickly check whether his team members are in the office before calling them, and he’s often doing this on the train with an unreliable internet connection. He therefore disables the team members’ calendars but still enables synchronization for them. This hides the calendars from all his devices, but he can still quickly enable them on his laptop while being offline.
  • Fred has a mailing list folder that he always reads on his mobile, but never on his laptop. He keeps the folder enabled, but hides it on his laptop so his folder list isn’t unnecessarily cluttered.

What these scenarios tell us is that we need a flexible mechanism to specify the folders we want to see and the folders we want synchronized. Additionally, in today’s world where we have multiple devices, we want to synchronize the selection of folders that are important to us: it is likely that I’d like to see the calendar I have just enabled in Kontact on my phone as well. However, we always want to keep the possibility to alter that default setting on specific devices.

Current State

If you’re using a Kolab Server, you can use IMAP subscriptions to control what folders you want to see on your devices. Kontact currently respects that setting in that it makes all subscribed folders visible and available for offline usage. Additionally you have local subscriptions to disable certain folders (so they are not downloaded or displayed) on a specific device. That is not very flexible though, and personally I ended up having pretty much all folders enabled that I ever used, leading to cluttered folder selections and lots of bandwidth and storage space used to keep everything available offline.

To change the subscription state, KMail offers the IMAP subscription dialog, which allows toggling the subscription state of individual folders. This works, but is not well integrated (it’s a separate dialog), and is also hard to integrate better since it’s IMAP specific.

Because the solution is not well integrated, it tends to be rather static in my experience. I tend to subscribe to all folders that I ever use, which results in a very long and cluttered folder-list.

A new integrated subscription system

What would be much better is if the backend could provide a default setting that is synchronized to the server, and we could quickly enable or disable folders as we require them. Additionally we can override the default setting for each individual folder to optimize our setup as required.

To make the system more flexible while not unnecessarily complex, we need a per-folder setting that allows overriding a backend-provided default value. Additionally we need an interface for applications to alter the subscription state through Akonadi (instead of bypassing it). This allows for a well integrated solution that doesn’t rely on a separate, IMAP-specific dialog.

Each folder requires the following settings:

  • An enabled/disabled state that provides the default value for synchronizing and displaying a folder.
  • An explicit preference to synchronize a folder.
  • An explicit preference to make a folder visible.

A folder is visible if either:

  • There is an explicit preference that the folder is visible.
  • There is no explicit preference on visibility and the folder is enabled.

A folder is synchronized if either:

  • There is an explicit preference that the folder is synchronized.
  • There is no explicit preference on synchronization and the folder is enabled.
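
In code, the resolution of these settings could look roughly like this (a minimal sketch; the type and member names are made up):

// Hypothetical per-folder settings implementing the rules above.
enum class Preference { NoPreference, Yes, No };

struct FolderSettings
{
    bool enabled = false;                              // synchronized default state
    Preference visible = Preference::NoPreference;     // local-only override
    Preference synchronize = Preference::NoPreference; // local-only override

    bool isVisible() const
    {
        if (visible != Preference::NoPreference)
            return visible == Preference::Yes; // explicit preference wins
        return enabled;                        // otherwise fall back to the default
    }

    bool isSynchronized() const
    {
        if (synchronize != Preference::NoPreference)
            return synchronize == Preference::Yes;
        return enabled;
    }
};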

The resource backend can synchronize the enabled/disabled state, which should give a default experience as expected. Additionally, it is possible to override that default state using the explicit preferences on a per-folder level.

User Interaction

By default you would be working with the enabled/disabled state, which is synchronized by the resource backend. If you enable a folder it becomes visible and synchronized; if you disable it, it becomes invisible and is not synchronized. For the enabled/disabled state we can build a very simple user interface, as it is a single boolean state that we can integrate into the primary UI.

Because the enabled/disabled state is synchronized, an enabled calendar will automatically appear on your MyKolab.com web interface and your mobile. One click, and you’re all set.

[Image: example mockup of folder sync properties]

In the advanced settings, you can then override visibility and synchronization preference at will as a local-only setting, giving you full flexibility. This can be hidden in a properties dialog, so it doesn’t clutter the primary UI.

This makes the default use case very simple (you either want a folder or you don’t), while we keep full flexibility in overriding the default behaviour.

IMAP Synchronization

The IMAP resource will synchronize the enabled/disabled state with IMAP subscriptions if you have subscriptions enabled in the resource. This way we can use the enabled/disabled state as the interface to change subscriptions, and don’t have to use a separate dialog to toggle that state.

Interaction with existing mechanisms

This mechanism can probably replace local subscriptions eventually. However, in order not to break existing setups I plan to leave local subscriptions working as they currently are.

Conclusion

By implementing this proposal we get the required flexibility to make sure the resources of our machine are optimally used, while different clients still interact with each other as expected. Additionally we gain a uniform interface to enable/disable a collection that can be synchronized by backends (e.g. using the IMAP subscription state). This will allow applications to integrate this setting nicely, and should therefore make the feature a lot easier to use and more flexible overall.

New doors are opened as well, since this will enable us to do on-demand loading of folders. By having the complete folder list available locally (but disabled by default and thus hidden), we can use the collections to load their content temporarily and on demand. Want to quickly look at that shared calendar you don’t have enabled? Simply search for it and have a quick look; the data is synchronized on demand and the folder is gone as quickly as you found it once it is no longer required. This will further diminish the need to have folders constantly cluttering your folder list.

So, what do you think?


Kontact-Nepomuk Integration: Why data from akonadi is indexed in nepomuk

So Akonadi is already a “cache” for your PIM-data, and now we’re trying hard to feed all that data into a second “cache” called Nepomuk, just for some searching? We clearly must be crazy.

The process of keeping these two caches in sync is not entirely trivial, storing the data in Nepomuk is rather expensive, and obviously we’re duplicating all the data. Rest assured we have our reasons though.

  • Akonadi handles the payload of items stored in it transparently, meaning it has no idea what it is actually caching (apart from some hints such as mimetypes). While that is a very good design decision (great flexibility), it has the drawback that we can’t really search for anything inside the payload (because we don’t know what we’re searching through, where to look, etc.).
  • The solution to the searching problem is of course building an index, which is a cache of all data optimized for searching. It essentially structures the data so that content → item lookups become fast (while normal usage does it the other way round); a toy sketch of this idea follows below. That already means duplicating all your data (more or less), because we’re trading disk space and memory for searching speed. And Nepomuk is what we’re using as the index for that.
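
As a toy illustration of that content → item structuring (this has nothing to do with how Nepomuk actually stores things), an inverted index maps content, e.g. words, to the items containing it:

#include <QHash>
#include <QSet>
#include <QString>
#include <QStringList>

// word -> ids of the items containing that word
QHash<QString, QSet<qint64> > index;

void indexItem(qint64 itemId, const QString &text)
{
    foreach (const QString &word, text.toLower().split(QLatin1Char(' '), QString::SkipEmptyParts)) {
        index[word].insert(itemId); // store the data a second time, optimized for lookup
    }
}

// Fast content -> item lookup.
QSet<qint64> lookup(const QString &word)
{
    return index.value(word.toLower());
}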

Now, there would of course be simpler ways to build an index for searching than using Nepomuk, but Nepomuk provides many more opportunities than just a simple, text-based index, allowing us to build awesome features on top of it, while the latter would essentially be a dead end.

To build that cache we’re doing the following:

  • analyze all items in Akonadi
  • split them up into individual parts such as (for an email example): subject, plaintext content, email addresses, flags
  • store that separated data in Nepomuk in a structured way

This results in networks of data stored in Nepomuk:

PersonA [hasEMailAddress] addressA
PersonA [hasEMailAddress] addressB
emailA [hasSender] addressA
emailB [hasSender] addressB

So this “network” relates emails to email addresses, email addresses to contacts, and contacts to actual persons, and suddenly you can ask the system for all emails from a person, no matter which of the person’s email addresses was used in the mails. Of course we can add IM conversations with the same person, or documents you exchanged during those conversations, … the possibilities are almost endless.

Based on that information, much more powerful interfaces can be written. For instance, one could write a communication tool which no longer really cares which communication channel you’re using, and dynamically mixes IM and email depending on whether/where the other person is currently available for a chat or would rather receive a mail that can be read later on, all without splitting the conversation across various mail/chat interfaces.
This is of course just one example of many (nor am I claiming the idea; it’s just a nice example of what is possible).

So that’s basically why we took the difficult route for searching (At least that is why I am working on this).

Now, we’re not quite there yet, but we are already starting to get the first fruits of our labor:

  • KMail can now automatically complete addresses from all emails you have ever received
  • Filtering in KMail does fulltext searching, making it a lot easier to find old conversations
  • The KPeople library already uses this data for contact merging, which will result in a much nicer addressbook
  • And of course having the data available in Nepomuk enables other developers to start working with it

I’ll follow up on this post with some more technical background on how the feeders work, and possibly some information on the problematic areas from a client perspective (such as the address auto-completion in KMail).


On minimalistic text editors

I think I already mentioned that I’m quite fond of minimalistic UIs and text editors with a distraction-free interface.
Recently I stumbled upon FocusWriter (http://gottcode.org/focuswriter/), which is now by far my favorite app for writing. It’s amazing how nice it is to work with a tool that eliminates all distraction.
I only wish all KDE applications had such a mode, where fullscreen really means fullscreen. Imagine how awesome this would be in KMail or, even better, KDevelop. KDevelop especially, while an awesome IDE, is just way too cluttered so far. Maybe the Kate devs will eventually get around to implementing a real fullscreen mode =)

Until then I’ll stick to copy-pasting from FocusWriter, or to what I created for Zanshin.


MindMirror/Zanshin

You might have noticed that not a lot has happened recently regarding MindMirror, my little note-taking/todo-management application. This was partially because I was occupied with things like the Akonadi-Nepomuk feeders (after two months of holidays and exams), on which I rely in MindMirror, but also because I started some lengthy discussions with Kevin Ottens from the Zanshin team about a cooperation between MindMirror and Zanshin.

Fortunately he was also attending the PIM sprint, which allowed for another brainstorming session, and it turns out that our ideas align so much that I decided to stop working on MindMirror and focus my development time on Zanshin instead. Of course that was a difficult decision, since it is quite a bit of work to integrate my work from MindMirror into Zanshin and I no longer have full control over the project. On the other hand, they have done really good work on Zanshin so far, and it only makes sense to work together since we’re trying to build essentially the same application. This way I want to ensure that the project is a bit more future proof (with a community instead of a single developer), and that no (scarce) development time goes to waste.

Most of my work on MindMirror can be reused in Zanshin, and in any case I needed this project to develop the idea of the application. Kind of a hands-on brainstorming, so no regrets here =)
As a first step I’m going to integrate the note-taking into Zanshin, so I hope to release the note-taking part with the next Zanshin release. Today I hacked together a first crude version, which is already functional, but as you can see there’s still some work to do.

I’m really looking forward to finally getting a releasable version of what I started with project MindMirror, and I hope I made the right call in killing my project before its first release in favor of another one ;-)
