Kube: Views and Workflows

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todos and more. With a strong focus on usability, the team works with designers and UX experts from the ground up to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

Ever since we started working on Kube we have faced the conundrum of how to let Kube innovate and provide real additional value over what already exists, while not ending up as a purely theoretical exercise of what could be done but doesn’t really work in practice. In other words, we want to solve actual problems, but in “better” ways than what’s already out there, because otherwise, why bother?

I put “better” into quotes because this is of course subjective, but let me elaborate a bit on what I mean by that.

Traditionally, communication and organization have been dealt with using fairly disjoint tools that we as users then combine in whatever fashion is useful to us.

For instance:

  • Email
  • Chat
  • Voice/Video Chat
  • Calendaring
  • Task Management
  • Note-taking

However, these are just tools that may help us work towards a goal, but often don’t support us directly in what we’re actually trying to accomplish.

My go-to example: Jane wants to have a meeting with Jim and Bob:

  • Jane tries to find a timeslot that works for all of them (by email, phone, in person, …)
  • She then creates an event in her calendar and invites Jim and Bob, who can in turn accept or decline (scheduling)
  • The meeting will probably have an agenda that is distributed over email or in the event description.
  • A meeting room might need to be booked, or an online service decided on.
  • Once the meeting takes place, the agenda needs to be followed and notes need to be taken.
  • Meeting minutes and some actionable items come out of the meeting, which then, depending on the type of meeting, may need to be approved by all participants.
  • Finally, the approved meeting minutes are distributed, the actionable items are assigned, and perhaps the whole thing is archived somewhere for posterity.

As you can see, a seemingly simple task can actually become a fairly complex workflow, and while we do have a toolbox that helps with some of those steps, nothing really ties the whole thing together.

And that’s precisely where I think we can improve.

Instead of building yet another IMAP client or yet another calendaring application, the far more interesting question is how we can improve our workflows, whatever tools that might involve.
Will that involve some email, some calendaring and some note-taking? Probably, but those are just a means to an end, not an end in themselves.

So if we think about the meeting scheduling workflow, there are a variety of ways in which we can support Jane:

  • The scheduling can be supported by:
    • Traditional iCal based scheduling
    • An email message inviting someone in plain text.
    • Some external service like doodle.com
  • The agenda can be structured as a todo-list that you can check off during the meeting (perhaps with time limits assigned to each agenda item)
  • An online meeting space can be integrated, directly offering the agenda and collaborative note-taking.
  • The distribution and approval of meeting minutes can be automated, resulting in a timeline of past meetings, including meeting minutes and actionable items (tasks) that came out of them.

That means, however, that rather than building disjoint views for email, calendar and chat, perhaps we would help Jane more if we built an actual UI for that specific purpose, and other UIs for other purposes.

Views

So in an ideal world we’d have an ideal tool for every task the user ever has to execute, which would mean fully understanding each individual user and all of their responsibilities and favorite workflows…
Probably not going to happen anytime soon.

While there are absolutely reachable goals, like Jane’s meeting workflow above, they all come at significant implementation cost, and we can’t hope to implement enough of them right off the bat in adequate quality.
What we can do, however, is keep that mindset of building workflows rather than IMAP/iCal/… clients, set a target somewhere far off on the horizon, and try to build useful stuff along the way.

For us that means that we’ve now set up the basic structure of Kube as a set of “Views” that serve as containers for those workflows.

[Screenshot: Kube’s view switcher]

This is a purposefully loose concept that will allow us to transition gradually from fairly traditional and generic views to more purposeful and specific ones, since we can introduce new views and phase out old ones once their purpose is better served by a more specific view.

Some of our initial view ideas are drafted up here: https://phabricator.kde.org/T6029

What we’re starting out with is this:

  • A Conversations view:
    While this is initially a fairly standard email view, it will eventually become a conversation-centric view where you can follow and pick up on ongoing conversations no matter over what medium (email, chat, …).

[Screenshot: the conversations (mail) view]

  • A People view:
    While it also serves as an addressbook, it is first and foremost a people-centric way to interact with Kube.
    Perhaps you just want to start a conversation that way, or perhaps you want to look up past interactions you had with a person.

[Screenshot: the people view (addressbook)]

  • A composer view:
    Initially a fairly standard email composer, but eventually this will be much more about content creation and less about email specifically.
    The idea is that what you actually want to do when you open the composer is write some content. Perhaps this content will end up as an email,
    or perhaps it will just end up in a note that will eventually be turned into a blog post; chances are you don’t even know before you’re done writing.
    This is why the composer implements a workflow that begins at the starting point (your drafts), then moves to the actual composer to create the content, and finally lets you do something with that content, i.e. publish or store it somewhere (for now that only supports sending it by email or saving it as a draft, but a note/blog/… could be equally viable targets).

The idea behind all this is that we can initially build fairly standard and battle-tested layouts and over time work our way towards more specialized, better solutions. It also allows us to offload some perhaps necessary, but not central, features to a secondary view, keeping us from having to stuff all available features into a single “email” view, and allowing us to specialize the views for the use cases they’re built for.

Kube::Fabric

One of the key features of QML is the ability to describe the UI in a declarative fashion, resulting in code that closely mirrors the visual hierarchy.
This is nice for writing UIs because you can think very visually and then turn that directly into code.
A couple of trial-and-error cycles later you typically end up with a result that approximately reflects what you sketched out on paper.

However, interactions with the UI are also handled from QML, and this is where things become problematic. User interactions result in signals or API calls on some component, typically in a signal handler, and that’s where your application code takes over and tries to do something useful with the interaction. Perhaps we trigger some visual change, or we call a function on a controller object that does further work. Because we try to encapsulate and build interfaces, we might build an API for the complete component that its users can then use to further interact with the system.
This quickly gets complex with a growing number of interactions and components.

The straightforward implementation is to just forward the necessary API through your components, but that creates a lot of friction in the development process. Simply moving a button may drastically change where that button sits within the visual hierarchy, and if we have to forward the API accordingly, we end up writing a lot of boilerplate for what ought to be a trivial change:

Item {
    id: root
    signal reply // the signal has to be re-declared and forwarded at every level of the hierarchy
    Button {
        onClicked: root.reply()
    }
}

One alternative approach is to abuse the QML context to avoid forwarding the API (we just call into some component by its magic id), but this results in components that are not self-contained and that call out to some magic “singleton” objects to communicate with the rest of the application.
Not great in my opinion.

Item {
    id: root
    Button {
        onClicked: applicationController.reply() //Let's hope applicationController is defined somewhere
    }
}

The approach we’ve chosen in Kube is to introduce a mechanism that is orthogonal to the visual hierarchy. Kube::Fabric is a simple message bus where messages can be published and subscribed to. A clicked button may, for instance, post the “Kube.Messages.reply” message to the bus. This message may then be handled by any component listening on the message bus. Case in point: the main component has a handler that opens a composer and fills it with the quoted message so you can fill in your reply.

So the “Fabric” creates a separate communication plane that “weaves” the various UI components together into something functional. It allows us to concentrate on the user experience when working on the UI, without having to worry about how we will eventually wire the component up to the other bits required (and it gives us an explicit point where parts are interconnected over the fabric), and thus allows us to write cleaner code while moving faster:

Item {
    id: root
    Button {
        onClicked: Kube.Fabric.postMessage(Kube.Messages.reply, {"mail": model.mail, "isDraft": model.draft})
    }
}

API vs. Protocols

The API approach tightly couples components together and eventually leads to situations where we have, for example, multiple versions of the same API call because we’re dealing with new requirements but have to maintain compatibility for existing users of the API. This problem is then further amplified by duplicating APIs throughout the visual hierarchy.

The Fabric instead takes a protocol approach. The fabric becomes the transport layer, and the messages posted form the protocol. While the messages form the contract, and thus require that no parameter is removed or changed in meaning, they can be extended without affecting existing users of the message.
New parameters don’t bother existing listeners and thus allow for much more flexibility with little (development) overhead.
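
To illustrate: a newer component could post the reply message with an additional parameter while existing listeners keep working unchanged. This is only a sketch; the “replyToAll” parameter is hypothetical and not an actual field of Kube’s reply message.

Item {
    id: root
    Button {
        // Existing listeners only read "mail" and "isDraft" from the payload and
        // simply ignore the additional (hypothetical) "replyToAll" parameter.
        onClicked: Kube.Fabric.postMessage(Kube.Messages.reply, {"mail": model.mail, "isDraft": model.draft, "replyToAll": true})
    }
}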

Other applications

Another area where we’ve started to use the fabric extensively is notifications. A component that listens for notifications from Sink (Error/Progress/Status/…) simply feeds those notifications into the fabric, allowing various UI components to react to them as appropriate. This approach allows us to feed notifications from a variety of sources into the system, so they can be dealt with in a uniform way.
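
As a rough sketch of the consuming side, a UI component could subscribe to such a notification with a listener. The “errorNotification” message name matches the C++ example in the code section below, but how the payload is handled here is just an assumption for illustration.

Kube.Listener {
    // React to error notifications that were fed into the fabric (e.g. from Sink).
    filter: "errorNotification"
    // "message" carries the posted payload; how it is presented to the user is up to the component.
    onMessageReceived: console.warn("Error notification: " + JSON.stringify(message))
}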

Code

In actual code it is used like this.

Posting a message from QML:

 Kube.Fabric.postMessage(Kube.Messages.reply, {"mail": model.mail, "isDraft": model.draft})

Posting a message from C++ (message being a QVariantMap):

Fabric::Fabric{}.postMessage("errorNotification", message);

Listening for messages in QML:

Kube.Listener {
    filter: Kube.Messages.reply
    onMessageReceived: kubeViews.openComposerWithMail(message.mail, false)
}

Lookout

The fabric is rather simplistic right now, and we’ll keep it that way until we see actual requirements for more. However, I think there is interesting potential in separating message buses by assigning different fabrics to different components, with a parent fabric holding everything together. This would enable components to first handle a message locally before resorting to the parent fabric if necessary. For the reply button that could mean spawning an inline reply editor instead of switching the whole application into compose mode.

Release of Kube 0.1.0

It’s finally done! Kube 0.1.0 is out the door.

First off, this is really a tech preview and not meant for production use.

However, this marks a very important step for us, as it lifts us out of a rather long stretch of doing the groundwork to get regular development up and running. With that out of the way, we can now move along in a steadier fashion, milestone by milestone.

That said, it’s also the perfect time to get involved!
We’re planning our milestones on Phabricator, at least the ones within reach, so that’s the place to follow development along and where you can contribute, be it with ideas, feedback, packaging, builds on new platforms or, last but not least, code.

So what is there already?

You can set up an IMAP account, you can read your mail (even encrypted), you can move messages around or delete them, and you can even write some mails.

[Screenshot: Kube main view]

BUT there are of course a lot of missing bits:

  • Gmail support is not great (it needs some extra treatment because Gmail IMAP doesn’t really behave like IMAP), so you’ll see some duplicated messages.
  • We don’t offer an upgrade path between versions yet. You’ll have to nuke your local cache from time to time and resync.
  • User feedback in the UI is limited.
  • A lot of commonly expected functionality doesn’t exist yet.
  • …

As you can see… tech preview =)

What’s next?

We’ll focus on getting a solid mail client together first, so that’s what the next few milestones are all about.

The next milestone will focus on getting an addressbook ready, and after that we’ll focus on search for a bit.

I hope we can scope the milestones to approximately one month each, but we’ll have to see how well that works. In any case, releases will only be done once the milestone is reached, and if that takes a little longer, so be it.

Packaging

This also marks the point where it starts to make sense to package Kube.
I’ve built some packages on Copr already, which might help packagers as a starting point. I’ll also maintain a .spec file in the dist/ subdirectory of the kube and sink repositories (which you are welcome to use).

Please note that the codebase is not yet prepared for translations, so please hold off on any translation efforts (of course, patches to get translations going are very welcome).

In order to release Kube a couple of other dependencies are released with it (see also their separate release announcements):

  • sink-0.1.0: The heart of Kube; it will also see regular releases in the near future.
  • kimap2-0.1.0: The brushed-up IMAP library that we use in Sink.
  • kasync-0.1.0: Heavily used in Sink for writing asynchronous code.

Tarballs

Release of KAsync 0.1.0

I’m pleased to announce KAsync 0.1.0.

KAsync is a library to write composable asynchronous code using lambda-based continuations.

In a nutshell:

Instead of:

class Test : public QObject {
    Q_OBJECT
public:
    void start() {
        step1();
    }

signals:
    void complete();

private:
    void step1() {
        // ... execute step one ("step1Job" schematically stands for whatever
        // asynchronous job object runs it), then continue with step2
        QObject::connect(step1Job, &Step1Job::result, this, &Test::step2);
    }

    void step2() {
        // ... execute step two, then continue with done
        QObject::connect(step2Job, &Step2Job::result, this, &Test::done);
    }

    void done() {
        emit complete();
    }
};

you write:

KAsync::Job<void> step1() {
    return KAsync::start<void>([] {
        // execute step one
    });
}

KAsync::Job<void> step2() {
    return KAsync::start<void>([] {
        // execute step two
    });
}

KAsync::Job<void> test() {
    return step1().then(step2());
}
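
To actually run the composed job you trigger its execution. A minimal usage sketch, assuming KAsync’s exec() entry point and a blocking wait on the returned future (the exact names may differ between KAsync versions):

// Start the asynchronous execution of the composed job; exec() returns a future
// that we block on here purely for illustration.
auto future = test().exec();
future.waitForFinished();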

The first approach is the typical “job” object (e.g. KJob), using the object for encapsulation but otherwise just chaining various slots together.

This is, however, very verbose (because what would typically be a function now has to be a class), resulting in huge functional components implemented in a single class, which is really the same as having one huge function.

The problem gets even worse with asynchronous for-loops and similar constructs, because at that point member variables have to serve as the “stack” of your “function”, and the chaining of slots resembles a function with lots and lots of goto statements. It becomes really hard to decipher what’s going on.

KAsync allows you to write such code in a much more compact fashion and also brings the necessary tools to write things like asynchronous for loops.

There’s a multitude of benefits with this:

  • The individual steps become composable functions again (that can also have input and output), just like regular functions.
  • The full assembly of steps (the test() function) becomes composable as well, just like regular functions.
  • Your function’s stack doesn’t leak into class member variables.

Additional features:

  • The job execution handles error propagation. Errors just bubble up through all error handlers unless a job reconciles the error (in which case the normal execution continues).
  • Each job has a “context” that can be used to manage the lifetime of objects that need to be available for the whole duration of the execution.

Please note that for the time being KAsync doesn’t offer any API/ABI guarantees.
The implementation relies heavily on templates and thus lives mostly in the headers, so most changes will require a recompilation of your source code.

If you’d like to help out with KAsync or have feedback or comments, please use the comment section or the Phabricator page.

If you’d like to see KAsync in action, please see Sink.

Thanks go to Daniel Vrátil who did most of the initial implementation.

Tarball

Release of KIMAP2 0.1.0

I’m pleased to announce the release of KIMAP2 0.1.0.

KIMAP2 is a KJob-based IMAP protocol implementation.

KIMAP2 received a variety of improvements since its KIMAP days, among others:

  • A vastly reduced dependency chain, as we no longer depend on KIO. KIMAP2 depends only on Qt, KF5CoreAddons, KF5Codecs, KF5Mime and the Cyrus SASL library.
  • A completely overhauled parser. The old parser performed poorly when data arrived at the socket faster than we could process it, resulting in a lot of time wasted memcpying buffers around and unbounded memory usage. To fix this, a dedicated thread was used for every socket, resulting in a lot of additional complexity and threading issues. The new parser uses a ring buffer with fixed memory usage and doesn’t require any extra threads, allowing the socket to fill up if the application can’t process the data fast enough, which eventually keeps the server from sending more.
    It also minimizes memcopies and other parsing overhead, so the cost of parsing is roughly that of reading the data once.
  • Minor API improvements for the fetch and list jobs.
  • Various fixes for the login process.
  • No more KI18N dependencies. KIMAP2 is a protocol implementation and doesn’t require translations.
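
To give a rough feel for the KJob-based API, here is a minimal login sketch. It assumes KIMAP2 keeps KIMAP’s Session/LoginJob naming and job semantics, so treat the exact class and method names as assumptions rather than a reference:

// Minimal sketch, assuming KIMAP2 mirrors KIMAP's Session/LoginJob API.
auto session = new KIMAP2::Session(QStringLiteral("imap.example.com"), 993);
auto login = new KIMAP2::LoginJob(session);
login->setUserName(QStringLiteral("jane"));
login->setPassword(QStringLiteral("secret"));
QObject::connect(login, &KJob::result, [](KJob *job) {
    if (job->error()) {
        qWarning() << "Login failed:" << job->errorString();
        return;
    }
    // ... issue further jobs (fetch, list, …) on the now-authenticated session
});
login->start();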

For more information please refer to the README.

While KIMAP2 is the successor of KIMAP, KIMAP will stick around for the time being for kdepim.
KIMAP and KIMAP2 are completely co-installable: library names, namespaces, environment variables etc. have all been adjusted accordingly.

KIMAP2 is actively used in Sink and is fully functional, but we do not yet guarantee API or ABI stability (that will mark the 1.0 release).
If you need API stability sooner rather than later, please do get in touch with me.

If you’d like to help out with KIMAP2 or have feedback or comments, please use the comment section, the kde-pim mailing list or the Phabricator page.

Tarball

Kube: Accounts

Kube is a next generation communication and collaboration client, built with QtQuick on top of a high performance, low resource usage core called Sink.
It provides online and offline access to all your mail, contacts, calendars, notes, todos, etc.
Kube has a strong focus on usability, and the team works with designers and UX experts from the ground up to build a product that is not only visually appealing but also a joy to use.

To learn more about Kube, please see here.

Kube’s Account System

Data ownership

Kube is a network application at its core. That doesn’t mean you can’t use it without a network connection (even permanently), but you’d severely limit its capabilities, given that it’s meant to be a communication and collaboration tool.

Since network communication typically happens over a variety of services where you have a personal account, an account provides a good starting point for our domain model. If you have a system with large amounts of data that are constantly changing, it’s vital to have a clear understanding of data ownership within the system. In Kube, the owner is always an account.

By putting the account front and center we ensure that we don’t have any data that just belongs to the system as a whole. This is important because it becomes very complex to work with data that “belongs to everyone” once we try to synchronize it with various backends. If we modify a dataset, should that modification replicate to all copies of it? What if one backend has already deleted that record? Would that mean we also have to remove it from the other services?
And what if we have a second client with a different set of accounts connected?
If we ensure that there is always only a single owner, we can avoid all those issues and build a more reliable and predictable system.

The various views can of course still correlate data across accounts where useful, e.g. to show a single person entry instead of one contact per addressbook, but they then also have to make sure it is clear what happens if you go and modify, say, the address of that person (do we modify all copies in all accounts? what happens if one copy goes out of sync again because you used the web interface?).

Last but not least, this way we ensure that we have a clear path to eventually synchronize all data to a backend, even if we can’t do so immediately, e.g. because the backend in use does not support that data type yet.

The only data stored outside of an account is data specific to the device in use, such as configuration data for the application itself: data that isn’t hard to recreate, is easy to migrate and back up, and is very little data in the first place.

Account backends

Most services provide you with a variety of data for an individual account. Whether you use Kolabnow, Google or a set of local Maildirs and iCal files,
you typically have access to contacts, mail, events, todos and more. Fortunately most services provide access to most of that data through open protocols,
but unfortunately we often end up needing a variety of protocols to get to all of it.

Within Sink we call each backend a “Resource”. A resource typically has a process that synchronizes data to an offline cache and then makes that data accessible through a standardized interface. This ensures that even if one resource synchronizes email over IMAP and another just gathers it from a local Maildir,
the data is accessible to the application through the same interface.

Because different accounts use different combinations of protocols, an account can mix and match resources to provide access to all the data it has.
A Kolab account, for instance, could combine an IMAP resource for email, a CalDAV resource for calendars and a CardDAV resource for contacts, plus any additional resources for instant messaging, notes, … you get the idea. Alternatively we could decide to get to all data over JMAP (a potential IMAP successor with support for more data types than just email) and thus implement a JMAP resource instead (which could again be reused by other accounts with the same requirements).

[Diagram: an account assembled from multiple resources]

Specialized accounts

While accounts within Sink are mostly an assembly of resources with some extra configuration, on the Kube side a QML plugin (we’re using KPackage for that) defines the configuration UI for the account. Because an account is ideally just an assembly of a couple of existing Sink resources plus a QML file defining the configuration UI, it becomes very cheap to create account plugins specific to a service. So while a generic IMAP account settings page could look like this:

[Screenshot: generic IMAP account settings]

… a Kolabnow setup page could look like this (and this already includes the setup of all resources, including IMAP, CalDAV, CardDAV, etc.):

[Screenshot: Kolabnow account setup]

Because we can build everything we know about the service directly into that UI, the user is optimally supported and, ideally, all that is left to enter are the credentials.

Conclusion

In the end, the aim of this setup is that a user starting Kube for the first time selects the service(s) they use, enters their credentials, and is good to go.
In a corporate setup, login and service can of course be preconfigured, so all that is left is whatever is used for authentication (such as a password).

By ensuring all data lives under an account, we make sure no data ends up in limbo with unclear ownership, so all your devices have the same dataset available, and connecting a new device is a matter of entering credentials.

This also helps simplify backup, migration and various deployment scenarios.