Bringing Akonadi Next up to speed

It’s been a while since the last progress report on Akonadi Next. I’ve since spent a lot of time refactoring the existing codebase, pushing it a little further, and refactoring it again, to make sure the codebase remains as clean as possible. The result is that implementing a simple resource now only takes a couple of template instantiations, apart from the code that interacts with the data source (e.g. your IMAP server), which obviously can’t be written generically.
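
To illustrate the shape of that (the type and function names below are invented for illustration and are not the actual Akonadi Next API), a resource essentially plugs a source-specific adapter into generic machinery:

```cpp
// Hypothetical sketch only: these templates and names are invented to
// illustrate the shape of a resource, not the real Akonadi Next API.
#include <iostream>
#include <string>
#include <vector>

// Generic machinery provided by the framework (here just a stub).
template <typename DomainType, typename SourceAdapter>
class GenericResource {
public:
    // Pull items from the source and hand them to the processing pipeline.
    void synchronize() {
        SourceAdapter adapter;
        for (const DomainType &item : adapter.fetch()) {
            std::cout << "enqueue for processing: " << item.subject << "\n";
        }
    }
};

// The only resource-specific parts: the domain type and the code that
// talks to the actual data source (e.g. an IMAP server).
struct Mail {
    std::string subject;
};

struct ImapAdapter {
    std::vector<Mail> fetch() {
        // Real code would talk to the IMAP server here.
        return {{"Hello"}, {"World"}};
    }
};

// "A couple of template instantiations":
using ImapResource = GenericResource<Mail, ImapAdapter>;

int main() {
    ImapResource resource;
    resource.synchronize();
}
```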

Once I was happy with that, I looked a bit into performance, to ensure the goals are actually reachable. For write speed, operations need to be batched into database transactions; this is what allows the db to write up to 50’000 values per second on my system (a 4-year-old laptop with an SSD and an i7). After implementing the batch processing, and without looking into any other bottlenecks, it can now process ~4’000 values per second, including updating ten secondary indexes. This is not yet ideal given what we should be able to reach, but it does mean that a sync of 40’000 emails would be done within 10s, which is already not bad. Because commands first enter a persistent command queue, pulling the data offline actually completes even faster; that command queue then has to be processed before the data becomes available to clients, and the combination of the two gives the actual write speed.
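
The batching itself is simple; the point is paying the transaction-commit cost once per batch instead of once per value. A minimal sketch, using the LMDB C API purely as an example of a transactional key-value store (this is not the actual Akonadi Next storage code):

```cpp
// Sketch of batching writes into a single transaction, using the LMDB C API
// as a stand-in for the actual storage layer. Error handling is omitted.
#include <lmdb.h>
#include <string>

int main() {
    MDB_env *env;
    mdb_env_create(&env);
    mdb_env_set_mapsize(env, 1UL * 1024 * 1024 * 1024); // 1 GiB map
    mdb_env_open(env, "./testdb", 0, 0664); // directory must exist

    // One transaction for the whole batch instead of one per value:
    MDB_txn *txn;
    mdb_txn_begin(env, nullptr, 0, &txn);
    MDB_dbi dbi;
    mdb_dbi_open(txn, nullptr, 0, &dbi);

    for (int i = 0; i < 10000; ++i) {
        std::string key = "mail" + std::to_string(i);
        std::string value = "payload of mail " + std::to_string(i);
        MDB_val k;
        k.mv_size = key.size();
        k.mv_data = const_cast<char *>(key.data());
        MDB_val v;
        v.mv_size = value.size();
        v.mv_data = const_cast<char *>(value.data());
        mdb_put(txn, dbi, &k, &v, 0);
    }

    // A single commit (and thus a single sync to disk) for all 10'000 values.
    mdb_txn_commit(txn);
    mdb_env_close(env);
}
```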

On the reading side we’re at around 50’000 values per second, with the read time growing linearly with the number of messages read. Again far from ideal, given that raw reads from a single db reach around 400’000 values per second (excluding index lookups), but still good enough to load large email folders in a matter of a second.
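
For reference, the read side of that benchmark is essentially a full scan in a read-only transaction, which is where the linear growth comes from. Again a minimal LMDB-level sketch, not the actual code:

```cpp
// Sketch of the read side: a read-only transaction with a cursor scanning
// all entries, which is why the total read time grows linearly.
#include <lmdb.h>
#include <cstdio>

int main() {
    MDB_env *env;
    mdb_env_create(&env);
    mdb_env_open(env, "./testdb", MDB_RDONLY, 0664);

    MDB_txn *txn;
    mdb_txn_begin(env, nullptr, MDB_RDONLY, &txn);
    MDB_dbi dbi;
    mdb_dbi_open(txn, nullptr, 0, &dbi);

    MDB_cursor *cursor;
    mdb_cursor_open(txn, dbi, &cursor);

    MDB_val key, value;
    size_t count = 0;
    while (mdb_cursor_get(cursor, &key, &value, MDB_NEXT) == 0) {
        ++count; // a real client would deserialize the value here
    }
    std::printf("read %zu values\n", count);

    mdb_cursor_close(cursor);
    mdb_txn_abort(txn); // read-only transactions are simply aborted
    mdb_env_close(env);
}
```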

I implemented benchmarks to get these numbers, so thanks to HAWD we should be able to track progress over time, once I set up a system to run the benchmarks regularly.

With performance in an acceptable state, I will shift my focus to the revisioned store, which is a prerequisite for the resource’s writeback to the source. After all, performance is supposed to be a desirable side effect, and simplicity and ease of use the goal.
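
To sketch the idea behind the revisioned store: every change gets a monotonically increasing revision, so a resource can ask for everything that changed since the revision it last replayed to the source. A toy illustration of the concept (not the actual Akonadi Next data model):

```cpp
// Toy illustration of a revisioned store: every change bumps a global
// revision, so changes since a given revision can be replayed later
// (e.g. written back to an IMAP server). Not the real data model.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Change {
    uint64_t revision;
    std::string key;
    std::string value;
};

class RevisionedStore {
public:
    void write(const std::string &key, const std::string &value) {
        changes_.push_back({++revision_, key, value});
        latest_[key] = value;
    }

    // Everything a resource still has to replay to the source.
    std::vector<Change> changesSince(uint64_t revision) const {
        std::vector<Change> result;
        for (const Change &c : changes_) {
            if (c.revision > revision) {
                result.push_back(c);
            }
        }
        return result;
    }

private:
    uint64_t revision_ = 0;
    std::vector<Change> changes_;
    std::map<std::string, std::string> latest_;
};

int main() {
    RevisionedStore store;
    store.write("mail1", "draft");
    uint64_t lastReplayed = 1; // the resource has written back up to here
    store.write("mail1", "sent");
    store.write("mail2", "new");

    for (const Change &c : store.changesSince(lastReplayed)) {
        std::cout << "replay to source: " << c.key << " -> " << c.value << "\n";
    }
}
```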

Randa

Coming up next week is the yearly Randa meeting, where we will have the chance to sit together for a week and work on the future of Kontact. These meetings help tremendously in injecting momentum into the project, and we have a variety of topics to cover to direct the development for the time to come (and of course a lot of stuff to actively hack on). If you’d like to contribute to that, you can help us with some funding. Much appreciated!


Author: cmollekopf

Christian Mollekopf is an open source software enthusiast with a special interest in personal organization tools. He started contributing actively to KDE in 2008 and currently works for Kolab Systems, leading the development of the next generation desktop client.

26 thoughts on “Bringing Akonadi Next up to speed”

  1. Please also check with spinning disks, as that is where you could notice a big difference from the effects of big chunks of write operations. Not so much for performance, but especially for whether the system stays responsive.
    (and thanks for the work!)

    1. It ends up doing a (more or less) serial read of the data set as it goes from one matching entry to the next, grabbing the message payload from disk. The payload is not resident in an index; listing the first N*1000 entries in a folder doesn’t offer a lot of opportunity for fetch optimizations. I don’t expect this performance to be related to the database itself, but to fetching and processing the payload. These numbers measure the full write and read cycles, not simply database performance.

    2. As Aaron pointed out, the problem is indeed not the db, but the benchmark. I simply tried fetching the complete dataset with varying dataset sizes, so the linear increase in time is to be expected. I’ll have to do a more detailed analysis for other cases, of course.

  2. Maybe you could use the Randa meeting to discuss whether you really need to make Akonadi Next a KDEPIM-only technology.
    Making it possible for non-KDEPIM applications to access, e.g., the addressbook was a core feature of KDE as a platform provider, and with Akonadi we improved and expanded that.

    It makes me quite sad that the currently active Akonadi developers no longer see that as a goal.

    1. Really? Mail/calendar/contacts will no longer be generally available to KDE apps? That was half the point of Akonadi! Integrated access to quality groupware data for a diverse range of apps and applets.

    2. “Making it possible for non-KDEPIM applications to access”

      This is not only still a goal, it was one of the things that drove certain aspects of the design. Making it possible, for instance, for Plasma’s calendar to show upcoming events without having to either start processes *or* load the entire calendar was an explicit use case we worked from. Currently, Akonadi’s design means that integration with e.g. Plasma incurs the overhead of starting the entire Akonadi system (MySQL, akonadiserver, aaaaall the resources) just to get at the calendar events for the current month. With Akonadi Next, _no_ processes are started for reading (which is all Plasma’s calendar does currently) and “give me just this month’s events” can be done efficiently with a calendaring-data-schema-specific query.

      Soooo … not sure how you arrived at “only meant for KDEPIM apps”, but that’s thankfully just not so.

      1. This is great news!
        Up until as recently as three weeks ago, the chosen approach was to abandon all non-KDEPIM applications, basically telling their developers and users “tough luck, your data access needs don’t matter to us.”

        You see me ecstatic that reason has prevailed and Akonadi is still being developed as a viable and superior contender to Evolution Data Server.

        Maybe you or any of the other developers can blog about how you are going to approach this now?

        My guess is a runtime proxy, given how much hassle it would be to release a new version of kdepimlibs or to separately package a libakonadi-kde reimplementation, but who knows 🙂

        1. There will be a public API for clients using Akonadi, and the calendar applet is no different from any other client. The public ASAP protocol (and indeed the protocol in general) is gone, but the open protocol part was never used anyway. So whatever has been communicated, I guess some clarification would be in order 😉

          1. Why do people always bring up the protocol? Did I write anything about using the protocol?

            Fun fact: there is a client library called libakonadi-kde, part of a module called kdepimlibs, shipped as part of the KDE platform.

            Fun fact: there are applications outside of KDE PIM using that library to access PIM data through Akonadi

            Fun fact: the developers of these applications do this in order to provide needed functionality to their users

            Fun fact: users don’t like it when the applications they are using stop working

            Fun fact: application developers don’t like it when their applications stop working

            Serious question: what is the plan to prevent this from happening?

            Answers I’ve seen so far over the last couple of months:
            – there is no plan
            – there are no applications using PIM data through Akonadi
            – this will, probably, maybe, work for future versions of Akonadi

            Bad, very bad.

            Then we have positive statements, like Aaron’s above, saying that yes, being a reliable data provider for applications is still a goal. Which implies there must now, finally, be a plan for how to ensure that applications don’t break and still have access to the data they require.

            But whenever I ask which approach is chosen to make this happen, I get “the public protocol is gone”.
            Seriously?

            1. Fun Fact: Fun Facts don’t help.
              I don’t know where you got the impression that non-PIM developers/users no longer matter to Akonadi developers. The only conclusion I can draw is that this is a reaction to the protocol being turned into an implementation detail that is no longer public (which is not the same as the API).

              I don’t have a definitive plan for Akonadi Next yet, though I have a couple of ideas, if that is what you’re asking, and given that we’re not porting anything to it yet I don’t see why you seem to get so agitated over it.

              If I fail to answer your questions, it’s just because I misunderstood them or missed them or what do I know. I’m not trying to ignore them or knowingly mislead you (or anyone).
              So let’s just start over, I guess, and I’ll try to answer as best I can.

            2. Alright.

              So you are saying that it is just not known yet how current Akonadi-using applications will be supported once KDE PIM switches to Akonadi Next?

              Then I would suggest making that a topic at Randa.

              There aren’t that many options:
              – provide a binary compatible implementation of libakonadi-kde
              – make Akonadi Next capable of serving original libakonadi-kde using clients
              – use some form of mediator between Akonadi (current) clients and Akonadi (next) server
              – have Akonadi (current) and Akonadi (next) run in parallel

            3. The plan so far has been to continue providing the current Akonadi API, which internally just talks to Akonadi Next.
              However, we’d first have to see how well this works, and I doubt it would be a good solution for resources, so we’d want to port at least the most important ones right away.

              An alternative approach would indeed be to have Akonadi Next and the current Akonadi running next to each other for a transition period,
              allowing us to port application by application. Not the worst option, IMO.

            4. Ah, great!

              I agree that keeping the old resources working is probably not at the same level of importance as keeping the applications working.

  3. I’m glad to read that simplicity and ease of use are your main goals, since they are what makes the difference between widespread adoption and a framework that stays in a niche.

    To ensure that Akonadi Next will not only be easy to use for those who have developed it, but also for its users (i.e. developers of PIM applications), I hope you’ll adopt some Human-Centered Design methods: in your case, that would mean trying to get something into the hands of application developers as soon as parts of the API are available, so that they can create mock applications and give you feedback on where they found things not so easy to implement or where they are missing features.

    Of course they have to know that what they’re using is by no means final and subject to change, but early feedback is just as important if your users are other developers as it is if you’re creating something directly for the end user.

    1. We have all of Kontact and the Plasma use cases to work towards already (something Akonadi did not have the benefit of), and the people who have been working thus far on Akonadi Next are all developers of Akonadi-using applications (not just Kontact, either) … more people playing with the API would be very much welcome, as always, and the code is all there in the open on git.kde.org for people to do so.

      1. Sounds good! I hope (and I’m pretty confident) that those KDEPIM and Plasma devs who are not directly involved in Akonadi Next development will be eager to try it out during its development and give feedback.

      2. If there were easy access via a distro or repo with a rolling development release, that would be much more likely to happen.

        Configuring/Building all the kdepim/akonadi stuff is a fairly painful exercise.

  4. This is great work. Thanks!
    One of my main concerns is whether the new Akonadi will be able to run over NFS.
    My concern is that most government institutions (including schools and universities) use /home folders on NFS. Users have very little control over which distribution they have on their machines, and most of the time they don’t even have access to local file space. In at least two universities in the UK I have been advised to switch to GNOME if I wanted to have a mail client running, because they would not support kdepim. More worrying is the fact that CentOS, the distribution of choice for institutions in the UK because of its “stability” and “support”, doesn’t even ship kmail or kaddressbook any more. Again, the explanation I have heard is “Akonadi does not work over NFS”.
    I consider /home-over-NFS flawed from the start, but sysadmins love it and they are not going to change it any time soon. Asking a sysadmin to install a different backend seems to leave people very confused, and even when it is possible, the SQLite backend seems not to be very reliable, mostly due to the age of the software being shipped as “stable”. This is severely damaging the image of KDE, and if downstream cannot adapt (as it seems it cannot), maybe you guys are in the best position to improve the situation with the new design.
    Anyway, thank you for your great efforts!

    1. Thanks for your input! I agree that home on NFS is still widely used in various environments, and that it would therefore be very interesting if we could support it. The implications go a bit beyond just the data storage and involve configuration etc. as well, so I can’t make any promises yet, but it’s definitely something I want to look into regarding its feasibility.

