KDE

Akonadi - still alive and rocking

It’s been a while since I wrote anything about Akonadi, but that does not mean I was slacking all the time ;) The KDE PIM team ported PIM to KDE Frameworks 5 and Qt 5 and released the first KF5-based version in August 2015, and even before that we made some major changes under the hood that were not possible in the KDE4 version due to the API and ABI freezes of kdepimlibs. The KF5-based Akonadi libraries (and all the other KDE PIM libraries, for that matter) come with no API stability guarantees yet, so we can bend and twist them to our needs to improve stability and performance. Here’s an overview of what has happened (mostly in Akonadi) since we started porting to KDE Frameworks 5. It is slightly more technical than I originally intended, sorry about that.

Human-readable formats are overrated

As you probably know, Akonadi has two parts: the Server (which manages the data and resources) and the client libraries (which applications use to access the data managed by the server). The libraries need to talk to the Server somehow. In KDE4 we were using a text-based protocol very similar to IMAP (it started as an RFC-compliant IMAP implementation, but over time we diverged a bit). The problem with a text-based protocol and large amounts of data is that serializing everything into a string representation and then deserializing it again on the other end is not very efficient. The performance penalty is negligible when talking to IMAP servers over the network, because network latency hides it. It does show, however, when everything happens locally. So we switched from the text-based protocol to a binary protocol. That means we take the actual in-memory representation of the data (bit by bit) and write it to the socket. The other side then takes the binary data and interprets it directly as values (numbers, strings or whatever). As a result we spend almost zero time on serialization and can transmit large chunks of data between the server and the applications very, very efficiently.
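
To illustrate the idea (a minimal sketch only, not the actual Akonadi protocol code - the function and field names are made up for this example): streaming Qt types through QDataStream writes their internal representation straight into a buffer, with no text formatting on one end and no parsing on the other.

#include <QByteArray>
#include <QDataStream>
#include <QIODevice>
#include <QString>

// Hypothetical example: pack one item into a binary frame that can be
// written to the local socket as-is.
QByteArray serializeItem(qint64 id, const QString &remoteId, const QByteArray &payload)
{
    QByteArray frame;
    QDataStream stream(&frame, QIODevice::WriteOnly);
    // Qt types are streamed in their native binary format - no string
    // conversion here, no string parsing on the receiving side.
    stream << id << remoteId << payload;
    return frame;
}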

Let’s abuse the new cool stuff we have for things we did not originally design it for

The communication between clients and the server needs to work in both directions. It’s not just the clients sending requests to the server (and the server sending back replies); we also need a mechanism for the server to notify the clients that something has changed (a new event in a calendar, an email marked as read, etc.). For this we were using DBus signals: the clients connected to a DBus signal provided by the Akonadi Server, and when something changed the server notified the clients through that signal. However, during initial synchronization or intensive mail checks the number of messages Akonadi sent over DBus was just too high. We were clogging the DBus daemon, and transmitting messages via DBus is not cheap either. But hey, we have an awesome and super-fast binary protocol, why not use that? And so we switched from DBus to sending change notifications through the same mechanism that we use for all other communication with the server. In the future this will also allow us to customize the content of the notifications even further, improving performance yet again.
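
From an application’s point of view nothing changes, by the way: notifications are still delivered to clients through the Akonadi::Monitor class, no matter which transport carries them. A rough sketch of the client side (simplified; see the Monitor API documentation for details):

#include <AkonadiCore/Monitor>
#include <AkonadiCore/Item>
#include <AkonadiCore/Collection>
#include <QCoreApplication>
#include <QSet>
#include <QDebug>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    auto monitor = new Akonadi::Monitor(&app);
    monitor->setAllMonitored(true); // we want notifications about everything

    // Invoked whenever the server reports a new or a modified item.
    QObject::connect(monitor, &Akonadi::Monitor::itemAdded,
                     [](const Akonadi::Item &item, const Akonadi::Collection &) {
                         qDebug() << "New item" << item.id();
                     });
    QObject::connect(monitor, &Akonadi::Monitor::itemChanged,
                     [](const Akonadi::Item &item, const QSet<QByteArray> &parts) {
                         qDebug() << "Item" << item.id() << "changed, parts:" << parts;
                     });

    return app.exec();
}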

Pfff, who needs database indexes?

We do! Once we switched to the binary protocol we found that we were no longer waiting for data from the database to be serialized and sent over to the client, but for the database itself! I sat down and looked at the EXPLAIN ANALYZE results of our biggest queries. It turned out we were doing some unnecessary JOINs (usually to get data we already had in the in-memory cache inside the Server) that we could get rid of. SQL planners and optimizers are extremely efficient nowadays, but JOINing large tables still takes time, so dropping those JOINs made the queries up to twice as fast.

One unexpected issue I found was that the database spent a large amount of time on “ORDER BY … DESC” on our main table (yes, we query results in descending order - this way we can show the newest (= usually most relevant) emails in KMail first, while still retrieving the rest). PostgreSQL users will be happy to know that adding a special descending index sped up these queries massively. MySQL users are out of luck - although MySQL lets you declare a descending index on a column, it does not actually do anything with it.

Splitting libraries is too mainstream, we merge stuff!

One of the things that I’ve been looking forward to for a very long time was making the Akonadi server a private part (an implementation detail) of the Akonadi client libraries. In the KDE4 versions we had to maintain backwards compatibility of the Akonadi protocol (the text-based one I mentioned earlier), as it was considered part of the public API. This was extremely limiting and annoying for me as a maintainer, as it made fixing certain bugs and adding new features unnecessarily hard. Historically the Akonadi server was a standalone project and it was expected that 3rd party developers would write their own client libraries in their own toolkits/languages. Unfortunately that never happened, and the KDE Akonadi client libraries were the only ones actively developed and used (there were some proof-of-concept GLib/Gtk client libraries, but they were never used seriously).

So, since KDE Applications 15.08 the Akonadi server has no public API and writing custom client libraries is not officially supported. The only official way to talk to the server is through the KDE Akonadi client libraries, which are now the only public API for Akonadi. This may sound like a bad decision, like closing ourselves off from the world, as if we don’t care about anyone but KDE and Qt. And it’s sort of true - we waited for almost a decade for someone else to start writing their own client libraries, but nobody did. So why bother? On the other hand, the only actual user of Akonadi - KDE - benefits much more now: for example the binary protocol is optimized so that (de)serializing Qt types (like QString or QDateTime) is very efficient, because we can use the format that Qt uses internally. If we wanted to stay “toolkit agnostic”, we would have to waste time converting the data to some more standard representation and nobody would win.

To finally get to the point: today I took the Akonadi client libraries (which lived in kdepimlibs.git) and merged them into the akonadi.git repository, where the Akonadi server lives - at least locally on my machine; I still need to fix some build issues before actually pushing this, but I expect to do it tomorrow. In other words, the entire Akonadi Framework now lives in a single self-contained git repository. This brings even more benefits, mostly from a maintainer’s point of view. For instance, we can now share code between the server and the libraries that we previously had to duplicate or expose via a private shared library.

For now kdepimlibs.git will still contain some libraries that we have yet to figure out what to do with, but I guess that eventually kdepimlibs.git will meet the same fate as kdelibs.git - being locked down and preserved only for historical reference.

The Cheese Dependency

In September last year the KDE PIM team also met in Randa in the Swiss Alps. I was totally going to blog about it, but then other things got in the way and I kept delaying it further and further until now. So, with an awkward 5-month delay: huge thanks and hugs to the entire Randa meetings staff, and one more hug to Mario just for being Mario. In Randa we met to discuss where KDE PIM should go next and how to get there. After several days of intensive talking we outlined the path into the future - you have probably already read about it in some of the blogs about AkonadiNext from Christian and Aaron, so I won’t go much into that.

To list some of the visible and practical results: we now use Phabricator to coordinate the work in the PIM team and to better communicate what is happening and who’s working on what. There’s a nice backlog of tasks waiting to be done, so if you want to help us make PIM better, feel free to pick up a task and get to work! Furthermore, we looked into cleaning up some of the old code and optimizing some critical code paths - basically a continuation of an effort that had already started during Akademy in A Coruña. Some of the changes have already been implemented, some are still pending.

Lord of the PIM: The Return of The KJots

One of the major complaints we heard about the new KF5-based KDE PIM was the disappearance of KJots, our note-taking app. Earlier last year, at a PIM sprint in Toulouse, we decided that we needed to reduce the size of the code base to keep it maintainable given the current manpower (or rather the lack thereof). KJots was one of the projects we decided to kill. What we did not realize back then was that we would effectively prevent people from accessing their notes, since we don’t have any other app for that! I apologize to all our users for that, and to restore the balance in the Force I decided to bring KJots back - not as a part of the main KDE PIM suite but as a standalone app. I have yet to make a first release that packagers can package, but it already builds and is reasonably usable. I’m not planning on developing the application very actively - I’ll keep it breathing, but that’s about it. That’s all I can afford. If anyone would be interested in maintaining the application and developing it further (it’s a rather small and simple application), feel free to step up! When the app reaches a certain quality level, we can start thinking about merging it back into KDE PIM.

Is that all?

Yes. No. Well, it’s all for today. There is much, much more happening in KDE PIM - Laurent did tons of work on refactoring and splitting the monolithic kdepim.git repository into smaller, more reusable pieces and now seems to be messing around with Akregator, and Sandro is actively working on refactoring the email rendering code and on calendaring. But I’ll leave it up to them to report on their work :) And of course there’s much more planned for the future (as always), but this blog post has already got a bit out of hand, so I’ll report on the rest maybe next time I “accidentally” have an energy drink at 11 PM.

And as always: we need help. Like, lots of it. KDE PIM might look huge and scary and hard to work on, but in fact it’s all rainbows and unicorns. Hacking on PIM is fun (and we are fun too sometimes!), so if you like KDE (PIM) and would like to help us, let’s talk!

Git trick #628: automatically set commit author based on repo URL

If you have more than one email identity that you use to commit to different projects, you have to remember to set it in .git/config every time you git clone a new repository. I suck at remembering things, and it has annoyed me for a long time that I kept pushing commits with the wrong email address to the wrong repositories.

I can’t believe I am the only one with this problem, but I could not find anything on the interwebs, so I just fixed it myself and I’m posting it here in the hope that someone else will find it useful too :).

The trick is very simple: we create a post-checkout hook that checks the value of user.email in .git/config and sets it to whatever we want based on the URL of the “origin” remote. Why post-checkout? Because there’s no post-clone hook, but git automatically checks out master after cloning, so the hook gets executed. It also gets executed every time you run git checkout by hand, but the overhead is minimal and there’s a guard against overwriting the identity in case it’s already set.

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# (C) 2015 Daniel Vrátil <dvratil@kde.org>
# License: GPL
#
# Requires: Python 2 or 3 and compatible GitPython
#
# https://github.com/gitpython-developers/GitPython
import git
import os
import sys

try:
    # Python 3
    import configparser
except ImportError:
    # Python 2
    import ConfigParser as configparser

repo = git.Repo(os.getcwd())

# Don't do anything if an identity is already configured in this
# repo's .git/config
config = repo.config_reader(config_level='repository')
try:
    # The value of user.email is non-empty, stop here
    if config.get_value('user', 'email'):
        sys.exit(0)
except (configparser.NoSectionError, configparser.NoOptionError):
    # Section or option does not exist, continue
    pass

# repo.remote() raises ValueError when the remote does not exist
try:
    origin = repo.remote('origin')
except ValueError:
    print('** Failed to detect remote origin, identity not updated! **')
    sys.exit(0)

# This is where you adjust the code to fit your needs
if 'kde.org' in origin.url or origin.url.startswith('kde:'):
    email = 'dvratil@kde.org'
elif 'fedoraproject.org' in origin.url:
    email = 'dvratil@fedoraproject.org'
elif 'kdab.com' in origin.url:
    email = 'daniel.vratil@kdab.com'
else:
    print('** Failed to detect identity! **')
    sys.exit(0)

# Write the option to .git/config
config = repo.config_writer()
config.set_value('user', 'email', email)
config.release()
print('** User identity for this repository set to \'%s\' **' % email)

To install it, just copy the script above to ~/.git-templates/hooks/post-checkout, make it executable and run

git config --global init.templatedir ~/.git-templates

All hooks from templatedir are automatically copied into .git/hooks when a new repository is created (git init or git clone) - this way the hook will get automatically deployed to every new repo.

And here’s proof that it works :-)

[dvratil@Odin ~/devel/KDE]
$ git clone kde:kasync
Cloning into 'kasync'...
remote: Counting objects: 450, done.
remote: Compressing objects: 100% (173/173), done.
remote: Total 450 (delta 285), reused 431 (delta 273)
Receiving objects: 100% (450/450), 116.44 KiB | 0 bytes/s, done.
Resolving deltas: 100% (285/285), done.
Checking connectivity... done.
** User identity for this repository set to 'dvratil@kde.org' **

[dvratil@Odin ~/packaging/fedpkg]
$ git clone ssh://dvratil@pkgs.fedoraproject.org/gammaray
Cloning into 'gammaray'...
remote: Counting objects: 287, done.
remote: Compressing objects: 100% (286/286), done.
remote: Total 287 (delta 113), reused 0 (delta 0)
Receiving objects: 100% (287/287), 57.24 KiB | 0 bytes/s, done.
Resolving deltas: 100% (113/113), done.
Checking connectivity... done.
** User identity for this repository set to 'dvratil@fedoraproject.org' **

Update 1: added utf-8 coding (thanks, Andrea)
Update 2: changed the shebang to the more common /usr/bin/python (/bin/python is rather Fedora-specific), added a “Requires” comment to the top of the script (thanks, Derek)

KDE PIM in Randa

The first release of KDE PIM based on KDE Frameworks 5 and Qt 5, which will be part of the KDE Applications 15.08 release, is getting closer and closer. Besides porting the entire suite from Qt 4 to Qt 5, the team also managed to fix many bugs, add a few new features and make some pretty big performance and memory optimizations. And we already have more improvements and optimizations stacked in the development branch, to be released in December!

The biggest performance improvement comes from switching to a faster implementation of the communication protocol that applications use to talk to the Akonadi server. We also extended the protocol, and we can now use it to send change notifications from the Akonadi server to clients much more efficiently than before. Additionally, we started cleaning up the API of our libraries and improving it in ways that allow for safer and more effective use. None of this was possible in the KDE 4 version of KDE PIM, where we promised API and ABI compatibility with previous releases. For now we have decided not to make any such promises for several more releases, so that we can tune the API and functionality even more.

During Akademy the KDE PIM team had a very long session where we analyzed where the project currently stands and created a vision of where we want KDE PIM to be in the future. We know which parts we want to focus on more now and which parts are less relevant to us. KDE PIM is a huge and rather complicated project, but unfortunately the development team is very small, so we have to make the hard and painful decision to drop some features and functionality in exchange for improved reliability and user experience in the core parts of the product.

In order to make these decisions the team is going to meet again in a couple of weeks in Randa, alongside many other KDE contributors and projects, and will spend a whole week there. During the sprint we want to take a close look at all the parts and evaluate what to do with them, as well as plan how to proceed towards Akonadi Next - the new version of Akonadi, which brings some major changes in architecture and overall design (see Christian’s talk from Akademy about Akonadi Next).

However, organizing such a sprint is not easy, so we would like to ask for your support by donating to the KDE Sprints Fundraiser. Although the attendees cover some of the costs themselves, there are still expenses like travel and accommodation that need to be covered. This year the Fundraiser has been extended so that the collected money will also be used to support additional KDE sprints throughout the year.

Qt containers and C++11 range-based loops

Much has been written on the interwebs about the performance of iterating over Qt containers with Q_FOREACH vs. std iterators vs. Java-style iterators. However, there is [very little][1] about how the new C++11 range-based loops work with Qt containers (or maybe I just suck at Googling, but well, here I am…). Today I found out that there is a little catch one has to be very careful about when using range-based loops with Qt containers.

Qt containers are all implicitly shared classes, which means that copying them is very cheap, since only a shallow copy occurs. When a shared copy is modified (or rather, when a non-const method is called on it), it calls detach() and performs the expensive deep copy. When there are no other copies of the object (i.e. when the reference count is 1), no copying happens when detach() is called. We say that such an instance is “not shared”.
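
A minimal illustration of this sharing behavior:

QStringList a{ "1", "2", "3" };
QStringList b = a;    // shallow copy: 'a' and 'b' share the same data
b.append("4");        // non-const call: 'b' detaches and performs a deep copy,
                      // 'a' is left untouched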

To get to the point - the code below performs equally fast in both cases:

QStringList list{ "1", "2", "3", .... };
Q_FOREACH (const QString &v, list) {
   ...
}

for (const QString &v : list) {
  ...
}

However in the following code range-based loop will perform much worse than Q_FOREACH.

class MyClass
{
public:
   ...
   QStringList getList() const { return mList; }
   ...
private:
   QStringList mList;
};

...

Q_FOREACH (const QString &v, myObject.getList()) {
   ...
}

for (const QString &v : myObject.getList()) {
  ...
}

The difference between the first example and this one is that the QStringList in this example is shared, i.e. the reference count of its data is higher than 1. In this particular case one reference is held by myObject and one by the copy returned from the getList() method. That means that calling any non-const method on the list will call detach() and perform a deep copy of the list. And that is exactly what happens in the range-based loop (but not in the Q_FOREACH loop), and that’s why the range-based loop is way slower than Q_FOREACH in this particular case. The example above could be even simpler, but this way it highlights the important fact that returning a copy from a method means the copy is shared, which has negative side effects when used with range-based loops. Note that if the method returned a const reference to QStringList, everything would be OK (because const …).

The reason for the speed difference is one peculiarity of Qt containers: they have a const overload of begin() which does not call detach(). Q_FOREACH internally makes a const copy of the list, so the const overload of begin() gets called instead of the non-const one.

The range-based loop, on the other hand, does not take any copy and simply uses the non-const version of begin(). As explained above, calling non-const methods on shared Qt containers performs a deep copy. The only exception is when the container itself is const, because then the const version of begin() is called and the code behaves the same as with Q_FOREACH.
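
To make the difference concrete, here is roughly what the two loops from the example above boil down to when written out by hand (a simplified sketch, not the actual Q_FOREACH macro expansion):

// Q_FOREACH iterates over a const copy of the container, so the const
// begin()/end() overloads are used and detach() is never called.
const QStringList copy = myObject.getList();
for (QStringList::const_iterator it = copy.constBegin(), end = copy.constEnd(); it != end; ++it) {
    const QString &v = *it;
    ...
}

// The range-based loop uses the non-const begin()/end(). The list returned
// by getList() is shared (mList still references the same data), so the
// first non-const call triggers detach() and a deep copy.
QStringList list = myObject.getList();
for (QStringList::iterator it = list.begin(), end = list.end(); it != end; ++it) {
    const QString &v = *it;
    ...
}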

Ironically, with stdlib containers (std::vector for example) the situation is exactly the opposite. std containers are not implicitly shared, so making a copy of one always performs a deep copy, but calling a non-const method does not trigger any copying. That means that Q_FOREACH, which always takes a copy of the container, would be doing a deep copy in such a case, while the range-based loop, which only calls begin() and end(), would not trigger any copying. Although std containers provide cbegin() and cend() methods to get const iterators, there’s no need for the range-based loop to use them, since begin() and end() will always perform equally well on std containers.
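
A quick sketch of that reversal with a std container:

std::vector<QString> vec{ "1", "2", "3" };

// Q_FOREACH takes a copy of 'vec' - for a std container that is always
// a deep copy.
Q_FOREACH (const QString &v, vec) {
   ...
}

// The range-based loop only calls begin() and end() - no copy at all.
for (const QString &v : vec) {
   ...
}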

To prove my point, here is the benchmark code I used. It’s an extended version of an [older benchmark of Qt containers][2].

#include <QStringList>
#include <QObject>
#include <QMetaType>
#include <qtest.h>
#include <cassert>

enum IterationType
{
    Foreach,
    RangeLoop,
    Std,
    StdConst
};

Q_DECLARE_METATYPE(IterationType)

class IterationBenchmark : public QObject
{
    Q_OBJECT
private Q_SLOTS:
    void stringlist_data()
    {
        QTest::addColumn<QStringList>("list");
        QTest::addColumn<IterationType>("iterationType");
        QTest::addColumn<bool>("shared");

        const int size = 10e6;
        QStringList list;
        list.reserve(size);

        for (int i = 0; i < size; ++i) {
            list << QString::number(i);
        }

        QTest::newRow("Foreach") << list << Foreach << false;
        QTest::newRow("Foreach (shared)") << list << Foreach << true;
        QTest::newRow("Range loop") << list << RangeLoop << false;
        QTest::newRow("Range loop (shared)") << list << RangeLoop << true;
        QTest::newRow("Std") << list << Std << false;
        QTest::newRow("Std (shared)") << list << Std << true;
        QTest::newRow("Std Const") << list << StdConst << false;
        QTest::newRow("Std Const (shared)") << list << StdConst << true;
    }

    void stringlist()
    {
        QFETCH(QStringList, list);
        QFETCH(IterationType, iterationType);
        QFETCH(bool, shared);

        if (!shared) {
            // Force detach
            list.push_back(QString());
            list.pop_back();
        }

        int dummy = 0;
        switch (iterationType) {
        case Foreach:
            QBENCHMARK {
                Q_FOREACH(const QString &v, list) {
                    dummy += v.size();
                }
            }
            break;
        case RangeLoop:
            QBENCHMARK {
                for (const QString &v : list) {
                    dummy += v.size();
                }
            }
            break;
        case Std:
            QBENCHMARK {
                QStringList::iterator iter = list.begin();
                const QStringList::iterator end = list.end();
                for (; iter != end; ++iter) {
                    dummy += (*iter).size();
                }
            }
            break;
        case StdConst:
            QBENCHMARK {
                QStringList::const_iterator iter = list.cbegin();
                const QStringList::const_iterator end = list.cend();
                for (; iter != end; ++iter) {
                    dummy += (*iter).size();
                }
            }
            break;
        }
        assert(dummy);
    }
};

QTEST_MAIN(IterationBenchmark)
#include "iterationbenchmark.moc"

$ moc iterationbenchmark.cpp > iterationbenchmark.moc
$ g++ iterationbenchmark.cpp `pkg-config --cflags --libs Qt5Core` `pkg-config --cflags --libs Qt5Test` -std=c++11 -fPIC -O3 -o iterationbenchmark
$ ./iterationbenchmark
********* Start testing of IterationBenchmark *********
Config: Using QtTest library 5.4.2, Qt 5.4.2 (x86_64-little_endian-lp64 shared (dynamic) release build; by GCC 5.1.1 20150422 (Red Hat 5.1.1-1))
PASS   : IterationBenchmark::initTestCase()
PASS   : IterationBenchmark::stringlist(Foreach)
RESULT : IterationBenchmark::stringlist():"Foreach":
     48 msecs per iteration (total: 96, iterations: 2)
PASS   : IterationBenchmark::stringlist(Foreach (shared))
RESULT : IterationBenchmark::stringlist():"Foreach (shared)":
     48 msecs per iteration (total: 96, iterations: 2)
PASS   : IterationBenchmark::stringlist(Range loop)
RESULT : IterationBenchmark::stringlist():"Range loop":
     53.5 msecs per iteration (total: 107, iterations: 2)
PASS   : IterationBenchmark::stringlist(Range loop (shared))
RESULT : IterationBenchmark::stringlist():"Range loop (shared)":
     177 msecs per iteration (total: 177, iterations: 1)
PASS   : IterationBenchmark::stringlist(Std)
RESULT : IterationBenchmark::stringlist():"Std":
     51 msecs per iteration (total: 51, iterations: 1)
PASS   : IterationBenchmark::stringlist(Std (shared))
RESULT : IterationBenchmark::stringlist():"Std (shared)":
     179 msecs per iteration (total: 179, iterations: 1)
PASS   : IterationBenchmark::stringlist(Std Const)
RESULT : IterationBenchmark::stringlist():"Std Const":
     53 msecs per iteration (total: 53, iterations: 1)
PASS   : IterationBenchmark::stringlist(Std Const (shared))
RESULT : IterationBenchmark::stringlist():"Std Const (shared)":
     52 msecs per iteration (total: 52, iterations: 1)
PASS   : IterationBenchmark::cleanupTestCase()
Totals: 10 passed, 0 failed, 0 skipped, 0 blacklisted
********* Finished testing of IterationBenchmark *********

Both Q_FOREACH cases are equally fast because, as explained above, Q_FOREACH always uses the const iterators and no deep copying happens. The range-based loop with a non-shared list performs equally well, because even though it calls detach(), there are no other copies to detach from, so no deep copy occurs. However, the range-based loop with a shared list is over 3 times slower, because detach() here actually performs a deep copy. The same happens with the for loop using non-const std iterators, which is basically just an expanded version of the range-based loop (range-based loops are just syntactic sugar for for loops with non-const iterators). For loops with const std iterators perform just as well as Q_FOREACH, because that is what Q_FOREACH does internally.

To sum this up, when using range-based loops with Qt containers:

Make sure the container is const …

// shared, but const, forces call to QStringList::begin() const,
// which does not call detach()
const QStringList list = objectOfClassA.getList();
...
for (const QString &v : list) {
   ...
}

… or make sure the container is not shared.

// shared and non-const
QStringList list = objectOfClassA.getList();
// call to non-const method causes detach() and deep copy,
// 'list' is now non-shared
list.append(QLatin1String("some more data"));
...
// calls non-const begin(), but detach() of non-shared
// containers does not perform deep copy
for (const QString &v : list) {
   ...
}

Note that this just moves the slow deep-copying outside of the loop, but the deep copy still occurs. The point is that you need to be careful not to create a new copy of the ’list’ after it has been detached on line 5, but before passing it to the loop on line 9. Failing to do so would make the list shared again and the loop would trigger yet another deep copy.

I was very excited when range-based loops were added in C++0x, and I’ve been using them in some new C++11 code I have written since then. But in Qt-based code I’ll be reverting to the much safer Q_FOREACH. While it is possible to make range-based loops as fast as Q_FOREACH, as shown above, one has to be really careful and constantly think about whether the container is non-shared, or at least const, and use Q_FOREACH if not. For that reason using Q_FOREACH everywhere is much safer for now.

I know this is not a ground-breaking revelation and many of you probably already know about it, but I hope it will still be useful for people who are not aware of the implementation details of Q_FOREACH and range-based loops, or who, just like me, did not realize how important the difference between a shared and a non-shared container instance is.

[1] http://blog.qt.io/blog/2011/05/26/cpp0x-in-qt/
[2] https://blog.qt.io/blog/2009/01/23/iterating-efficiently/

Plasma 5.3 for Fedora

Plasma 5.3, a new feature release of the KDE workspace, was released on Tuesday and you can get it on Fedora now.

Plasma 5.3 brings new features, improvements and almost 400 bug fixes for basically all of its components ranging from power management to various applets.

For users of Fedora 20 and Fedora 21, the traditional COPR repository has been updated. If you already use it, just do yum update. If you want to switch to Plasma 5 from KDE 4, just follow the instructions on the main page.

Fedora 22, which is currently in beta, already has the 5.3 update in updates-testing, and we are continuously polishing it. For all KDE users updating to Fedora 22 when it’s released in May, it will also mean a final bye bye to KDE 4 and the switch to Plasma 5. The Fedora 22 repositories also feature the latest release of KDE Telepathy, which finally brings IM integration to Plasma 5.

If you want to try out Plasma 5.3 on Fedora but don’t want to install it on your computer yet, there is, as always, a live ISO available for you, based on the Fedora 22 beta. And this time I did include a working installer (for real!), so if you change your mind just click “Install” ;-)

We welcome any feedback and testing from users: feel free to report bugs to bugzilla.redhat.com, talk to us in the #fedora-kde IRC channel on Freenode or join our mailing list.

What happened in Toulouse?

… a KDE PIM sprint happened in Toulouse! And what happened during that sprint? Well, read this wholly incomplete report!

Let’s start with the most important part: we decided what to do next! At the last PIM sprint in Munich in November, where Christian and Aaron introduced their new concept for the next version of Akonadi, we decided to refocus all our efforts on working on that, which meant switching KDE PIM into maintenance mode for a very long time and then coming back with a big boom. In Toulouse we discussed this plan again and decided that it would be much better for the project, and for the users as well, if we continued active development of KDE PIM instead of focusing exclusively on the “next big thing”, and took a one-step-at-a-time approach. So what does that mean?

We will aim to release the KF5-based KDE PIM in August as part of KDE Applications 15.08. After that we will keep fixing bugs, improving the current code and adding new features as usual, while at the same time preparing the code base for the migration to Akonadi 2 (currently we call it Akonadi Next, but I think eventually it will become “2”). I will probably write a separate technical blog post on what those “preparations” mean. In the meantime Christian will be working on Akonadi 2 from the other side, and eventually both projects should meet “in the middle”, where we simply swap the Akonadi 1 backend for the Akonadi 2 backend and ship the next version. So instead of one “big boom” release where we would switch to Qt 5 and Akonadi 2 at the same time, we do it step by step, causing as little disruption to the user experience as possible and allowing for active development of the project. In other words, a WIN-WIN-WIN situation for users, devs and the KDE PIM project.

I’m currently running the entire KDE PIM from git master (so KF5-based) and I must say that everything works very well so far. There are some regressions against the KDE 4 version, but nothing we couldn’t handle. If you like to use bleeding-edge versions of PIM, feel free to update and help us find (and fix) regressions (just be careful not to bleed to death ;-)).

Another discussion we had is closely related to the 15.08 release. KDE PIM is a huge code base, but the active development team is very small. Even with the incredible Laurent Montel on our side it’s still not enough to keep actively maintaining all of KDE PIM (yes, it’s THAT huge ;-)). So we had to make a tough decision: some parts of KDE PIM have to die, at least until a new maintainer steps up, and some will move to extragear and live their own lives there. What we release as part of KDE Applications 15.08 I call KDE PIM Core, and it consists of the core PIM applications: KMail, KOrganizer, KAddressbook, Kleopatra, KNotes and Kontact. If your favorite PIM app is not on the list, you can volunteer as a maintainer and help us make it part of the core again. We believe that in this case quality is more important than quantity, and this is the trade-off that will allow us to make the next release of PIM the best one to date ;-).

Also related to the release is a reorganization of our repos: we have some more splitting and indeed some merging ahead of us, but we’ll post an announcement once everything is discussed and agreed upon.

Thanks to Christian’s hard work, most of the changes that Kolab made in their fork of KDE PIM were upstreamed during the sprint. Among other things, this includes some very nice optimizations and performance improvements for Akonadi, so the next release will indeed be a really shiny one and there’s a lot to look forward to.

Vishesh brought up the topic of our bug count. We all realize the sad state of our PIM bugs, and we talked a bit about reorganizing and cleaning up our bug tracker. The clean-up part has already begun: Laurent and Vishesh mass-closed over 850 old KMail 1 bugs during the sprint to make it at least a little easier to get through the rest. Regarding the reorganization, I still have to send a mail about it, but the short summary is that we want to remove Bugzilla components and close bugs for the apps we decided to discontinue, and maybe do a few more clean-up rounds for the existing bugs.

I’m sure I’ve forgotten something because much more happened during the sprint but let’s just say I’m leaving some topics for others to blog about ;-).

Huge thank you to Franck Arrecot and Kevin Ottens for taking care of us and securing the venue for the sprint! All in all it was a great sprint and I’m happy to say that we are back on track to dominate the world of PIM.

The only disappointment of the entire sprint was my failure to acquire a French beer. I managed to try Belgian, Spanish, Mexican and Argentinian beer but they did not serve any French beer anywhere. Either there’s no such thing or it must be really bad…:-)

KDE PIM Sprint in Toulouse

We had a great dinner with the local KDE people on Saturday. Also proof that Laurent is a real person :-D

Fedora RPM: Automatic "Provides" for CMake projects packages

If you have ever done any RPM packaging (not just on Fedora), you have probably noticed that some SPEC files don’t use package names in BuildRequires fields but instead refer to pkg-config module names, like this:

Name:           qt5-qtbase
Version:        5.4.1
Release:        1%{?dist}
...
BuildRequires:  pkgconfig(dbus-1)
BuildRequires:  pkgconfig(fontconfig)
BuildRequires:  pkgconfig(gl)
BuildRequires:  pkgconfig(glib-2.0)
BuildRequires:  pkgconfig(gtk+-2.0)
...

This is achieved simply by the respective packages having these aliases among their Provides (for example the dbus-devel package has Provides: pkgconfig(dbus-1)). The Provides are extracted automatically by an RPM script when the package is built, which gave me an idea… what if we could do the same for CMake modules?

And so I’ve written a simple script for RPM which extracts the CMake package name and version from the package config files installed to /usr/lib/cmake. Simply put, it means that kf5-kcoreaddons-devel will have

Provides:       cmake(KF5CoreAddons) = 5.8.0

and qt5-qtdeclarative-devel will have

Provides:       cmake(Qt5Qml) = 5.4.1
Provides:       cmake(Qt5Quick) = 5.4.1
Provides:       cmake(Qt5QuickTest) = 5.4.1
Provides:       cmake(Qt5QuickWidgets) = 5.4.1

…and all this happens automatically :-)

So, if you are packaging a CMake-based project for Fedora, you don’t have to wonder which package provides the needed dependencies; you can just use the name from find_package() in BuildRequires and be done with it.

Name:           plasma-workspace
Version:        5.2.1
Release:        6%{?dist}
Summary:        Plasma workspace, applications and applets
...
BuildRequires:  cmake(Qt5Widgets) cmake(Qt5Quick) cmake(Qt5QuickWidgets) cmake(Qt5Concurrent) cmake(Qt5Test) cmake(Qt5Script) cmake(Qt5Network) cmake(Qt5WebKitWidgets)
BuildRequires:  cmake(Phonon4Qt5)
BuildRequires:  cmake(KF5Plasma) cmake(KF5DocTools) cmake(KF5Runner) ...
...

Another advantage is that this makes it easier to automate dependency extraction from CMakeLists, because we no longer have to bother with mapping CMake names to package names (for reference, I wrote a script to mass-update the dependencies of all our KDE Frameworks 5 packages in Fedora).

We have pushed the script into Fedora’s cmake package (currently in rawhide and soon in F22, but eventually I’d like to have it in F20 and F21 too), so all packages rebuilt after this will get the automatic Provides.

In the long term we would like to try to get the script into upstream RPM so that other distributions can use it too. For now the script is available in the cmake package distgit.

Plasma 5.2 arrives to Fedora

It’s here! Plasma 5.2 was released just yesterday, and you don’t have to wait a single minute longer to update your beloved Fedora boxes :-)

I won’t go into detail here about all the new awesome things that are waiting for you in Plasma 5.2, but I totally recommend that you go and read Plasma 5.2: The Quintessential Breakdown by Ken Vermette while you are waiting for your package manager to wade through the update. You can also read the official Plasma 5.2 release announcement, it has fancy animated screenshots ;).

And there’s other news related to Plasma 5.2 and Fedora: Fedora rawhide has been updated to Plasma 5.2 too. This means that KDE SIG will ship Plasma 5 in Fedora 22! Of course we will still maintain the Copr repository for our Fedora 20 and Fedora 21 users.

So, how to get Plasma 5.2 on Fedora?

On rawhide, just do dnf update. On Fedora 20 and Fedora 21, if you are already running Plasma 5.1.2 from dvratil/plasma-5 Copr, then all you need to do is to run dnf update. If you are running Plasma 5.1.95 (aka Plasma 5.2 beta) from dvratil/plasma-5-beta Copr, then it’s time to switch back to stable:

Plasma 5.2 Beta available for Fedora testers

On Tuesday KDE released the first beta of the upcoming Plasma 5.2. Plasma 5.2 adds many new features and improvements, and we would welcome testers to help find and fix bugs before the final release.

Fedora 21 with Plasma 5.2 beta

Fedora users are welcome to try out the Plasma 5.2 beta, either by running the Fedora Plasma 5.2 beta live ISO or by installing packages from the plasma-5-beta Copr (see the Installation Instructions on the Copr page).

Check out the release announcement to see what new features and improvements are waiting for you in Plasma 5.2. The final release will be in two weeks, on January 27; after that we will update the plasma-5 Copr to bring the update to all our users :-)

KDE Frameworks 5.3 and KDE Plasma 5.1 for Fedora are ready!

The Fedora KDE SIG is happy to announce that the latest version of KDE Frameworks 5 has just reached the stable Fedora repositories, and a brand new version of KDE Plasma 5 is now available in our Plasma 5 COPR.

KDE Frameworks 5.3.0

The third release of KDE Frameworks brings mostly bugfixes. KDE Frameworks 5 is a collection of libraries and software frameworks created by the KDE community. It’s an effort to rework the KDE 4 libraries into a set of individual, independent, cross-platform modules that are readily available to all Qt-based applications.

KDE Frameworks 5 are available in official Fedora repositories for Fedora 20 and the upcoming Fedora 21.

KDE Plasma 5.1

Fedora 20 running KDE Plasma 5

KDE Plasma 5 is the next generation of the KDE workspace, based on Qt 5 and KDE Frameworks. Its latest version brings many bug fixes and performance improvements, but also many new features: a dark color theme for the Breeze style, more widgets, an improved task switcher, reworked tray icons and much more. You can read about all the new things in Plasma 5.1 in the official release announcement.

To install KDE Plasma 5 on Fedora, just add the Plasma 5 COPR repository to yum, and simply run yum install plasma-5.

Live ISO

Do you want to give Plasma 5 a try, but don’t want to install it yet? Easy! We have prepared a live ISO image based on Fedora 20 for you! You can get it from here: http://pub.dvratil.cz/plasma/iso/5.1/ (use Torrent for faster download).

Do you need help? Come talk to us: either in the #fedora-kde IRC channel on Freenode, or on our mailing list kde@lists.fedoraproject.org.

Fedora 20 running KDE Plasma 5