I considered making the title of this post “STL algorithms considered harmful” to test my click-bait creation skills, but I decided it is better to aim for the audience interested in the topic of the post instead of attracting readers who just want to argue about my outrageous claims.
This way, I can assume that you care about algorithms, algorithm complexity, and writing the best software that you can.
Algorithms
One piece of advice commonly heard in the modern C++ community is to use the algorithms provided by the standard library if you want to make your program safer, faster, more expressive, etc. This is advice I also try to popularize – in my book, talks, workshops… everywhere I have an audience to do so.
While it is true that whenever we want to write a for loop to solve a problem, we should first consider whether that problem can be solved with the standard (or Boost) algorithms, we should not do so blindly.
We still need to know how those algorithms are implemented, what their requirements and guarantees are, and what their space and time complexity is.
Usually, if we have a problem that perfectly matches the requirements of an STL algorithm so that we can apply it directly, that algorithm will be the most effective solution.
Problems can arise when we have to prepare the data in some way before applying the algorithm.
Set intersection
Imagine we’re writing a tool for C++ developers that advises replacing default captures ([=] and [&]) in lambdas with an explicit list of captured variables.
std::partition(begin(elements), end(elements),
               [=] (auto element) {
               ^~~ - being implicit is uncool, replace with [threshold]
                   return element > threshold;
               });
While parsing the file, we would need to keep a collection of the variables from the current and surrounding scopes, and when we encounter a lambda with a default capture, we need to see which variables it uses.
This gives us two sets – one set containing the variables from the surrounding scopes, and one set containing the variables used in the lambda body.
The capture list we should propose as a replacement should be the intersection of those two sets (a lambda can use global variables, which do not need to be captured, and not all variables from the surrounding scopes are used in the lambda).
And, if we need intersection, we can use the std::set_intersection algorithm.
This algorithm is quite beautiful in its simplicity. It takes two sorted collections, and goes through them in parallel from start to end:
If the current item in the first collection is the same as the current item in the second collection, it gets added to the result, and the algorithm moves on to the next item in both collections;
If the current item in the first collection is less than the current item in the second collection, the algorithm just skips the current item in the first collection;
If the current item in the first collection is greater than the current item in the second collection, the algorithm just skips the current item in the second collection.
With each iteration at least one element (from the first or the second input collection) is skipped – therefore, the algorithm complexity will be linear – O(m + n), where m is the number of elements in the first collection, and n the number of elements in the second collection.
Simple and efficient. As long as the input collections are sorted.
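As a sketch of this happy path (the function name and the use of std::vector<std::string> are my choices here, not from any particular codebase), intersecting two already-sorted lists of variable names is a single call:

```cpp
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Intersect two collections that are *already sorted* -- O(m + n).
// std::set_intersection requires both inputs to be sorted and writes
// the common elements to the output iterator, also in sorted order.
std::vector<std::string> intersect_sorted(const std::vector<std::string>& first,
                                          const std::vector<std::string>& second)
{
    std::vector<std::string> result;
    std::set_intersection(first.begin(), first.end(),
                          second.begin(), second.end(),
                          std::back_inserter(result));
    return result;
}
```

Note that std::back_inserter is what lets the algorithm grow the result vector; the algorithm itself only sees an output iterator.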
Sorting
The problem is what to do when the collections are not sorted in advance.
In the previous example, it would be prudent to store the variables from the surrounding scopes in a stack-like structure so that the parser can efficiently add new ones when it enters a new scope, and remove the variables from the current scope when it leaves the current scope.
This means that the variables will not be sorted by name, so we cannot use std::set_intersection on them directly. Similarly, tracking the variables in the lambda body is very likely not going to keep them sorted.
Since std::set_intersection works only on sorted collections, a common pattern I’ve seen in many projects is to sort the collections first, and then call the std::set_intersection algorithm.
Setting aside the fact that sorting the variable stack in our example would defeat the purpose of the stack we defined, the intersection algorithm for unordered collections would look like this:
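A minimal sketch of that sort-then-intersect pattern (the function name is mine; the inputs are taken by value precisely so that the copies can be sorted without touching the originals):

```cpp
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Intersection of two *unsorted* collections: sort copies of both
// inputs, then apply std::set_intersection.
// Total cost: O(n log n + m log m + n + m).
std::vector<std::string> intersect_unsorted(std::vector<std::string> first,
                                            std::vector<std::string> second)
{
    std::sort(first.begin(), first.end());
    std::sort(second.begin(), second.end());

    std::vector<std::string> result;
    std::set_intersection(first.begin(), first.end(),
                          second.begin(), second.end(),
                          std::back_inserter(result));
    return result;
}
```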
Sorting takes quasilinear time, which makes the total complexity of this approach O(n log n + m log m + n + m).
Sort less
Can we skip sorting?
If both collections are unordered, we would need to traverse the second collection for each element of the first one, to check whether it should be included in the result. Although this approach is not that uncommon in real-world projects, it is even worse than the previous one – its complexity is O(n * m).
Instead of sorting everything, or sorting nothing, we can be Zen and take the Middle Path – sort only one collection.
If only one collection is sorted, we have to iterate through the unsorted one value by value and check whether each value exists in the sorted collection, which we can do with binary search.
The time complexity for this is O(n log n) for sorting the first collection, and O(m log n) for the iteration and lookups. In total, we have O((n + m) log n) complexity.
If we decided to sort the second collection instead of the first, the complexity would be O((n + m) log m).
In order to be as efficient as possible, we should always sort the collection that has fewer elements, making the final complexity of our algorithm O((m + n) log(min(m, n))).
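A hedged sketch of this variant (names are mine): sort whichever collection is smaller, then binary-search it for every element of the larger one. One consequence worth noting is that the result comes out in the larger collection's order, not sorted:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sort only the smaller collection, then binary-search it for each
// element of the larger one -- O((m + n) log(min(m, n))).
std::vector<std::string> intersect_sort_smaller(std::vector<std::string> first,
                                                std::vector<std::string> second)
{
    // Make `second` the smaller of the two collections.
    if (second.size() > first.size()) {
        std::swap(first, second);
    }

    std::sort(second.begin(), second.end());

    std::vector<std::string> result;
    for (const auto& value : first) {
        if (std::binary_search(second.begin(), second.end(), value)) {
            result.push_back(value);
        }
    }
    return result;
}
```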
In our example of lambda captures, the collection that we’d want to sort is usually going to be the collection of variables used in the lambda as it is most likely going to be smaller than a list of all variables from all surrounding scopes.
Hashing
The last option is to build a std::unordered_set (a hash-based set implementation) from the smaller collection instead of sorting it. This makes lookups O(1) on average, but constructing the std::unordered_set takes time – anywhere between O(n) and O(n²) – which can be a problem.
The hashing approach would be the absolute winner for the case where we want to calculate intersections of several sets with a single predefined small set. That is, we have a set A, sets B₁, B₂… and we want to calculate A ∩ B₁, A ∩ B₂…
In this case, we could ignore the complexity of the std::unordered_set construction, and the complexity of calculating each intersection would be linear – O(m), where m is the number of elements in the second collection.
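A sketch of this variant (again, my own names): build the hash set once from the small set A, then reuse it for every intersection with a B collection:

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Build the hash set once from the small collection A...
std::unordered_set<std::string> make_lookup(const std::vector<std::string>& small)
{
    return {small.begin(), small.end()};
}

// ...then each intersection of A with some Bi is a single pass over Bi,
// with O(1) average-time lookups -- O(m) on average overall.
std::vector<std::string> intersect_hashed(const std::unordered_set<std::string>& lookup,
                                          const std::vector<std::string>& other)
{
    std::vector<std::string> result;
    for (const auto& value : other) {
        if (lookup.count(value) > 0) {
            result.push_back(value);
        }
    }
    return result;
}
```

The construction cost of the lookup set is paid once, which is why this approach shines when the same small set is intersected with many others.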
Benchmark
While it is always useful to check the algorithm complexity, it is prudent also to benchmark different approaches in cases like these. Especially when choosing between the last two approaches where we pit binary search against hash-based sets.
In my basic benchmarks, the first option where both collections have to be sorted was always the slowest.
Sorting the shorter collection was slightly better than using std::unordered_set, but not much.
Both the second and the third approach were slightly faster than the first when both collections had the same number of elements, and were significantly faster (around 6x) when one collection had 1,000 times fewer elements than the other.
I have no idea which category this post belongs to – C++ or KDE development – because it is about one component of Plasma Blade. But the post is mostly about the approach I took to writing a distributed system while implementing it, and about a C++ framework that will come out of that work.
Blade is my long-term R&D project to implement a multi-process KRunner alternative.
Ranges
One of the most powerful parts of the C++ standard library is the <algorithm> header. It provides quite a few useful algorithms for processing collections. The main problem is that the algorithms take iterator pairs denoting the start and end of the collection to be processed, instead of taking the collection directly.
While this has some useful applications, most of the time it just makes algorithms more difficult to use and difficult to compose.
There have been multiple attempts to fix this by introducing an abstraction over sequence collections called ranges. Several third-party libraries implement ranges, and one of them (Eric Niebler’s range-v3 library) is on its way to becoming part of the C++ standard library.
There are numerous articles written about ranges available online (and I even dedicated a whole chapter to them in my book), so I’m not going to cover them in more detail here. I’m just going to say that ranges allow us to easily create sequences of transformations that should be applied to a collection.
Imagine the following scenario – we have the output of the ping command, and we want to convert the whole output to uppercase, then extract the number of milliseconds each response took, and then filter out all the responses that took longer than some predefined time limit.
64 bytes from localhost (::1): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from localhost (::1): icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from localhost (::1): icmp_seq=3 ttl=64 time=0.041 ms
In order to demonstrate the range transformation chaining, we are going to make this a bit more complicated than it needs to be:
we will convert each line to uppercase;
find the last = character in each line;
chop off everything before the = sign (including the sign);
keep only results that are less than 0.045.
Written in the pipe notation supported by the range-v3 library, it would look something like this:
In one of my previous posts, I wrote that using the for_each algorithm instead of the range-based for loop provides an abstraction high enough to be used for processing asynchronous data streams, instead of working only with regular data collections.
Some people argued that for_each was not meant to be a customization point in C++, which might be true, but it works very well as one. Still, it suffers from the same problems as other STL algorithms, and if we want to reach new heights of abstraction, it truly is not the best customization point out there. So we might find something else to customize instead.
If you look at the previous code snippet, you’ll see several transformations performed on ping_output. But it does not say what ping_output is. It can be a vector of strings, it can be an input stream tokenized on newlines, it can be a QFuture, etc. It can be anything that transform and filter can exist for.
Reactive streams
We can create an abstraction similar to ranges, but instead of trying to create an abstraction over collections, we will create abstractions over series of events.
Imagine if ping_output was a range that reads the data from an input stream like std::cin. Whenever we request a result from the results range, it would block the execution of our whole program until a whole line is read from the input stream.
This is a huge problem. The ping command sends our program one line each second, which means our program would be suspended for most of its lifetime, waiting for those seconds to pass.
Instead, it would be better if it could continue working on other tasks until the ping command sends it new data to process.
This is where event processing comes into play. Instead of requesting a value from the results, and then blocking the execution until that value appears, we want to react to new values (events) that are received from the ping command, and process them when they arrive – without ever blocking our program.
This is what reactive streams are meant to model – an asynchronous stream of values (events). If we continue with the ping example, the transformations it defines should also be able to work on reactive streams. When a new value arrives, it goes through the first transformation which converts it to uppercase, and sends it to the second transformation. The second transformation processes the value and sends it to the third. And so on.
The only thing that changed is the source of the values. Instead of using a range (a vector, or another sequence collection), this uses a reactive stream which emits a line every time the ping command outputs it.
This is the power of abstraction – using the code we have written for one thing, for something completely different. In this case, using the code that was written to process a collection synchronously, to process asynchronous event streams.
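To make the "react instead of request" idea concrete, here is a deliberately tiny sketch (not the actual Voy or range-v3 API – all names here are hypothetical): a source that pushes each line to a subscribed callback, with the same kind of per-line transformations applied as each event arrives:

```cpp
#include <functional>
#include <string>
#include <vector>

// A source that *pushes* each line to a subscribed callback instead of
// being *pulled* from; nothing blocks while waiting for the next line.
struct line_source {
    std::function<void(const std::string&)> on_line;

    void arrive(const std::string& line)
    {
        if (on_line) on_line(line);
    }
};

// Wire up the same per-event transformations as in the range pipeline:
// extract the milliseconds after the last '=' sign and append responses
// faster than `limit` to `results` as they arrive.
void subscribe_fast_responses(line_source& source,
                              std::vector<double>& results,
                              double limit)
{
    source.on_line = [&results, limit](const std::string& line) {
        const double ms = std::stod(line.substr(line.rfind('=') + 1));
        if (ms < limit) {
            results.push_back(ms);
        }
    };
}
```

In a real event loop, arrive() would be called by whatever delivers the ping output; the processing code never waits for it.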
Distributed stream processing
While it is nice to be able to write asynchronous software systems in the same way we write synchronous systems, that is not enough for this post.
One of the great things about defining programs as a series of pure transformations performed on the data is that those transformations are independent of one another – they can be isolated from each other and even moved into different processes (even processes on separate computers in a network).
For this case, I’ve implemented a special type of transformation in the Voy library called a bridge, which transparently sends the data from one process to the other.
Let’s split the data processing in the ping example into three parts (similar to what Plasma Blade will need) – the first and the last part execute in the main program, while the middle part executes in the backend.
This data pipeline is defined once for both the main program and the backend. For the main program, the middle part of the pipeline will be disabled (no code generated for it), while for the backend only the middle part will be compiled.
A note on the implementation
At the last Akademy, I talked to Tomaz Canabrava about making KDE software more appealing to students to join our development efforts.
Now, this project is probably going to be overly complex for an average student to join in, but I’m trying to make it as readable and as clean as possible for more adventurous students. It uses all the new and cool features of C++17, along with void_t and the detection idiom to simulate concepts, etc.
If you want a chance to work on a real-world C++17 project, just send me an e-mail, and I’ll try to get you up to speed. The only prerequisite is that you do some investigating and find out which repository the code resides in. ;)
As you might have seen before, there were some issues with CryFS because of which I had to switch Vault to use EncFS by default for Plasma 5.12, since it is a long-term support release.
Plasma Vault now clearly communicates the error messages reported by CryFS and is able to upgrade the CryFS filesystem when you upgrade to a new CryFS version.
The new version of Plasma Vault is ready for testing (it is in master, Plasma 5.13 beta will be released in a few weeks).
Published in the Prog C++ section on 15 April 2018.
Tomorrow, I’ll be heading to Piter (St. Petersburg) for acclimatization training before the C++ Russia 2018 conference starts.
While most talks at C++ Russia are in Russian, this year will have more than a few amazing talks in English – from famous names like Andrei Alexandrescu, Jon Kalb and Herb Sutter, to a few less famous ones including yours truly. :)
My talk will be a bit out of my comfort zone – it will not be about monads nor any other aspect of functional programming. I’ll be talking about modern meta-programming for C++17 and 20 – the void_t meta-function and the detection idiom.
If you happen to be close to Piter, join us (the tickets are available at cppconf.ru) or if you just want to grab a beer, send me an e-mail.
There have been a few smaller improvements to the Plasma Vault pushed to master in the past few days, scheduled for release in Plasma 5.13.
KDE Connect
Imagine you left your computer unattended and unlocked, with a few vaults open, while a few spies from a competing company are roaming around.
Thanks to a new feature of KDE Connect, it is possible to define commands that you can execute remotely from your phone.
This amazing feature can be used to lock your screen, but also to close the open vaults thanks to the new D-Bus commands of the Vault system.
The commands are defined in the KDE Connect settings module. Select the phone you want to run the commands from, and configure the ‘Run commands’ plugin. There you can add any commands you wish, and they will automatically appear in the KDE Connect application on your phone.
For locking the session, you can create a new command that runs your screen-locking utility, and for the vaults, commands that call the closeAllVaults or forceCloseAllVaults D-Bus methods.
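For illustration only – the exact D-Bus service and object path vary between setups, and the invocations below are my guesses rather than documented commands, so verify them with the qdbus browser on your own system:

```shell
# Lock the current session via systemd-logind:
loginctl lock-session

# Ask the Plasma Vault kded module to close the open vaults.
# Service name and path are illustrative -- check with `qdbus` which
# object actually exposes closeAllVaults/forceCloseAllVaults:
qdbus org.kde.kded5 /modules/plasmavault closeAllVaults
qdbus org.kde.kded5 /modules/plasmavault forceCloseAllVaults
```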
The difference between these two commands is that closeAllVaults will close only the vaults that are not used by a running application, while forceCloseAllVaults will try to kill all those applications and close all vaults.
Also, error messages in the password dialogue got more prominent:
Next steps
The next task will be to rewrite the CryFS backend to use the features added in the 0.9.9 version. This will provide much better error handling and will finally allow CryFS to be the default encryption mechanism for Plasma Vault.