Most sane programmers hate writing tests. Because of that, we have a lot of testing frameworks around that make the task more streamlined, but it is still boring work.
A lot of sane programmers hate writing anything that needs (dynamic) SQL. For that reason, a lot of DB frameworks came into existence. One of the fancier ones for C++ has to be sqlpp11, which provides a type-safe way of writing SQL using regular C++.
But, alas, this is not a post about sqlpp11, since it cannot be used in KDE Frameworks because of its heavy use of C++11 features.
What is this all about then?
This post is more about how to write tests for code that uses Qt's way of working with databases - the JDBC-inspired QtSql module.
I’m currently working on a new part of the KActivities framework which will allow applications to retrieve usage statistics in order to provide you with ‘most used documents’ and improved ‘recent documents’ lists, and a few other things as well.
The library needs to provide a nice querying mechanism, different sorting and filtering options, but without exposing the actual database schema and giving the clients all the “destructive” :) power of SQL.
One of the approaches is to provide a predefined “dummy” database to the tests, and write the tests that check for the corner cases that you can predict, along with a few expected ones.
This works well, but might not be sufficient. It would be nice if we could generate a larger corpus of data to test against.
Generating the data is not that difficult - just a few for-loops and a lot of rand() calls; you only need to make sure you are generating valid data.
Now comes the tricky part, and that is reimplementing the logic that used the database with standard collections like std::vector (or QVector) and others.
When that is done, it is trivial to run every possible query combination that you have against both the database and the in-memory collections, and compare the results.
As with regular databases, you first need to define the schema you want to work with. We can define the records as simple structs (do not judge the usefulness of the example :) ):
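The original structs are not shown here, so as a sketch, a couple of hypothetical records for a small library database might look like this (the field names are illustrative, not a real schema):

```cpp
#include <string>

// Hypothetical records, one struct per table row;
// the columns are illustrative, not from a real schema.
struct Book {
    std::string title;
    std::string author;
    int         year;
};

struct Person {
    std::string name;
    int         yearOfBirth;
};
```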
Now that we have the record, we should probably define a primary key for it. An easy way would be to add a method called PRIMARY_KEY to all the records. It can either return a single column, or a tuple of more than one of those. Something like this:
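A sketch of both variants, using hypothetical Book and Loan records (the names are mine): a single-column key returns the column itself, while a composite key returns a std::tuple of columns:

```cpp
#include <string>
#include <tuple>

struct Book {
    std::string title;
    int         year;

    // Single-column primary key
    const std::string & PRIMARY_KEY() const { return title; }
};

struct Loan {
    std::string bookTitle;
    std::string userName;

    // Composite primary key: (bookTitle, userName)
    std::tuple<std::string, std::string> PRIMARY_KEY() const
    {
        return std::make_tuple(bookTitle, userName);
    }
};
```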
It is now trivial to create in-memory tables that obey the uniqueness of these keys.
std::set (or the equivalents from boost and Qt) will ensure that you cannot add more items with the same key.
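One way to get such a table, sketched with a hypothetical Book record: give std::set a comparator that looks only at the primary key, so two records with the same key count as duplicates:

```cpp
#include <set>
#include <string>

struct Book {
    std::string title;
    int         year;

    const std::string & PRIMARY_KEY() const { return title; }
};

// Ordering (and thus uniqueness) is defined by the primary key only
struct ByPrimaryKey {
    bool operator()(const Book &left, const Book &right) const
    {
        return left.PRIMARY_KEY() < right.PRIMARY_KEY();
    }
};

using BookTable = std::set<Book, ByPrimaryKey>;
```

Inserting a second record with the same key is simply rejected by the set, mimicking a PRIMARY KEY constraint violation (minus the error reporting).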
So, this is the schema. The basics are here; you could even play around and try to create foreign keys, and multiple unique keys on a single table, but that is outside the scope of this post.
Now, the only thing left is to mimic some SQL queries. We will not do anything fancy, but some basic SQL concepts are easily transferable to the C++ world using the boost.range library.
Filtering these collections on a specific predicate is really easy, you can just use the boost::adaptors::filtered class:
The predicate can be any function, a class with operator(), or a lambda.
ORDER BY clause
ORDER BY is simply called std::sort. Even better, it is called boost::sort.
Again, the argument can be any function, a class with operator(), or a lambda. Since filtering and sorting on a specific column of a table is a common use case, it would be overkill to write lambdas for all of them all the time. We can easily do something like this (it is easily adapted for filtering):
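One possible shape for such a helper (my sketch, not the post's actual code): build the comparator from a pointer to the data member that plays the role of the column:

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Book {
    std::string title;
    int         year;
};

// Comparator that orders records by a single "column"
// (expressed as a pointer to a data member)
template <typename Record, typename Column>
struct OrderByColumn {
    Column Record::*column;

    bool operator()(const Record &left, const Record &right) const
    {
        return left.*column < right.*column;
    }
};

// Helper function so the types are deduced at the call site
template <typename Record, typename Column>
OrderByColumn<Record, Column> orderBy(Column Record::*column)
{
    return OrderByColumn<Record, Column>{column};
}
```

Usage then reads almost like the SQL clause: `std::sort(books.begin(), books.end(), orderBy(&Book::year));` — a filtering variant would compare the member against a fixed value instead of against another record.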
Selecting is, again, simple. Just use boost::adaptors::transformed.
Food for thought
It might be fun to use a library like sqlpp11 for the tests while implementing the main logic manually…
This is all for now, I’m tired of writing. Back to the code. :)
EDIT: An alternative to sqlpp11 which also makes SQL type-safe is sqlate from our friends at KDAB. I wish all the best to both projects, and cannot wait for the day when I can stop writing raw SQL.