Naming things

Naming things is apparently one of the hardest problems in software engineering.

I have a bunch of side projects with rather generic and uninspiring names. I’d like to see them gain some wider usage and I think the names are putting people off. Unfortunately, I can’t think of any decent names. So I’d like to throw it out to the community to see if we can find a better name for them! Here’s a list of projects and what they do. If you can think of a better name, just post it in the comments.

polysquare-ci-scripts

Elevator Pitch: Getting software to run on CI environments like Travis-CI requires installing a bunch of dependencies, activating environments and doing other setup. This creates a lot of duplicate code in configuration files. These extensible scripts, written in Python, can be fetched directly with curl and executed. They set up any required language environments, install dependencies and do deployment-specific steps.

polysquare-travis-container

Elevator Pitch: System package managers are great, but they make life painful when trying to reproduce builds between systems. They often require system-level access – something you don’t always have or want. Docker and Vagrant partially solve this problem, but one only works for Linux guests and the other is quite heavyweight. This project creates a local version of your operating system’s package manager so you can install just what you need and nothing else. You run binaries through it and it will automatically set up any required PATH or LD_LIBRARY_PATH entries to make them work.

cmake-ast

Elevator Pitch: Parse CMake files and create an abstract syntax tree, usable from Python.

polysquare-cmake-linter

Elevator Pitch: Catches bad practice in CMake files. Like cmake-lint, but it checks for other things, especially variable quoting.

polysquare-generic-file-linter

I can’t think of a worse name!

Elevator Pitch: Ensures that each source code file’s header is consistently styled and checks for spelling mistakes in comments and user-facing strings. For instance, it checks to make sure that every file contains a copyright notice, and that if the name of the file appears at the top of its copyright notice, the name is actually correct. It also makes sure that anything referred to in a code comment can actually be found in the code if it is not an English word.

polysquare-setuptools-lint

Elevator Pitch: Integrates every decent Python linting tool into a setuptools command. Collects all the output into a single format and de-duplicates any warnings. Runs prospector, flake8, pyroma and polysquare-generic-file-linter. Caches results and parallelises the linter processes where possible to speed up builds.

travis-bump-version

Elevator Pitch: Bumps your project’s version number, tags a new release and pushes tags to git on request. Uses bumpversion under the hood. Designed to be used in conjunction with Travis-CI.

tooling-cmake-util

Elevator Pitch: A library for CMake that makes it easy to integrate new static analysis tools into your build. Just run psq_run_tool_for_each_source on a target with your tool’s binary and arguments and that tool will run every time that target is updated during your build.

common-universal-cmake

Elevator Pitch: Add it to your project, add executables and libraries through it, and you get amazing tooling like CPPCheck, clang-tidy, include-what-you-use, vera++ and others for free. Adds an option to build code with AddressSanitizer, UndefinedBehaviourSanitizer, MemorySanitizer and ThreadSanitizer. Adds an option which turns on pre-compiled headers and unity builds without having to make any underlying changes to the build system.

cmake-header-language

Elevator Pitch: Examine a header file to determine all of its dependencies and whether it is C only or involves C++. Many tools require that the language be specified manually for such headers.

 

If you can think of a better name for any of these, please let me know. I’ll take any suggestion!

Bringing back the old animations

One of the other casualties when we switched to using Modern OpenGL in Compiz was the loss of the older animation plugins, such as animationaddon, simple-animations, animationjc and animationsplus.

I took some time last weekend to make the necessary changes to bring them back to life and get them merged back into mainline.

One of the more interesting parts of all this was the polygon-based animations. You might remember these as the “glass shatter” or “explode” animations. Unlike most of the other code in Compiz plugins that did transformations on windows, the “polygon animation” mode actually completely took over window drawing. This meant that there was a lot more work to do in terms of getting them to work again.

glDrawElements

Compiz has had (for a few years now) a class called GLVertexBuffer which encapsulates the entire process of setting up geometry and drawing it. If you want to draw something, the process is usually one of getting a handle to something called the “streaming buffer”, resetting its state, adding whatever vertices, texture co-ordinates, attribute and uniform values you need, and then calling its render method.

Under the hood, that would populate vertex buffer objects with all the data just before rendering and then call glDrawArrays to render it on screen using the defined vertex and pixel processing pipeline.
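To give a rough idea of what that looks like in practice – the method names below are reproduced from memory and might not match the current headers exactly, and the data is made up for illustration – the pattern is something like:

// A sketch of the streaming-buffer pattern described above; the
// method names are approximate rather than copied from the headers.
GLVertexBuffer *buffer = GLVertexBuffer::streamingBuffer ();

buffer->begin (GL_TRIANGLES);                     // reset its state
buffer->addVertices (nVertices, vertexData);      // positions
buffer->addTexCoords (0, nVertices, texCoords);   // texture unit 0

if (buffer->end ())                               // finalise the geometry
    buffer->render (transform, attrib);           // draws via glDrawArrays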

glDrawArrays can be cumbersome to work with though, especially with primitive types where you might end up having a lot of repeated vertex data. You have to repeat the components of each vertex for every single triangle that you want to specify.

glDrawElements on the other hand allows you to set up an array of vertices once, adding that array to the vertex buffer, then specifying a little bit later the order in which those vertices will be drawn. That means that if you were drawing some object in which triangles always had a point of (0, 0, 0), then you could just refer to that vertex as “1”, so long as it was the second vertex in the vertex buffer. This is very handy when you have complex 3D geometry.
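A contrived sketch makes the difference concrete – the vertex data below is made up purely for illustration:

// Two triangles sharing the vertex at (0, 0, 0).
// glDrawArrays: the shared vertex has to appear once per triangle.
static const GLfloat duplicated[] = {
    0.0f, 0.0f, 0.0f,    1.0f, 0.0f, 0.0f,    0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 0.0f,    0.0f, 1.0f, 0.0f,   -1.0f, 0.0f, 0.0f
};
// glDrawArrays (GL_TRIANGLES, 0, 6);

// glDrawElements: each vertex is stored once and triangles refer to
// it by its position in the array - the shared vertex is just index 0.
static const GLfloat unique[] = {
    0.0f, 0.0f, 0.0f,    // 0: the shared vertex
    1.0f, 0.0f, 0.0f,    // 1
    0.0f, 1.0f, 0.0f,    // 2
   -1.0f, 0.0f, 0.0f     // 3
};
static const GLushort indices[] = { 0, 1, 2,    0, 2, 3 };
// glDrawElements (GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);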

Quite understandably, animationaddon’s polygon animation mode didn’t use glDrawArrays but glDrawElements.

In order to support both OpenGL and GLES it was necessary to add some sort of support for this in GLVertexBuffer, since the old code was using client-side vertex and attribute arrays. The quickest way to do this was to just add some overloads to GLVertexBuffer’s render method, so now as a user you can specify an array of indices to render. It’s a little more OpenGL traffic, but it makes things a lot easier as a user.

Re-tessellation

All the geometry for those 3D animations was rendered using the GL_POLYGON primitive type. Polygons are essentially untessellated convex shapes. GLES only supports triangles, triangle fans and triangle strips, which threw a spanner in the works.

The polygon animation mode supported splitting windows into rectangles, hexagons and glass shards.

At first I was wondering how to convert between the two geometries, but it turns out that for convex shapes there’s an easy way to split them up into triangles: just pick a reference vertex, then draw a line from it to every other vertex except its two immediate neighbours (which are already joined to it by edges). Each new line closes off another triangle, so the whole polygon becomes a fan.


That can be represented with this simple function:

namespace
{
    enum class Winding : int
    {
        Clockwise = 0,
        Counterclockwise = 1
    };

    /* This function assumes that indices is large enough to
     * hold a polygon of nSides sides */
    unsigned int determineIndicesForPolygon (GLushort *indices,
                                             GLushort  nSides,
                                             Winding   direction)
    {
        unsigned int index = 0;
        bool front = direction == Winding::Counterclockwise;

        /* Fan out from vertex 0: each successive vertex closes off
         * another triangle, wound according to direction */
        for (GLushort i = 2; i < nSides; ++i)
        {
            indices[index] = 0;
            indices[index + 1] = (front ? (i - 1) : i);
            indices[index + 2] = (front ? i : (i - 1));

            index += 3;
        }

        return index;
    }
}
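For illustration, here’s a hypothetical way those indices could be handed to glDrawElements for a six-sided piece (assuming the matching vertices have already been added to the vertex buffer):

// Hypothetical example: triangulate a hexagonal piece into a fan.
// A polygon with n sides produces (n - 2) triangles, i.e. 3 * (n - 2) indices.
GLushort indices[(6 - 2) * 3];
unsigned int nIndices =
    determineIndicesForPolygon (indices, 6, Winding::Counterclockwise);

// The whole piece is then drawn in a single call, re-using the
// vertices that were added to the vertex buffer earlier.
glDrawElements (GL_TRIANGLES, nIndices, GL_UNSIGNED_SHORT, indices);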

Depth Buffer

We never really used the depth (or stencil) buffers particularly extensively in Compiz, even though the depth buffer is a common feature in most OpenGL applications.

The depth buffer is a straightforward solution to a hard problem – given a bunch of geometry, how do you draw it so that geometry which is closer to the camera is drawn on top of geometry that is further away?

For simple geometry, the answer is usually just to sort it by Z order and draw it back to front. For the vast majority of cases, compiz does just that. But this solution tends to break down once you have a lot of intersecting geometry. And those animations have a lot of intersecting geometry.

[Image: Incorrect depth buffer]

Notice in this image how the white borders around each piece are drawn on top of everything else.

The better alternative is to use the depth buffer. It isn’t perfect and doesn’t allow for transparency between objects whilst the depth test is enabled, but it does handle the intersecting geometry case very well.

The way it works is to create an entirely separate buffer alongside the framebuffer, where each “pixel” is a single 24-bit depth value. Compiz uses a packed depth-stencil format, where the remaining 8 bits of each entry are used for the stencil buffer. Every time OpenGL is about to write a pixel to the framebuffer, it keeps track of how far away that pixel is in the scene. It does that during something called the “rasterisation stage”, which is where a determination is made as to where to draw pixels: positions are interpolated between the vertices, and it’s relatively trivial to interpolate depth at the same time by similar methods. Then, OpenGL compares that depth to the existing value at that position in the depth buffer. The usual depth test is GL_LESS – if the incoming pixel is closer than whatever was drawn there before, the value in the depth buffer is updated and the framebuffer write is allowed; otherwise the write is discarded.

The result is that parts of geometry which were already occluded are simply not drawn, whereas geometry which occludes other previously-drawn geometry overwrites that geometry.
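In raw OpenGL terms, opting into that behaviour only takes a few calls – a minimal sketch, assuming the context was created with a depth (and stencil) attachment:

// Enable depth testing with the usual GL_LESS comparison.
glEnable (GL_DEPTH_TEST);
glDepthFunc (GL_LESS);

// The depth buffer has to be cleared alongside the colour buffer,
// otherwise last frame's depth values would reject this frame's pixels.
glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// ... draw the intersecting geometry in any order ...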

[Image: Correct depth buffer]

In this image, you’ll notice that each piece correctly overlaps each other piece, even if they are intersecting.

Trying it out

The newly returned plugins should be back in the next Compiz release to hit Yakkety. They won’t be installed or enabled by default, but you can install the compiz-plugins package and compizconfig-settings-manager to get access to them.

If you’re ever curious about how some of those effects work, taking the time to re-write them to work with the Modern OpenGL API is a great way to learn. In some cases it can take a lot of head-scratching and debugging, but the end result is always very pleasant and rewarding. There are still a few more to do, like group, stackswitch and bicubic.

 

Revenge of the blur plugin for compiz

A couple of years ago I blogged about the blur plugin for compiz – how it worked and some of the changes necessary to make it work with the modernised codebase. I wanted it to be available to the rest of Ubuntu’s users, but I was a little overzealous about how much I chose to re-write and I took a rather long hiatus from development before I was able to get it through review.

[Image: Revenge of the blur plugin]

I’ve decided to revive that branch and minimise the change-set so that there might be one last chance of it making it into 16.10 before Unity 7 is dropped. I have to admit that these days there isn’t really all that much use for it, unless you like transparent terminals. Transparent panels and window decorations have more or less gone away now and most WM-integrated shells handle blurs on their own just fine.

It can be found at lp:~smspillaz/compiz/compiz.revenge-of-the-blur-plugin. What can I say, reviving it has even been a little fun!

Thoughts on graduating

I graduated from university a few months ago.

I didn’t blog about it immediately because I didn’t really know how to feel about it. The emotions that I have around university are certainly complex. Decomposing them helps:

  • Happy: That I got to go to university, learned a great deal and came out a largely changed person. There’s something about six years of taking notes on stuff every day, creating hundreds of pages of study notes, writing tens of thousands of words worth of assignments, meeting new people, organising things and participating everywhere you can that widens your horizons and shows you that whatever you know only scratches the surface. Depending on how you count it, for various reasons less than 10% of the world go to university and so I’m very privileged to have had that experience.
  • Regretful: When I started university in 2010, I started full of ideals. I got involved in a fantastic organisation called UN Youth, went to the debating club’s social debates every week, volunteered everywhere I could, participated in free software, ran for the Guild Council (student union) in the Guild Elections, started an amazing part-time job and studied things I was really passionate about. My plans for the next year were even bolder. Then something happened. I think I was Icarus and flew too close to the sun. I started to burn out. I made some mistakes that upset some people and really took the pain that I caused to heart. I felt like a monster. I thought nobody would ever want to talk to me again and so I withdrew socially. I resigned from all my positions, stopped going to events and closed my Facebook account. It’s a miracle that I even passed some of my classes for the next two years. It’s a miracle that my grades are even halfway decent, though they’re nowhere near as good as they could have been. Over the next five years I found it difficult to get involved with anything and I had huge difficulty trusting myself not to hurt others. I regret not finding a way past that, because it meant that I couldn’t be as involved as I truly wanted to be.
  • Anxious: University provides a safety net. People seem to give it this intrinsic value where it comes first above all other things. I could use it to escape from commitments people were forcing onto me that I didn’t want. Now that it’s gone, I need to learn how to be accountable for my own time and how to let other people down when I can’t give them what they want. It’s a scary thought and a difficult transition to make.
  • Experienced: Perhaps experienced is the wrong word because university isn’t really a place where you go to get real-world experience. But I think I’m certainly more experienced than I am innocent. My experience at university has taught me about the ways that people can try to manipulate you and what the signs are that you’re ending up in a codependent situation. I’m starting to learn that only you are responsible for setting the direction you want in life and you have to follow your own feelings and not what other people tell you to do. I started out studying a Law degree because I was good at the feeder subjects at school, had the grades to get in and most importantly, it’s what other people told me to do. I finished my Law degree because that’s what other people told me to do. I didn’t want to disappoint those people, so I ended up disappointing myself. I always wanted to study something like Software Engineering or Computer Science but I rationalised myself out of it.
  • Frustrated: I’m lucky. I never failed a course and I completed my degree ahead of schedule. But I don’t think I appreciated just how long it would take me when I signed up to do it. I started in 2010 and graduated in 2016. That’s six years worth of study. I took a total of 53 courses – 49 from my main degree programme and 4 in Math and Computer Science out of stream. Each course runs over the course of half a year and I’d typically take four or five per semester. I tried working in law practices for a little while, but I’m not sure if I’m at the right point in my life where I want to do that. I want to make stuff, not facilitate transactions. I’m 24 now and I feel like I’m at the point in my life where I should have had my story straight by now. I’m also wondering where the last three years went.

I’ve actually tried to write a lot of posts where I get these feelings down in writing, but I’ve struggled because I feel like it has to come to some sort of symphonic climax or moment of catharsis. There isn’t one. I’m sure there are lots of other components to the complex feeling that I have about graduating that I haven’t quite identified yet, but I’ll keep trying.

I think I’m also scared about posting this too, even though I want to. I’m scared about who might read it and what they might think of me. I’m worried that I’m not supposed to be feeling the way that I’m feeling and that I should be feeling happy and optimistic like everyone says I should. I don’t though. Maybe I just need to admit that.

I’m also not confident that writing this post and pushing “publish” is going to give me a deep sense of relief or new purpose. The only thing I can do is continue to move forward. I’ll make new commitments, shed the old ones and reflect on my progress in the next year.

Moving from biicode to conan.io

Today I moved a bunch of projects over from biicode to conan.io. It was certainly an interesting experience and I think it is worth talking about conan and what they are doing for the C++ community.

Most established programming languages and runtimes these days have their own de-facto package managers. Node has npm, Python has PyPI, CPAN is apparently the most important thing to have happened to Perl, and so the list goes on.

C and C++ have gone without for quite some time. Some might argue that your distribution’s package manager is really the “true” package manager for systems-level languages. That’s true to some extent, but it is a solution with difficulties in its design. Distribution packages are typically installed systemwide and require super-user access to install. Usually you can only install one version of a package at a time, unless the package is re-named and installed into separate directories such that two installations can co-exist. It is typically also not the maintainer of the software who maintains the package in each distribution, which generally leads to fragmentation in update frequency and an overall slowdown in getting new versions of code out to users.

Language-based packaging systems take the opposite approach. The software maintainer maintains the packaging information, which is usually built right into the build system. For most modern languages, it’s entirely feasible to run development versions of an application in a “virtual environment”, where packages can be installed isolated from the rest of the system. Node takes this approach by default. Python and Ruby have virtualenv and bundler respectively. As a part of your build, you can update all your dependencies at once and lock dependencies to particular versions on a per-app basis.

Creating such a system for C++ has long been known to be fraught with difficulty. For one, there’s no standard build system for C++, and attempts to create the one true build system have all failed. That means that there’s no way to simply build a package manager into a build system that has a wealth of information about every project. Every platform has its own preferred compiler and usually a preferred build system too. There are binary compatibility nightmares. Compiling C++ code takes a long time and it looks like that won’t be fixed until we have modules. Most C and C++ projects were written at a time when we expected distributions to package everything, and so many projects will dynamically link to libraries already installed systemwide.

Conan is here to try and tackle what seems like an insurmountable problem and they have an approach that is seriously worth checking out. It provides a model that is a reasonable hybrid of what we’ve come to expect from distribution packaging systems and language based package managers. It doesn’t depend on any build system in particular and tries to support all the major ones.

It works by having either the maintainer or someone else write a “conanfile”. A conanfile can be either an INI-style or Python file that describes briefly what the package is about, what its dependencies are and how it is built. One of the really nice things about it is that you don’t have to upload the entire package source code or binaries to the conan servers if they’re already hosted somewhere – just provide a URL to a zip file and some information on how to deal with it. For instance, on each release of my CMake modules, I upload a new package description which links to a download for a tarball of the git tag of that version.

Conan will try to fetch any uploaded binaries that match your system configuration if it can (reducing the binary compatibility problem), but if not, it will rebuild a package from source upon installation. All a package’s dependencies, whether binary or otherwise, are pulled in for your project’s use upon running conan install. Nothing gets installed systemwide. Once conan install is done, it generates a file that can be used by your build system. In the case of cmake, that file sets all of the include, library and cmake paths so that a dependency can be used in a project. Just include and link to it as you usually would and it should all just work.

conan.io runs their own package registry, but you can also host your own since the server software is open source. Creating and uploading a package is a relatively straightforward procedure. Each version of a package is treated as a unique entry, so an upload of a newer version will not overwrite an older version in case anybody else needs to depend on an older version of a package. A package descriptor in conan might look something like “my-package/version@user/channel”. Everything after the “@” allows for multiple copies of the same package to be maintained by different users if there are modifications those users would like to apply. The channel allows each user to maintain a separate copy of each version of a package if there is a need to subdivide further.

To upload a package, you first need to register it with your “local store” using conan export inside the package directory where the conanfile is located like so:

conan export smspillaz/my-package

After that, you can upload the specified version to conan, which depending on your exports setting, might upload just the conanfile or some other files if there’s no need to fetch the source code from another location.

conan upload my-package/master@smspillaz/my-package

For most of my projects, I only needed to maintain one copy, so it was as simple as having a version called “master” (which pointed to the most up-to-date tarball) and numerical versions where appropriate. Everything was just under the “smspillaz/my-package” stream.

A dependency can be re-used within a project by specifying its full descriptor (e.g., my-package/master@smspillaz/my-package in the dependencies section of the conanfile).

Overall, I would really recommend checking out conan and looking into making your software available as a dependency, if you’re developing a C++ module that you want others to use. Modules like catch, boost and sfml are already available. There’s no lock-in, in the sense that your build process doesn’t have to depend on conan if you start using it, though there’s certainly very little disadvantage in doing so. Hopefully with conan we’ll start seeing a greater proliferation of small C++ modules so that developers can focus on making great applications as opposed to choosing between re-inventing the wheel or managing another dependency across several platforms.

A unit testing framework for CMake

The first question that might pop into your head is why. The answer to that is pretty straightforward – CMake code can get quite complex very quickly. There can be a lot of edge cases based on different configuration options and different platforms.

One popular CMake module, cotire, is about 3,900 lines long at last count. Cotire provides a simple layer to use precompiled headers across the three main compilers. It has about 75 functions and 13 macros to handle all sorts of stuff, from getting compiler definitions to parsing include trees. Getting that stuff right is hard. Getting it wrong on just one set of options or system definition can cause no end of annoyance for users of your library, especially for those users left to debug the problem who aren’t familiar with the details of the language.

Over the last year I’ve been working on a unit testing framework for CMake so that module authors can catch these kinds of bugs before they happen. Note that I don’t propose that people start testing their project build definitions as found in the CMakeLists.txt. Those definitions are typically written to be as declarative as possible. Your continuous integration process which builds the project should catch any relevant problems in those build definition files. I’m more interested in testing modules that ship with libraries, or just modules that provide useful functionality to CMake, of which there has been a great proliferation over the last few years.

The framework is called, somewhat unimaginatively, cmake-unit. It supports everything that you’d expect in a typical xUnit-like framework, including:

  • Multiple test definitions per file.
  • A generic cmake_unit_assert_that function which can take pre-defined or user-defined matcher functions to verify that a value matches certain criteria.
  • Automatic test discovery and execution.
  • Suppression of output messages except on failure.
  • Conditional enabling of test cases.
  • XML output of test results.
  • Clean execution slate between tests.
  • Code coverage reports.

There’s currently no support for test fixtures, though in my own testing, I’ve found that they haven’t really been necessary. CMake doesn’t have the concept of resources that need to be managed manually. If shared setup needs to be done for a set of tests, it can be refactored into a separate function and called from the test definition.

CMake presents some interesting problems in terms of implementing a test framework, which cmake-unit tries to accommodate:

  • Multiple Phases: Configuring, building and testing a CMake build-specification is separated into multiple phases, with the state at the end of each phase available only ephemerally before the execution of the next one. The framework allows for custom cmake code to be run for each phase, all contained within the same test. It also allows for variables to propagate across phases of a test.
  • No support for first class functions: The language doesn’t provide a mechanism to call a function by a name specified in a variable. The framework provides a work-around and calling convention encapsulated in cmake_call_function to provide this functionality. This is what makes custom matchers and test-case auto discovery possible.
  • Build system commands operate on source files: Most CMake commands that would directly affect Makefile generation are not available in CMake’s script mode. Hand-writing source files for each test case can be frustrating. The framework provides a mechanism to create a minimal build environment for supported source types and functions to declaratively generate source files.
  • Location of output binaries varies by platform: On some platforms, binaries are nested within a directory specified by CMAKE_CFG_INTDIR. The value of this directory varies by platform and is not readable in script mode. The framework provides a mechanism to obtain the true location of a binary and transfer that value between phases.

cmake-unit’s own test suite provides plenty of examples of what tests can look like. The simplest test, which generates a library and an executable, then links the two together, looks as follows:

function (namespace_test_one)

    function (_namespace_configure)

        cmake_unit_create_simple_library (library SHARED FUNCTIONS function)
        cmake_unit_create_simple_executable (executable)
        target_link_libraries (executable library)

        cmake_unit_assert_that (executable is_linked_to library)

    endfunction ()

    function (_namespace_verify)

        cmake_unit_get_log_for (INVOKE_BUILD OUTPUT BUILD_OUTPUT)

        cmake_unit_assert_that ("${BUILD_OUTPUT}"
                                file_contents any_line
                                matches_regex
                                "^.*executable.*$")

    endfunction ()

    cmake_unit_configure_test (INVOKE_CONFIGURE LANGUAGES C CXX
                               CONFIGURE COMMAND _namespace_configure
                               VERIFY COMMAND _namespace_verify)

endfunction ()

The entire test is encapsulated inside the namespace_test_one function. There are two phases that we’re interested in – the configure and verify phases. These are also the only two phases you’ll need in most tests.

The configure phase looks exactly like how a user would use your library in a CMakeLists.txt file. It runs in project-generation mode, so you have complete access to the Makefile generating functions. Since CMakeUnit.cmake has already been included, you can start asserting things right away, for instance, checking before the build even happens whether executable is set up to be linked to library.

The verify phase runs in script mode after both cmake --build and ctest have been run on the project. A utility function, cmake_unit_get_log_for, provides a way to get the full output of both the standard output and standard error of any phase. From there, you can make assertions, either about the state of the build tree or about what was found in the build log.

The final command, cmake_unit_configure_test is a function with metadata about the test. It tells cmake-unit what functions will be used to configure and verify the build process and whether support for particular programming languages should be enabled. It is worth noting that support for all programming languages is turned off by default for each test, since the overhead for some generators to initialise support for those languages can be quite high.

Finally, in your test file, you will need to call cmake_unit_init to start the test auto-discovery process and register files for coverage reports. For example:

cmake_unit_init (NAMESPACE namespace
                 COVERAGE_FILES "${CMAKE_CURRENT_LIST_DIR}/Module.cmake")

The NAMESPACE option tells cmake-unit to look for any functions in the current file which start with ${NAMESPACE}_test and add them to the execution list. Any files specified in COVERAGE_FILES will have coverage information recorded about them if CMAKE_UNIT_LOG_COVERAGE is enabled.

From there, testing a CMake module is as easy as building a CMake project. Just create a build directory, use cmake to configure the project and discover all the tests, then use ctest to run the tests.

I’ve waited quite some time before publishing this framework, mainly because I actually started it in early 2014 and re-wrote it in early 2015. Since then, I’ve been using it in about ten or so of my own modules and it’s reached a state of relative stability. I’d like to get some feedback from other module maintainers to see if this project is useful.

You can find the project on biicode on the smspillaz/cmake-unit block. I’ll eventually move everything over to conan once I get a chance. If you need to include it in a non-bii project, you’ll need to copy the dependencies into the bii/deps directory manually.

I’ve been working on some other cool development-related projects in the last year, so I’ll be blogging about them soon. Stay tuned!

Bash substitution and ssh-keygen

Here’s something to note after almost being locked out of an account.

Be careful about bash variable substitution when using ssh-keygen -N. Or better yet, don’t use ssh-keygen -N at all, preferring ssh-keygen -p PRIVATE_KEY_FILE.

The reason why is that the passphrase provided to -N can be modified by bash’s variable substitution before ssh-keygen ever sees it. For instance, if the characters $? appear in your passphrase as provided to -N (and the argument isn’t wrapped in single quotes), they’ll be replaced with the last command’s exit status – good luck finding out what that was after trying to unlock your private key a few times.