Sharing the love

Tim Penhey made an excellent post this morning in which he took some of his time to thank those who contributed to who he is and to his professional development. His post makes a very good point: we software engineers don’t become who we are in isolation – we are part of a much broader network of other people, and our development is often shaped by the activities and advice of those around us. I want to use this post to share a few thank-yous of my own to former mentors and coworkers who showed me a lot.

First of all, thank you to the original compiz team – Danny Baumann, David Reveman, Dennis Kasprzyk, Jigish Gohil, Erkin Bahceci, Guillaume Seguin. You were my original mentors, the people who got me into programming in the first place, and it’s been a wonderful ride ever since. Thanks for putting up with fifteen-year-old me and some of my dramatic behaviour at times! But thanks even more for believing in me and giving me a chance to go on and make further contributions to compiz and other projects. It was something I really needed at the time, and it gave me a real head start.

Further thanks go out to some of the people I worked with on my hobby projects. Rodolfo Granata – thanks for writing the original Freewins plugin, and thanks for taking my (massive) patches to make it do even more. Kevin Lange – thanks for working on the original wiitrack Wii-remote headtracking plugin, and thanks even more for working with me to get it working with actual hardware.

To my buddies at Canonical – thanks Jason Smith for being an all-round awesome dude and putting yourself on the line to recommend me for working at Canonical. I enjoyed working with you on every single damn project we worked on together, whether that be Compiz, Nux, Unity or whatever. Our late-night debugging sessions were awesome and you were supportive all the way. I always loved geeking out and talking about OpenGL and such with you.

To Neil Patel – thanks for being an all-round awesome technical lead. Neil was the one who stood up for us when we had a “massive axe placed over our heads” to deliver a newly rewritten based-on-compiz-and-nux Unity for Ubuntu 11.04.

To Didier Roche – thanks for showing me the ins and outs of Ubuntu and Debian packaging, a world that I totally would not have been able to understand without you. Thanks for always giving me another chance, especially when there was a last-minute bug or a tarball that wouldn’t build or whatever.

Robert Carr – thanks for being a general all-round awesome hacker who showed me that you could always do something in novel ways, and furthermore, thanks for being an awesome friend.

Tim Penhey – you didn’t think I was going to leave you out, did you? Tim was an awesome line manager. Not only did he stand up for his (massive) engineering team, he also worked with all of us to turn us into really smart engineers. I’ve got about 12 books (and counting!) on software in my library now, and it’s all thanks to Tim’s recommendations. Tim taught me how to go from a hacker to a software craftsman: writing good, clean and lean code, practising test-driven development and keeping to a consistent (i.e. non-chaotic) development process. I could always rely on Tim for advice on how to tackle a particular problem or improve myself.

Alan Griffiths – I was incredibly privileged to work with you, and the people you work with at Canonical still are. Alan taught me the fine details of getting tests in place around almost everything, and his “you touch it, you test it” mentality really changed the way I thought about writing code. Alan also helped me to write a lean C-based object system for compizconfig, which is the very reason why things like the compizconfig gsettings backend (along with its desktop integration code and others) have 98% test coverage – remarkable for software written in C.

Daniel van Vugt – thanks for showing me that there was always another way of doing things. We may not have agreed on everything, but that is certainly a good thing. Daniel showed me that there are ways to write good, clean, testable software without it becoming enormously complex and hard to read.

Marco Trevisan, Andrea Azzarone, Brandon Schafer – thanks for being awesome colleagues and working with me to produce great stuff. We were in the same boat for a lot of things and all learnt new stuff together.

Of course, this list doesn’t encompass everyone. There are people I worked with from time to time, both at Canonical and on compiz, who definitely had an impact on my professional development as an engineer – if I listed all of you, I’d end up with a blog post several pages long!

The end result of years of engineering isn’t just the final product that you see on a screen. It’s also the strongly forged friendships of all the people who were there every step of the way.

Experiencing the Mir client API

Now that XBMC has display, mouse and keyboard support on Mir, I feel somewhat qualified to write about using the Mir client API, from the perspective of porting an existing application.

From a high level, the Mir client API (affectionately known as mir_toolkit in libmirclient) follows some of the basic constructs that you would find in Wayland. Namely, you have a MirConnection, which is responsible for application-level communication with the display server, and a MirSurface, which is responsible for managing some remote surface, drawing into it and handling events. Aside from MirWaitHandle, those are the only two objects you’ll be concerned with, so it’s a little unlike Wayland, which splits its “surface” concept up into separate interfaces like wl_surface, wl_shell_surface and wl_egl_window(*), and its “connection” concept up into wl_display, wl_registry, wl_compositor, wl_shell and so on.

(*) Technically, wl_egl_window is not an interface but just a wrapper around wl_surface and wl_buffer which uses wl_drm under the hood.

Synchronously opening a connection and creating an OpenGL(ES)-renderable surface in Mir is a simple-enough affair:

MirConnection *connection =
  mir_connect_sync (path_to_socket, app_name);

MirSurfaceParameters parameters =
{
  surface_name,
  width,
  height,
  mir_pixel_format_argb_8888,
  mir_buffer_usage_hardware
};

MirSurface *surface =
  mir_connection_create_surface_sync (connection, &parameters);

EGLNativeDisplayType nativeDisplay =
  (EGLNativeDisplayType) mir_connection_get_egl_native_display (connection);
EGLNativeWindowType nativeWindow =
  (EGLNativeWindowType) mir_surface_get_egl_native_window (surface);

From there, all you need to do is just initialize EGL, create an EGLDisplay with eglGetDisplay, create an EGLContext with eglCreateContext, create an EGLSurface with eglCreateWindowSurface, and then call eglMakeCurrent and start using OpenGL(ES) to render to your surface.

Usually Asynchronous

Earlier I mentioned that a MirConnection and a MirSurface can be created synchronously, but they can also be created asynchronously. A callback can be registered with mir_connect and mir_connection_create_surface, and it will be called as soon as the connection or surface is ready. Your application can carry on with other initialization tasks that don’t depend on a MirConnection or MirSurface while it waits for the server to return them.

Pervasively Multithreaded

One of the really interesting things about these callbacks and others, for example the callback provided through MirEventDelegate to mir_surface_set_event_handler, is that your callback gets called on a completely independent thread. This means that there’s no main loop to spin or poll on – the display server will wake up the thread that’s interested in the particular data it’s sending.

I believe that Mir is unique here, in that no other display server has been designed to work this way. Doing it this way has some immediate benefits: handling input in applications can be somewhat expensive, because your application will often need to go through some complicated process to look up what to do in response to a particular combination of input. Instead of blocking the main thread, all of this can happen in a worker thread, which then tells the main thread what to do.

Unfortunately for legacy applications, the vast majority of which are single-threaded and almost all of which do input and display handling on one thread, it means that the application author needs to lock a queue, push the incoming data onto that queue and then unlock it (and vice-versa in the main thread) in order to preserve compatibility with other windowing-system backends. The client library’s documentation makes it clear that you, as the author, are responsible for thread safety:

/**
 * Set the event handler to be called when events arrive for a surface.
 * \warning event_handler could be called from another thread. You must do
 * any locking appropriate to protect your data accessed in the
 * callback. There is also a chance that different events will be
 * called back in different threads, for the same surface,
 * simultaneously.
 * \param [in] surface The surface
 * \param [in] event_handler The event handler to call
 */
void mir_surface_set_event_handler(MirSurface *surface,
                                   MirEventDelegate const *event_handler);

Thankfully, there aren’t going to be many consumers of mir_toolkit other than application toolkits and other special-case cross-platform applications like XBMC. This design also opens up some interesting future possibilities: I’m looking into doing some of the (somewhat time-consuming) keysym-to-internal-keysym processing inside the worker thread before handing it off to the XBMC main thread, as a kind of demonstration of how that would work.

XBMC on Mir

As part of my GSoC project to make XBMC work on newer Linux display systems, I also wrote a backend to make it render on the Mir display server. This was actually quite straightforward, and it took about the same amount of time to get to the same point of completeness as it did for Wayland support. There’s no mouse or keyboard input support yet, but only because I haven’t had the time to implement it. It should be done in a few days.

You can find it at git:// mir-gsoc-1.

Incidentally, I think this is the first non-toolkit non-demo Mir client out there.

XBMC on Wayland Compositors, take two

In late February this year, I published a proof of concept demonstrating the XBMC Media Center running on the Weston system compositor. It was basically a hack which used SDL’s existing Wayland support, with a few additions required to make XBMC work. XBMC plans to drop SDL and use window systems directly, which makes a lot of sense, but it meant that this proof of concept would have to be largely rewritten.

As such, I’ve decided to do so as a Google Summer of Code project (surprise!), as a kind of “take two”, except that it uses the Wayland client interface directly, (shouldn’t) crash, (should) have some more comprehensive test coverage, and could perhaps be expanded to support other display systems, like Mir and direct display on the framebuffer via kms/gbm.


I’ve just published a new pull request for an initial version of this work. Hopefully users will be able to use this (and perhaps much more at the end of the GSoC cycle).

Let your tests tell you when your code design sucks.

Google’s Testing on the Toilet blog is a really great read for people who are looking at getting into test-driven development and unit testing as a development methodology. Their most recent post, “Don’t overuse mocks”, makes an important point: over-mocking makes your tests less readable.

For the uninitiated, “mocking” in the unit-testing world is where you replace an interface with a “mock” whose behaviour can be specified at runtime. If the code you’re trying to get under test depends on some behaviour of an interface provided to it, then the mocking framework effectively allows you to specify that behaviour so that you can make your code do certain things.

For example:

class Animal
{
    public:
        virtual bool feet () const = 0;
};

class Seal :
    public Animal
{
    public:
        Seal ()
        {
            makeLotsOfNoise ();
        }

        virtual bool feet () const { return false; }
};

bool canJump (Animal const &animal)
{
    return animal.feet ();
}

We want to test that canJump will return false if the animal has no feet, and true if it does. We don’t want to depend on Seal, because its constructor makes lots of noise. Also, Seal might get feet in the future, and that would make our test invalid. So we use a mock:

class MockAnimal :
    public Animal
{
    public:
        MOCK_CONST_METHOD0 (feet, bool ());
};

using ::testing::Return;

TEST (Main, JumpIfFeet)
{
    MockAnimal animal;
    ON_CALL (animal, feet ()).WillByDefault (Return (true));
    EXPECT_TRUE (canJump (animal));
}

TEST (Main, NoJumpIfNotFeet)
{
    MockAnimal animal;
    ON_CALL (animal, feet ()).WillByDefault (Return (false));
    EXPECT_FALSE (canJump (animal));
}

The example in “Don’t overuse mocks” is a bit more involved. It uses mocks to control the interaction between a CreditCardServer, a TransactionManager, a Transaction, a Payment and a PaymentProcessor. Here’s how it’s done:

public void testCreditCardIsCharged() {
  paymentProcessor = new PaymentProcessor(mockServer);
  when(mockServer.isServerAvailable()).thenReturn(true);
  when(mockServer.beginTransaction()).thenReturn(mockManager);
  when(mockManager.getTransaction()).thenReturn(transaction);
  when(mockServer.pay(transaction, creditCard, 500)).thenReturn(mockPayment);
  when(mockPayment.isOverMaxBalance()).thenReturn(false);
  paymentProcessor.processPayment(creditCard, Money.dollars(500));
  verify(mockServer).pay(transaction, creditCard, 500);
}

[some of the variable names have been changed for the sake of line-wrapping]

It’s quite clear that the mocks in this case are used to describe a multi-step process:

  1. The server must be available (say it is)
  2. We need a TransactionManager from the server (return our mock one)
  3. We need a Transaction from the TransactionManager (return our mock one)
  4. We need a payment from the TransactionManager (return our mock one)
  5. We need to know if the payment is over balance (say it isn’t)

If all of those conditions are satisfied, then the payment should be processed on the server. The fact that we had to wriggle so much to get the paymentProcessor to call pay () results in a somewhat fragile test. If we added a new condition, even one unrelated to what we are testing here, we would also need to change the behaviour of the mocks, otherwise the test would fail. This is the peril of “over-mocking” – you’ve removed a dependency on some behaviour, but you need to re-implement the rest yourself.

Testing on the Toilet’s solution to this problem is “don’t use mocking for stuff like this”.

public void testCreditCardIsCharged() {
  paymentProcessor = new PaymentProcessor(creditCardServer);
  paymentProcessor.processPayment(creditCard, Money.dollars(500));
  assertEquals(500, creditCardServer.getMostRecentCharge(creditCard));
}

And to some extent, they’d be right. But I think that treats the symptom (the test smell) rather than the problem (the code smell).

There are three real problems here:

  1. paymentProcessor.processPayment does too much. It needs to ask whether the server is running, ask for a transaction manager, ask for a transaction, ask for a payment and ask whether the payment is over balance.
  2. paymentProcessor.processPayment is also asking too much. It should be told what to do, rather than being handed some interfaces and left to figure out what to do.
  3. testCreditCardIsCharged is too vague – under exactly what conditions are we testing that the credit card is charged? Are we just testing that it is charged? That it is charged when the payment server is available? That it is charged when the payment is not over balance? In reality, it tests that a series of steps results in success, rather than testing that the system behaves as expected under particular conditions.

How can we fix this?

Well, first of all, we should figure out what processPayment is doing and whether it can be split up. We don’t have the code, but judging by the mock’s behaviour, it does the following:

  1. Checks if the server is running
  2. Gets a payment object for the amount requested
  3. Checks if the payment is over-balance
  4. Calls pay () on the server

Steps 1 & 2 and steps 3 & 4 can really be split up into separate methods. We’re going to make one method prepare our transaction from the server, another prepare the payment, and a third responsible for paying if appropriate. So we have:

  1. Preconditions for being able to pay at all (server running, get objects)
  2. Preconditions for being able to pay the amount requested (not over balance)
  3. Postcondition (payment succeeded)

So let’s do that:

public class PaymentProcessor {
    public Transaction prepareTransaction (CreditCardServer server) {
        if (!server.isServerAvailable ())
            return null;

        return server.beginTransaction ().getTransaction ();
    }

    public Payment preparePayment (Transaction transaction,
                                   Money money) {
        return new Payment (transaction, money);
    }

    public void processPayment (CreditCardServer server,
                                Transaction transaction,
                                Payment payment) {
        if (!payment.isOverMaxBalance ())
            server.pay (transaction, payment);
    }
}

Now we have three distinct methods which each do one thing, and we want to test them.

The first thing we want to test is that a payment which is not over-balance is paid:

public void testPaymentPaidIfNotOverBalance () {
    Payment payment = mock (Payment.class);
    when (payment.isOverMaxBalance ()).thenReturn (false);

    paymentProcessor.processPayment (mockServer,
                                     stubTransaction,
                                     payment);

    verify (mockServer).pay (stubTransaction, payment);
}

One assert done, two to go. Let’s check that a non-active server does not give us a transaction to play with:

public void testNoTransactionIfServerNotOnline () {
    CreditCardServer server = mock (CreditCardServer.class);
    when (server.isServerAvailable ()).thenReturn (false);

    assertThat (paymentProcessor.prepareTransaction (server),
                is (nullValue ()));
}

And now make sure that we get a payment object from our transaction:

public void testPaymentObtainedFromTransaction () {
    Transaction transaction = new StubTransaction ();

    assertThat (paymentProcessor.preparePayment (transaction,
                                                 Money.dollars (500)),
                is (notNullValue ()));
}

Our client code is going to be a little bit more involved now:

Transaction transaction = processor.prepareTransaction (server);
Payment payment = processor.preparePayment (transaction,
                                            Money.dollars (500));
processor.processPayment (server, transaction, payment);

On the other hand, we now know what the process is going to be within the client, and testing PaymentProcessor is not so involved anymore, because each of its parts performs only a very specific task.

Also – in the client we don’t ask for anything. We just tell the processor to prepare a transaction, prepare a payment and pay. And in the tests, we don’t have to say “yes, the transaction manager is running; yes, you can have a transaction” in order to make the part of the code responsible for paying work correctly. It just works, as long as you pass it the data it expects.

The moral of the story is: If you end up with fragile tests, go and have a look at your code. The tests will tell you if your interface sucks.

Regression Testing for crashes on Google Test

One common use for automated testing is to put tests in place that reproduce bugs to ensure that the same kind of bug can’t happen again once the failing test is fixed.

There is a small caveat when doing this with bugs that result in segfaults, however – a test that fails-by-crashing will cause the entire test binary to halt as well, meaning that you won’t see the results of any tests that come after it. It’s also difficult to see at a glance what it was in the test that crashed, without using a debugger of some sort.

Thankfully, google-test provides a method for testing things-that-shouldn’t-crash – so-called “death tests”, which work by forking and checking the exit status of the forked process.

So my new best practice for writing these sorts of tests is now:

using ::testing::ExitedWithCode;

TEST(FooDeathTest, FooPtrFromBarIsSafe)
{
    std::unique_ptr <Foo> foo (from_bar ());

    EXPECT_EXIT ({
        foo->baz = 1;
        std::cerr << "Success!" << std::endl;
        exit (0);
    }, ExitedWithCode (0), "Success");
}

Instead of:

TEST(Foo, FooPtrFromBarIsSafe)
{
    std::unique_ptr <Foo> foo (from_bar ());
    foo->baz = 1; // won't crash if foo is valid
}

There are three benefits to doing it this way:

Whoever is reading the test source can instantly see that the thing you’re testing is that some operation succeeds and exits cleanly.

This is especially the case when EXPECT_EXIT ({ }, ExitedWithCode (0), “”) becomes somewhat of an idiom.

You can test the inverse – that is – you can test that something will crash with the same style:

TEST(FooDeathTest, FooPtrFromBarIsNotSafe)
{
    std::unique_ptr <Foo> foo (from_bar ());

    EXPECT_DEATH ({
        foo->baz = 1;
        exit (0);
    }, "");
}

You get a much more informative error message at first glance:

[RUN    ] FooDeathTest.FooPtrFromBarIsSafe
Failed expectation (lineno):
    Expected that the block:
        foo->baz = 1;
        std::cerr << "Success!" << std::endl;
        exit (0);
    Would exit with the message "Success" and an exit code of 0;
    Exited with no message and an exit code of 9
[    FAIL] FooDeathTest.FooPtrFromBarIsSafe
(other tests run here)

As opposed to:

[RUN    ] Foo.FooPtrFromBarIsSafe
Segmentation Fault
(no other tests run here)


I’ve been getting a bit of impostor syndrome lately, something which I feel is holding me back a fair bit. I’m not sure what it is, but it seems like every time I try to move on a moment of inspiration, I start to second-guess myself. Is what I’m doing a useful endeavour? Maybe someone else has already done this? Am I good enough to write this yet?

Of course, it’s terribly annoying – I’ve been left with a bunch of half-finished projects for a broader initiative, which seems to create a nasty feedback loop confirming the sense of unworthiness.

Perhaps this is the wrong place to ask this – but does anybody have a good idea on how to make these thoughts go away? I’d much rather be presenting cool things in this space than saying that I’ve talked myself out of yet another idea.