Monthly Archives: January 2013

Hidden away in the corner

Triggers: depression, suicide

I was saddened to hear the news of Aaron Swartz’s death last night. Aaron wasn’t really a mentor to me in any way, and I was never really a big follower of the free culture and free software movement. What bothered me the most was how the death occurred.

On January 11, 2013, Aaron took his own life.

There are a lot of theories out there from those searching for answers. Some believe it was the looming court case, or the harsh sentence that might follow. But far less surprisingly, Aaron suffered from a common condition: depression.

I debated with myself for a long time about whether or not I should talk about this. It’s a sensitive area, with many lost souls, broken hearts and fractured minds. Just writing about it brings me to a place I’d rather not be. But it’s time to talk, because we can’t let this tragedy continue any more.

Mental illness is frighteningly prevalent in the software and technology industry*. And even more frightening is the fact that we are so isolated from each other that you really have no idea how bad it is.

It might come as no surprise that I succumbed to an episode of depression early last year. Thankfully, the bottom I bounced on for a while was just above what one might consider “at immediate risk of suicide”, but still enough to impact my quality of life.

What was surprising to me was that many of the people I have shared this detail with have gone through the same thing.

I’ll say that again, because it’s really important.

There appears to be a far-above-average rate of depression and other mental illness in this field.

That is tragic.

Sadly, I’m just another hacker in the broader scheme of things. I’m not a psychologist, and I’m not even close to having answers that might help everyone get out of this. There is no concrete “cure” for depression. It’s a cancer of the mind. There is therapy to help people untie the knot their heads have gotten into. There is medication to make it seem as though it’s not there. But it’s often up to those living with it to avoid falling into it, or, if they do, to find their way back out.

There’s only one thing I know for certain.

Software and technology is a high-stakes, high-stress, hard-work industry. We are all amazing pioneers in one way or another, and that’s what society has come to expect of us. The problem is that there’s often a disconnect between what we produce and how others perceive the process that produces it.

Advancing technology is really hard to get right, and requires lots of concentration to fit all the details together. As such, it isn’t really conducive to the kinds of social interaction that you might get in other fields of work. Many of those who work in this industry tend to be “hidden away and out of sight” – in their basement, their bedroom, their office, their cubicle, or wherever. The means of interaction becomes a charade behind an IRC handle, or an email address, or an account on a bug tracker, or a forum or blog. It’s the most productive way of working, so we tend to reward it. But it is also the fastest way to begin to lose touch, and then eventually, “lose it”. The stress increases, and the coping resources are already at critically low levels.

Perhaps that insight might be useful to someone else. But there’s not much I know or can say about why we are like that, or what we can do to ensure that it doesn’t lead to tragedy.

All I can say is that I really regret waiting until after a conversation that can now never happen before talking about it.

* I do not mean this to say that mental illness and suicide are any more tragic in software than they are in any other field or walk of life. Any preventable loss of life is tragic, no matter what the reason, circumstances or cause.

edit: I don’t want to imply that every person in the industry had or has depression. Thankfully, there are some of us who were lucky enough to have never been there.

A note on compiz development and Unity / Ubuntu

I was quite disappointed to see this in a bug comment today:

Compiz [...]. The only work on it are hacks to bend it to the will of Unity

There seems to be a misconception going around that Compiz exists only to serve the needs of Unity as the compositor framework, and that development of compiz exists as a series of “bends” to make Unity work.

That is not true, and has never been true.

Internally at Canonical, compiz was always handled as a separate upstream project. It was a separate upstream project before I worked there, a separate upstream project while I worked there, and is a separate upstream project after I left.

Not once was any development decision in compiz made for the sole benefit of those who use Unity, to the detriment of those who use compiz as a standalone window manager or with other desktop environments. If one had been – you would know about it. Unity is a very tightly designed desktop shell, in which many of the parts that make it up are highly dependent on each other. That was by design – the team that led the implementation of Unity wanted to create a great desktop shell, and not a series of independent parts.

Compiz was always the exception to the rule.

If compiz truly was a compositing framework that was a part-and-parcel of Unity, then one would see that the entire plugin system / settings framework would have been dropped – the window decorators would have been dropped, many of the plugins would have been dropped from the source tree completely, and much of the window management behaviour would have been rewritten internally to match the Unity design guidelines.

That never happened, because the DX team and later the Product Strategy team at Canonical saw the value of keeping compiz as a separate upstream project, in which Canonical and the Ubuntu community invested effort into which benefited all users and not just those who used Unity.

What did happen, during my employment at Canonical and while compiz has been the compositing framework for Unity, is that the developers working on it tend to put their priority on things that affect the most users. Considering that it’s the default desktop on Ubuntu – that’s a very large chunk of users. The good thing is that all of the effort put into that maintenance almost always benefits those who don’t use Unity as well.

There’s only one place where I screwed up in this aspect, and that was in the maintenance of the grid plugin. I believe that’s one area where I let design requirements take over the original intent of the plugin. The better thing to do would have been to implement it inside of the Unity shell. So to the original author – an apology. I’ve messed up in that regard. But I hope that all of the work I did both at Canonical and outside of Canonical has been worth it for everyone who uses compiz, and not just those who used it with Unity.

Automated testing and Compiz

One of the best decisions we ever made for Compiz was to invest in a solid automated testing framework late last year. Today, and likely by our 0.9.9 release, we will have about 1163 tests running in continuous integration, and just under 1200 tests total. Unity has about 700 or so tests, and Nux has about 300.

For Compiz, over 1000 tests might seem like a large number, but it’s actually a relatively small one, especially in terms of code coverage. The code coverage is pretty dismal – the figure was probably below 10% the last time I checked. That’s not necessarily a bad thing though – compiz is a large project, with several parts that haven’t been touched in quite some time. It obviously makes sense to invest testing effort in the bits that change a lot, and I think over time, we have been relatively successful at that.

We have automated testing which covers all sorts of stuff, including but not limited to:

  • How pixmap-to-texture binding behaviour works with pixmaps provided by external applications
  • How the GNOME configuration backends pick the right options to integrate with
  • How those backends actually integrate properly with those options
  • How the GSettings configuration backend works, in every aspect of its functionality (profile management, writing out different key types, handling of lists, handling of updates from both compiz and gsettings). This makes up the majority of our tests, because there are lots of things to think about.
  • How the grid plugin picks which state the window should be in when it’s resized (maximized, maximized vertically, etc.)
  • How the place plugin handles screen size changes
  • How the decor plugin communicates image updates from the decorator process
  • How the settings for gtk-window-decorator affect its internal state.
  • How timers work under different conditions
  • How button presses are handled
  • How plugins are sorted
  • How keyboard modifiers are converted to strings
  • How offsets are applied to the expo plugin’s animations
  • How vsync behaviour works on different hardware
  • How fullscreen window unredirection detects when a window can safely be unredirected
  • How decoration shadow clipping works
  • How certain window stacking cases work
  • How --replace behaviour works

Whenever we go back to those – and other sections of the code, make test becomes a very useful piece of documentation. One can make a change and check straight away if they broke anything that was covered by the testing suite. Then upon closer examination, the tests themselves provide pre-and-post assertions for certain codepaths. If the expected behaviour was meant to change, then just update the test. If it wasn’t – it means that your new code has unintended side-effects.

Test Driven Development is another aspect of automated testing, which involves creating a sort of “walking skeleton” of how you expect a particular system to work, in terms of how its actors interact and what interface it provides to external entities within the system. All the functional code is just stubbed out and does nothing. Then once the “skeleton” is created, you create tests to assert what the behaviours and outputs of certain methods and interactions on your system should be. These tests initially fail; then you write the code which fulfils the assertions until the tests pass.
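For illustration, here’s a rough sketch of what that cycle might look like (with hypothetical names, written against Google Test, which is discussed below): the skeleton exists and compiles, but the behaviour is stubbed out, so the test fails until the real logic is written.

#include "gtest/gtest.h"

// Walking skeleton: the interface is in place, but the behaviour is stubbed out.
class RateLimiter
{
    public:

        // Hypothetical interface: returns true if an event is allowed right now.
        bool allow ()
        {
            return false; // stub – no real logic yet
        }
};

// Assert the behaviour we want. This fails against the stub, and we then
// write the real implementation until it passes.
TEST (RateLimiterTest, AllowsFirstEvent)
{
    RateLimiter limiter;
    ASSERT_TRUE (limiter.allow ());
}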

This whole process has taught me a few things I feel like sharing.

Testing framework

The first is to pick a good xUnit-like framework and write tests to it. xUnit is not so much the name of a standard, a library, or anything in particular as it is a way of referring to the family of tools available for most languages for crafting tests in terms of individual test cases, shared code between tests (fixtures), and pre- and post-conditions around executing certain blocks of code. xUnit-like frameworks help to bring consistency to how testing is done in your project, and typically adapt to most codebases quite well. For example, instead of writing something like this:

#include "mything.h"

bool test_that_mything_x_returns_42_after_foo ()
{
MyThing mything;
mything.setupState ();
mything.doOtherStuff ();

mything.foo ();
int result = mything.x ();

bool ret = false;

if (result == 42)
{
printf ("Passed: mything returned 42\n");
ret = true;
}
else
{
printf ("Failed: mything did not return 42\n");
ret = false;
}

mything.shutdown ();
mything.cleanup ();

return ret;
}

bool test_another_boring_thing ()
{
...
}

int main (void)
{
bool result = false;
result |= test_mything_x_returns_42_after_foo ();
result |= test_another_boring_thing ();

if (result)
return 0;
else
return 1;
}

You can write something like this:

#include "gtest/gtest.h"
#include "mything.h"

class TestMyThing :
    public ::testing::Test
{
    public:

        virtual void SetUp ()
        {
            mything.setupState ();
            mything.doOtherStuff ();
        }

        virtual void TearDown ()
        {
            mything.shutdown ();
            mything.cleanup ();
        }

    protected:

        MyThing mything;
};

TEST_F (TestMyThing, XReturns42AfterFoo)
{
    mything.foo ();
    ASSERT_EQ (42, mything.x ());
}

TEST_F (TestMyThing, OtherBoringThing)
{
    ...
}

The latter is much easier to read and also much more consistent, because all of the code required to get your object into a state where it can be tested is done in one place, and done for every test. In addition, we don’t need to manually call every test, and the results are printed for us. The ASSERT_* and EXPECT_* postconditions will automatically pass or fail the test, and the former will instantly bail out.

That was actually an example of Google Test usage, which is what we are using in compiz. Google Test is very versatile, and has been an excellent choice of unit testing framework. I can’t think of a single thing which I haven’t been able to get Google Test to do, and as you can see on its advanced guide, the kinds of testing setups you can do are very very varied indeed.
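As a trivial illustration of the ASSERT_*/EXPECT_* distinction mentioned above (a hypothetical test, not from the compiz suite):

TEST (AssertVersusExpect, Demo)
{
    EXPECT_EQ (1, 2); // fails, but the test body keeps running
    ASSERT_EQ (1, 2); // fails and bails out of the test body here
    EXPECT_EQ (3, 4); // never reached
}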

One of the reasons why we have over one thousand tests in compiz in less than the space of a year is that Google Test supports a kind of table-driven testing through the use of type and value parameterized tests. This means that you can run the same test over and over with slightly different inputs and postconditions each time. This allows you to exercise every possible path that your application might take if it responds slightly differently to different inputs. This was very useful for testing the GSettings backend for example, where we have a single test which tests reading of every possible value type, but also reports every single value type as an independent test. The code that reads and writes setting values is different for every type of value, but similar in a number of ways. With that kind of test, we can test that the interface is consistent, and see where reading one value type fails where another passes.
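To give a flavour of what a value-parameterized test looks like, here’s a hypothetical example using a made-up NearestPrime function rather than the actual GSettings tests. Each (input, expected) pair is reported as its own test:

#include <utility>

#include "gtest/gtest.h"

// Hypothetical function under test.
int NearestPrime (int n);

class NearestPrimeTest :
    public ::testing::TestWithParam <std::pair <int, int> >
{
};

TEST_P (NearestPrimeTest, ReturnsExpectedPrime)
{
    int input = GetParam ().first;
    int expected = GetParam ().second;

    EXPECT_EQ (expected, NearestPrime (input));
}

INSTANTIATE_TEST_CASE_P (SmallNumbers,
                         NearestPrimeTest,
                         ::testing::Values (std::make_pair (14, 13),
                                            std::make_pair (20, 19),
                                            std::make_pair (24, 23)));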

Understanding the impact of testing on the codebase

One of the biggest issues I’ve seen come up in changing a project to use automated testing is that the test suite often becomes an afterthought rather than a core driving factor of the project.

The first thing to realize is that testing any codebase that was not architected for testing is hard. Really hard in fact. One of the fundamental aspects of automated testing is being able to verify causes and effects within your codebase, and if the effects are scattered and indirectly associated with the causes, then testing it becomes a real problem. Unfortunately, without the stricter requirements on architecture that automated testing forces upon engineers, codebases often tend to become large systems which are amalgamations of side effects. They work just fine internally, but are impossible to reason about in the language of direct causes and effects at a micro level.

What this means is that when working with a legacy codebase, you’ll often run into the problem where you just can’t get the thing you want to test into a test fixture because it depends on too much other stuff. For loosely typed and duck typed languages, sometimes you can get away with simulating the stuff you need for the code to work. For statically typed languages like C and C++ where compiling the code brings in dependencies, the only way out of the mess is to accept that you will need to redesign the code that you want to get under test so that it can be reasoned about in terms of direct causes and effects.
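As a minimal sketch of what that redesign often looks like in C++ (all of the names here are hypothetical), the trick is to introduce a small interface at the seam, so that a test can substitute an in-memory fake for the real dependency:

#include <map>
#include <string>

// The code under test depends on this narrow interface rather than on the
// real subsystem directly.
class Storage
{
    public:

        virtual ~Storage () {}
        virtual void write (const std::string &key, int value) = 0;
};

class Counter
{
    public:

        explicit Counter (Storage &storage) :
            storage (storage),
            count (0)
        {
        }

        void increment ()
        {
            ++count;
            storage.write ("count", count);
        }

    private:

        Storage &storage;
        int      count;
};

// In a test, a trivial in-memory fake stands in for the real storage, so the
// effect of increment () can be observed directly and without side effects.
class FakeStorage :
    public Storage
{
    public:

        void write (const std::string &key, int value)
        {
            values[key] = value;
        }

        std::map <std::string, int> values;
};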

For engineers who are trying to do the right thing by writing tests for their code, this can feel like a slap in the face. Not only can you not write tests for your change directly, but in order to do so, you need to make invasive changes to the code just to get it under test. Understandably, this is quite frustrating. Unfortunately there’s not much of a way around it; you just need to bite the bullet and accept that it’s for the greater good. The key is making a good start, and understanding that when a project adopts automated testing after it has already been developed, the value of having automated tests should shape the codebase, rather than the codebase shaping the tests.

(As an aside, Feathers presents some solutions to the problems engineers often face in getting old codebases under test in Working Effectively with Legacy Code. The solutions he presents are usually quite intuitive and obvious in their implementation, for those who need a good reference. It uses a mix of C++ and Java for its samples.)

Another thing to keep in mind is that the automated testing process really needs to be part and parcel of the build-deploy-test-run process. All too often test suites are abandoned within the tests/ subdirectory of the source code, with a few tests here and there that are not run very often and not built by default.

A test suite is only useful if it is built by default, and can be run in its entirety after a default build. Anything less means that your test suite is not doing its job.

C and C++ make this problem annoying to deal with. You can’t link tests to executables at all. Usually, for plugin-based systems, you can’t link tests to plugins either. Pulling in an entire library might unintentionally introduce runtime dependencies or symbol conflicts you really don’t want to care about. Compiling files twice is an abomination. Usually the only way out is to structure the build system around the test system.

In compiz, you might notice that we build a ton of static libraries. For example, gtk_window_decorator_settings_storage_gsettings is a small library which internally represents the process by which settings for gtk-window-decorator are stored in gsettings. Then, when we want to test it, we have an individual test binary which just links that library. If we need to test functionality across those libraries, then you just link in each library as needed to the various tests.

Thinking about the testing level

Unfortunately the world of automated testing tends to be dominated by professionals and managers who like to throw around a lot of jargon. This jargon tends to confuse newcomers, who could easily be led into thinking that the terms all mean the same thing. So I’ll define a few terms now:

  • Unit Testing: Testing at a “unit” level, which is generally the smallest level that one can feed inputs and observe outputs. It often means testing individual parts of your system and not the entire system itself. A good candidate would be, for example, testing the function that finds the nearest prime number to an input. A bad candidate would be a function that synchronizes a database to a remote server.
  • Integration Testing: Testing how different components of the system interact with each other, where there is a clear dependency on the interfaces that either side provides. Often, the external system is one that you don’t have any control over, and you want to test that you are using it properly. A good candidate is checking that when we make certain Xlib API calls, a window on the X server ends up in a certain state.
  • Acceptance Testing / End-to-End Testing: Testing that the system works in its entirety. Unity has a very sophisticated acceptance testing framework called autopilot. If you’ve not seen autopilot in action, and you’re curious, I’d suggest trying it one day. What it does is automatically interact with certain elements in the shell and communicate with Unity over a D-Bus interface to verify certain parts of the program state. For example, whether or not clicking on the Ubuntu button opened the Dash. (It gets much more sophisticated though – all the way down to how the show-desktop behaviour operates when certain windows are open or minimized, to how the alt-tab behaviour operates when you have lots of windows from a certain app open).

Generally speaking, the different kinds of problems that you’ll face in development will be better suited to different kinds of testing. The most important thing to realize is that the test suite is not useful if the majority of the pre-and-post conditions can’t be verified in a matter of seconds, or can’t be done without other stuff running on the system. Continuous Integration (CI) servers are usually quite bare-bones, and the Ubuntu CI server will not run anything that depends on a working X11 server. The rule of thumb is that if it isn’t in your memory space and completely within your control, the unit tests should not touch it.

In Compiz, the goal with testing has generally been to start with tests at the unit level, and then test how the units fit into the broader picture at higher levels. Unit tests are not an end-all solution to every testing problem. Mocking frameworks can ease some of the pain, but if your system’s job is mostly to interact with some other external system, then mocking out a bunch of API calls and checking that you made them is not very useful, because it doesn’t actually test that you’re using the external system correctly, or that the series of API calls puts it in the state you expect.
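Here is roughly what that kind of mocked-out test looks like with Google Mock (hypothetical names again). It verifies that the expected calls were made in order, but it says nothing about whether the external system actually ends up in the right state – which is exactly the limitation described above:

#include "gmock/gmock.h"
#include "gtest/gtest.h"

// Hypothetical interface wrapping an external system, such as a display
// server connection.
class DisplayConnection
{
    public:

        virtual ~DisplayConnection () {}
        virtual void mapWindow (unsigned long id) = 0;
        virtual void raiseWindow (unsigned long id) = 0;
};

class MockDisplayConnection :
    public DisplayConnection
{
    public:

        MOCK_METHOD1 (mapWindow, void (unsigned long));
        MOCK_METHOD1 (raiseWindow, void (unsigned long));
};

// Hypothetical code under test.
void showWindow (DisplayConnection &connection, unsigned long id)
{
    connection.mapWindow (id);
    connection.raiseWindow (id);
}

TEST (ShowWindow, MapsThenRaises)
{
    MockDisplayConnection connection;
    ::testing::InSequence sequence;

    EXPECT_CALL (connection, mapWindow (1));
    EXPECT_CALL (connection, raiseWindow (1));

    showWindow (connection, 1);
}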

There are only two places where we currently do integration tests in compiz. The first is with GSettings, which has a special “memory only” backend, where you can use the library to manipulate an in-process structure of settings. The second is with xorg-gtest, which spawns a headless X server for each test and allows your application to connect to it, manipulate its state, and make verifications based on the events it sends back, or the synchronous requests that can be made to get the current state.

The most important thing to note about both of those is that they only exist as integration tests in order to test integration with those components. The preferable alternative is almost always to design the code under test so that the dependency not available in CI is optional.

One area which I really believe needs a good integration testing story is OpenGL. Partly by nature of its state-machine and bindful design, and partly by way of the fact that its outputs are pixels, OpenGL is extremely difficult to reason about from a testing perspective. The only ways to test usage of OpenGL at the moment are by doing pixel comparisons, or by mocking out the whole API. Both are unsatisfactory, because one is imprecise and the other doesn’t effectively test the system at hand. It’s a project I’d be willing to get on board with, but I can imagine it would be very complicated to get right, as we don’t even know what the right things to test between API calls and output pixels would be.

If you’ve made it this far, then congratulations. I’ve got a lot more I could write on this subject, but I’ve wanted to give an overview about automated testing in Compiz for quite some time.

Understanding the compiz blur plugin: alpha-only-blurring

A former compiz developer once told me that the blur plugin is (paraphrased, since I don’t have the original quote) a bunch of voodoo in a .cpp file. After implementing it for newer versions of compiz that use GLSL instead of ARB shaders, I’d be almost inclined to agree. It uses a number of tricks to do its work in places that one wouldn’t expect, but I think this information could be useful for other compositors. There are lots of tricks, so I’ll try to space out what I find over different blog posts.

Blurring algorithms

Surface blurring in OpenGL is a tricky problem, because every pixel in the region that you need to blur is dependent on the pixels around it, which means that you need to render the whole region first, and then re-render the pixels in that region with blurring applied. Generally speaking, there are two different ways to do this. The first is to copy the read buffer into a texture, and then draw that texture on-screen with blur post-processing. For example:

// draw stuff
GLuint read;
glGenTextures (1, &read);
glBindTexture (GL_TEXTURE_2D, read);

// set up texture

glCopyTexSubImage2D (GL_TEXTURE_2D, 0,
                     0, // x offset in texture co-ordinates
                     0, // y offset in texture co-ordinates
                     srcX,
                     srcY,
                     srcWidth,
                     srcHeight);

glBindTexture (GL_TEXTURE_2D, 0);

// set up blur program

glUseProgram (blurProgram);

float vertices[] =
{
    srcX, srcY, 0,
    srcX, srcY + srcHeight, 0,
    srcX + srcWidth, srcY, 0,
    srcX + srcWidth, srcY + srcHeight, 0
};

float texCoords[] =
{
    0, 0,
    0, 1,
    1, 0,
    1, 1
};

glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, read);
glEnableClientState (GL_VERTEX_ARRAY);
glEnableClientState (GL_TEXTURE_COORD_ARRAY);
glVertexPointer (3, GL_FLOAT, 0, vertices);
glTexCoordPointer (2, GL_FLOAT, 0, texCoords);
glDrawArrays (GL_TRIANGLE_STRIP, 0, 4);
glBindTexture (GL_TEXTURE_2D, 0);
glDisableClientState (GL_TEXTURE_COORD_ARRAY);
glDisableClientState (GL_VERTEX_ARRAY);

Another way of doing it is to render the scene into a framebuffer object, and then draw that to the screen with blurs applied.

GLuint fb, tex;
glGenFramebuffers (1, &fb);
glBindFramebuffer (GL_DRAW_FRAMEBUFFER, fb);
glGenTextures (1, &tex);
glBindTexture (GL_TEXTURE_2D, tex);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, screenWidth, screenHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindTexture (GL_TEXTURE_2D, 0);
glFramebufferTexture2D (GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// render scene ...

glBindFramebuffer (GL_DRAW_FRAMEBUFFER, 0);

glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_2D, tex);
glEnableClientState (GL_VERTEX_ARRAY);
glEnableClientState (GL_TEXTURE_COORD_ARRAY);

// render the non-blurred regions first
float vertices[] =
{
    0, 0, 0,
    0, screenHeight, 0,
    screenWidth, 0, 0,
    screenWidth, screenHeight, 0
};

float texCoords[] =
{
    0, 0,
    0, 1,
    1, 0,
    1, 1
};

glVertexPointer (3, GL_FLOAT, 0, vertices);
glTexCoordPointer (2, GL_FLOAT, 0, texCoords);
glDrawArrays (GL_TRIANGLE_STRIP, 0, 4);

// now render the blurred region with the blur program, in this case it's just a single rect
glUseProgram (blurProgram);

float bVertices[] =
{
    blurSrcX, blurSrcY, 0,
    blurSrcX, blurSrcY + blurSrcHeight, 0,
    blurSrcX + blurSrcWidth, blurSrcY, 0,
    blurSrcX + blurSrcWidth, blurSrcY + blurSrcHeight, 0
};

// texture co-ordinates are normalized, so divide by the screen dimensions
float bTexCoords[] =
{
    blurSrcX / screenWidth, blurSrcY / screenHeight,
    blurSrcX / screenWidth, (blurSrcY + blurSrcHeight) / screenHeight,
    (blurSrcX + blurSrcWidth) / screenWidth, blurSrcY / screenHeight,
    (blurSrcX + blurSrcWidth) / screenWidth, (blurSrcY + blurSrcHeight) / screenHeight
};

glVertexPointer (3, GL_FLOAT, 0, bVertices);
glTexCoordPointer (2, GL_FLOAT, 0, bTexCoords);
glDrawArrays (GL_TRIANGLE_STRIP, 0, 4);
glBindTexture (GL_TEXTURE_2D, 0);
glDisableClientState (GL_TEXTURE_COORD_ARRAY);
glDisableClientState (GL_VERTEX_ARRAY);

After that’s done, you just render the object which had the transparent background on top of the blur, and it appears as though the background is blurred.

Alpha-as-blur

The blur plugin is a little more clever than this though. It takes the same blur texture (using the former method, and a combination of the two for gaussian blur), and uses that to paint alpha regions as blur. The original implementation looked something like this:

!ARBfp1.0
TEMP output, blur_fCoord, blur_mask, blur_sum, blur_dst, blur_t0, blur_t1, blur_t2, blur_t3, blur_s0, blur_s1, blur_s2, blur_s3;

// Sample texcoord[0] from texture[0] into output
TEX output, fragment.texcoord[0], texture[0], 2D;

// Multiply the fragment color with the sample
MUL output, fragment.color, output;

// Multiply fragment position with var0
MUL blur_fCoord, fragment.position, program.env[0];

// Add fCoord to var2 and store in t0
ADD blur_t0, blur_fCoord, program.env[2];

// Sample texture[1] at t0 and store in s0
TEX blur_s0, blur_t0, texture[1], 2D;

// Subtract var2 from fCoord, store in t1
SUB blur_t1, blur_fCoord, program.env[2];

// Sample texture[1] at t1
TEX blur_s1, blur_t1, texture[1], 2D;

// Multiply var2 with {-1.0, 1.0, 0.0, 0.0}. add fCoord store in t2
MAD blur_t2, program.env[2], { -1.0, 1.0, 0.0, 0.0 }, blur_fCoord;

// Sample texture[1] at t2, store in s2
TEX blur_s2, blur_t2, texture[1], 2D;

// Multiply var2 with { 1.0, -1.0, 0.0, 0.0 }, add fCoord, store in t3
MAD blur_t3, program.env[2], { 1.0, -1.0, 0.0, 0.0 }, blur_fCoord;

// Sample texture[1] at t3, store in s3
TEX blur_s3, blur_t3, texture[1], 2D;

// Multiply output.a by program.env[1] scalar, store in blur_mask
MUL_SAT blur_mask, output.a, program.env[1];

// Multiply sample0 by 0.25, store in blur_sum
MUL blur_sum, blur_s0, 0.25;

// Multiply sample1 by 0.25, add to blur_sum
MAD blur_sum, blur_s1, 0.25, blur_sum;

// Multiply sample2 by 0.25, add to blur_sum
MAD blur_sum, blur_s2, 0.25, blur_sum;

// Multiply sample3 by 0.25, add to blur sum
MAD blur_sum, blur_s3, 0.25, blur_sum;

// Multiply blur_mask by -alpha, add blur_mask and store in blur_dst
MAD blur_dst, blur_mask, -output.a, blur_mask;

// Multiply sum by blur_dst alpha, add output, store in output.rgb
MAD output.rgb, blur_sum, blur_dst.a, output;

// Add blur_dst.a to output.a
ADD output.a, output.a, blur_dst.a;

// Put output into result.color
MOV result.color, output;
END

It’s the first and last few lines that we care about the most. Let’s have a look at them:

// Sample texcoord[0] from texture[0] into output
TEX output, fragment.texcoord[0], texture[0], 2D;

// Multiply the fragment color with the sample
MUL output, fragment.color, output;

...

// Multiply output.a by program.env[1] scalar, store in blur_mask
MUL_SAT blur_mask, output.a, program.env[1];

...

// Multiply blur_mask by -alpha, add blur_mask and store in blur_dst
MAD blur_dst, blur_mask, -output.a, blur_mask;

// Multiply sum by blur_dst alpha, add output, store in output.rgb
MAD output.rgb, blur_sum, blur_dst.a, output;

// Add blur_dst.a to output.a
ADD output.a, output.a, blur_dst.a;

// Put output into result.color
MOV result.color, output;

Here’s what it looks like in GLSL:

vec4 originalPixel = texture2D (objectTexture, objectTexCoord);
vec4 blurMask = clamp (threshold * originalPixel.a, 0.0, 1.0);
... // blurredPixel is the average of the four offset samples, as above
vec4 blurDestination = blurMask * -originalPixel.a + blurMask;
originalPixel.rgb = blurredPixel.rgb * blurDestination.a + originalPixel.rgb;
originalPixel.a += blurDestination.a;
gl_FragColor = originalPixel;

What that little bit of code does is figure out what the original pixel would have been had we drawn it without the blur, before blending it with the rest of the scene, and then use its alpha value to determine how much of the blurred pixel to mix in: the more transparent the original pixel is, the more of the blurred background shows through. Then we mix in that blurred pixel and draw it as the final pixel. It means that you can draw the blurred-background-as-the-alpha-pixel on the texture itself, which saves another call to glDrawArrays.

It’s also responsible for smoothly fading out the blur as the window becomes more transparent. If you’ve got the blur plugin available (Quantal and Raring users – it’s in my PPA), try fading out a window to see!

Next up, I’ll talk about mipmap blurring, optimizing out occluded areas, interaction with GLX_EXT_buffer_age and independent texture-coordinate fetches.