Serverless React Boilerplate

I’ve been keeping my eye on serverless as a way to handle hosting and deployments for my next React project, and this week I finally got a chance to look into it.


The tl;dr is that I’ve forked react-boilerplate as react-boilerplate-serverless if you just want to get started with serverless deployments. You should be able to use npm run serverless-deploy right away.

For the uninitiated, the main difference with so-called “serverless” applications is that they don’t run on a dedicated virtual machine or container that is up 24/7. Instead, your code is abstracted behind infinitely scalable architecture and you pay only for the execution time of your code, as opposed to the server itself. This is the sort of automatic scalability that people like to talk about with cloud computing.

There are a few important distinctions that come with serverless architecture. First, we call serverless deployments “functions” because they really are just that – the infrastructure hosting the serverless service takes care of all the network IO and just calls your “function” when a particular endpoint is accessed. In that sense, you lose control over how network IO is handled and over the process that handles it. The “function” you are deploying is really a loadable library – a collection of scripts and data with no associated running “process”. Instead, the library is loaded at runtime, which means that the serverless computing provider can bill you only for the execution time your library actually uses in handling requests.

In practice, this distinction tends not to matter so much. It turns out to be relatively trivial (with some caveats) to turn any express-like application into a serverless application thanks to tools like serverless-http. If you wrap your express application with the serverless wrapper from that package, infrastructure like AWS Lambda will just call your route handlers directly after being invoked from a frontend server like AWS API Gateway. To your application, this is essentially no different from the route handlers being invoked asynchronously by libuv and node’s process.nextTick.

There are some other practical annoyances to be aware of as well, though they can all be mitigated.

Support for compression and binary data is still sketchy at best

Unfortunately, AWS API Gateway handles binary content in a particularly awkward way, expecting it to be serialised to base64 by the request handler (e.g., your serverless function) before it goes back to the API Gateway (which should, in theory, deserialise it back into binary). For a long time, API Gateway didn’t support binary responses at all and only gained support for it last year, making it possible to actually host a server that serves image data directly with Lambda. But you still need to manually specify the MIME types that should be treated as binary in the API Gateway configuration. I’ve found that this works well enough for things like images, but it completely broke down when it came to compressed text.

Most node servers, including the server set up in react-boilerplate, use the compression middleware, which allows the client and server to agree on responses being gzipped. Unfortunately, API Gateway chokes on the binary data if you give it the compressed response directly, and specifying that the binary data be encoded in base64 causes the client to choke. I haven’t found any way around this problem other than just disabling compression. Maybe the story will improve in time and we won’t have to worry about messing about with settings to support binary content.

Execution time

Since popular servers like node are single-threaded and event based, you probably shouldn’t be performing any long-running or expensive tasks from within your server anyway. But you really need to watch out for in-function asynchronous tasks that run long, since execution time is limited to 300 seconds (five minutes). This could be a problem if you need to upload large files with the help of the function. One workaround is to have the client upload the files directly to an S3 bucket, though that might require a little more stuffing about than you would like.

Everything is stateless

Again, if you designed your web server in line with RESTful principles, this shouldn’t be a problem in theory, though it could be an issue if you have logic that runs on first start.

You don’t own the infrastructure

If you’re used to Platform-as-a-Service this isn’t a huge deal, but it’s worth keeping in mind that you don’t control the runtime that your application is “running” on. In fact, you don’t even control whether your application is running at a given moment, other than when it is handling requests.


Check it out

So far I’ve managed to render a page complete with images and associated scripts and styles, which proves that this definitely can be done. Hopefully this will become a more affordable and simpler way to deploy simple servers in the future, without having to worry about provisioning entire virtual machines or containers running 24/7 for the sake of hosting.


Exploring faster blurs

This week I saw a blog post on Hacker News entitled “Fastest Gaussian Blur” (by Ivan Kutskir). That caught my interest, because one thing that always irked me back when I worked on Unity was the fact that our UI blurs felt quite sluggish and there didn’t seem to be much we could do to speed them up given the size of the blur radii we were using. I had a Sunday afternoon to kill and recently broke my ankle again (no gym or rock climbing), so I figured why not try implementing some of them.

Gaussian Blur

The post goes over four different algorithms for implementing gaussian blur, the first one being the standard approach of sampling an r² neighbourhood around every pixel and calculating its normalised gaussian distribution weights inline. As you can imagine, this is terribly slow, because you need to take r² samples for every pixel in the image (and needlessly recompute the gaussian weights!). As is typically the case, it’s also the most trivial to implement in a fragment shader – you just take the input image and do two nested loops over the radius, sampling each neighbouring pixel and applying its calculated gaussian weight.
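To make the cost concrete, here’s a CPU sketch in Rust of that naive algorithm – my own sketch operating on a single-channel f32 image rather than the shader itself, but the two nested loops are exactly the ones you’d write in GLSL:

/* Naive O(n * r^2) gaussian blur over a single-channel f32 image.
 * Every pixel samples its entire (2r + 1) x (2r + 1) neighbourhood
 * and recomputes the gaussian weight for each sample. The 1/(2*pi*
 * sigma^2) constant is dropped since we normalise at the end anyway. */
fn gaussian_weight(dx: i32, dy: i32, sigma: f32) -> f32 {
    let two_sigma_sq = 2.0 * sigma * sigma;
    (-((dx * dx + dy * dy) as f32) / two_sigma_sq).exp()
}

fn naive_gaussian_blur(src: &[f32], w: usize, h: usize, radius: i32, sigma: f32) -> Vec<f32> {
    let mut dst = vec![0.0; src.len()];

    for y in 0..h as i32 {
        for x in 0..w as i32 {
            let mut sum = 0.0;
            let mut total_weight = 0.0;

            for dy in -radius..=radius {
                for dx in -radius..=radius {
                    /* Clamp samples at the image edges */
                    let sx = (x + dx).clamp(0, w as i32 - 1);
                    let sy = (y + dy).clamp(0, h as i32 - 1);
                    let weight = gaussian_weight(dx, dy, sigma);

                    sum += src[(sy as usize) * w + (sx as usize)] * weight;
                    total_weight += weight;
                }
            }

            /* Normalise so the sampled weights sum to one */
            dst[(y as usize) * w + (x as usize)] = sum / total_weight;
        }
    }

    dst
}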

As a simple blur shader, it’s pretty satisfying to implement, since it is probably the most accurate in terms of what we consider a “blur” to be. But alas, the performance sucks. So can we do better?

Box Blur

Well, the article says yes, we can. Algorithms 2 and 3 in the article go over a technique called “box blur”, which is basically a way to approximate gaussian blur without having to compute weights per sample. The trick here is to do several passes over the image, each with a different radius that you work out using some calculus. The article goes into more detail about why the math behind this works. As it says, the nice thing about box blurs is that you don’t need to worry about computing weights – repeated passes at the right radii converge on the same result.
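As a rough sketch of that radius calculation, here is my own Rust port of the article’s “boxesForGauss” derivation – treat the details as an assumption based on the article rather than gospel:

/* Work out n odd box sizes that together approximate a gaussian with
 * standard deviation sigma, following the derivation in the article. */
fn boxes_for_gauss(sigma: f32, n: usize) -> Vec<i32> {
    /* Ideal averaging filter width for n passes */
    let w_ideal = ((12.0 * sigma * sigma / n as f32) + 1.0).sqrt();
    let mut wl = w_ideal.floor() as i32;
    if wl % 2 == 0 {
        wl -= 1; /* box sizes need to be odd so the box has a centre */
    }
    let wu = wl + 2;

    /* How many of the n passes should use the smaller width */
    let nf = n as f32;
    let m_ideal = (12.0 * sigma * sigma - nf * (wl * wl) as f32 - 4.0 * nf * wl as f32 - 3.0 * nf)
        / (-4.0 * wl as f32 - 4.0);
    let m = m_ideal.round() as usize;

    (0..n).map(|i| if i < m { wl } else { wu }).collect()
}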

The only difference between Algorithms 2 and 3 is that Algorithm 3 explicitly separates out the horizontal and vertical passes (thus, you have an O(2nr) operation as opposed to an O(nr²) one). Most blurring approaches I’ve seen tend to do this, because the complexity saving alone outweighs the cost of doing multiple framebuffer passes. In practice, it does turn out to be more performant, which is nice.
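A sketch of what one of those separated passes looks like (my own one-dimensional CPU version) – run it across every row, then run the same thing down every column:

/* One horizontal box blur pass: each output pixel is the plain average
 * of the 2r + 1 pixels around it in one dimension only, which is where
 * the O(2nr) (rather than O(nr^2)) cost comes from. */
fn box_blur_row(src: &[f32], dst: &mut [f32], radius: i32) {
    let len = src.len() as i32;

    for x in 0..len {
        let mut sum = 0.0;

        for dx in -radius..=radius {
            /* Clamp samples at the row edges */
            let sx = (x + dx).clamp(0, len - 1);
            sum += src[sx as usize];
        }

        dst[x as usize] = sum / (2 * radius + 1) as f32;
    }
}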

Precomputed Weights with a Data Texture

I was tempted to go back and try implementing this in GNOME-Shell, though I saw that someone had already beaten me to it. Initially I felt a little defeated, but then I became curious about the approach. It seems as though this extension uses a bunch of hardcoded weights separated into “bands”, though it initialises those weights on each pixel. I thought that was a little odd, so I dug a bit deeper into what it was doing by reimplementing it.

What was neat about this approach is that it is essentially a 2-pass gaussian blur, O(2nr), except without the cost of having to recompute all those weights all the time. That’s handy, though since the weights are precomputed, it’s easy to run into a situation where you “run out” of weights if you want a bigger blur kernel.

Luckily, the author included a tool to compute weights up to a given blur radius. What was interesting is that the weights are put into “bands” for each step of 5 units of radius. I presume that this was done to save on space inside the fragment shader, since they are all hardcoded.

Now, the size of the computed weights array changes as you increase the radius, since the gaussian distribution function kind of “flattens out” and more area along the x-axis becomes significant. The original author dealt with this problem by hardcoding a size-31 array that only fills up once we get to the last “band” and assigning each hardcoded weights array to it depending on which “band” your radius was in. Unfortunately, the notation the author uses for directly assigning the array isn’t implemented in OpenGL ES 2 (which is what WebGL uses). First I tried just brute-forcing the problem and using a regex to generate code that assigned the weights directly, but then I realised that there is a much nicer alternative.

You can’t pass variable length arrays to shaders through uniforms in GLSL – I believe that doing so would probably violate the linker’s expectations as to memory layout etc. But you can pass handles to textures to sample instead!

Textures used to be this special datatype in OpenGL, but thanks to OES_texture_float and hacks where we treat textures as “memory”, they’re now useful as general purpose variable-length storage. So much so that it’s how GPGPU in WebGL works – the three.js people even have an example of how to use it. I once did a project where I used the depth buffer value to dynamically scale a blur radius – maybe that can be the subject of a future blog post.

So of course, what you can do to pass these gaussian weights down to the shader, without having to hardcode or precompute them, is to create a texture using GL_FLOAT as the storage type and set the red channel to the weight value. Then just sample the texture using texture2D inside the fragment shader to get the weight value for a given radius offset.
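The texture creation and texture2D sampling happen on the WebGL side, but generating the weight data itself is straightforward. Here’s a sketch of my own (the normalisation convention is an assumption) of the red-channel data you’d upload:

/* Generate one normalised 1D gaussian weight per texel for offsets
 * 0..=radius; this is the data you'd upload into the red channel of a
 * GL_FLOAT texture and fetch with texture2D in the fragment shader. */
fn gaussian_weight_texture_data(radius: usize, sigma: f32) -> Vec<f32> {
    let mut weights: Vec<f32> = (0..=radius)
        .map(|i| (-((i * i) as f32) / (2.0 * sigma * sigma)).exp())
        .collect();

    /* Normalise so the full mirrored kernel sums to one: every tap
     * except the centre one gets applied twice, once per side. */
    let sum: f32 = weights[0] + 2.0 * weights[1..].iter().sum::<f32>();
    for weight in weights.iter_mut() {
        *weight /= sum;
    }

    weights
}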

In my testing, such an implementation seemed to have pretty good performance even for very large blur kernels. However, it appears as though the sampling mechanism diverges from what I’m used to with gaussian blur, and we end up giving more weight to things outside the focal point of the pixel very quickly. Perhaps something to look into next time.


Explaining Boyer-Moore

I had to implement the Boyer-Moore string search algorithm from scratch for an advanced algorithms class in 2015. I remember it took ages to get my head around the various concepts and I was so happy when I finally got it. So it was unsurprising, if disappointing, that fast-forward to today, I’ve forgotten how it works.

Luckily, I’m on a bit of a mission to learn Rust, and algorithms are a pretty good test bed. So this article is partly for my benefit, to summarise my own findings, and partly for the benefit of the reader who might be scratching their head about how it works.

Naive Substring Search

To understand why such a big deal is made about Boyer-Moore, you need to appreciate why “naive” substring search scales so poorly. “Naive” substring search is basically exactly how you’d expect to implement a substring search:

char needle[] = "some word";
char haystack[] = "Why oh why must I always find the word, some word, any word";

size_t needle_len = strlen(needle);
size_t haystack_len = strlen(haystack);

for (size_t i = 0; i <= (haystack_len - needle_len); ++i) {
    bool match = true;
    for (size_t j = 0; j < needle_len; ++j) {
        if (needle[j] != haystack[i + j]) {
            match = false;
            break;
        }
    }

    if (match) {
        return i;
    }
}

return -1;

This implementation of substring search is dutiful – it will try to match, from front to back, the needle at every position in the haystack. The problem is exactly that: it will try to match the needle at every single position in the haystack. In the worst case, it compares needle-length characters at every one of the haystack-length alignments. We say that this is O(nm).

If you’re just matching one or two characters against a very long body of text, it’s probably not too bad. But if you’re thinking of checking whether a paragraph is contained in a book, you’re gonna have a bad time.

If you visualize what is going on in the naive search, you’ll probably notice all the unnecessary work:

Why oh why must I always find the word, some word, any word
some word
 some word
  some word
   some word
    some word
     some word
      some word

The astute reader might see this and realize that in every case above, we’re just matching the letter “s” over and over again against cases that are futile anyway, because for the entire string to match, the last letter must also necessarily match and … it doesn’t. So let’s handle that case now:

char needle[] = "some word";
char haystack[] = "Why oh why must I always find the word, some word, any word";

size_t needle_len = strlen(needle);
size_t haystack_len = strlen(haystack);

for (size_t i = 0; i <= (haystack_len - needle_len); ++i) {
    bool match = true;
    for (size_t j = 0; j < needle_len; ++j) {
        /* Compare back to front */
        if (needle[(needle_len - 1) - j] != haystack[i + ((needle_len - 1) - j)]) {
            match = false;
            break;
        }
    }

    if (match) {
        return i;
    }
}

return -1;

The main difference here is that we compare back to front – so at least we’re catching all those futile mismatches early. While this is a little better, it is still technically O(nm) – the same problem could just happen in reverse: all the characters at the back could match, except for the first one. Now you’re back to square one.

Skipping Ahead

The impatient reader might wonder why we’re bothering to re-examine all those sub-patterns within the alignment we just failed. That alignment didn’t match, so why not skip the needle’s length ahead and check the next one instead? That would make the whole thing O(n), right?

Why oh why must I always find the word, some word, any word
some word
         some word
                  some word
                           some word
                                    some word
                                             some word
                                                  some word

Damn! We missed the match! The problem with this approach is that in skipping ahead you might partly overlap the match, then partly overlap another part of the match, but never actually see the whole match. So this substring search approach is actually incorrect. What we’ll need to do is “intelligently” figure out how many characters to skip ahead so that we’ll eventually align with the match in the haystack. And that is exactly what Boyer-Moore is about!

The Boyer-Moore algorithm defines two lookup tables, the bad character table and the good suffix table. The tl;dr of both of these tables is as follows:

  1. The bad character table tells you how many characters to shift the pattern forward if you want to align the character in the haystack string where the mismatch occurred with the closest character in the needle string such that the two will match. The theory goes that if there’s at least a character matching in the middle of the pattern, there’s some likelihood that the new alignment itself is a match.
  2. The good suffix table tells you how many characters to shift the pattern forward such that the substring of the pattern you just matched (i.e., all the characters that did match before the mismatch) occurs again. Generally speaking, the good suffix table doesn’t come into play very much, but it is relevant in cases where you have repeating patterns within the needle string and you need to check the repeated pattern against the input string. Usually this comes up in the context of trying to match DNA strings, where you only have an alphabet of GATC.

That probably didn’t mean very much without examples, so let’s start with the bad character rule. Look again at what we were doing when we tried to just blindly skip ahead by the needle’s length:

Why oh why must I always find the word, some word, any word
some word
         some word
                  some word
                           some word
                                    some word
                                             some word
                                                  some word

If you’re observant, you’ll see that we got close, but we just missed out, specifically, here:

ord, some 
some word

We mismatched on the last character (remember, starting from the back!), d != e. But we know that the character ‘e’ occurs in our pattern! If we just shift the pattern forward such that the two ‘e’ characters align….

some word 
some word

Ah-hah! We have our match! So by using the bad character rule we were able to keep skipping ahead all the way until we got close to the match, and then we just aligned with it! That’s very, very close to linear performance for something that was previously quadratic!

So where does the good suffix rule come into it? Well, to illustrate this, I’ll have to use another example:

ababa abcbb abcab abccb
abcab

Yes, this is slightly more contrived, but it demonstrates that the bad character rule on its own is not sufficient to ensure correct matching:

ababa abcbb abcab abccb
abcab
 abcab
      abcab
        abcab
         abcab
              abcab
                  abcab

Uh-oh, we missed it! So why did that happen? Well, we got close, but then this happened:

ababa abcbb abcab abccb
         abcab

“ab” matched “ab” at the end of the needle, which was a good start. But then “c” didn’t match ” “. Since ” ” does not occur within our needle, we skip forward by five characters (i.e., the length of the needle) and completely miss the fact that we could have aligned the “ab” at the start of the needle with the “ab” that we just matched! The good suffix rule accounts for this. We keep a record of how far back in the needle the same subpattern “ab” occurs again. In this case, it is three characters away, so because we matched two characters, we shift forward by three:

ababa abcbb abcab abccb
            abcab

And we’re done!

Computing the tables

By far the most annoying part of implementing the algorithm is computing the tables for the bad character rule and the good suffix rule. Most of the content I’ve seen around this has literally been as useful as code snippets with cryptic variable names or just snapshots of the already-computed tables. I’m going to walk you through how the tables actually work, step by step.

Starting with the bad character rule table, recall what the bad character rule actually did:

Why oh why must I always find the word, some word, any word
some word
         some word
                  some word
                           some word        5 <- (shift for e from d)
                                    some word
                                         some word (shift by only 5)

We saw that we mismatched on ‘e’ at the last character in the needle. We know from there that the character ‘e’ is five characters before that last character, so we shift forward only five characters the next time. So it stands to reason that we’ll need to compute a 2D table – for every character in the alphabet and every position in the needle, how far back do we need to look to find that character of the alphabet in the needle? Again, it isn’t the character in the needle at the given index that we’re interested in, it’s the character in the alphabet that we’re looking at (and we do it for each character in the alphabet). So yes, this is quite a large table:

    a b c d e f g h i j k l m n o p q r s t u v ... w
0 s
1 o                                     1
2 m                             1       2
3 e         1               1   2       3
4           2               2   3       4
5 w         3               3   4       5
6 o         4               4   5       6           1
7 r         5               5   6       7           2
8 d         6               6   7     1 8           3

Notice a convenient pattern? All we’re doing is figuring out how many characters backwards into the needle we need to go in order to reach that character of the alphabet. If we can’t find it, then no worries, we just don’t put an entry there (or typically, we’d put some sentinel value like 0 or -1). In Rust, you might have code that looks like this:

/* The alphabet we build the table over; the original post assumes an
 * ASCII_LOWER constant like this one */
const ASCII_LOWER: [char; 26] = [
    'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
    'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',
];

fn find_pending_character_index(chars: &Vec<char>, start: usize, needle: &char) -> usize {
    let len = chars.len();

    for i in (start + 1)..len {
        if chars[i] == *needle {
            return i - start;
        }
    }

    return 0;
}

fn bad_character_table(pattern: &str) -> Vec<Vec<usize>> {
    /* Go backwards through the string for each character and work out how many characters
     * away that character is (backwards) */
    let mut table = vec![vec![0 as usize; pattern.len()]; ASCII_LOWER.len()];

    /* String gets reversed here */
    let chars: Vec<char> = pattern.chars().rev().collect();
    let len = chars.len();

    /* Enumerate through the characters, back to front (idx here starts
     * at zero, but think of it as (len - 1) - idx) */
    for (idx, _) in chars.iter().enumerate() {
        for (alphabet_index, c) in ASCII_LOWER.iter().enumerate() {
            /* How many characters from this one until we reach the end
             * or see this character again */
            let res = find_pending_character_index(&chars, idx, c);
            /* Since idx was zero-indexed, we invert it here, inserting the
             * number of steps backwards we have to go to find this
             * character of the alphabet */
            table[alphabet_index][(len - 1) - idx] = res;
        }
    }

    return table;
}
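To convince yourself the table does what the worked example says, it’s worth poking at it. A quick check of my own (not from the original implementation) – a mismatch on ‘e’ at the last position of “some word” should yield a shift of five:

let table = bad_character_table("some word");
let e = ('e' as usize) - ('a' as usize);

/* 'e' sits five characters behind the final 'd', as in the diagram */
assert_eq!(table[e][8], 5);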

Okay, now for the good suffix table. Thankfully, it is much smaller than the bad character table, but it is a little more complicated to generate. Recall what the good suffix rule was doing:

ababa abcbb abcab abccb
abcab
 abcab
      abcab
        abcab
         abcab <- (shift forward by 3 to align subpattern)
            abcab (shifted only by 3, not by 5)

So essentially we need to keep track of, for each suffix, how many characters we can look backwards in the needle to find the same suffix again. Simple enough to say, but how do we actually implement something like that?

Well, the best way to explain it is that for each suffix, we have a sliding window, the same size as the suffix, starting one character before it. We scan both the substring pointed to by the window and the suffix to see if there is a match, and if so, we take note of how many characters away our window was:

abcab
    b (suffix)
   _ (window)
  _
 b (suffix appeared again, 3 characters away)

Same thing with the “ab” suffix…

abcab
   ab (suffix)
  __
 __
ab (suffix appeared again, 3 characters away)

So our overall good suffix table will look like this:

0 1 2 3 4
a b c a b
---------
      3 3

What does that look like in implementation? Well, something like this:

fn good_suffix_table(pattern: &str) -> Vec<usize> {
    /* For each character in the string, take the subpattern from char[i]..char[len]
     * and see if such a subpattern exists elsewhere in the string, moving the window
     * back one by one. This is an O(N^2) operation over the pattern, since you have
     * to check the subpattern over each window from the current position. */
    let chars: Vec<char> = pattern.chars().rev().collect();
    let mut table = vec![0 as usize; chars.len()];

    for idx in 0..chars.len() {
        /* Add one here since the outer loop was exclusive; we do not want the inner
         * loop to be exclusive of the last index */
        for candidate in 1..((chars.len() + 1) - idx) {
            /* Checking two empty subpatterns against each other doesn't make
             * much sense - we want a full shift in that case */
            if idx > 0 && matching_subpatterns(&chars, 0, candidate, idx) {
                table[(chars.len() - 1) - idx] = candidate;
                break;
            }
        }
    }

    return table;
}
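The matching_subpatterns helper isn’t shown in the snippet above; here’s a minimal sketch of what it presumably does – check whether the two windows of the given length contain the same characters:

/* A sketch of the helper used above (not in the original post): does
 * the window of `len` characters at `first` match the one at `second`? */
fn matching_subpatterns(chars: &Vec<char>, first: usize, second: usize, len: usize) -> bool {
    (0..len).all(|i| chars[first + i] == chars[second + i])
}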

Putting it all together

So now that we’ve gone to all that effort to construct our good suffix and bad character tables, how do we make use of them in the overall algorithm? Well, it is remarkably simple:

fn boyer_moore(input: &str,
               pattern: &str,
               bad_character: &Vec<Vec<usize>>,
               good_suffix: &Vec<usize>) -> Option<usize> {
    /* Some obvious cases */
    let pattern_chars: Vec<char> = pattern.chars().collect();
    let input_chars: Vec<char> = input.chars().collect();

    if pattern_chars.len() > input_chars.len() {
        return None;
    }

    /* Align at the first character */
    let mut alignment = 0;
    let max_alignment = input_chars.len() - pattern_chars.len();

    loop {
        let mut mismatch = false;

        for (pattern_index, i) in (alignment..(alignment + pattern_chars.len())).enumerate().rev() {
            if input_chars[i] != pattern_chars[pattern_index] {
                mismatch = true;

                /* If we're on the last alignment, return now,
                 * as there are no matches */
                if alignment == max_alignment {
                    return None;
                }

                /* Mismatch occurred here, look up this character/index pair
                 * in the good suffix and bad character tables to see how far
                 * forward to shift. Characters outside our lowercase table
                 * just fall through to a full shift */
                let bad_character_shift = if input_chars[i].is_ascii_lowercase() {
                    let charidx = (input_chars[i] as usize) - ('a' as usize);
                    bad_character[charidx][pattern_index]
                } else {
                    0
                };
                /* Shift amount is the higher of either the bad character
                 * rule or the good suffix rule */
                let shift = std::cmp::max(bad_character_shift,
                                          good_suffix[pattern_index]);

                if shift == 0 {
                    /* No shift found, shift as far as we can, either to
                     * the end or by the length of the needle */
                    alignment = std::cmp::min(max_alignment,
                                              alignment + pattern_chars.len());
                } else {
                    alignment = std::cmp::min(max_alignment, alignment + shift);
                }

                break;
            }
        }

        /* No mismatches, return the current alignment */
        if !mismatch {
            return Some(alignment);
        }
    }
}

Essentially, we start at the beginning and match from back to front. If we find a mismatch, we record which character we mismatched on and look up both the input character and the index of the mismatch within the pattern in the bad character and good suffix tables. If those tell us how far to shift, we shift by that amount. Otherwise, we shift forward by the full pattern length.

And that is how you get super fast, near-linear string matching!
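To see it all run end to end, here’s a minimal bit of wiring of my own (not from the original post) – note the tables above only cover lowercase ASCII, so the haystack is lowercased:

fn main() {
    let haystack = "why oh why must i always find the word, some word, any word";
    let needle = "some word";

    let bad_character = bad_character_table(needle);
    let good_suffix = good_suffix_table(needle);

    match boyer_moore(haystack, needle, &bad_character, &good_suffix) {
        Some(alignment) => println!("found needle at index {}", alignment),
        None => println!("no match"),
    }
}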

A better way to think about Quicksort

Recently I’ve been learning Haskell, a purely functional programming language. I still don’t really understand why I’m doing that; I think functional style is great and it makes a lot of sense to divide programs up into mostly atomic units, composable at the domain level. In a functional style, you generally try to put the majority of your code into idempotent functions whose outputs are strictly defined by their inputs. The function itself is kind of a black box, and it checks the “functional style” checkbox as long as there are no observable side effects. Imperative shell, functional core. I find that programs that use a functional style are a lot easier to reason about, to test and to re-use. Purely functional programming, on the other hand, always seemed like overkill to me, sacrificing comprehensibility for the sake of functional programming principles.

So what does this have to do with Quicksort? Well, it turns out that Chapter 5 of Learn You a Haskell has a very motivating example:

quicksort :: (Ord a) => [a] -> [a]  
quicksort [] = []  
quicksort (x:xs) =  
    let smallerSorted = quicksort [a | a <- xs, a <= x] 
        biggerSorted = quicksort [a | a <- xs, a > x] 
    in  smallerSorted ++ [x] ++ biggerSorted

Now, I appreciate that Haskell’s syntax is a little obtuse, so let’s rephrase that in another, more widely known language:

def quicksort(elements):
    if elements:
        pivot = elements[0]
        smaller = quicksort([e for e in elements[1:] if e <= pivot])
        larger = quicksort([e for e in elements[1:] if e > pivot])
        return smaller + [pivot] + larger

    return []

When I first saw this example, it was like a lightbulb went off in my head. Quicksort finally made sense. It wasn’t this weird-looking algorithm that just happened to work and have decent time complexity for some reason. See, if you ever took a Data Structures and Algorithms course at university, you were probably taught (in-place) Quicksort like this:

quicksort(elements, left, right) {
    if (right - left <= 1) {
        return
    }

    pivot = partition(elements, left, right)
    quicksort(elements, left, pivot)
    quicksort(elements, pivot + 1, right)
}

Which probably led to the first “WTF” – in my case – “WTF does a partition and a pivot have to do with sorting?”

partition(elements, left, right) {
    cmp = elements[left];
    pivot = left + 1;

    for (i = (left + 1); i < right; ++i) {
        if (elements[i] < cmp) {
            tmp = elements[pivot];
            elements[pivot] = elements[i];
            elements[i] = tmp;

            pivot++;
        }
    }

    tmp = elements[pivot - 1];
    elements[pivot - 1] = cmp;
    elements[left] = tmp;

    return pivot - 1;
}

At which point 95% of people in the class would be thinking “WTF????????”

And then the prof will usually open their mouth and say something like “so first you pick a pivot” and then everyone in the room will immediately switch off and just pray that they can rote-learn their way through the upcoming test and forget about it.

Really, the functional way of looking at it makes far more sense.

But hang on, you say, “my textbook said that Quicksort was O(n log n) in the best case and O(n^2) in the worst case – why would you want to use it over merge sort when merge sort is guaranteed to be O(n log n) and both methods require concatenating lists together?”

That’s a good point. The purely functional style doesn’t allow for inner mutability of the array, so you can’t exactly sort in place. But it is a good way to understand what on earth is going on with the in-place sort. To understand that, though, first you need to understand why the recursive version of quicksort works and how it is related to merge sort.

If you’ve seen the recursive version of merge sort, it looks sort of similar to the recursive version of quicksort:

def mergesort(elements):
    if len(elements) <= 1:
        return elements

    midway = len(elements) // 2
    left = mergesort(elements[0:midway])
    right = mergesort(elements[midway:])

    merged = []

    leftIdx = 0
    rightIdx = 0
    while leftIdx < len(left) or rightIdx < len(right):
        if rightIdx >= len(right) or (leftIdx < len(left) and left[leftIdx] < right[rightIdx]):
            merged.append(left[leftIdx])
            leftIdx += 1
        else:
            merged.append(right[rightIdx])
            rightIdx += 1

    return merged

An important property of comparison sorts is that the lower bound on their time complexity is O(n log n). That is to say, in order to provide some operation which gets the array into a “more sorted” state than it was before, you need to evaluate all the elements, and at best you’ll have to do that operation about log n times. So what is the incremental thing that merge sort is doing here? Well, it looks at two chunks of an array, and once it is done with its O(n) operation, it guarantees the inner ordering of the resulting chunk: if you were looking at [1, 5] and [4, 7], the result would be [1, 4, 5, 7]. But it doesn’t say anything about the ordering of the resultant chunk in relation to other chunks in the array, nor does it guarantee that the chunk contains all the elements within the range 1-7 that appear in the array. All that said, the more you keep merging two separate chunks and guaranteeing their inner ordering, the more “sorted” the array becomes. Because we kept halving the array, it stands to reason that if you do this about log n times, you’ll eventually end up with a sorted array.

If you compare the two recursive definitions, you’ll realise that quicksort is the same thing but in reverse. Quicksort guarantees that the outer ordering between two chunks is maintained – each number in the right hand side will always be greater than each number in the left hand side. But it doesn’t say anything about the inner ordering of those two chunks. All that said, if you keep doing the same thing to each of the two chunks, you’ll eventually get to the point where there is no outer ordering left to maintain and thus each chunk has an inner ordering too. Again, in the best case you’ll be halving the array, so you can recursively maintain outer ordering between two chunks in log n levels of recursion (although in the worst case, you’ll be splitting the array into two parts of 1 and n - 1 elements, at which point you have to maintain outer ordering n times).

Okay, so what does this have to do with the overly-complicated looking in-place partition function? Well, imagine there was some atomic function that could take an array and make it outer ordered, such that all the elements in one part were always greater than all the elements in the other part, just by looking at each element in the list once. That’s what partition does. All the elements less than a certain value get swapped over to the left hand side and all the elements greater than it get swapped over to the right hand side. Then you just put the centrepiece back in place and you’re done. The same idea reads quite naturally in Rust, as sketched below.
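A sketch of my own of the in-place version, with right exclusive as in the pseudocode above:

/* The in-place version sketched in Rust: partition makes the slice
 * "outer ordered" around the first element, then we recurse into the
 * two halves. `right` is exclusive. */
fn partition(elements: &mut [i32], left: usize, right: usize) -> usize {
    let cmp = elements[left];
    let mut pivot = left + 1;

    for i in (left + 1)..right {
        if elements[i] < cmp {
            elements.swap(pivot, i);
            pivot += 1;
        }
    }

    /* Put the centrepiece back between the two halves */
    elements.swap(left, pivot - 1);
    pivot - 1
}

fn quicksort(elements: &mut [i32], left: usize, right: usize) {
    if right - left <= 1 {
        return;
    }

    let pivot = partition(elements, left, right);
    quicksort(elements, left, pivot);
    quicksort(elements, pivot + 1, right);
}

You’d call it as quicksort(&mut numbers, 0, numbers.len()).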

I still think the functional style is a lot easier to understand though. Maybe purely functional programming isn’t so bad after all.

Vale Unity

It’s been almost six years since Ubuntu first shipped with Unity as the default desktop and compiz as the underlying compositor. A similar software stack has shipped on every subsequent release up until 17.04. Those ten releases make up about half of the Ubuntu desktop’s lifespan and certainly more than half of the person-hours invested in the project, so today’s announcement to wind it down is a pretty significant moment.

I haven’t worked on Unity or Ubuntu for a very long time now, but today’s announcement has filled me with a couple of emotions. On one hand, it’s disappointing to know that your legacy, a project you personally invested a lot into, is going away. Time truly is the destroyer of all things. I know Mark and a lot of other people at Canonical must be feeling particularly gutted right now. On the other, maybe it’s a sigh of relief. Unity certainly hasn’t been the most conflict-free or easy project, and maybe with this difficult decision the people involved finally have the freedom to move on without feeling guilty.

The Team who Built Unity

I have nothing but good things to say about the Desktop Experience Team at Canonical – later the Product Strategy Team – and the associated Ubuntu community. Our core engineering staff was a killer combo. We had two people, Jason and Neil, who had amazing experience with graphical toolkits and a passion for desktop GL. We had Mirco, Gord and Jay, who knew how to extract every last ounce of performance from a GPU. We had Tim and Alan, who were marvellous leaders and champions of stability and software maturity. We had Alex, Thomi and Thomas, who championed automated testing. We had Neil again, because he literally filled far more shoes, so he deserves more than one mention. We had Michal and Mikkel, who were masters of desktop search. We had Didier and Seb, who could get things shipped, no matter what. We had Robert, who knew how to put the pieces together for groundbreaking web integration. We had Marco and Andrea, who took the lead in maintaining the desktop even when nobody asked them to. We had Daniel, who was a leader in desktop quality. We had John, Otto and Rosie, who were amazing designers. We had Jorge and Jono, who were personable and enthusiastic community managers. There were countless others making invaluable contributions. And then we had me – and I’m just humbled to have crossed paths with all these people.

We were co-workers but we were also friends. Not only friends, but very close, lifelong friends. We often talked about non-work related stuff to each other and really went the extra mile to support each other. We were under a lot of pressure to make the dream a reality and when you gel well together, that pressure makes friendships even stronger. We didn’t see each other in person much, but when we did, it honestly made up some of the happiest times of my life.

The Fun Moments

We had a lot of great moments with each other building Unity. There was a time when we had to instruct Mark on how to pull and build a new build back when we didn’t have stable updates in the alpha release. Mark was on a cellular connection and had to log out of IRC in order to restart his session. I remember Jason saying “whelp, if he doesn’t come back in 1 minute, assume we are all fired”. That got a few laughs.

There was another time when the DX team and the Design team had an ongoing prank-war with each other. This started with the “flies in the ice-cream” prank and eventually culminated with a sardine pizza being sent to Jason from England. I still have no idea how that happened. Then there was also the bug report.

We slipped all sorts of easter eggs into Unity. One, when launched with UNITY_NEKO=1, replaced all images in the dash with images of cats. Another, which almost made it in, filled the screen with fire when the Konami code was entered.

The Low Moments

Of course, Unity’s development had its bad times. The development was, understandably, not well received by established community members who saw the project as fragmentation. We faced numerous quality control problems when trying to get the desktop out on such a tight timeline. The stacking bugs kept me up day and night and required a lot of personal sacrifice to get to a state where they stopped occurring so frequently. Some of our users were randomly hateful and/or toxic. For instance, someone in a Reddit AMA told Jason that they hoped he would be “hit by a bus”. That’s not a very nice thing to say. In the later half of the 11.10 cycle, I began to feel overwhelmed with matters going on in both my personal life and the status of the project and entered a deep state of depression. There was additional pressure in the company as Unity 2D was developed and the projects inevitably, though perhaps not intentionally, were pitted against each other.

Then there were the post-12.10 staff departures. It was really sad to see the team break apart almost as quickly as it formed. I left during this time as well, after most of my friends on the team had left.

Unity 8

Unity 8 was a total reboot of the project, learning from the failures of Unity 7. There were a lot of high hopes. Canonical had recently been on a recruitment drive and hired a lot of smart people. On paper, Unity 8 looked technically groundbreaking. Unity was going to move away from an ageing, unmaintainable and limited display server architecture to one which was pervasively multithreaded, could take better advantage of mobile and desktop hardware, had high test coverage and really top-notch developers working on it. Global package management was going away in favour of isolated and atomically upgradeable containers known as ‘snaps’. The OS was going to make mobile hardware a first class citizen. Touch was going to become the new interaction paradigm. Everything was going to become more internet connected and integrated than before. Apps were going to become smarter on power and resource usage. The project was moving to a toolkit which at the time had an incredible backing and momentum behind it. Legacy apps were going to be supported. “We have to go deeper” was the motto of the team.

The pieces were in motion. A very real demo was ready in 2013 for announcement to the world. The technology was so groundbreaking that it was going to be kept under wraps for a while until it could reach perfection. And then we’d be in free software utopia. Convergence would become a reality. Ubuntu would win, by sheer force of being better than anything that came before it. The plan, the idea and the resources available to execute on that plan were perfect.

And then it was delayed.

And then it was delayed again.

And then it was delayed for a third time.

And now it’s not shipping.

An outsider’s postmortem

I’d be lying if I said I didn’t see this coming, but not for the reasons that other people might give. The strategic decisions made were necessary – X11 was seriously limiting our ability to deliver on features and stability, and without atomic updates it’s hard to give people a secure and stable phone – but they were ultimately fatal. I remember back when we decided to rewrite compiz and completely underestimated the scale and effort needed to make that work in a bug-free way. It killed the project. Once you rewrite one thing, it’s very difficult to resist the urge to start rewriting other things while you’re at it. Porting compiz to C++ wasn’t enough; we had to split rendering into plugins; we had to support the non-compositing case; we had to support reparenting; we had to support a replacement for function pointer hooking; we had to drop multi-screen and we had to move to a more up-to-date rendering model. All at once. We had to understand code that we hadn’t written and code whose original authors were long gone from the project. And to make it look like progress was being made, we had to bolt more features on top of the existing rewritten ones, with fewer development resources as people burnt out and quit.

About this time last year Unity 8 started to look an awful lot like compiz++ on a larger scale. Not only was it going to use Qt, it was going to use an entirely new display server! Not only was it going to use an entirely new display server, it was going to use an entirely new driver model! Not only was it going to use an entirely new driver model, it was going to use Android as the base for mobile! Not only was it going to use Android and Debian, but Debian would go away and we’d have containerised applications! Not only was it going to use containerised applications, but the applications would be all-new and written in Qt! And not only all that, but its going to be perfect!

The delays just kept on happening. And after every delay, so as to motivate the base and show that Unity was still relevant, more scope was added. More scope led to more delays! More delays led to more scope! And it’s not going to ship until it’s perfect!

I became worried around this time last year that it was never going to ship. The reality was that “perfect” became a moving target. And the thing about aiming for perfection is that you can never hit it until you’ve had your imperfect product in front of users for a few years. Ironically, Unity 7 is now seen as “perfect” by a lot of people, because they don’t remember the change from GNOME 2 to Unity. And now there’s going to be another transition, to GNOME 3.

Community

Unity was all about turning a community made platform into a product. The goals were noble – we wanted to get this amazing bit of community goodness into the hands of so many other users. We wanted to preach the word of free software. We wanted to show people that they do have an alternative to the big four companies that control the software they use on a daily basis. We wanted to show people that free software could win! And the only way to do that was to turn it into a product that was so irresistible and incredible that you couldn’t not buy it.

And that’s where the problem was. The developers making it wanted a community but in order to get it to where it needed to be, it needed to become a product. There needed to be one direction instead of many. Linux was no longer about choice. It was about the masses. Unity had lost its soul.

Where it’s all going

For now, it looks like Canonical has folded on the client to double down on the server. That was a very difficult decision to make and I applaud the leadership for their courage and for their ability to recognise what was becoming sunk cost. It’s sad for the developers, product people and community involved, but at the end of the day, you have to make money.

Free software on the desktop might enter a dark age soon. There aren’t very many companies in this space now. Intel was out as of halfway through last year. Google is focused on Android and Chrome OS. Novell and Sun folded a long time ago. Nokia fell off a burning platform. IBM is nowhere to be found. And now Canonical is pivoting away too. Samsung’s Tizen is potentially an interesting player, but everyone knows it’s a plan B. Red Hat and SUSE remain, and so does Endless. And of course there is still Collabora, Codethink, Igalia and other free software contracting firms, though (edit) their participation depends on the re-use of our technology by larger players outside the desktop space. There’s room to grow, but there has certainly been a lot of disruptive change lately.

(edit 10/04/2017: This section got a little more traffic than I was expecting and I wrote it quite quickly. I’ve made some changes to better reflect the state of affairs based on feedback. I’ve replaced “their participation depends on the larger players” with “their participation depends on re-use of our technology by larger players outside the desktop space” and clarified that it’s not all doom and gloom! Just that there has been a lot of change lately and things aren’t pointing in the most golden direction right now.)

A new hope

Let’s not forget where the desktop free software revolution came from. It came from ordinary people, you and me, who wanted to make a difference. It came from people who didn’t want to accept the Microsoft and Apple duopoly on computing. Even if the resources start to dry up now, it doesn’t mean that free software is gone forever. Some of the best innovations came during the post-bubble period of 2000-2010, when the software world had become stagnant with Windows XP and IE6.

There will always be people who want to do something different. A community will form again. And once everyone has a clearer head, free software will rise again.

Birthrights

When you relied on us to grow your population and economy, your birthright to exclude the people they loved too ceased.

When you wanted to spread your culture and cultural norms beyond your border, your birthright to exclude the people who accepted the change and allowed that culture to spread ceased.

When you wanted to mine our land for minerals, your birthright to exclude the workers and their families ceased.

When you wanted to give your sons and daughters the opportunity to marry the person they loved, your birthright to exclude their mothers, fathers, family and friends ceased.

When you wanted to rely on foreign talents to build your intellectual capital and hoard knowledge, your birthright to exclude those who would give you that knowledge ceased.

When you wanted to rely on us to do the jobs nobody else wanted to do because they paid terribly and had awful working conditions, your birthright to exclude the workers and their community ceased.

When you wanted to rely on us to sell you goods and services at prices which are grossly unfair to us, your birthright to exclude the workers who toiled for hours to make those goods for you ceased.

When you wanted to rely on us to teach you our language and culture so that you could make your military intelligence even better, your birthright to exclude the people who carried that language and culture through the ages ceased.

When you bombed us, forcing millions of us to fear for our lives and flee, when you created instability on our home town because of your own agenda, when we all had to walk for months and pack ourselves into over-full boats and buses not knowing where we were going or if we were even going to live to make another home for ourselves, your birthright to exclude us ceased.

Apt is surprisingly flexible

After a break for a few months, I just shipped a new version of polysquare-travis-container. The main difference here is that we are now able to create and maintain containers without using proot at all, which is a slight improvement on the last big round of changes made in August.

The initial reason for using proot was to provide a simple way to fool apt and dpkg into thinking they were running from the root directory when they were actually running from a directory owned by the user. The theory goes that if you can fool apt into doing that, then you can install and run packages built for other distributions without the overhead of running virtual machines. As explained about two years ago, the best solutions we have available for doing this are chroot (and wrappers around chroot) and Docker. However, both of those require root access to make the chroot system call and/or set up cgroups. proot solves that problem by running your program through ptrace, then intercepting and rewriting system calls such that programs think they are running in the root directory.

However, as time went on, I encountered problems with the proot approach. Mixing redirection with environment variables like PATH tended not to work out so well. On newer Ubuntu releases, proot ran incredibly slowly. Then, finally, it just stopped working at all on Travis-CI, which kind of defeats the purpose of using it in polysquare-travis-container.

Earlier in 2015, I realised the solution was to take the same approach used by polysquare-travis-container’s support for Windows and macOS – just set the right environment variables and support the packages which do the right thing and don’t hardcode absolute paths. This has worked out surprisingly well. Supporting Linux package managers was far trickier. I suppose part of the problem is that they have historically always run as the root user and as a result write directly to /usr and keep their data in /var. After all, they are there to manage the entire filesystem, so those assumptions make sense.

After the changes made in August, I wasn’t too optimistic that it would be possible to run these package managers without using chroot or a chroot-alternative. But after some digging I found that Apt itself has a test suite which has to run under the exact same constraints. As it turns out, pretty much every path in Apt is configurable to some extent, so much so that with an Apt configuration like below, you can run apt as a non-root user and have it keep all its changes in a specified directory.

Apt {
    Architecture "amd64";
    Get {
        Assume-Yes true;
    };
};
debug {
    nolocking true;
};
Acquire::Queue-Mode "host";
Dir "fakeroot";
Dir::Cache "fakeroot/var/cache/apt";
Dir::State "fakeroot/var/lib/apt";
Dir::State::status "fakeroot/var/lib/dpkg/status";
Dir::Bin::Solvers "fakeroot/usr/lib/apt/solvers";
Dir::Bin::Planners "fakeroot/usr/lib/apt/planners";
Dir::Bin::Methods "fakeroot/usr/lib/apt/methods";
Dir::Bin::Dpkg "fakeroot/usr/bin/dpkg.w";
Dir::Etc "fakeroot/etc/apt";
Dir::Log "fakeroot/var/log/apt";

Along with Apt, you’ll also need to tell Dpkg to run in a separate root directory. Thankfully, it has command line options to tell it to do this. The only problem is that Apt invokes the Dpkg binary on occasion and so you’ll need to write a wrapper script to ensure that Dpkg gets called with the right command line arguments.

#!/bin/bash
fakeroot/usr/bin/dpkg --root='fakeroot' \
    --admindir=fakeroot/var/lib/dpkg \
    --log=fakeroot/var/log/dpkg.log \
    --force-not-root --force-bad-path "$@"

You’ll notice above that I override the Dpkg binary with dpkg.w which contains the script contents above.

The final piece of the puzzle was to stop postinst, postrm and prerm scripts from running. These aren’t necessary in these containers, since they are mostly responsible for updating things like system caches or configuration files. The containers are meant to be one-off environments, so all we care about are the binaries. Disabling the scripts was as simple as removing them.

With all of that effort out of the way, we can now create and run a container without the use of Docker, chroot or proot, and install a completely separate toolchain and run binaries from it.

$ psq-travis-container-create --distro Ubuntu --release precise --arch x86_64 container --local --packages PACKAGES --repositories REPOSITORIES

Configured Distribution:
 - Release: precise
 - Package System: DpkgLocal
 - Architecture: x86_64
 - Distribution Name: Ubuntu
✓ Using existing folder for proot distro Ubuntu precise amd64
-> Update repositories [apt-get update -y --force-yes]
   Get:1 http://archive.ubuntu.com precise Release.gpg [198 B]
   Get:2 http://archive.ubuntu.com precise-security Release.gpg [198 B]
   Get:3 http://archive.ubuntu.com precise-updates Release.gpg [198 B]
   Get:4 http://ppa.launchpad.net precise Release.gpg [316 B]
   Get:5 http://ppa.launchpad.net precise Release.gpg [316 B]
   Get:6 http://archive.ubuntu.com precise Release [49.6 kB]
   Get:7 http://ppa.launchpad.net precise Release [13.0 kB]
   Ign http://ppa.launchpad.net precise Release
   Get:8 http://llvm.org llvm-toolchain-precise-3.6 Release.gpg [836 B]
   Get:9 http://ppa.launchpad.net precise Release [12.9 kB]
   Ign http://ppa.launchpad.net precise Release
   Get:10 http://archive.ubuntu.com precise-security Release [55.5 kB]
   Get:11 http://ppa.launchpad.net precise/main amd64 Packages [592 B]
   Get:12 http://ppa.launchpad.net precise/main TranslationIndex [196 B]
   Get:13 http://archive.ubuntu.com precise-updates Release [55.4 kB]
   Get:14 http://ppa.launchpad.net precise/main amd64 Packages [49.1 kB]
   Get:15 http://llvm.org llvm-toolchain-precise-3.6 Release [3,355 B]
   Ign http://llvm.org llvm-toolchain-precise-3.6 Release
   Get:16 http://archive.ubuntu.com precise/main Sources [934 kB]
   Get:17 http://ppa.launchpad.net precise/main TranslationIndex [205 B]
   Get:18 http://ppa.launchpad.net precise/main Translation-en [310 B]
   Get:19 http://ppa.launchpad.net precise/main Translation-en [15.2 kB]
   Get:20 http://archive.ubuntu.com precise/restricted Sources [5,470 B]
   Get:21 http://archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
   Ign http://llvm.org llvm-toolchain-precise-3.6/main TranslationIndex
   Get:22 http://archive.ubuntu.com precise/restricted amd64 Packages [8,452 B]
   Get:23 http://archive.ubuntu.com precise/main TranslationIndex [3,706 B]
   Get:24 http://archive.ubuntu.com precise/restricted TranslationIndex [2,596 B]
   Get:25 http://archive.ubuntu.com precise-security/main Sources [146 kB]
   Get:26 http://archive.ubuntu.com precise-security/restricted Sources [4,623 B]
   Get:27 http://archive.ubuntu.com precise-security/main amd64 Packages [664 kB]
   Get:28 http://archive.ubuntu.com precise-security/restricted amd64 Packages [10.8 kB]
   Get:29 http://archive.ubuntu.com precise-security/main TranslationIndex [208 B]
   Get:30 http://archive.ubuntu.com precise-security/restricted TranslationIndex [202 B]
   Get:31 http://archive.ubuntu.com precise-updates/main Sources [500 kB]
   Get:32 http://archive.ubuntu.com precise-updates/restricted Sources [8,840 B]
   Get:33 http://archive.ubuntu.com precise-updates/main amd64 Packages [1,045 kB]
   Get:34 http://archive.ubuntu.com precise-updates/restricted amd64 Packages [15.4 kB]
   Get:35 http://archive.ubuntu.com precise-updates/main TranslationIndex [208 B]
   Get:36 http://archive.ubuntu.com precise-updates/restricted TranslationIndex [202 B]
   Get:37 http://archive.ubuntu.com precise/main Translation-en_AU [4,434 B]
   Get:38 http://archive.ubuntu.com precise/main Translation-en [726 kB]
   Get:39 http://archive.ubuntu.com precise/restricted Translation-en_AU [2,407 B]
   Get:40 http://archive.ubuntu.com precise/restricted Translation-en [2,395 B]
   Get:41 http://archive.ubuntu.com precise-security/main Translation-en [269 kB]
   Get:42 http://archive.ubuntu.com precise-security/restricted Translation-en [2,793 B]
   Get:43 http://archive.ubuntu.com precise-updates/main Translation-en [431 kB]
   Get:44 http://archive.ubuntu.com precise-updates/restricted Translation-en [3,682 B]
   Get:45 http://llvm.org llvm-toolchain-precise-3.6/main amd64 Packages [6,216 B]
   Ign http://llvm.org llvm-toolchain-precise-3.6/main Translation-en_AU
   Ign http://llvm.org llvm-toolchain-precise-3.6/main Translation-en
   Fetched 6,328 kB in 16s (389 kB/s)
   Reading package lists...
   W: GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY C1DB487B944B6EA7
   W: GPG error: http://ppa.launchpad.net precise Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 1E9377A2BA9EF27F
   W: GPG error: http://llvm.org llvm-toolchain-precise-3.6 Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 15CF4D18AF4F7421
-> Downloading APT packages and dependencies [apt-get -y --force-yes -d install --reinstall nano cmake clang-3.6]
   Reading package lists...
   Building dependency tree...
   Reading state information...
   The following extra packages will be installed:
     adduser binutils bsdmainutils busybox-initramfs ca-certificates cmake-data
     cpio emacsen-common gcc-4.9-base gcc-6-base ifupdown initramfs-tools
     initramfs-tools-bin initscripts insserv iproute klibc-utils libarchive12
     libasan1 libasn1-8-heimdal libatomic1 libblkid1 libbsd0 libc-bin
     libc-dev-bin libc6 libc6-dev libcilkrts5 libclang-common-3.6-dev
     libclang1-3.6 libcomerr2 libcurl3 libcurl3-gnutls libdbus-1-3 libdrm-intel1
     libdrm-nouveau1a libdrm-radeon1 libdrm2 libedit2 libexpat1 libgcc-4.9-dev
     libgcc1 libgcrypt11 libglib2.0-0 libgnutls26 libgomp1 libgpg-error0
     libgssapi-krb5-2 libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal
     libheimntlm0-heimdal libhx509-5-heimdal libidn11 libitm1 libk5crypto3
     libkeyutils1 libklibc libkrb5-26-heimdal libkrb5-3 libkrb5support0
     libldap-2.4-2 libllvm3.6 liblsan0 libmount1 libncurses5 libncursesw5
     libnettle4 libnih-dbus1 libnih1 libobjc-4.9-dev libobjc4 libp11-kit0
     libpciaccess0 libpcre3 libplymouth2 libpng12-0 libquadmath0
     libroken18-heimdal librtmp0 libsasl2-2 libslang2 libsqlite3-0 libssl1.0.0
     libstdc++-4.9-dev libstdc++6 libtasn1-3 libtsan0 libubsan0 libudev0 libuuid1
     libwind0-heimdal libxml2 libxmlrpc-core-c3 linux-libc-dev lsb-base
     module-init-tools mount mountall ncurses-bin openssl passwd plymouth procps
     sysv-rc sysvinit-utils udev upstart util-linux
   Suggested packages:
     liblocale-gettext-perl perl-modules binutils-doc cpp wamerican wordlist
     whois vacation gnustep gnustep-devel clang-3.6-doc gcc make libarchive1
     isc-dhcp-client dhcp-client ppp rdnssd net-tools bash-completion bootchart
     iproute-doc glibc-doc locales rng-tools gnutls-bin krb5-doc krb5-user
     pciutils libstdc++-4.9-doc nfs-common spell sysv-rc-conf bum sash watershed
     graphviz util-linux-locales kbd console-tools dosfstools
   Recommended packages:
     ecryptfs-utils llvm-3.6-dev python psmisc e2fsprogs libatm1 manpages-dev
     dbus libglib2.0-data shared-mime-info krb5-locales libgpm2 libsasl2-modules
     uuid-runtime xml-core plymouth-theme-ubuntu-text plymouth-theme
   The following NEW packages will be installed:
     adduser binutils bsdmainutils busybox-initramfs ca-certificates clang-3.6
     cmake cmake-data cpio emacsen-common gcc-4.9-base gcc-6-base ifupdown
     initramfs-tools initramfs-tools-bin initscripts insserv iproute klibc-utils
     libarchive12 libasan1 libasn1-8-heimdal libatomic1 libblkid1 libbsd0
     libc-dev-bin libc6-dev libcilkrts5 libclang-common-3.6-dev libclang1-3.6
     libcomerr2 libcurl3 libcurl3-gnutls libdbus-1-3 libdrm-intel1
     libdrm-nouveau1a libdrm-radeon1 libdrm2 libedit2 libexpat1 libgcc-4.9-dev
     libgcrypt11 libglib2.0-0 libgnutls26 libgomp1 libgpg-error0 libgssapi-krb5-2
     libgssapi3-heimdal libhcrypto4-heimdal libheimbase1-heimdal
     libheimntlm0-heimdal libhx509-5-heimdal libidn11 libitm1 libk5crypto3
     libkeyutils1 libklibc libkrb5-26-heimdal libkrb5-3 libkrb5support0
     libldap-2.4-2 libllvm3.6 liblsan0 libmount1 libncurses5 libncursesw5
     libnettle4 libnih-dbus1 libnih1 libobjc-4.9-dev libobjc4 libp11-kit0
     libpciaccess0 libpcre3 libplymouth2 libpng12-0 libquadmath0
     libroken18-heimdal librtmp0 libsasl2-2 libslang2 libsqlite3-0 libssl1.0.0
     libstdc++-4.9-dev libtasn1-3 libtsan0 libubsan0 libudev0 libuuid1
     libwind0-heimdal libxml2 libxmlrpc-core-c3 linux-libc-dev lsb-base
     module-init-tools mount mountall nano ncurses-bin openssl passwd plymouth
     procps sysv-rc sysvinit-utils udev upstart util-linux
   The following packages will be upgraded:
     libc-bin libc6 libgcc1 libstdc++6
   4 to upgrade, 108 to newly install, 0 to remove and 18 not to upgrade.
   Need to get 96.1 MB of archives.
   After this operation, 260 MB of additional disk space will be used.
   WARNING: The following packages cannot be authenticated!
     gcc-6-base libgcc1 libstdc++6 gcc-4.9-base libasan1 libatomic1 libcilkrts5
     libllvm3.6 libgomp1 libitm1 liblsan0 libtsan0 libubsan0 libquadmath0
     libgcc-4.9-dev libstdc++-4.9-dev libobjc4 libobjc-4.9-dev libclang1-3.6
     libclang-common-3.6-dev clang-3.6
   Get:1 http://archive.ubuntu.com/ubuntu/ precise-security/main libc-bin amd64 2.15-0ubuntu10.15 [1,177 kB]
   Get:2 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main gcc-6-base amd64 6.2.0-3ubuntu11~12.04 [18.1 kB]
   Get:3 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libgcc1 amd64 1:6.2.0-3ubuntu11~12.04 [44.6 kB]
   Get:4 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libstdc++6 amd64 6.2.0-3ubuntu11~12.04 [391 kB]
   Get:5 http://llvm.org/apt/precise/ llvm-toolchain-precise-3.6/main libllvm3.6 amd64 1:3.6.2~svn240577-1~exp1 [11.5 MB]
   Get:6 http://archive.ubuntu.com/ubuntu/ precise-security/main libc6 amd64 2.15-0ubuntu10.15 [4,636 kB]
   Get:7 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main gcc-4.9-base amd64 4.9.4-2ubuntu1~12.04 [16.9 kB]
   Get:8 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libasan1 amd64 4.9.4-2ubuntu1~12.04 [240 kB]
   Get:9 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libatomic1 amd64 6.2.0-3ubuntu11~12.04 [10.8 kB]
   Get:10 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libcilkrts5 amd64 6.2.0-3ubuntu11~12.04 [49.6 kB]
   Get:11 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libgomp1 amd64 6.2.0-3ubuntu11~12.04 [85.6 kB]
   Get:12 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libitm1 amd64 6.2.0-3ubuntu11~12.04 [34.3 kB]
   Get:13 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main liblsan0 amd64 6.2.0-3ubuntu11~12.04 [136 kB]
   Get:14 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libtsan0 amd64 6.2.0-3ubuntu11~12.04 [324 kB]
   Get:15 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libubsan0 amd64 6.2.0-3ubuntu11~12.04 [125 kB]
   Get:16 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libquadmath0 amd64 6.2.0-3ubuntu11~12.04 [146 kB]
   Get:17 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libgcc-4.9-dev amd64 4.9.4-2ubuntu1~12.04 [3,761 kB]
   Get:18 http://archive.ubuntu.com/ubuntu/ precise-security/main libdbus-1-3 amd64 1.4.18-1ubuntu1.8 [146 kB]
   Get:19 http://archive.ubuntu.com/ubuntu/ precise-updates/main libnih1 amd64 1.0.3-4ubuntu9.1 [54.8 kB]
   Get:20 http://archive.ubuntu.com/ubuntu/ precise-updates/main libnih-dbus1 amd64 1.0.3-4ubuntu9.1 [16.0 kB]
   Get:21 http://llvm.org/apt/precise/ llvm-toolchain-precise-3.6/main libclang1-3.6 amd64 1:3.6.2~svn240577-1~exp1 [5,398 kB]
   Get:22 http://archive.ubuntu.com/ubuntu/ precise-updates/main libudev0 amd64 175-0ubuntu9.10 [27.8 kB]
   Get:23 http://archive.ubuntu.com/ubuntu/ precise-updates/main sysvinit-utils amd64 2.88dsf-13.10ubuntu11.1 [60.2 kB]
   Get:24 http://archive.ubuntu.com/ubuntu/ precise/main insserv amd64 1.14.0-2.1ubuntu2 [50.9 kB]
   Get:25 http://archive.ubuntu.com/ubuntu/ precise-updates/main sysv-rc all 2.88dsf-13.10ubuntu11.1 [44.6 kB]
   Get:26 http://archive.ubuntu.com/ubuntu/ precise/main ncurses-bin amd64 5.9-4 [151 kB]
   Get:27 http://archive.ubuntu.com/ubuntu/ precise-updates/main lsb-base all 4.0-0ubuntu20.3 [10.5 kB]
   Get:28 http://archive.ubuntu.com/ubuntu/ precise-security/main libpcre3 amd64 8.12-4ubuntu0.2 [149 kB]
   Get:29 http://archive.ubuntu.com/ubuntu/ precise-updates/main libglib2.0-0 amd64 2.32.4-0ubuntu1 [1,200 kB]
   Get:30 http://archive.ubuntu.com/ubuntu/ precise/main module-init-tools amd64 3.16-1ubuntu2 [105 kB]
   Get:31 http://archive.ubuntu.com/ubuntu/ precise-security/main initramfs-tools-bin amd64 0.99ubuntu13.5 [9,782 B]
   Get:32 http://archive.ubuntu.com/ubuntu/ precise/main libklibc amd64 1.5.25-1ubuntu2 [45.7 kB]
   Get:33 http://archive.ubuntu.com/ubuntu/ precise/main klibc-utils amd64 1.5.25-1ubuntu2 [181 kB]
   Get:34 http://archive.ubuntu.com/ubuntu/ precise-updates/main busybox-initramfs amd64 1:1.18.5-1ubuntu4.1 [183 kB]
   Get:35 http://archive.ubuntu.com/ubuntu/ precise-security/main cpio amd64 2.11-7ubuntu3.2 [116 kB]
   Get:36 http://archive.ubuntu.com/ubuntu/ precise/main libncurses5 amd64 5.9-4 [114 kB]
   Get:37 http://archive.ubuntu.com/ubuntu/ precise/main libslang2 amd64 2.2.4-3ubuntu1 [503 kB]
   Get:38 http://archive.ubuntu.com/ubuntu/ precise-updates/main libblkid1 amd64 2.20.1-1ubuntu3.1 [73.7 kB]
   Get:39 http://archive.ubuntu.com/ubuntu/ precise-updates/main libmount1 amd64 2.20.1-1ubuntu3.1 [71.5 kB]
   Get:40 http://archive.ubuntu.com/ubuntu/ precise-updates/main mount amd64 2.20.1-1ubuntu3.1 [166 kB]
   Get:41 http://archive.ubuntu.com/ubuntu/ precise-updates/main util-linux amd64 2.20.1-1ubuntu3.1 [596 kB]
   Get:42 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libstdc++-4.9-dev amd64 4.9.4-2ubuntu1~12.04 [1,867 kB]
   Get:43 http://archive.ubuntu.com/ubuntu/ precise-security/main initramfs-tools all 0.99ubuntu13.5 [49.0 kB]
   Get:44 http://archive.ubuntu.com/ubuntu/ precise/main libncursesw5 amd64 5.9-4 [137 kB]
   Get:45 http://archive.ubuntu.com/ubuntu/ precise-updates/main procps amd64 1:3.2.8-11ubuntu6.4 [233 kB]
   Get:46 http://archive.ubuntu.com/ubuntu/ precise/main adduser all 3.113ubuntu2 [133 kB]
   Get:47 http://archive.ubuntu.com/ubuntu/ precise-updates/main udev amd64 175-0ubuntu9.10 [324 kB]
   Get:48 http://archive.ubuntu.com/ubuntu/ precise-security/main libdrm2 amd64 2.4.52-1~precise2 [26.1 kB]
   Get:49 http://archive.ubuntu.com/ubuntu/ precise-updates/main libpciaccess0 amd64 0.12.902-1ubuntu0.2 [20.8 kB]
   Get:50 http://archive.ubuntu.com/ubuntu/ precise-security/main libdrm-intel1 amd64 2.4.52-1~precise2 [65.6 kB]
   Get:51 http://archive.ubuntu.com/ubuntu/ precise-security/main libdrm-nouveau1a amd64 2.4.52-1~precise2 [14.0 kB]
   Get:52 http://archive.ubuntu.com/ubuntu/ precise-security/main libdrm-radeon1 amd64 2.4.52-1~precise2 [27.8 kB]
   Get:53 http://archive.ubuntu.com/ubuntu/ precise-security/main libpng12-0 amd64 1.2.46-3ubuntu4.2 [133 kB]
   Get:54 http://archive.ubuntu.com/ubuntu/ precise-updates/main libplymouth2 amd64 0.8.2-2ubuntu31.1 [92.0 kB]
   Get:55 http://archive.ubuntu.com/ubuntu/ precise-updates/main plymouth amd64 0.8.2-2ubuntu31.1 [123 kB]
   Get:56 http://archive.ubuntu.com/ubuntu/ precise-updates/main mountall amd64 2.36.4ubuntu0.1 [67.5 kB]
   Get:57 http://archive.ubuntu.com/ubuntu/ precise-updates/main initscripts amd64 2.88dsf-13.10ubuntu11.1 [28.1 kB]
   Get:58 http://archive.ubuntu.com/ubuntu/ precise-updates/main iproute amd64 20111117-1ubuntu2.3 [444 kB]
   Get:59 http://archive.ubuntu.com/ubuntu/ precise-updates/main ifupdown amd64 0.7~beta2ubuntu11.1 [48.3 kB]
   Get:60 http://archive.ubuntu.com/ubuntu/ precise-updates/main upstart amd64 1.5-0ubuntu7.3 [309 kB]
   Get:61 http://archive.ubuntu.com/ubuntu/ precise-updates/main passwd amd64 1:4.1.4.2+svn3283-3ubuntu5.1 [959 kB]
   Get:62 http://llvm.org/apt/precise/ llvm-toolchain-precise-3.6/main libclang-common-3.6-dev amd64 1:3.6.2~svn240577-1~exp1 [1,756 kB]
   Get:63 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libobjc4 amd64 6.2.0-3ubuntu11~12.04 [162 kB]
   Get:64 http://archive.ubuntu.com/ubuntu/ precise-updates/main libuuid1 amd64 2.20.1-1ubuntu3.1 [12.8 kB]
   Get:65 http://archive.ubuntu.com/ubuntu/ precise-security/main libsqlite3-0 amd64 3.7.9-2ubuntu1.2 [349 kB]
   Get:66 http://ppa.launchpad.net//ubuntu-toolchain-r/test/ubuntu/ precise/main libobjc-4.9-dev amd64 4.9.4-2ubuntu1~12.04 [799 kB]
   Get:67 http://archive.ubuntu.com/ubuntu/ precise-updates/main libcomerr2 amd64 1.42-1ubuntu2.3 [57.2 kB]
   Get:68 http://archive.ubuntu.com/ubuntu/ precise-security/main libssl1.0.0 amd64 1.0.1-4ubuntu5.38 [1,055 kB]
   Get:69 http://archive.ubuntu.com/ubuntu/ precise-updates/main libroken18-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [46.0 kB]
   Get:70 http://archive.ubuntu.com/ubuntu/ precise-updates/main libasn1-8-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [220 kB]
   Get:71 http://archive.ubuntu.com/ubuntu/ precise/main libbsd0 amd64 0.3.0-2 [31.6 kB]
   Get:72 http://archive.ubuntu.com/ubuntu/ precise/main libgpg-error0 amd64 1.10-2ubuntu1 [14.5 kB]
   Get:73 http://archive.ubuntu.com/ubuntu/ precise-security/main libgcrypt11 amd64 1.5.0-3ubuntu0.6 [282 kB]
   Get:74 http://archive.ubuntu.com/ubuntu/ precise/main libp11-kit0 amd64 0.12-2ubuntu1 [34.3 kB]
   Get:75 http://archive.ubuntu.com/ubuntu/ precise-security/main libtasn1-3 amd64 2.10-1ubuntu1.5 [43.6 kB]
   Get:76 http://archive.ubuntu.com/ubuntu/ precise-security/main libgnutls26 amd64 2.12.14-5ubuntu3.12 [460 kB]
   Get:77 http://archive.ubuntu.com/ubuntu/ precise-security/main libkrb5support0 amd64 1.10+dfsg~beta1-2ubuntu0.7 [24.9 kB]
   Get:78 http://archive.ubuntu.com/ubuntu/ precise-security/main libk5crypto3 amd64 1.10+dfsg~beta1-2ubuntu0.7 [80.1 kB]
   Get:79 http://archive.ubuntu.com/ubuntu/ precise/main libkeyutils1 amd64 1.5.2-2 [7,862 B]
   Get:80 http://archive.ubuntu.com/ubuntu/ precise-security/main libkrb5-3 amd64 1.10+dfsg~beta1-2ubuntu0.7 [355 kB]
   Get:81 http://archive.ubuntu.com/ubuntu/ precise-security/main libgssapi-krb5-2 amd64 1.10+dfsg~beta1-2ubuntu0.7 [119 kB]
   Get:82 http://archive.ubuntu.com/ubuntu/ precise-security/main libidn11 amd64 1.23-2ubuntu0.1 [112 kB]
   Get:83 http://archive.ubuntu.com/ubuntu/ precise-updates/main libhcrypto4-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [103 kB]
   Get:84 http://llvm.org/apt/precise/ llvm-toolchain-precise-3.6/main clang-3.6 amd64 1:3.6.2~svn240577-1~exp1 [37.1 MB]
   Get:85 http://archive.ubuntu.com/ubuntu/ precise-updates/main libheimbase1-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [33.1 kB]
   Get:86 http://archive.ubuntu.com/ubuntu/ precise-updates/main libwind0-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [77.8 kB]
   Get:87 http://archive.ubuntu.com/ubuntu/ precise-updates/main libhx509-5-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [125 kB]
   Get:88 http://archive.ubuntu.com/ubuntu/ precise-updates/main libkrb5-26-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [234 kB]
   Get:89 http://archive.ubuntu.com/ubuntu/ precise-updates/main libheimntlm0-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [16.0 kB]
   Get:90 http://archive.ubuntu.com/ubuntu/ precise-updates/main libgssapi3-heimdal amd64 1.6~git20120311.dfsg.1-2ubuntu0.1 [108 kB]
   Get:91 http://archive.ubuntu.com/ubuntu/ precise-updates/main libsasl2-2 amd64 2.1.25.dfsg1-3ubuntu0.1 [69.1 kB]
   Get:92 http://archive.ubuntu.com/ubuntu/ precise-security/main libldap-2.4-2 amd64 2.4.28-1.1ubuntu4.6 [185 kB]
   Get:93 http://archive.ubuntu.com/ubuntu/ precise/main librtmp0 amd64 2.4~20110711.gitc28f1bab-1 [57.1 kB]
   Get:94 http://archive.ubuntu.com/ubuntu/ precise-security/main openssl amd64 1.0.1-4ubuntu5.38 [524 kB]
   Get:95 http://archive.ubuntu.com/ubuntu/ precise-security/main ca-certificates all 20160104ubuntu0.12.04.1 [208 kB]
   Get:96 http://archive.ubuntu.com/ubuntu/ precise-security/main libcurl3-gnutls amd64 7.22.0-3ubuntu4.17 [228 kB]
   Get:97 http://archive.ubuntu.com/ubuntu/ precise/main libedit2 amd64 2.11-20080614-3ubuntu2 [70.3 kB]
   Get:98 http://archive.ubuntu.com/ubuntu/ precise-security/main libxml2 amd64 2.7.8.dfsg-5.1ubuntu4.15 [677 kB]
   Get:99 http://archive.ubuntu.com/ubuntu/ precise/main libnettle4 amd64 2.4-1 [95.1 kB]
   Get:100 http://archive.ubuntu.com/ubuntu/ precise-security/main libarchive12 amd64 3.0.3-6ubuntu1.3 [274 kB]
   Get:101 http://archive.ubuntu.com/ubuntu/ precise-security/main libc-dev-bin amd64 2.15-0ubuntu10.15 [84.7 kB]
   Get:102 http://archive.ubuntu.com/ubuntu/ precise-security/main linux-libc-dev amd64 3.2.0-119.162 [850 kB]
   Get:103 http://archive.ubuntu.com/ubuntu/ precise-security/main libc6-dev amd64 2.15-0ubuntu10.15 [2,943 kB]
   Get:104 http://archive.ubuntu.com/ubuntu/ precise-security/main libcurl3 amd64 7.22.0-3ubuntu4.17 [237 kB]
   Get:105 http://archive.ubuntu.com/ubuntu/ precise-security/main libexpat1 amd64 2.0.1-7.2ubuntu1.4 [131 kB]
   Get:106 http://archive.ubuntu.com/ubuntu/ precise/main bsdmainutils amd64 8.2.3ubuntu1 [200 kB]
   Get:107 http://archive.ubuntu.com/ubuntu/ precise/main nano amd64 2.2.6-1 [194 kB]
   Get:108 http://archive.ubuntu.com/ubuntu/ precise-security/main binutils amd64 2.22-6ubuntu1.4 [2,653 kB]
   Get:109 http://archive.ubuntu.com/ubuntu/ precise-security/main libxmlrpc-core-c3 amd64 1.16.33-3.1ubuntu5.2 [180 kB]
   Get:110 http://archive.ubuntu.com/ubuntu/ precise/main emacsen-common all 1.4.22ubuntu1 [16.9 kB]
   Get:111 http://archive.ubuntu.com/ubuntu/ precise-updates/main cmake-data all 2.8.7-0ubuntu5 [754 kB]
   Get:112 http://archive.ubuntu.com/ubuntu/ precise-updates/main cmake amd64 2.8.7-0ubuntu5 [4,353 kB]
   Fetched 96.1 MB in 1min 36s (995 kB/s)
   Download complete and in download only mode
-> Unpacking  libc-dev-bin_2.15-0ubuntu10.15_amd64
-> Unpacking  libcilkrts5_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libcomerr2_1.42-1ubuntu2.3_amd64
-> Unpacking  libasan1_4.9.4-2ubuntu1~12.04_amd64
-> Unpacking  libc-bin_2.15-0ubuntu10.15_amd64
-> Unpacking  util-linux_2.20.1-1ubuntu3.1_amd64
-> Unpacking  libpng12-0_1.2.46-3ubuntu4.2_amd64
-> Unpacking  libkrb5-3_1.10+dfsg~beta1-2ubuntu0.7_amd64
-> Unpacking  libasn1-8-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  insserv_1.14.0-2.1ubuntu2_amd64
-> Unpacking  libgssapi3-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  libdrm-intel1_2.4.52-1~precise2_amd64
-> Unpacking  libuuid1_2.20.1-1ubuntu3.1_amd64
-> Unpacking  ca-certificates_20160104ubuntu0.12.04.1_all
-> Unpacking  libitm1_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  liblsan0_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libllvm3.6_1%3a3.6.2~svn240577-1~exp1_amd64
-> Unpacking  libexpat1_2.0.1-7.2ubuntu1.4_amd64
-> Unpacking  upstart_1.5-0ubuntu7.3_amd64
-> Unpacking  libstdc++6_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libkrb5support0_1.10+dfsg~beta1-2ubuntu0.7_amd64
-> Unpacking  emacsen-common_1.4.22ubuntu1_all
-> Unpacking  libgpg-error0_1.10-2ubuntu1_amd64
-> Unpacking  libp11-kit0_0.12-2ubuntu1_amd64
-> Unpacking  libubsan0_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libobjc4_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libkeyutils1_1.5.2-2_amd64
-> Unpacking  libbsd0_0.3.0-2_amd64
-> Unpacking  libheimntlm0-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  gcc-4.9-base_4.9.4-2ubuntu1~12.04_amd64
-> Unpacking  libheimbase1-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  passwd_1%3a4.1.4.2+svn3283-3ubuntu5.1_amd64
-> Unpacking  ifupdown_0.7~beta2ubuntu11.1_amd64
-> Unpacking  libk5crypto3_1.10+dfsg~beta1-2ubuntu0.7_amd64
-> Unpacking  libsasl2-2_2.1.25.dfsg1-3ubuntu0.1_amd64
-> Unpacking  libnih1_1.0.3-4ubuntu9.1_amd64
-> Unpacking  binutils_2.22-6ubuntu1.4_amd64
-> Unpacking  libxml2_2.7.8.dfsg-5.1ubuntu4.15_amd64
-> Unpacking  libdrm-nouveau1a_2.4.52-1~precise2_amd64
-> Unpacking  libblkid1_2.20.1-1ubuntu3.1_amd64
-> Unpacking  libmount1_2.20.1-1ubuntu3.1_amd64
-> Unpacking  libldap-2.4-2_2.4.28-1.1ubuntu4.6_amd64
-> Unpacking  libdrm2_2.4.52-1~precise2_amd64
-> Unpacking  initramfs-tools-bin_0.99ubuntu13.5_amd64
-> Unpacking  libquadmath0_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libclang-common-3.6-dev_1%3a3.6.2~svn240577-1~exp1_amd64
-> Unpacking  libpcre3_8.12-4ubuntu0.2_amd64
-> Unpacking  libnih-dbus1_1.0.3-4ubuntu9.1_amd64
-> Unpacking  linux-libc-dev_3.2.0-119.162_amd64
-> Unpacking  libudev0_175-0ubuntu9.10_amd64
-> Unpacking  libatomic1_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libgcc1_1%3a6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libgcc-4.9-dev_4.9.4-2ubuntu1~12.04_amd64
-> Unpacking  libgomp1_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libwind0-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  ncurses-bin_5.9-4_amd64
-> Unpacking  libarchive12_3.0.3-6ubuntu1.3_amd64
-> Unpacking  libncurses5_5.9-4_amd64
-> Unpacking  libtasn1-3_2.10-1ubuntu1.5_amd64
-> Unpacking  cpio_2.11-7ubuntu3.2_amd64
-> Unpacking  gcc-6-base_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  libslang2_2.2.4-3ubuntu1_amd64
-> Unpacking  libdbus-1-3_1.4.18-1ubuntu1.8_amd64
-> Unpacking  libnettle4_2.4-1_amd64
-> Unpacking  libcurl3_7.22.0-3ubuntu4.17_amd64
-> Unpacking  libc6_2.15-0ubuntu10.15_amd64
-> Unpacking  libgcrypt11_1.5.0-3ubuntu0.6_amd64
-> Unpacking  libdrm-radeon1_2.4.52-1~precise2_amd64
-> Unpacking  plymouth_0.8.2-2ubuntu31.1_amd64
-> Unpacking  sysv-rc_2.88dsf-13.10ubuntu11.1_all
-> Unpacking  librtmp0_2.4~20110711.gitc28f1bab-1_amd64
-> Unpacking  iproute_20111117-1ubuntu2.3_amd64
-> Unpacking  module-init-tools_3.16-1ubuntu2_amd64
-> Unpacking  libpciaccess0_0.12.902-1ubuntu0.2_amd64
-> Unpacking  libxmlrpc-core-c3_1.16.33-3.1ubuntu5.2_amd64
-> Unpacking  nano_2.2.6-1_amd64
-> Unpacking  sysvinit-utils_2.88dsf-13.10ubuntu11.1_amd64
-> Unpacking  initscripts_2.88dsf-13.10ubuntu11.1_amd64
-> Unpacking  libsqlite3-0_3.7.9-2ubuntu1.2_amd64
-> Unpacking  libc6-dev_2.15-0ubuntu10.15_amd64
-> Unpacking  libhx509-5-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  libedit2_2.11-20080614-3ubuntu2_amd64
-> Unpacking  libclang1-3.6_1%3a3.6.2~svn240577-1~exp1_amd64
-> Unpacking  libglib2.0-0_2.32.4-0ubuntu1_amd64
-> Unpacking  cmake-data_2.8.7-0ubuntu5_all
-> Unpacking  libklibc_1.5.25-1ubuntu2_amd64
-> Unpacking  initramfs-tools_0.99ubuntu13.5_all
-> Unpacking  lsb-base_4.0-0ubuntu20.3_all
-> Unpacking  libgnutls26_2.12.14-5ubuntu3.12_amd64
-> Unpacking  libroken18-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  libidn11_1.23-2ubuntu0.1_amd64
-> Unpacking  openssl_1.0.1-4ubuntu5.38_amd64
-> Unpacking  libcurl3-gnutls_7.22.0-3ubuntu4.17_amd64
-> Unpacking  libobjc-4.9-dev_4.9.4-2ubuntu1~12.04_amd64
-> Unpacking  mount_2.20.1-1ubuntu3.1_amd64
-> Unpacking  cmake_2.8.7-0ubuntu5_amd64
-> Unpacking  clang-3.6_1%3a3.6.2~svn240577-1~exp1_amd64
-> Unpacking  procps_1%3a3.2.8-11ubuntu6.4_amd64
-> Unpacking  libhcrypto4-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  libtsan0_6.2.0-3ubuntu11~12.04_amd64
-> Unpacking  adduser_3.113ubuntu2_all
-> Unpacking  busybox-initramfs_1%3a1.18.5-1ubuntu4.1_amd64
-> Unpacking  libkrb5-26-heimdal_1.6~git20120311.dfsg.1-2ubuntu0.1_amd64
-> Unpacking  libssl1.0.0_1.0.1-4ubuntu5.38_amd64
-> Unpacking  udev_175-0ubuntu9.10_amd64
-> Unpacking  libncursesw5_5.9-4_amd64
-> Unpacking  libgssapi-krb5-2_1.10+dfsg~beta1-2ubuntu0.7_amd64
-> Unpacking  libplymouth2_0.8.2-2ubuntu31.1_amd64
-> Unpacking  mountall_2.36.4ubuntu0.1_amd64
-> Unpacking  klibc-utils_1.5.25-1ubuntu2_amd64
-> Unpacking  bsdmainutils_8.2.3ubuntu1_amd64
-> Unpacking  libstdc++-4.9-dev_4.9.4-2ubuntu1~12.04_amd64
Container has been set up in container
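With the container in place, you can run binaries straight out of the unpacked tree. The snippet below is a hypothetical sketch rather than the tool's own invocation: the tree's binaries were linked against its glibc 2.15 rather than the host's, so one way to run them by hand is to start the container's own dynamic loader and hand it the container's library path. The paths assume the amd64 precise layout and the ./container directory used above.

   # Hypothetical sketch: invoke the container's loader directly and
   # point it at the container's libraries instead of the host's.
   # ld-2.15.so matches the libc6 2.15 package that was just unpacked.
   $ ./container/lib/x86_64-linux-gnu/ld-2.15.so \
       --library-path ./container/lib/x86_64-linux-gnu:./container/usr/lib/x86_64-linux-gnu \
       ./container/usr/bin/nano --version

In practice you would reach for the tool's own exec wrapper (psq-travis-container-exec) rather than driving ld.so by hand, but the trick is handy when poking around the unpacked tree directly. One thing to watch out for: some entries under usr/bin in the tree are absolute symlinks (clang-3.6, for example, typically points at /usr/lib/llvm-3.6/bin/clang), which resolve against the host filesystem when followed outside a chroot, so you may need to target the real binary's path inside the tree instead.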