When we merged in OpenGL|ES support for compiz, we didn’t have the resources to continue maintaining some of the plugins which were more complicated in their OpenGL usage. As such, those plugins were disabled for building until a later time.
I’ve taken some of my spare time to make the cubeaddon plugin work with the new OpenGL API.
I figured the sphere deformation was the one people had asked me for the most, so I decided to go with that one. It was a good learning experience too – I’ve always wanted to know how the mathematics of the spherical deformation actually works. You can see how with the wireframe render below:
Or with a reduced mesh size:
The window deformation is complicated to explain, but the caps are quite simple. We use a TRIANGLE_FAN primitive to render the very tips of the sphere, like this:
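In code, a minimal sketch of that fan might look like the following (this is an illustration of mine rather than the plugin’s actual code – CAP_ELEMENTS matches the plugin’s mesh constant, while poleHeight, ringHeight and ringRadius are made-up values):

/* Sketch: build a triangle fan for the very tip of the cap.
 * The first vertex is the pole; the remaining CAP_ELEMENTS + 1
 * vertices form a ring around it (the first and last ring
 * vertices coincide so the fan closes). */
const float poleHeight = 1.0f; /* hypothetical values */
const float ringHeight = 0.9f;
const float ringRadius = 0.2f;

std::vector <GLfloat> fan;

fan.push_back (0.0f);
fan.push_back (poleHeight);
fan.push_back (0.0f);

for (unsigned int i = 0; i <= CAP_ELEMENTS; i++)
{
    float a = (2.0f * M_PI * i) / CAP_ELEMENTS;

    fan.push_back (cosf (a) * ringRadius);
    fan.push_back (ringHeight);
    fan.push_back (sinf (a) * ringRadius);
}

glVertexPointer (3, GL_FLOAT, 0, &fan[0]);
glDrawArrays (GL_TRIANGLE_FAN, 0, CAP_ELEMENTS + 2);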
Once you have that, you just render quads (or a TRIANGLE_STRIP with primitive restart for newer OpenGL versions) for the curvature until the windows are reached:
In this reduced-resolution version, two quads are rendered per cube face, so we only submit eight vertices each time we render, but the texture co-ordinate planes and object transformation matrices are rotated each time. This gives us our full image.
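Schematically, the per-face loop might look something like this (again a sketch of mine, not the plugin’s code – nFaces is a made-up name, and the fixed-function matrix stack is used here for brevity):

/* Sketch: draw the same two quads (eight vertices) once per cube
 * face, rotating the object transformation between draws so the
 * patches join into the full ring of curvature. The real plugin
 * also rotates the texture co-ordinate planes each time. */
const unsigned int nFaces = 4; /* e.g. a four-sided cube */

for (unsigned int face = 0; face < nFaces; face++)
{
    glPushMatrix ();
    glRotatef ((360.0f / nFaces) * face, 0.0f, 1.0f, 0.0f);
    glDrawArrays (GL_QUADS, 0, 8); /* two quads per face */
    glPopMatrix ();
}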
Other interesting parts
Some other fun parts were to remove fixed-function pipeline usage and replace it with client side or shader equivalents. For example, cubeaddon used this code:
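/* Build the S and T object planes from the texture matrix */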
s_gen[0] = texMat[0];
s_gen[1] = texMat[8];
s_gen[2] = texMat[4];
s_gen[3] = texMat[12];
t_gen[0] = texMat[1];
t_gen[1] = texMat[9];
t_gen[2] = texMat[5];
t_gen[3] = texMat[13];
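/* Upload the planes and enable object-linear texture
 * co-ordinate generation for S and T */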
glTexGenfv(GL_T, GL_OBJECT_PLANE, t_gen);
glTexGenfv(GL_S, GL_OBJECT_PLANE, s_gen);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
in order to generate texture co-ordinates for the cube caps inside OpenGL, but that is not supported in OpenGL|ES, so I had to replace it with a client-side simulation:
GLVector sGen (texMat[0], texMat[8], texMat[4], texMat[12]);
GLVector tGen (texMat[1], texMat[9], texMat[5], texMat[13]);
/* Generate texCoords for the top section of the cap */
texCoords.reserve ((CAP_ELEMENTS + 2) * 2);

for (unsigned int i = 0; i < CAP_ELEMENTS + 2; i++)
{
    GLVector v (mCapFill[i * 3],
                mCapFill[i * 3 + 1],
                mCapFill[i * 3 + 2],
                1.0f);

    float s = v * sGen;
    float t = v * tGen;

    texCoords.push_back (s);
    texCoords.push_back (t);
}
Texture co-ordinate generation with GL_OBJECT_LINEAR just takes the dot product of the object-space vertex and the texture plane – for each vertex v, s = v * sGen and t = v * tGen, which is exactly what the loop above does.
Another challenge was the usage of glDrawElements:
glDrawElements (GL_QUADS, CAP_NIDX, GL_UNSIGNED_SHORT,
mCapFillIdx);
glDrawElements uses a technique called indexed rendering (index buffer objects, when the indices are stored on the GPU), which is a clever optimization to avoid sending the GPU a lot of redundant geometry. Instead of uploading lots of vertices which might overlap (you saw this in my diagrams earlier – it was often the case that v3 and v1 overlapped each other), you send OpenGL an array of unique vertices, then an array of indices which reference which vertex in the vertex buffer should be drawn next. So for example you might have:
GLfloat vertices[] =
{
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    1.0f, 1.0f, 0.0f,
    1.0f, 0.0f, 0.0f
};
glVertexPointer (3, GL_FLOAT, 0, vertices);
glDrawArrays (GL_TRIANGLES, 0, 6);
As you can tell, there are some overlapping vertices here, so just calling glDrawArrays means that the vertex processor needs to walk every vertex, duplicates included.
GLfloat vertices[] =
{
    0.0f, 0.0f, 0.0f,
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    1.0f, 1.0f, 0.0f
};
GLushort indices[] =
{
    0, 1, 2,
    2, 1, 3
};

glVertexPointer (3, GL_FLOAT, 0, vertices);
glDrawElements (GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
Instead of providing six vertices, we only have to provide four, and we ask OpenGL to use vertices 0, 1 and 2 for the first triangle, and then 2, 1 and 3 for the second.
Unfortunately, our GLVertexBuffer class doesn’t really support the semantics of using IBOs, so I had to add some code to extract the “real” vertex data in place. Thankfully this isn’t too hard:
std::vector <GLfloat> vertexData;
vertexData.reserve (nIndices * 3);

/* Expand the indexed geometry into a flat list of vertices */
for (unsigned int i = 0; i < nIndices; ++i)
{
    unsigned short vertexBase = indices[i] * 3;

    vertexData.push_back (vertices[vertexBase]);
    vertexData.push_back (vertices[vertexBase + 1]);
    vertexData.push_back (vertices[vertexBase + 2]);
}

vertexBuffer->addVertices (nIndices, &vertexData[0]);
Guide on adapting old plugins
I’ve used some of my experience to write up a guide on adapting old plugins for the new API. You can find it on the compiz wiki.
Great post, very informative. I love your hand-drawn scribbles – they visualize the process of understanding quite well 🙂
Hi!
I made a branch in launchpad with a working OpenGL|ES port of firepaint.
I have a question about coding style: are std::vectors always preferable to arrays? Generally, are there any guidelines about which C++ features should be used and when? (I looked at http://wiki.compiz.org/Development/CodingStyle)
Thanks!
Hey Michail.
I didn’t see a merge proposal for your branch. Do you think you might be able to do that? I’ll be able to see it even if you set it to “Work in Progress”.
Great to see you’ve ported it – the coding style document really only covers whitespace usage and other such things, and doesn’t say anything about language features.
The generally accepted view on vectors is to use them where it makes sense to – e.g., if you have an array with a fixed size which will always be full, then a plain array makes sense. If the size is dynamic, or might change, then a vector makes sense.
Generally speaking there is very little difference between the two in terms of performance.
Feel free to get in contact by email (smspillaz on gmail)
Thanks a lot!
I made a merge proposal.