14.) Geometry Shaders

Welcome to the 14th tutorial of the OpenGL 3.3 series. In this tutorial, we will discover another type of shader - the geometry shader. So let's get right down to business.

Geometry shader

As you already know, vertices in the rendering pipeline are first processed by the vertex shader. Until now, this data was passed on to the fragment shader (interpolated across fragments, for example), and then fragment processing began. The geometry shader is a shader that sits between these two and is used to produce additional (or possibly fewer) primitives. In other words - it doesn't receive single vertices, but whole primitives (points, lines, or triangles), and outputs a possibly new set of primitives further down the pipeline. It gets called once per input primitive. According to the manuals, the allowed INPUT primitive types are:

  • points - self-explanatory
  • lines - self-explanatory
  • triangles - self-explanatory
  • lines_adjacency - every line carries its two neighbouring vertices along, so you can access the adjacent lines' data (4 vertices per primitive - see the sketch after this list)
  • triangles_adjacency - every triangle carries the vertices of its three neighbouring triangles along, so you can access the adjacent triangles' data (6 vertices per primitive - see the sketch after this list)
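
To shed a bit more light on the adjacency types: with adjacency primitives, every primitive in the vertex stream carries its neighbouring vertices along, and the geometry shader receives all of them (gl_in has 6 elements for triangles_adjacency and 4 for lines_adjacency). Here is a minimal, hypothetical sketch of how such index data would be laid out - the indices themselves are made up:

// A made-up index buffer for GL_TRIANGLES_ADJACENCY - every primitive
// takes 6 indices: elements 0, 2, 4 form the triangle itself, while
// elements 1, 3, 5 are the outer vertices of the three neighbouring
// triangles (GL_LINES_ADJACENCY is similar, with 4 indices per line)
GLuint adjacencyIndices[] =
{
	0, 5, 2, 6, 4, 1 // triangle 0-2-4 with neighbour vertices 5, 6, 1
};
// Assuming these indices are uploaded into a bound GL_ELEMENT_ARRAY_BUFFER:
glDrawElements(GL_TRIANGLES_ADJACENCY, 6, GL_UNSIGNED_INT, 0);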

Allowed OUTPUT primitive types are:

  • points - self-explanatory
  • line_strip - a connected strip of lines
  • triangle_strip - a connected strip of triangles (more on its behavior below)

OK, so those would be the basics of geometry shaders. Now let's get into more detail and into the coding part, as this is the most important one.

Working with geometry shaders

Most things you already know about shaders stay the same - like the other two shader types you know, a geometry shader must have its main function. Loading a geometry shader is done using the same CShader class, but the shader type is a new constant - GL_GEOMETRY_SHADER. What differs are some specific lines of code and functions that are found only in geometry shaders.
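
To give an idea of what happens under the hood, here is a minimal sketch of compiling a geometry shader with raw OpenGL calls - roughly what the CShader class wraps (the geomSource string and the program object are assumed to exist already):

// Geometry shaders are compiled exactly like vertex and fragment shaders,
// only with the GL_GEOMETRY_SHADER type constant
GLuint geomShader = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(geomShader, 1, &geomSource, NULL); // geomSource holds the GLSL text
glCompileShader(geomShader);

GLint status;
glGetShaderiv(geomShader, GL_COMPILE_STATUS, &status); // always worth checking

// Then it gets attached to the program alongside the vertex and fragment shaders
glAttachShader(program, geomShader);
glLinkProgram(program);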

Before the first geometry shader code, let me tell you what the geometry shader in this tutorial will do. It will take a triangle as input, find its centroid, and then subdivide the original triangle into 3 triangles. The triangle data will come from the vertex shader, which won't do anything besides passing vertices on to the geometry shader. The vertex shader is this simple:

#version 330

layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inCoord;
layout (location = 2) in vec3 inNormal;

out vec2 vTexCoordPass;
out vec3 vNormalPass;

void main()
{
	gl_Position = vec4(inPosition, 1.0);
	vTexCoordPass = inCoord;
	vNormalPass = inNormal;
}

Notice one thing - everything except the vertex position is transferred to the geometry shader using out variables. The vertex position is transferred using the built-in variable gl_Position, which can be read in the geometry shader as well. Also, all matrix transformations are now performed in the geometry shader (it's not necessary, but I think it's better this way), which is why no uniform matrix variables are present in the vertex shader. The subdivision of the triangle is shown in the picture below:

Moreover, the centroid point can be moved forwards and backwards along the normal of the original triangle, so that it "bends" the original triangle (the fBender value in the code and in the application tells how far from the triangle the centroid should be moved). So for every input triangle, the shader outputs 3 triangles further down the pipeline. Now let's finally get to the geometry shader code. This is the beginning of the main_shader.geom file:

#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

The first line is - as usual - the preprocessor directive stating the GLSL version used. The second line tells the compiler the input primitive type, and the third the output primitive type. Notice that there is also the max_vertices parameter, which tells the compiler the maximum number of vertices the shader will output. For each input triangle, which has 3 vertices, we will output 3 triangles, and thus 9 vertices. Declaring max_vertices is actually mandatory - the program won't link without it - and keeping the value tight gives the compiler an opportunity to optimize things.
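
By the way, the upper bound for max_vertices is hardware-dependent and can be queried at runtime; the OpenGL 3.2+ specification guarantees at least 256 output vertices and 1024 total output components:

// Query the implementation limits that constrain max_vertices -
// a geometry shader declaring more than these won't link
GLint maxVertices, maxComponents;
glGetIntegerv(GL_MAX_GEOMETRY_OUTPUT_VERTICES, &maxVertices);
glGetIntegerv(GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS, &maxComponents);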

If the input primitive type of the geometry shader is triangles, it means we must have access to 3 vertices in order to be able to work with the triangle. And yep, the in variables that represent the data coming from the vertex shader are arrays, with indices representing the different vertices (nothing new in the uniform matrix part):

#version 330

layout(triangles) in;
layout(triangle_strip, max_vertices = 9) out;

uniform struct Matrices
{
	mat4 projMatrix;
	mat4 modelMatrix;
	mat4 viewMatrix;                                                                           
	mat4 normalMatrix;
} matrices;

in vec2 vTexCoordPass[]; // Input variables are arrays, one value for each vertex
in vec3 vNormalPass[]; // The names must match the out variables in the vertex shader

Because the defined input type is triangles, these arrays have size 3. As mentioned before, the position is transferred through the built-in variable gl_Position. In the geometry shader, gl_Position resides in the built-in input array gl_in, so to access it, we use gl_in[index].gl_Position (with the index of the desired vertex, of course). To see it in action, here's the first part of the main function of the geometry shader:

// . . . 

smooth out vec3 vNormal;
smooth out vec2 vTexCoord;

uniform float fBender;

void main()
{
  // Calculate the combined projection * view * model matrix
  mat4 mMVP = matrices.projMatrix*matrices.viewMatrix*matrices.modelMatrix;

  // Calculate the centroid point (just sum up all coordinates and divide by 3)
  // You can see the built-in variable gl_in here; notice the normal multiplied by the bender value being added
  vec3 vMiddle = (gl_in[0].gl_Position.xyz+gl_in[1].gl_Position.xyz+gl_in[2].gl_Position.xyz)/3.0+(vNormalPass[0]+vNormalPass[1]+vNormalPass[2])*fBender;

  // Centroid coordinate is average of three as well
  vec2 vTexCoordMiddle = (vTexCoordPass[0]+vTexCoordPass[1]+vTexCoordPass[2])/3.0;

  // Transform normals of 3 triangle vertices with transform matrix and store them in this array
  vec3 vNormalTransformed[3];
  for(int i = 0; i < 3; i++)vNormalTransformed[i] = (vec4(vNormalPass[i], 1.0)*matrices.normalMatrix).xyz;
  
  // Calculate centroid normal
  vec3 vNormalMiddle = (vNormalTransformed[0]+vNormalTransformed[1]+vNormalTransformed[2])/3.0;

  // . . . 

}

Note: in the first version of this tutorial, I didn't use the vNormalTransformed array to store the transformed normals, but wrote them directly back into the same input variable using:
vNormalPass[i] = (vec4(vNormalPass[i], 1.0)*matrices.normalMatrix).xyz;
Well, this worked on AMD cards, but not on nVidia ones. The nVidia compiler doesn't allow you to write into these input arrays - they are read-only (which is what the GLSL specification actually mandates) - so keep this in mind when writing geometry shaders that should run on cards from both vendors.

Now let's finish the main function of the geometry shader; here is the second part:


void main()
{
  // . . . 
  for(int i = 0; i < 3; i++)
  {
    // Emit first vertex
    vec3 vPos = gl_in[i].gl_Position.xyz;
    gl_Position = mMVP*vec4(vPos, 1.0);
    vNormal = vNormalTransformed[i];
    vTexCoord = vTexCoordPass[i];
    EmitVertex();

    // Emit second vertex, that comes next in order
    vPos = gl_in[(i+1)%3].gl_Position.xyz;
    gl_Position = mMVP*vec4(vPos, 1.0);
    vNormal = vNormalTransformed[(i+1)%3];
    vTexCoord = vTexCoordPass[(i+1)%3];
    EmitVertex();

    // Emit third vertex - the centroid
    gl_Position = mMVP*vec4(vMiddle, 1.0);
    vNormal = vNormalMiddle;
    vTexCoord = vTexCoordMiddle;
    EmitVertex();

    EndPrimitive();
  }
}

As you can see, there is a for loop that gets executed 3 times, because we want to emit 3 triangles. So we take the first vertex (at position i), the second vertex in order (at position (i+1)%3, hope there's no need to explain this), and the centroid vertex. For each of these vertices, we set the transformed position and store it in gl_Position (just like we have been doing in vertex shaders), along with the texture coordinate and normal, and then call the geometry-shader-specific function EmitVertex() to actually output the vertex (so the politically more correct phrase is to emit a vertex rather than output a vertex).

After three vertices have been emitted, we call another geometry-shader-specific function, EndPrimitive(). This tells the geometry shader that we have emitted enough vertices to form the desired output primitive, and that we want to assemble and finish it. In our case, we want to finish 3 separate triangles. Even though the output type is triangle_strip, it doesn't interfere with anything - we simply call EmitVertex() 3 times and then EndPrimitive() once, which yields a strip consisting of a single triangle. If you emitted 4 vertices with triangle_strip, you would actually get 2 triangles, and in general, if you emit N vertices, you get N-2 triangles.

With each emitted vertex, you set the desired vertex attributes before calling EmitVertex(). These attributes are out variables that are sent further to the fragment shader, either interpolated across the polygon when using the smooth qualifier, or set to the same value for every fragment the primitive covers when using the flat qualifier. And now comes a random seed of wisdom: when you're using the flat keyword, you need to set the vertex attributes for only one vertex (that's intuitive, as these attributes are the same across the whole primitive). To specify which vertex should be the source of the data, you use the function glProvokingVertex. It has one parameter - provokeMode - which can be either GL_FIRST_VERTEX_CONVENTION or GL_LAST_VERTEX_CONVENTION. Refer to the manual for more detailed info; note that the OpenGL default is GL_LAST_VERTEX_CONVENTION.
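
For illustration, switching the convention is a single call (this tutorial's code doesn't actually use flat outputs, so take it just as a sketch):

// Choose which vertex of each primitive provides the values for
// flat-qualified outputs; OpenGL defaults to the last vertex
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);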

And here is one thing that made me wonder, and that the OpenGL specification actually settles: when rendering with glDrawArrays (for example) while a geometry shader is in the active shader program, the render mode MUST be compatible with the input primitive type declared in the geometry shader. I tried rendering the building in this tutorial using GL_POINTS, and nothing got rendered - and indeed, the specification says such a draw call generates a GL_INVALID_OPERATION error. So these two things must match, and it makes sense that they do.
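
A short sketch of what this means in practice (subdivisionProgram and iNumVertices are made-up names):

// The draw mode must be compatible with the geometry shader's input layout,
// otherwise the draw call records GL_INVALID_OPERATION and renders nothing
glUseProgram(subdivisionProgram); // its geometry shader declares layout(triangles) in;
glDrawArrays(GL_TRIANGLES, 0, iNumVertices); // OK - modes are compatible
// glDrawArrays(GL_POINTS, 0, iNumVertices); // error - points don't match triangles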

Wireframe mode

To see the beauty of the tessellation, I added a wireframe mode to this tutorial. There is a function in OpenGL called glPolygonMode, which takes two parameters - the faces to apply to (front, back, or both - more on these in the next, 15th tutorial) and the render mode. In this tutorial, the first parameter is always GL_FRONT_AND_BACK to apply to both front and back faces, and the second is either GL_FILL, if we want the normal render mode, or GL_LINE to render only polygon outlines, thus creating wireframe models.
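
In code, toggling the wireframe then comes down to something like this (bWireframe being a hypothetical flag flipped by the Q key):

// Switch between filled polygons and outlines before rendering the scene
if(bWireframe)
	glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // contours only - wireframe
else
	glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); // normal, filled rendering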

Result

Finally, we have reached the end of this tutorial. The programmed effect looks nice:

You can play around with the Q key to toggle wireframe mode, and with the Z and X keys to change the bender value.

This tutorial comes 2 months after the previous one, because I was really overloaded with school duties (Bachelor's Thesis and then State Exams). Thank God, I now have a Bachelor's degree in Informatics. Getting to know geometry shaders is an important leap in your OpenGL career. These small thingies are actually capable of much more - a few examples that come to mind right now are fur, Bezier surfaces, or even particle systems (you will see how in later tutorials, it's actually quite easy). Until then, I hope you'll get very familiar with geometry shaders.

Major changes have happened in the code for this tutorial. I finally decided (i.e. overcame my laziness) to create a class for each effect programmed so far - directional light, point light and fog. Until now, all the effects were written in an ad-hoc fashion in the renderScene function. I did this not only because it's convenient, but also because the rendering code gets a lot cleaner, and it's a good programming habit.

If you think these tutorials are starting to lack creativity because of the same scene and that three-tori object in the last 5 or so tutorials, don't worry - the next tutorial will introduce loading of OBJ model files, so that we can start placing interesting things all over the scene. So stay tuned!

Download (2.22 MB)