07.) Blending Basics

Welcome to the seventh of the OpenGL 3.3+ tutorials. In this one, we are going to discuss a common operation used in graphics - blending. With it, we can achieve effects like object transparency, which is exactly what we will do in this tutorial. This tutorial is also more interactive, because a camera has been added, so you can move around and look at the scene from wherever you want. I'm also separately working on a series of camera tutorials, starting with a simple camera like the one in this tutorial and moving on to more complicated cameras with mouse rotation, collision detection, interaction with the world and so on.

Blending

Blending is the mixing of the colors of pixels that are already drawn with the colors of those that are about to be drawn. In OpenGL manuals, you can read about source and destination colors and factors. These are just names: the color already in the framebuffer is the destination color, and the incoming color - the one about to be drawn - is the source color. What we must do is tell OpenGL how to mix these colors. The default blending equation looks like this (just pseudocode):

Rresult = Rsrc * facRsrc + Rdst * facRdst
Gresult = Gsrc * facGsrc + Gdst * facGdst
Bresult = Bsrc * facBsrc + Bdst * facBdst
Aresult = Asrc * facAsrc + Adst * facAdst

For those of you who find this scary at first sight - don't worry. Look at it again and notice that only addition and multiplication are used - pretty basic things. In the next lines, I'm going to explain what each of the expressions means, using an example to demonstrate it. Suppose we have a scene with a red, opaque (non-transparent) quad that's already drawn, and in front of it we want to draw another, half-transparent green quad (like glass), so that we will be able to see the red quad through it:

The first thing we may notice is the alpha component of the color. What's it for? Well, you can use it for many purposes, but the most common and most intuitive one is to express how visible the object is, with 1.0 being a fully opaque object and 0.0 being an invisible object. You can see that our red quad has alpha 1.0, so it appears in pure red, while the green quad doesn't show such a strong green in the final image: the background is white and the quad is half-transparent, so we see half of the green color and half of the white background. This may seem intuitive to you, and it's exactly what the equations above describe, if you set things up properly. We will learn how.

Let's take the red color component as an example and see how it is calculated if we render the green quad in front of the red one, so that their colors mix. How do we calculate the final red value that will be written to the framebuffer, Rresult? You can see that it consists of two terms: Rsrc multiplied by facRsrc, and Rdst multiplied by facRdst.
Rsrc is the red component of the source color, that is, the color of the object we are about to draw. In our case, we are going to draw the green quad with RGBA(0.0, 1.0, 0.0, 0.5), so Rsrc is 0.0.
Rdst is the red component of the destination color - the color that's already written in the framebuffer. In our case, it's 1.0, because we have already drawn the red quad with RGBA(1.0, 0.0, 0.0, 1.0).
These two values are multiplied by facRsrc and facRdst, which stand for the red-component source factor and destination factor, respectively. This is the most important part of blending: the factors say how much of each color should be added into the resulting mix. So if we want to achieve the effect of a half-transparent quad, so that the result looks like this:

we need to tell OpenGL to take 50% of the source color and the rest, i.e. (100% - 50%), from the destination color. If the alpha was 0.25, we would take 25% from the source color and (100% - 25%) from the destination color. To set these mixing factors, the function glBlendFunc is used. It takes two parameters, as you might expect - the source and the destination factor. In our case, we pass GL_SRC_ALPHA for the source factor (take as much of the source color as the object is visible) and GL_ONE_MINUS_SRC_ALPHA for the destination factor (take the rest from the destination color). Try to sum these colors in your head to see that they really are the results of the above equations.
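To make the arithmetic concrete, here is a small sketch (just illustrative, not part of the tutorial's source) of this setup, with the numbers of our red and green quads plugged into the equations:

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// With the red quad RGBA(1.0, 0.0, 0.0, 1.0) already in the framebuffer (destination)
// and the green quad RGBA(0.0, 1.0, 0.0, 0.5) being drawn (source), the factors become
// facSrc = 0.5 and facDst = 1.0 - 0.5 = 0.5, so:
// Rresult = 0.0*0.5 + 1.0*0.5 = 0.5
// Gresult = 1.0*0.5 + 0.0*0.5 = 0.5
// Bresult = 0.0*0.5 + 0.0*0.5 = 0.0
// The mixed pixel is an even mix of red and green.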

Exactly the same equation is used for the other components - G, B and A. There is, however, a way to define the behavior for the RGB and alpha components separately, and it's done with the function glBlendFuncSeparate. We don't need this function in this tutorial - we are fine with setting all of them the same.
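Just for illustration (we don't call it in this tutorial), a glBlendFuncSeparate call that blends RGB by the source alpha but leaves the alpha already in the framebuffer untouched might look like this:

// RGB blended by source alpha as above; destination alpha is kept as it is
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ZERO, GL_ONE);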

Another function for altering blending settings is glBlendEquation. As you can see, in this example we were adding the source and destination components together. But what if we wanted to subtract them instead? This is what can be set with this function, along with reverse subtract and taking the minimum or maximum of the values. We don't need to call it, because the default equation is addition, and that's what we need. There is a sibling of this function, glBlendEquationSeparate, with which we can define the equation for RGB and alpha separately.
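For completeness, these are the possible blend equations (only the first one, the default, is relevant in this tutorial):

glBlendEquation(GL_FUNC_ADD);                 // result = src*facSrc + dst*facDst (default)
// glBlendEquation(GL_FUNC_SUBTRACT);         // result = src*facSrc - dst*facDst
// glBlendEquation(GL_FUNC_REVERSE_SUBTRACT); // result = dst*facDst - src*facSrc
// glBlendEquation(GL_MIN);                   // result = min(src, dst), factors are ignored
// glBlendEquation(GL_MAX);                   // result = max(src, dst), factors are ignored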

Let's take a look at the rendering code. There are three types of objects in this scene - opaque textured cubes, transparent colored cubes and the ground (just two big triangles):

void renderScene(LPVOID lpParam)
{
	// Typecast lpParam to COpenGLControl pointer
	COpenGLControl* oglControl = (COpenGLControl*)lpParam;

	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	// First render textured objects

	glEnable(GL_TEXTURE_2D);
	spTextured.useProgram();
	glBindVertexArray(uiVAOs[0]);

	int iModelViewLoc = glGetUniformLocation(spTextured.getProgramID(), "modelViewMatrix");
	int iProjectionLoc = glGetUniformLocation(spTextured.getProgramID(), "projectionMatrix");
	int iColorLoc = glGetUniformLocation(spTextured.getProgramID(), "color");

	glm::vec4 vWhiteColor(1.0f, 1.0f, 1.0f, 1.0f);
	glUniform4fv(iColorLoc, 1, glm::value_ptr(vWhiteColor)); // Set white for textures

	glUniformMatrix4fv(iProjectionLoc, 1, GL_FALSE, glm::value_ptr(*oglControl->getProjectionMatrix()));

	glm::mat4 mModelView = cCamera.look();
	glm::mat4 mCurrent = mModelView;

	// Render ground

	tBlueIce.bindTexture();
	glUniformMatrix4fv(iModelViewLoc, 1, GL_FALSE, glm::value_ptr(mModelView));
	glDrawArrays(GL_TRIANGLES, 36, 6);

	// Render 5 opaque boxes

	tBox.bindTexture();

	FOR(i, 5)
	{
		float fSign = -1.0f+float(i%2)*2.0f; // This just returns -1.0f or 1.0f (try to examine this)
		glm::vec3 vPos = glm::vec3(fSign*15.0f, 0.0f, 50.0f-float(i)*25.0f);
		mCurrent = glm::translate(mModelView, vPos);
		mCurrent = glm::scale(mCurrent, glm::vec3(8.0f, 8.0f, 8.0f));
		mCurrent = glm::rotate(mCurrent, fGlobalAngle+i*50.0f, glm::vec3(0.0f, 1.0f, 0.0f));
		glUniformMatrix4fv(iModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
		glDrawArrays(GL_TRIANGLES, 0, 36);
	}

	// Now switch to only colored rendering

	glDisable(GL_TEXTURE_2D);
	spColored.useProgram();
	glBindVertexArray(uiVAOs[1]);

	iModelViewLoc = glGetUniformLocation(spColored.getProgramID(), "modelViewMatrix");
	iProjectionLoc = glGetUniformLocation(spColored.getProgramID(), "projectionMatrix");
	iColorLoc = glGetUniformLocation(spColored.getProgramID(), "color");
	glUniformMatrix4fv(iProjectionLoc, 1, GL_FALSE, glm::value_ptr(*oglControl->getProjectionMatrix()));

	// Render 5 transparent boxes

	glEnable(GL_BLEND);
	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glDepthMask(0); // Disable writing to depth buffer

	FOR(i, 5)
	{
		float fSign = 1.0f-float(i%2)*2.0f; // Same as before - returns -1.0f or 1.0f
		glm::vec3 vPos = glm::vec3(fSign*15.0f, 0.0f, 50.0f-float(i)*25.0f);
		mCurrent = glm::translate(mModelView, vPos);
		mCurrent = glm::scale(mCurrent, glm::vec3(8.0f, 8.0f, 8.0f));
		mCurrent = glm::rotate(mCurrent, fGlobalAngle*0.8f+i*30.0f, glm::vec3(1.0f, 0.0f, 0.0f)); // Just a variation of first rotating
		glUniformMatrix4fv(iModelViewLoc, 1, GL_FALSE, glm::value_ptr(mCurrent));
		glUniform4fv(iColorLoc, 1, glm::value_ptr(vBoxColors[i]));
		glDrawArrays(GL_TRIANGLES, 0, 36);
	}
	glDisable(GL_BLEND);
	glDepthMask(1); // Re-enable writing to depth buffer

	fGlobalAngle += appMain.sof(100.0f);
	cCamera.update();

	oglControl->swapBuffers();
}

Now let's see what we're doing here. First we "look" at the scene with our camera - it returns the proper modelview matrix. This time we use two shader programs - one for textured objects and one for untextured objects, each with a uniform variable that defines the color. First, we render the opaque ground. Then we render 5 opaque textured cubes (rotating, to keep the scene interactive). Note that we use white color for these objects, so that when we modulate colors in the fragment shader of this program, they keep only their texture colors. Modulation of colors is just component-wise multiplication of colors. So when we grab a texel from the texture and multiply it by white, i.e. by (1.0, 1.0, 1.0, 1.0), we still have the same color (try other colors, like red, to see the difference).
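If the modulation part sounds abstract, this little sketch shows what it does (done with glm on the CPU just for illustration - in the real program the same multiplication happens in the fragment shader, and the texel values here are made up):

glm::vec4 vTexel(0.2f, 0.6f, 0.9f, 1.0f); // a color fetched from the texture (made-up values)
glm::vec4 vWhite(1.0f, 1.0f, 1.0f, 1.0f); // the color uniform we set for textured objects
glm::vec4 vSame = vTexel*vWhite;          // component-wise multiply - the texel stays unchanged
glm::vec4 vTinted = vTexel*glm::vec4(1.0f, 0.0f, 0.0f, 1.0f); // only the red channel survives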

Now that we are done with rendering the textured objects, we switch shader programs to render the objects without a texture. We render the 5 colored, rotating cubes, and then the glass. The cubes will be red and green, and the glass is blue.

There are some things here that deserve attention. Look at the order of rendering. First we rendered the objects that are fully opaque, and only then did we turn blending on to render the transparent objects. And not only that - we called glDepthMask(0) to turn off writing to the depth buffer, and after the rendering glDepthMask(1) to restore it. Why do that? Let's take an example: we want to render one opaque cube and two transparent cubes in front of it. After rendering the opaque cube, we don't want to write values to the depth buffer anymore. If we did, and the first transparent object drawn happened to be in front of the second, the depth values it writes would make the second transparent object fail the Z-test, so it wouldn't be rendered at all. But since these objects are at least partially transparent, the second object should still be visible through the first one. That's why we turn depth buffer writing off. The important lesson to take away is to pay attention to the order of rendering - first opaque objects, then transparent ones with depth buffer writing disabled.
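To sum the pattern up in a few lines (renderOpaqueObjects and renderTransparentObjects are just hypothetical placeholders for the rendering code shown above):

// 1.) Draw everything opaque with normal depth writes
renderOpaqueObjects();
// 2.) Keep depth testing on, but stop writing to the depth buffer, and enable blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
renderTransparentObjects();
// 3.) Restore the state for the next frame
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);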

You can move around the scene with the WSAD keys to see it from many angles. The camera code will be explained in separate tutorials, but this one is quite simple, so have a look at it.

Conclusion

This is one of the shorter tutorials in this series. With blending you can achieve many effects, which we will show in later tutorials, when particles will be drawn to create effects like fire. For today, this is enough. There is a more sophisticated method for rendering opaque and transparent objects together, called Order Independent Transparency, but it is beyond the scope of this simple tutorial. Next time, we will begin with simple lighting.

Download 1.26 MB