06.) Textures

Hello there and welcome to the 6th OpenGL 3.3 tutorial. This time we are going to look at one of the most used things in 3D graphics - texture mapping (texturing).

What is texturing (for total newbies)

Texturing is a method of adding detail to our scene by mapping texture images onto our polygons. When we have a 3D model and we want to render it with an image mapped onto it, we feed OpenGL the desired image (texture) and texture coordinates (we're working with 2D textures for now, so we will feed OpenGL 2D texture coordinates), then do some bureaucracy, like setting texture filters, and we are ready to go.

Texture mapping - how to do it

OK, the first thing we need to do is to be able to load pictures from disk and put them in some easy-to-use format, like RGB pixel by pixel. OpenGL doesn't deal with image loading; it just wants us to provide data in such a format, so that it can create a texture from it. For the purpose of loading images, I decided to go with the FreeImage library, which is, as the name suggests, free, so no one will chase you for using it in your product. So go to:
http://freeimage.sourceforge.net/
and download it. After unpacking it somewhere in your libraries directory, add new entries to Include Directories and Library Directories in your Visual Studio like this (it's explained in the first tutorial, in case you don't know where they are):
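
Once the directories are set up, using the library from code is just a matter of including its header; on MSVC you can also link the library right from code (the exact .lib file name depends on which FreeImage package you downloaded, so treat this as a sketch):

#include "FreeImage.h"
#pragma comment(lib, "FreeImage.lib") // MSVC-only; alternatively add it under Linker -> Input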

Now that we are able to load images, we can start working with textures. Textures in OpenGL are used similarly to other OpenGL objects - first we must tell OpenGL to generate textures, and it provides us a texture name (ID) with which we can address the texture. To make things easy, we will create a wrapper C++ class that will encapsulate creation, deletion and every important thing related to texturing. Here is what the class looks like:

class CTexture
{
public:
	bool loadTexture2D(string a_sPath, bool bGenerateMipMaps = false);
	void bindTexture(int iTextureUnit = 0);

	void setFiltering(int a_tfMagnification, int a_tfMinification);

	int getMinificationFilter();
	int getMagnificationFilter();

	void releaseTexture();

	CTexture();
private:
	int iWidth, iHeight, iBPP; // Texture width, height, and bits per pixel
	UINT uiTexture; // Texture name
	UINT uiSampler; // Sampler name
	bool bMipMapsGenerated;

	int tfMinification, tfMagnification;

	string sPath;
};

We will get directly into the loadTexture2D function, which is maybe the most important function in this tutorial:

bool CTexture::loadTexture2D(string a_sPath, bool bGenerateMipMaps)
{
	FREE_IMAGE_FORMAT fif = FIF_UNKNOWN;
	FIBITMAP* dib(0);

	fif = FreeImage_GetFileType(a_sPath.c_str(), 0); // Check the file signature and deduce its format

	if(fif == FIF_UNKNOWN) // If still unknown, try to guess the file format from the file extension
		fif = FreeImage_GetFIFFromFilename(a_sPath.c_str());

	if(fif == FIF_UNKNOWN) // If still unknown, return failure
		return false;

	if(FreeImage_FIFSupportsReading(fif)) // Check if the plugin has reading capabilities and load the file
		dib = FreeImage_Load(fif, a_sPath.c_str());
	if(!dib)
		return false;

	BYTE* bDataPointer = FreeImage_GetBits(dib); // Retrieve the image data

	iWidth = FreeImage_GetWidth(dib); // Get the image width and height
	iHeight = FreeImage_GetHeight(dib);
	iBPP = FreeImage_GetBPP(dib);

	// If somehow one of these failed (they shouldn't), return failure
	if(bDataPointer == NULL || iWidth == 0 || iHeight == 0)
		return false;

	// Generate an OpenGL texture ID for this texture
	glGenTextures(1, &uiTexture);
	glBindTexture(GL_TEXTURE_2D, uiTexture);

	int iFormat = iBPP == 24 ? GL_BGR : iBPP == 8 ? GL_LUMINANCE : 0;
	int iInternalFormat = iBPP == 24 ? GL_RGB : GL_DEPTH_COMPONENT;

	glTexImage2D(GL_TEXTURE_2D, 0, iInternalFormat, iWidth, iHeight, 0, iFormat, GL_UNSIGNED_BYTE, bDataPointer);

	if(bGenerateMipMaps)
		glGenerateMipmap(GL_TEXTURE_2D);

	FreeImage_Unload(dib);

	glGenSamplers(1, &uiSampler);

	sPath = a_sPath;
	bMipMapsGenerated = bGenerateMipMaps;

	return true; // Success
}

First, when we provide an image file path to the function, FreeImage will try to detect the file type - first by examining the file signature, and if that fails, by guessing from the file extension. We do this with the functions FreeImage_GetFileType and FreeImage_GetFIFFromFilename, and then FreeImage_FIFSupportsReading tells us whether FreeImage is capable of reading that format. Don't worry, it supports all major graphics formats, so it really shouldn't be a problem. If everything is good, we call FreeImage_Load to finally load the image into memory.

A very important thing about textures used to be that their dimensions MUST be powers of 2. This was a hard requirement in older OpenGL versions, mostly because it makes things like mipmap generation (more on that later) and memory alignment straightforward. Since OpenGL 2.0, non-power-of-two textures are part of core (and before that, extensions allowed arbitrary rectangular textures), but power-of-two sizes remain the safe and common choice. In this tutorial, we will use a 256x256 texture size.
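
By the way, if you ever want to check whether a loaded image has power-of-two dimensions, a classic bit trick does it (just a small helper sketch, not part of the tutorial's code):

// A positive number is a power of two if it has exactly one bit set
bool isPowerOfTwo(int n)
{
	return n > 0 && (n & (n-1)) == 0;
}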

Now we are ready to create an OpenGL texture from the loaded data. First we retrieve the image properties for later use in OpenGL and store them in the iWidth, iHeight, and iBPP member variables. We also retrieve the data pointer with the FreeImage_GetBits function (the name may be a little misleading). Then we finally generate the texture by calling glGenTextures. It takes two parameters - how many textures we want, and where to store their names (classic convention). After creating the texture object, we must bind it by calling glBindTexture to tell OpenGL we are going to work with this one. Its first parameter is the target, which can be GL_TEXTURE_1D, GL_TEXTURE_2D, GL_TEXTURE_3D, or some other value, like GL_TEXTURE_CUBE_MAP (we'll get to this in later tutorials) - refer to the manual pages for the full list. In this tutorial we will stick to 2D textures, so the target is GL_TEXTURE_2D. The second parameter is the texture ID generated previously.

Now it seems we can finally upload the texture data to the GPU, but there is still one thing we must solve. FreeImage doesn't store our images in RGB format - on Windows it's actually BGR (as far as I know, this is platform-dependent). But this is no problem: when sending data to the GPU, we'll just tell it that the data is in BGR format. And now we really are ready to upload data to the GPU... or are we? Yes, but a little word about texture filters should be said first.

Texture filtering

When giving OpenGL texture data, we must also tell it how to FILTER the texture. What does this mean? It's the way OpenGL takes colors from the image and draws them onto a polygon. Since we will probably never map a texture pixel-perfect (with the polygon's on-screen pixel size the same as the texture size), we need to tell OpenGL which texels (single pixels (or colors) from the texture) to take. There are several texture filters available, and they are defined separately for minification and magnification. What does this mean? Well, first imagine a wall that we are looking straight at, and its screen pixel size is the same as our texture size (256x256), so that each pixel has a corresponding texel:

In this case, everything is OK - there is no problem. But if we moved closer to the wall, the texture would need to be MAGNIFIED - there are now more pixels on screen than texels in the texture, so we must tell OpenGL how to fetch values from the texture. For this case, there are two filters:

NEAREST FILTERING: The GPU simply takes the texel that is nearest to the exactly calculated point. This one is very fast, as no additional calculations are performed, but its quality is also very low - multiple pixels share the same texel, and the visual artifacts are very bold. The closer to the wall you are, the more "squary" it looks (many squares of different colors, where each square represents a texel).

BILINEAR FILTERING: This one doesn't just take the closest texel; it calculates the distances to all 4 adjacent texels and takes a weighted average of them depending on the distance. This results in a lot better quality than nearest filtering, but requires a little more computational time (on modern hardware, this time is negligible). Have a look at the pictures:

As you can see, bilinear filtering gives us smoother results. You may wonder: "But I have also heard of trilinear filtering." Soon, we'll get to that as well.

The second case is if we moved further from the wall. Now the texture is bigger than the screen render of our simple wall, and thus it must be MINIFIED. The problem is that now multiple texels may correspond to a single fragment. And what shall we do? One solution would be to average all corresponding texels, but this may be really slow, as the whole texture might potentially fall into a single pixel. The nice solution to this problem is called MIPMAPPING. The original texture is stored not only in its original size, but also downsampled to all smaller resolutions, with each dimension divided by 2, creating a "pyramid" of textures (this image is from Wikipedia):

The particular images are called mipmaps. With mipmapping enabled, the GPU selects a mipmap of appropriate size according to the distance we see the object from, and then performs some filtering on it. This results in higher memory consumption (by exactly 1/3, as the sum 1/4 + 1/16 + 1/64 + ... converges to 1/3), but gives nice visual results at very nice speed. And here is another filtering term - TRILINEAR filtering. What's that? Well, it's almost the same as bilinear filtering, with one addition: we take the two nearest mipmaps, do the bilinear filtering on each of them, and then average the results. The name TRIlinear comes from the third dimension that enters the calculation - with bilinear filtering we were interpolating in two dimensions, and trilinear filtering adds a third interpolation between the two mipmap levels.
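
If you want to convince yourself about the 1/3 figure, here is a small standalone sketch (not part of the tutorial's code) that sums up the mipmap chain of our 256x256 texture:

#include <cstdio>

int main()
{
	int iLevelTexels = 256*256; // Texel count of the base level (level 0)
	int iBaseTexels = iLevelTexels;
	int iTotalTexels = 0;

	while(iLevelTexels >= 1)
	{
		iTotalTexels += iLevelTexels;
		iLevelTexels /= 4; // Each mipmap level has half the width and half the height
	}

	// Prints: Base: 65536, with mipmaps: 87381 (extra: 33.33%)
	printf("Base: %d, with mipmaps: %d (extra: %.2f%%)\n",
		iBaseTexels, iTotalTexels, 100.0*(iTotalTexels-iBaseTexels)/iBaseTexels);
	return 0;
}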

The most computationally expensive filtering, but the one with the best results, is ANISOTROPIC filtering. That will be covered in some later tutorial, not this one, which should serve as an introduction to texturing.

Finalizing our texture

After a brief explanation of texture filters, we can proceed with the texture's creation. All we need to do is send the texture data to the GPU and tell OpenGL in which format we stored it. The function for sending data to the GPU is glTexImage2D. Its parameters (in order) are:

  1. target - in our case it is GL_TEXTURE_2D
  2. texture LOD - Level Of Detail - we set this to zero - this parameter is used for defining mipmaps. The base level (full resolution) is 0, and all subsequent levels (1/4 of the texture size, 1/16 of the texture size...) are higher, i.e. 1, 2 and so on. We don't have to define the mipmaps manually (even though we can, and we don't even have to define ALL mipmap levels if we don't want to, OpenGL doesn't require that) - luckily there is a function for mipmap generation (soon we'll get to that).
  3. internal format - the specification says it's the number of components per texel, but it doesn't accept plain numbers, it accepts constants like GL_RGB and so on (see the spec). And even though we use BGR as the data format, we put GL_RGB here anyway, because this parameter doesn't accept GL_BGR - it really only informs about the number of components per texel. I don't find this very intuitive, but it's probably because of backwards compatibility.
  4. width - Texture width
  5. height - Texture height
  6. border - width of the border - in older OpenGL specifications you could create a border around the texture (it's really useless); in the 3.3 specification (and also in later specifications, like 4.2 at the time of writing this tutorial), this parameter MUST be zero
  7. format - Format in which we specify data, GL_BGR in this case
  8. type - data type of single value, we use unsigned bytes, and thus GL_UNSIGNED_BYTE as data type
  9. data - finally a pointer to the data

Phew, so many parameters. There's no need to remember their order - if you need the function, you can always consult the specification. The important thing is that you understand what this function does. Now, the last thing that hasn't been covered is the creation of mipmaps. There are two ways - either we resize the images ourselves and call glTexImage2D once per LOD, or we simply call the function OpenGL provides right after we upload the data - glGenerateMipmap. Its only parameter is the target, which is GL_TEXTURE_2D in our case.
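
To illustrate the difference between the two ways (this is just a sketch - pLevel0Data, pLevel1Data etc. are hypothetical pointers to pre-resized image data; we don't actually do this in the tutorial):

// Way 1: upload every mipmap level manually, one glTexImage2D call per LOD
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_BGR, GL_UNSIGNED_BYTE, pLevel0Data);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGB, 128, 128, 0, GL_BGR, GL_UNSIGNED_BYTE, pLevel1Data);
glTexImage2D(GL_TEXTURE_2D, 2, GL_RGB,  64,  64, 0, GL_BGR, GL_UNSIGNED_BYTE, pLevel2Data);
// ... and so on, down to 1x1

// Way 2: upload only level 0 and let OpenGL generate the rest
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_BGR, GL_UNSIGNED_BYTE, pLevel0Data);
glGenerateMipmap(GL_TEXTURE_2D);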

Now that we have the data sent to the GPU, we need to tell OpenGL how to filter the texture. Those who remember OpenGL in the older days (2.1 and below) would do something like this to set filtering:

// Set magnification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Set minification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

But not now. The problem with the above is that if we wanted to use the same texture with different filters, we would have to constantly change its parameters. It could be done somehow, but isn't there a nicer, more elegant way? Yes there is - and it's called samplers.

Samplers

I couldn't find a definition of a sampler on them internets, but I will try to explain it as simply as possible. Sampling is the process of fetching a value from a texture at a given position, so a sampler is an object that stores the info on how to do it - all the filtering and wrapping parameters. If we want to change the filtering, we just bind a different sampler with different properties, and we're done. This line is copied from the spec:

"If a sampler object is bound to a texture unit and that unit is used to sample from a texture, the parameters in the sampler are used to sample from the texture, rather than the equivalent parameters in the texture object bound to that unit."

One part of it basically says that if a sampler is bound to a texture unit, its parameters supersede the equivalent parameters of the texture object bound to that unit. So instead of setting texture parameters, we will create a sampler that does exactly this. Even though in this tutorial we create one sampler per texture (so it behaves just like not using samplers at all), it's a more general solution and thus it's better. Like all OpenGL objects, samplers are generated (we get their names), and then we access them with that name. So when loading the texture, we just call glGenSamplers, and then we set its parameters with our member function:

void CTexture::setFiltering(int a_tfMagnification, int a_tfMinification)
{
	// Set magnification filter
	if(a_tfMagnification == TEXTURE_FILTER_MAG_NEAREST)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	else if(a_tfMagnification == TEXTURE_FILTER_MAG_BILINEAR)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

	// Set minification filter
	if(a_tfMinification == TEXTURE_FILTER_MIN_NEAREST)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	else if(a_tfMinification == TEXTURE_FILTER_MIN_BILINEAR)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
	else if(a_tfMinification == TEXTURE_FILTER_MIN_NEAREST_MIPMAP)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST);
	else if(a_tfMinification == TEXTURE_FILTER_MIN_BILINEAR_MIPMAP)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
	else if(a_tfMinification == TEXTURE_FILTER_MIN_TRILINEAR)
		glSamplerParameteri(uiSampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

	tfMinification = a_tfMinification;
	tfMagnification = a_tfMagnification;
}

We just pass in values from the enum defined in texture.h, and we change the filtering parameters. In the application, you can press the F1 and F2 keys to switch between minification and magnification filters of the ice texture (run the application in windowed mode, because the window title bar shows the actual texture filters). You may notice that the enum has only 5 minification filters, while if you google things you can find 6. I just didn't include the filter that takes the two closest mipmaps, performs the nearest criterion on each of them, and then averages the results - it simply doesn't make much sense to do that (even though OpenGL allows it). But if you really want, you can try it (set the minification filter to GL_NEAREST_MIPMAP_LINEAR).
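
In case you're wondering, the enum in texture.h could look something like this (the names are the ones used by setFiltering above; check the download for the exact definition):

enum ETextureFiltering
{
	TEXTURE_FILTER_MAG_NEAREST = 0, // Nearest criterion for magnification
	TEXTURE_FILTER_MAG_BILINEAR, // Bilinear criterion for magnification
	TEXTURE_FILTER_MIN_NEAREST, // Nearest criterion for minification
	TEXTURE_FILTER_MIN_BILINEAR, // Bilinear criterion for minification
	TEXTURE_FILTER_MIN_NEAREST_MIPMAP, // Nearest criterion for minification, taking the closest mipmap
	TEXTURE_FILTER_MIN_BILINEAR_MIPMAP, // Bilinear criterion for minification, taking the closest mipmap
	TEXTURE_FILTER_MIN_TRILINEAR // Bilinear criterion on the two closest mipmaps, results averaged
};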

I hope that I have demystified texture filtering for you, and now we are ready to see how texture mapping is done.

Texture Coordinates

Yeah, that's it. We finally got to it. Texture coordinates (also called UV coordinates) are how we map a texture onto a polygon. We just need to provide appropriate texture coordinates with every vertex and we're done. In our 2D texture case, a texture coordinate is represented by two numbers: one along the X axis (the U coordinate), and one along the Y axis (the V coordinate):

So if we want to map our texture onto a quad, we simply provide the coordinates (0.0, 1.0) to the upper-left vertex, (1.0, 1.0) to the upper-right vertex, (1.0, 0.0) to the bottom-right vertex and (0.0, 0.0) to the bottom-left vertex. But what if we wanted to map the texture onto, let's say, a triangle? Well, you can probably guess by now, and this picture demonstrates it:

We simply need to copy the shape of our polygon in texture coordinates as well in order to map the texture properly. If we exceed the <0..1> range, the texture gets mapped multiple times - for example, if we mapped the coordinates (0.0, 10.0), (10.0, 10.0), (10.0, 0.0) and (0.0, 0.0) to the quad, the texture would be repeated 10 times along the X axis and 10 times along the Y axis. This texture repeating is the default behavior; it can actually be turned off, so that coordinates outside this range take only the edge texels (this is used when creating skyboxes, for example).
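
For completeness, here is how this repeating could be configured on our sampler (a sketch only - the tutorial keeps the default GL_REPEAT, so you won't find these lines in the code):

// Default behavior: texture coordinates outside <0..1> repeat the texture
glSamplerParameteri(uiSampler, GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(uiSampler, GL_TEXTURE_WRAP_T, GL_REPEAT);

// Clamping instead: coordinates outside <0..1> take the edge texels (handy for skyboxes)
glSamplerParameteri(uiSampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(uiSampler, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);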

Now that we know which texture coordinate values are right, we must learn how to provide them. A texture coordinate is just another vertex attribute. So when creating data for rendering in the VBO, we add two additional floats per vertex for the texture coordinate. Nothing else. We'll also need to add a few lines to the shaders. Starting from this tutorial, I will use my CVertexBuffer class, which wraps a VBO and allows dynamic addition of data (so I don't have to count the number of polygons and the size of the VBO before rendering - I just add as much as I want, and then upload the data to the GPU). I'm just going to say it uses std::vector internally; you can have a look at its code if you're interested. We'll use one such buffer for the cube, the pyramid and the ground (which is only one quad, made of 2 triangles, textured with a grass texture). Then we'll call glDrawArrays with different offsets, and with different textures bound.

One important thing I changed in this tutorial is the format of the data. We don't have one VBO for vertices and one for texture coordinates; instead, each vertex consists of three floats for the position followed by two floats for the texture coordinate. We then just need to tell OpenGL, when calling glVertexAttribPointer, what the distance between two consecutive attributes is (the STRIDE parameter). In this case, the distance between two consecutive vertex attributes is the size of the whole per-vertex data, i.e. sizeof(vec3)+sizeof(vec2) (it's 5 floats). You can find it in the initScene function. The old fixed-function pipeline also required enabling texturing with glEnable(GL_TEXTURE_2D) - with shaders this is no longer needed (and it's actually invalid in a strict core profile), though you may still see it at the end of initScene.
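
The attribute setup could then look something like this (a sketch of what happens in initScene; attribute locations 0 and 1 are my assumption, check the download for the exact code):

int iStride = sizeof(float)*5; // 3 floats position + 2 floats texture coordinate per vertex

// Vertex positions: 3 floats, starting at offset 0
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, iStride, (void*)0);

// Texture coordinates: 2 floats, starting right after the position
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, iStride, (void*)(sizeof(float)*3));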

Accessing texture in fragment shader

The last thing covered in this extremely long tutorial is how to access texture data in the fragment shader. The first thing we must do is pass the texture coordinate, which is an input variable of the vertex shader, on to the fragment shader. The second important thing is to create a uniform sampler2D variable in the fragment shader. Here is what the fragment shader looks like (the vertex shader is almost the same as in the previous tutorial; I recommend having a look at it as well - a sketch of it is shown right after the fragment shader):

#version 330

in vec2 texCoord;
out vec4 outputColor;

uniform sampler2D gSampler;

void main()
{
	outputColor = texture(gSampler, texCoord);
}
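
And the vertex shader side, just so you can see the whole picture (a sketch based on the previous tutorial's shader - variable names in the download may differ slightly):

#version 330

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

layout (location = 0) in vec3 inPosition;
layout (location = 1) in vec2 inCoord;

out vec2 texCoord; // Passed on to the fragment shader, interpolated per-fragment

void main()
{
	gl_Position = projectionMatrix*modelViewMatrix*vec4(inPosition, 1.0);
	texCoord = inCoord;
}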

With this variable, we will fetch texture data based on the texture coordinates. From the program, we just need to set this sampler to an integer. What does this integer mean? It's the TEXTURE UNIT number. Texture unit is another important term. You may have heard of multitexturing - mapping multiple textures at once. Well, we can have multiple texture units, each of which can have a different texture bound, and we differentiate between them by their numbers. To specify which texture unit we're working with, we use the function glActiveTexture. The number of texture units supported is graphics-card dependent, but it should be sufficient for most uses (I'm too lazy to find out how many my GTX 260 has, but I guess it's 32 or 64). Since we never need more than one texture at once in this tutorial (we only need data from one texture in the fragment shader), we will only use texture unit 0. In our rendering, we must first bind our texture to texture unit 0, and then set the sampler uniform variable to 0 as well, to tell OpenGL that through this uniform variable we want the texture that's bound to texture unit 0. Then in the fragment shader, we just call the function texture (named texture2D in older GLSL versions), which takes the sampler variable as its first parameter, and texture coordinates as its second parameter.
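
Putting it together, the bindTexture member function from our class could be implemented like this, and the rendering code then just sets the uniform (a sketch - the uiProgram and tTextureIce names below are hypothetical):

void CTexture::bindTexture(int iTextureUnit)
{
	glActiveTexture(GL_TEXTURE0+iTextureUnit); // Select the texture unit...
	glBindTexture(GL_TEXTURE_2D, uiTexture); // ...bind our texture to it...
	glBindSampler(iTextureUnit, uiSampler); // ...and bind our sampler to the same unit
}

// In the rendering function:
tTextureIce.bindTexture(0); // Bind the texture (and its sampler) to texture unit 0
glUniform1i(glGetUniformLocation(uiProgram, "gSampler"), 0); // Tell gSampler to read from unit 0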

Short word at the end...

This is what has been done today (you can play around by rotating the objects with the arrow keys):

I hope you don't have a headache after reading this tutorial. It may take some time for all these things to settle down in your head, but once they do, you will realize that it isn't that difficult at all. I would say that the people at AMD and nVidia have it difficult, having to actually implement the OpenGL specification. But that's not something we need to worry about. They are (probably) happy to do it, and we are the users who are happy to use it.

If you have any questions, you can write them in the comments or send me an e-mail. The next tutorial is going to be about blending - we'll make transparent objects, so stay tuned!

Download (1.74 MB)