Download (12.56 MB)
3104 downloads. 5 comments
Hello guys! This is the 25th tutorial from my series. This one is about bump mapping, a really interesting technique that's not very computationally expensive and brings an extra level of detail to bumpy surfaces without the need for extra geometry. It's an effect that has been known for a long time already and it's pretty fundamental for your later graphics career. So let's begin learning!
So far, we've always been working with simple textures, or maybe we have used multitexturing to combine several textures (like in the Terrain tutorial). We also had normals, usually on a per-vertex basis, which is pretty much enough for nice lighting. But what if we wanted more? What if we had normals not only on a per-vertex basis, but on a per-pixel basis? That's what bump mapping is basically all about! The normals for each pixel on the surface are provided using a special texture called a bump map. Using multitexturing, we will extract the data from the bump map texture to have a normal for basically every point on the surface. This is what a typical bump map texture looks like (this one is used in this tutorial):
It doesn't look like a regular texture you would texture surfaces with, but you can clearly see how it follows the surface, and you can clearly see the bumps that separate the individual bricks the wall is made of. The question now is - how do we extract a normal, just a vector with X, Y, Z components, from all of this? The answer is really simple - how many channels does a usual color image have? That's right, 3 channels, usually RGB. And you are probably guessing now that each channel will correspond to one normal component, so we essentially convert from RGB to XYZ and scale it appropriately.
Whenever we pass a texture into a shader, we can access its data using the texture2D function. We get a vector in return that corresponds to the RGB channels, with values ranging from 0.0 to 1.0. What we need to do is rescale every component from the range [0.0, 1.0] to the range [-1.0, 1.0], and this way we get a three-component vector that represents our normal! So the normal values are encoded as colors, and before using them, we must remap them to the range above to get our normal. That's it! Or is it?
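To make the remapping concrete, here is a minimal CPU-side sketch (my own illustration, not the tutorial's source) of the same decoding step the shader performs: each color channel is scaled from [0.0, 1.0] to [-1.0, 1.0] and the result is renormalized.

```cpp
#include <array>
#include <cmath>

// Decode a bump-map texel (r, g, b each in [0, 1]) into a unit normal:
// remap each channel to [-1, 1], then normalize the result.
std::array<float, 3> DecodeNormal(float r, float g, float b)
{
    std::array<float, 3> n = { r*2.0f - 1.0f, g*2.0f - 1.0f, b*2.0f - 1.0f };
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    return { n[0]/len, n[1]/len, n[2]/len };
}
```

For example, the typical light-blue color (0.5, 0.5, 1.0) that dominates most bump maps decodes to (0, 0, 1) - a normal pointing straight out of the surface.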
We haven't thought about one thing so far - what space the extracted normal is in. Just think about it - will the normals be the same if we map the texture onto a regular wall, then onto a roof, and then wrap it around an arbitrary model? Of course not! The normals change depending on the surface we are bump-mapping this texture onto. To be more exact, the extracted normals are in so-called Tangent Space - now that's a new term.
Tangent space is a space specific to each face of the model - a space local to its surface. Each vertex of a face has texture coordinates associated with it. We want the X axis of tangent space aligned with the direction in which the U texture coordinate increases, and the Y axis aligned with the direction in which the V texture coordinate increases. The last axis we need is Z, and this one is really simple, because the Z axis is simply the normal of the face.
Every single triangle (face) has three vertices - P0, P1 and P2. These vertices have corresponding texture coordinates (u0, v0), (u1, v1) and (u2, v2), respectively. Our ultimate goal is to find vectors T (tangent) and B (bitangent), so that we can express an arbitrary point Q on that triangle as a linear combination of these two vectors - in other words, if we multiply the texture coordinates of that point Q with the T and B vectors, we get the point itself. If we take P0 as the origin, and let Q1 and Q2 be the vectors obtained by subtracting P0 from P1 and P2, then we get this:

Q1 = (u1 - u0)·T + (v1 - v0)·B
Q2 = (u2 - u0)·T + (v2 - v0)·B
Now we have six equations and six unknowns (two vector equations, each with three components), so this linear system can be solved! You can look up the solution on the web or derive it yourself - either way you will end up with the T and B vectors. Once we have them, we can create a TBN matrix, which will be used for going from tangent space to object space:
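In case you don't feel like deriving it, the solution can be sketched like this (a standalone helper for illustration, not the tutorial's actual code; du/dv are the texture-coordinate deltas along the two triangle edges):

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Solve, per component, the 2x2 system: Q1 = du1*T + dv1*B, Q2 = du2*T + dv2*B.
// Q1 = P1 - P0 and Q2 = P2 - P0 are the triangle's edge vectors, and
// (du1, dv1) = (u1-u0, v1-v0), (du2, dv2) = (u2-u0, v2-v0).
void ComputeTangentBitangent(const Vec3& Q1, const Vec3& Q2,
                             float du1, float dv1, float du2, float dv2,
                             Vec3& T, Vec3& B)
{
    // Multiply by the inverse of the 2x2 texture-coordinate matrix
    float r = 1.0f / (du1*dv2 - du2*dv1);
    for (int i = 0; i < 3; i++)
    {
        T[i] = r * ( dv2*Q1[i] - dv1*Q2[i]);
        B[i] = r * (-du2*Q1[i] + du1*Q2[i]);
    }
}
```

As a sanity check: for a triangle whose UVs simply follow its X and Y positions, T comes out as (1, 0, 0) and B as (0, 1, 0), matching the intuition that T points along increasing U and B along increasing V.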
With the TBN matrix, we have two options. Either we get from tangent space to object space by multiplying the TBN matrix with a tangent-space vector, or we calculate the inverse of the TBN matrix and get from object space to tangent space. We will choose the second option. Why? Because it saves us a lot of computation - it's far simpler to transform one single vector into tangent space (in our case the light direction vector) and continue the calculations there than to convert every single normal for every pixel into object space and then calculate lighting as usual. And that's what we actually do - we convert our light direction vector into tangent space. For this we need the inverse TBN matrix, but because the TBN matrix is orthogonal, its inverse is simply its transpose (see the Wikipedia article on orthogonal matrices for why).
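Here is a tiny illustration of why the transpose does the job (again my own sketch, not the tutorial's code): with an orthonormal T, B, N basis, multiplying a vector by the transposed TBN matrix reduces to three dot products, one per tangent-space component.

```cpp
#include <array>

using Vec3 = std::array<float, 3>;

// Express an object-space vector v in tangent space. Because the TBN matrix
// (with columns T, B, N) is orthogonal, its inverse equals its transpose,
// and each row of the transpose is one of the basis vectors - so the
// transform is just a dot product per component.
Vec3 ToTangentSpace(const Vec3& T, const Vec3& B, const Vec3& N, const Vec3& v)
{
    auto dot = [](const Vec3& a, const Vec3& b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    };
    return { dot(T, v), dot(B, v), dot(N, v) };
}
```

With the identity basis the vector comes back unchanged; with a basis whose first two axes are swapped, the first two components swap accordingly.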
Once we have the light direction and normals in tangent space, the lighting calculations continue as usual. The difference is that every pixel now has a slightly altered normal defined by the bump map. So instead of using one (interpolated) normal across the whole face, we use a different normal for every pixel! The resulting lighting differences create that bumpy effect that tricks our minds into thinking the surface isn't flat. That's basically all the theory behind it, so let's get into the coding stuff.

As I have said, we now need to calculate tangent and bitangent vectors for every vertex in the model and send them to the shaders. Because every single vertex has its own tangent and bitangent vectors, we provide them as vertex attributes, just like we are used to with texture coordinates and normals. So if you remember the FinalizeVBO function, which just uploads all Assimp model data to the GPU, we now need to adjust it a little and also add the tangent and bitangent vectors:
{
// ... Set vertex, texture coordinates and normals as usual
// Now add bump mapping data (tangent and bitangent vectors for every vertex)
vboBumpMapData.BindVBO();
vboBumpMapData.UploadDataToGPU(GL_STATIC_DRAW);
// Tangent vector
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 2*sizeof(aiVector3D), 0);
// Bitangent vector
glEnableVertexAttribArray(4);
glVertexAttribPointer(4, 3, GL_FLOAT, GL_FALSE, 2*sizeof(aiVector3D), (void*)(sizeof(aiVector3D)));
}
Now let's take a look at the vertex shader and the changes made there. If bump mapping is enabled, we build the previously defined TBN matrix and, by inverting it (transposing it in this case), we calculate the light direction in tangent space:
// ... Classic stuff here (matrices, ins and outs...)
#include "dirLight.frag"
uniform DirectionalLight sunLight;
uniform int bEnableBumpMap;
out vec3 vLightDirTanSpace;
void main()
{
mat4 mMV = matrices.viewMatrix*matrices.modelMatrix;
mat4 mMVP = matrices.projMatrix*matrices.viewMatrix*matrices.modelMatrix;
vTexCoord = inCoord;
vEyeSpacePos = mMV*vec4(inPosition, 1.0);
gl_Position = mMVP*vec4(inPosition, 1.0);
vNormal = normalize(mat3(matrices.normalMatrix) * inNormal);
vWorldPos = (matrices.modelMatrix * vec4(inPosition, 1.0)).xyz; // vWorldPos is declared as vec3, so take .xyz
if(bEnableBumpMap == 1)
{
vec3 vTangent = inTangent;
vec3 vBitangent = inBitangent;
mat3 mTBN = transpose(mat3(vTangent, vBitangent, vNormal));
vLightDirTanSpace = normalize(mTBN * sunLight.vDirection);
}
else vLightDirTanSpace = vec3(0, 0, 0);
}
So as you can see - if bump mapping is enabled, we calculate the light's direction in tangent space and pass this value on to the fragment shader. If bump mapping is disabled, we simply set this vector to zero (we don't strictly have to, but it keeps the output defined). Let's look at the fragment shader now:
smooth in vec2 vTexCoord;
smooth in vec3 vNormal;
smooth in vec4 vEyeSpacePos;
smooth in vec3 vWorldPos;
out vec4 outputColor;
uniform sampler2D gSampler;
uniform sampler2D gNormalMap;
uniform vec4 vColor;
#include "dirLight.frag"
uniform DirectionalLight sunLight;
uniform vec3 vEyePosition;
uniform Material matActive;
uniform int bEnableBumpMap;
in vec3 vLightDirTanSpace;
void main()
{
vec4 vTexColor = texture2D(gSampler, vTexCoord);
vec4 vMixedColor = vTexColor*vColor;
vec3 vNormalized = normalize(vNormal);
if(bEnableBumpMap == 0)
{
vec4 vDiffuseColor = GetDirectionalLightColor(sunLight, vNormalized);
vec4 vSpecularColor = GetSpecularColor(vWorldPos, vEyePosition, matActive, sunLight, vNormalized);
outputColor = vMixedColor*(vDiffuseColor+vSpecularColor);
}
else
{
vec3 vNormalExtr = normalize(texture2D(gNormalMap, vTexCoord).rgb*2.0 - 1.0);
float fDiffuseIntensity = max(0.0, dot(vNormalExtr, -vLightDirTanSpace));
float fMult = clamp(sunLight.fAmbient+fDiffuseIntensity, 0.0, 1.0);
vec4 vDiffuseColor = vec4(sunLight.vColor*fMult, 1.0);
vec4 vSpecularColor = GetSpecularColor(vWorldPos, vEyePosition, matActive, sunLight, vNormalized);
outputColor = vMixedColor*(vDiffuseColor + vSpecularColor);
}
}
Some really important changes have happened in the fragment shader. First, notice that we have another sampler2D variable - this one is for the bump map texture. We also receive the light direction vector in tangent space. In the main function, we either proceed with the usual calculations if bump mapping is disabled, or we take the bump mapping path. With bump mapping enabled, we simply take the normal extracted from the bump map and calculate the diffuse intensity in tangent space. Note, however, that the specular lighting calculations are still done in object space, using the interpolated vertex normal.
This is the final result, and I think it looks very neat:
So that would do it! You can't believe how glad I am that I finally managed to write an article after 5 months! It's really a relief for me, because to be honest, my inner self suffers if I don't do anything creative, especially when I don't write these tutorials :'( . I hope you will keep liking my tutorials and I will do my best to continue producing this kind of stuff.
Hey Bub (bryced@gmail.com) on 14.06.2015 21:58:45 |
//vWorldPos = matrices.modelMatrix * vec4(inPosition.xyz, 1.0); vWorldPos = (matrices.modelMatrix * vec4(inPosition.xyz, 1.0)).xyz; I changed this line and now the bumpmap shows up but the wheelchair guy is off in the middle of the scene. |
Hey bub (bryced@gmail.com) on 14.06.2015 20:37:38 |
[img]http://i.imgur.com/mHZm6oV.jpg[/img] Thanks for the great tutorials. I like them so far but this one crashes on me. I've windows vista and an nvidia gtx 260. |
kybio on 05.02.2015 01:59:19 |
several errors in windows 8.1...binary file with errors... vertex shader failed error(#160) Cannot convert: "4- component vector of vec4" to :"smooth out 3- component vector of vec3" |
ostylk (ostylk@googlemail.com) on 29.09.2014 21:15:48 |
When do you want to rewrite your tutorials for the physics engine? |
M on 14.09.2014 21:28:39 |
Thanks a lot for all your great tutorials! Concerning this one: Do you intend to also demonstrate advanced techniques like Normal Mapping or (Steep) Parallax Mapping? |