
Download (3.32 MB)
16.) Rendering To A Texture

Welcome to the 16th OpenGL 3.3 tutorial. This time we're going to learn how to render to a texture. What is this good for? Imagine a situation where you have security cameras somewhere in the scene, and in another part of the scene there's a terminal where you want to see the camera image. How would you do that? Exactly! You must look at the scene from the camera's point of view, render the scene somewhere, and then copy the final image to a texture. Then you can apply that texture to the terminal's camera screen. This is probably the most common use, and it's called rendering to a texture.

Significant changes in the code

This tutorial also has some significant changes compared to all the previous ones - my opinions on coding style have changed recently, and I renamed all functions to begin with a capital letter. It just makes sense to differentiate between functions and variables, and a good way to do that is to have variable names start with a lower-case letter and function names with a capital letter. Another small change is that I unified the names of the deleting / releasing functions. They now all begin with Delete, just like in OpenGL. Previously, some classes had releasing functions beginning with Delete and some with Release, which was inconsistent.

In this tutorial, we'll create SpongeBob watching The Avengers. Well, not the actual Avengers movie, but a rotating and moving Thor model on the TV screen. So let's get familiar with some new terms.

Framebuffers and renderbuffers

If you haven't been introduced to framebuffers and renderbuffers yet and the terms alone make your head spin, then this tutorial should help you. Framebuffers and renderbuffers are two more types of OpenGL objects (so they're created the traditional way, with functions starting with glGen) that allow us to do off-screen rendering. That means you still render, but not onto the screen - instead, you render onto a virtual screen, a.k.a. a framebuffer. After that, you can read the framebuffer's contents - most commonly the final 2D image it produced - and create a texture from them, which you can apply anywhere. So what's a renderbuffer then? The thing is that a framebuffer consists of multiple renderbuffers, and you already know some of them. The default framebuffer (with name 0), which is used for normal on-screen rendering, has a color buffer (storing RGBA values), a depth buffer (storing pixel depths), an optional stencil buffer and maybe some other buffers. All these sub-buffers are called renderbuffers. So that's how it is - nothing difficult.

Working with framebuffers

Now we're getting to the part where you'll see how to use these objects. Using FBOs for the purpose of this tutorial can be summarized in these 6 steps:

1. Create a framebuffer object
2. Create an empty texture and attach it to the FBO as its color attachment
3. Create a depth renderbuffer and attach it to the FBO
4. Bind the FBO
5. Render the scene into the FBO
6. Unbind the FBO, return to on-screen rendering and use the rendered texture

These are all the important steps we need to do. We'll go through them one by one. But first, here is the framebuffer class that handles everything nicely in one place:


class CFramebuffer
{
public:
   // Steps 1 and 2 - creates the FBO and its initially empty texture
   bool CreateFramebufferWithTexture(int a_iWidth, int a_iHeight);

   // Step 3 - creates a depth renderbuffer and attaches it to the FBO
   bool AddDepthBuffer();
   // Step 4 - binds the FBO, optionally setting the viewport to its size
   void BindFramebuffer(bool bSetFullViewport = true);
   
   void SetFramebufferTextureFiltering(int a_tfMagnification, int a_tfMinification);
   // Binds the FBO texture, optionally regenerating its mipmaps
   void BindFramebufferTexture(int iTextureUnit = 0, bool bRegenMipMaps = false);

   glm::mat4 CalculateProjectionMatrix(float fFOV, float fNear, float fFar);
   glm::mat4 CalculateOrthoMatrix();

   void DeleteFramebuffer();

   int GetWidth();
   int GetHeight();

   CFramebuffer();
private:
   int iWidth, iHeight;      // FBO dimensions
   UINT uiFramebuffer;       // OpenGL framebuffer object name
   UINT uiDepthRenderbuffer; // OpenGL renderbuffer name for the depth buffer
   CTexture tFramebufferTex; // texture the FBO renders into
};

Let's go through the main functions. CreateFramebufferWithTexture does the first two steps. It calls the glGenFramebuffers function and creates an initially empty texture for the FBO (an empty texture is created using the normal glTexImage2D function, but with a NULL data pointer). An important thing to notice are the width and height parameters. Before using an FBO, you must specify its dimensions. They don't have to be powers of 2 - it will work with any numbers. In this tutorial I chose the framebuffer to have 512x256 dimensions. The important OpenGL call in this function is:


glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tFramebufferTex.GetTextureID(), 0);

This call attaches the texture to the framebuffer. The first parameter must be GL_FRAMEBUFFER, the second tells which part of the framebuffer this texture will store. GL_COLOR_ATTACHMENT0 is the framebuffer's color buffer. An FBO can have multiple color attachments, but GL_COLOR_ATTACHMENT0 is the default one, and the rendered image is stored there. If you want to have 2 color attachments, for example one for the normal RGB image and one for, let's say, a grayscale image, you can do this, but in the fragment shader, where you have the output color specified, you would have to specify it like this:


out vec4 outputColor;                     // Normal output, GL_COLOR_ATTACHMENT0, no need for layout keyword
layout(location = 1)out float fGrayscale; // One float per pixel for grayscale value, GL_COLOR_ATTACHMENT1

Notice that our texture is only RGB, but we actually output a vec4. It doesn't matter - OpenGL is intelligent enough to keep only the RGB values. It worked on both nVidia and AMD cards, so I won't dig into the deepest details of how compatible these outputs and textures must be, as long as it works.
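To tie the first two steps together, here is a minimal sketch of how a function like CreateFramebufferWithTexture could be implemented. The member names come from the class above, but since the internals of CTexture aren't shown in this article, the texture is created here with raw OpenGL calls and a local GLuint instead:


bool CFramebuffer::CreateFramebufferWithTexture(int a_iWidth, int a_iHeight)
{
   iWidth = a_iWidth;
   iHeight = a_iHeight;

   // Step 1 - create and bind the framebuffer object
   glGenFramebuffers(1, &uiFramebuffer);
   glBindFramebuffer(GL_FRAMEBUFFER, uiFramebuffer);

   // Step 2 - create an initially empty RGB texture (NULL data pointer)...
   GLuint uiTexture;
   glGenTextures(1, &uiTexture);
   glBindTexture(GL_TEXTURE_2D, uiTexture);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, iWidth, iHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);

   // ... and attach it to the FBO's first color attachment
   glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, uiTexture, 0);

   return glGetError() == GL_NO_ERROR;
}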

Let's get into step 3 - adding the depth buffer. It's all in the function AddDepthBuffer. The depth buffer isn't part of a newly created FBO by default, so we must add it there. Therefore, we create one renderbuffer using the function glGenRenderbuffers. After that, we bind it (as with all OpenGL objects we're about to work with) and initialize its size and type with:


glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, iWidth, iHeight);

The first parameter must be GL_RENDERBUFFER, the second is the renderbuffer's internal format - for the depth buffer I used GL_DEPTH_COMPONENT24, which is a depth buffer with 24-bit precision (3 bytes per pixel) - and the last two parameters are the renderbuffer's width and height. These two must match the dimensions of the FBO we want to attach the renderbuffer to. The final step is the actual attachment of the renderbuffer to the FBO using this function:


glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, uiDepthRenderbuffer);

The first parameter must be GL_FRAMEBUFFER, the second specifies that we're attaching a depth buffer, the third must be GL_RENDERBUFFER and the last is the ID of the previously generated renderbuffer.
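Putting step 3 together, a sketch of AddDepthBuffer might look like this (it assumes the FBO is still bound from the creation step, since glFramebufferRenderbuffer works on the currently bound framebuffer):


bool CFramebuffer::AddDepthBuffer()
{
   // Create and bind the renderbuffer that will serve as the depth buffer
   glGenRenderbuffers(1, &uiDepthRenderbuffer);
   glBindRenderbuffer(GL_RENDERBUFFER, uiDepthRenderbuffer);

   // Allocate 24-bit depth storage matching the FBO dimensions
   glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, iWidth, iHeight);

   // Attach the renderbuffer to the currently bound FBO as its depth attachment
   glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, uiDepthRenderbuffer);

   // Verify that the FBO is now complete and usable
   return glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
}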

Now the FBO is ready to be used. Let's have a look into the RenderScene function. Before we render the real on-screen scene, we're going to render The Avengers scene into our FBO. That's why we bind our FBO with a call to glBindFramebuffer, which has two parameters - the first is always GL_FRAMEBUFFER, and the second is the FBO ID. This is wrapped in the BindFramebuffer function of our class, and it's step 4.
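Because the FBO usually has different dimensions than the window (512x256 here), BindFramebuffer also takes the bSetFullViewport parameter. The viewport handling below is my reading of that parameter, so treat this as a sketch:


void CFramebuffer::BindFramebuffer(bool bSetFullViewport)
{
   // Redirect all subsequent rendering into our FBO
   glBindFramebuffer(GL_FRAMEBUFFER, uiFramebuffer);

   // Match the viewport to the FBO dimensions instead of the window's
   if (bSetFullViewport)
      glViewport(0, 0, iWidth, iHeight);
}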

Now we're good to proceed with rendering our Avengers scene. We render the normal way, just like rendering on-screen, but the results are stored in the FBO and its associated texture. This is step 5, and after all the rendering is done, our texture is ready - the image has been written directly into it.
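Condensed into code, this part of RenderScene could look roughly like this (fbo stands for our CFramebuffer instance, and RenderAvengersScene is a hypothetical helper standing in for the actual drawing code):


// Step 4 - redirect rendering into the FBO
fbo.BindFramebuffer();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Step 5 - render the off-screen scene as usual; the image ends up in the FBO texture
RenderAvengersScene(); // hypothetical helper: draws the rotating, moving Thor model

// Step 6 - return to the default framebuffer (more on this below)
glBindFramebuffer(GL_FRAMEBUFFER, 0);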

The last step is unbinding the FBO and returning to normal on-screen rendering. This is done by calling glBindFramebuffer with FBO ID 0. Our texture is now ready to be mapped anywhere. Notice, however, that if you want to use texture filtering with mipmaps, you must recalculate them every frame. That's why the BindFramebufferTexture function of our class takes 2 parameters - the first is the texture unit, and the second is whether the mipmaps should be regenerated. I selected mipmap filtering, even trilinear, so the mipmaps definitely must be recalculated.
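A sketch of BindFramebufferTexture itself - it relies only on the GetTextureID function of CTexture that we already used when attaching the texture:


void CFramebuffer::BindFramebufferTexture(int iTextureUnit, bool bRegenMipMaps)
{
   // Select the requested texture unit and bind the FBO texture to it
   glActiveTexture(GL_TEXTURE0 + iTextureUnit);
   glBindTexture(GL_TEXTURE_2D, tFramebufferTex.GetTextureID());

   // The FBO contents change every frame, so the mipmaps must be rebuilt on demand
   if (bRegenMipMaps)
      glGenerateMipmap(GL_TEXTURE_2D);
}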

Result

What we just programmed looks like this:

and it's not bad. I hope you enjoyed this tutorial, and if you have never rendered to a texture before, I hope it's clear to you now. If you don't understand something, feel free to ask in the comments or mail me. Have a nice day.



Comments
TheWeepingCorpse on 23.11.2012 00:46:15
Is it possible to use an existing depth buffer? I want to have several render-to-texture sequences, all using the same depth buffer for image composition.
Hertz (diehertz@gmail.com) on 24.08.2012 13:56:57
As we'll find out later, it's very useful for realistic water and other environment effects, where you want a reflection of the real scene :-)
Have you read some books like GPU Gems? Some real cool techniques described in them :-)
Michal Bubnar (michalbb1@gmail.com) on 25.08.2012 12:15:47
I know them, but I have only read a part of Gems 1, so not that much. And you're right about those effects