Please put your answers to written questions in this lab, if any, in a Markdown file named
README.md in your lab repo.
In Lab 9, you learned how OpenGL stores vertex data in two types of objects: vertex buffer objects (VBOs) and vertex array objects (VAOs). You also learned how to work with scene data in real time. But what about 2-dimensional data? In previous projects, you worked with a canvas object that displayed your results on the screen. But how does this work in OpenGL? So far, we have seen the real-time pipeline up to the final step in this diagram: the framebuffer.
Oversimplifying a bit, the framebuffer is the 2D canvas that OpenGL draws to when running the shader program you wrote in the last lab. So far, you haven't had to worry about this, since you have been working with the default framebuffer that OpenGL provides, which happens to be your application window.
But what if you don't want to draw on the screen? What if you want to draw onto a texture and save it for later? This is where making your own framebuffer objects comes in!
By the end of this lab, you will be able to:
- Understand how textures work in OpenGL,
- Understand framebuffers and when to use them,
- Draw your framebuffers onto your screen,
- Apply cool post-processing effects in real-time!
Before we dive into the 2D data we draw to, let's think about a common form of 2D data we use in our own scenes…Textures!
In OpenGL, a texture is a regular grid of values which can be read from and written to. We'll only be interacting with 2D textures in this lab, though OpenGL also supports 1D and 3D (i.e. volumetric) textures. Texture data can flow from the CPU to the GPU (just like vertex data) to be read in a shader, or the other way around if you'd like to export a texture back to the CPU.
In Qt, the most common representation of an image is the `QImage`. Let's create one of our own!
The `QImage` constructor takes two parameters: a file path formatted as a `QString`, and an optional format specification.
In `initializeGL`, store a `QImage` in the `m_image` member variable using the relative file path (`kitten_filepath`) of the `kitten.png` image in our project.
Be sure to have set your working directory to your project directory in order for relative paths to work.
Now let's format our `QImage` to fit OpenGL conventions. Unlike OpenGL, which places its UV coordinate space origin at the bottom left (you will learn about UV coordinates soon), a `QImage` stores its data with the origin at the top left. Therefore, one of our tasks is to mirror the image vertically. The second is to ensure that we have 8-bit color channels for R, G, B, and A (4 channels in total).
To do this, let's overwrite our `m_image` like so:
```cpp
m_image = m_image.convertToFormat(QImage::Format_RGBA8888).mirrored();
```
Now that we have our Qt formatted image, let's put it in an OpenGL texture. To start, we need to generate a texture using the following function:
In `initializeGL`, generate a texture and store its id in `m_kitten_texture`. You may find the following function useful:
- `n`: This indicates the number of textures we wish to generate.
- `textures`: This is the pointer this function will fill in with an id for the generated texture. Multiple textures can be filled in by passing a pointer to the first element of an array of sufficient size, or any pointer with sufficient allocated space after it.
Before we work with the texture, we need to bind it to the state machine.
In `initializeGL`, bind our `m_kitten_texture`. You may find the following function useful:
- `target`: This indicates the type of texture we are binding. Since we are using a 2D texture, our target will be `GL_TEXTURE_2D`.
- `texture`: This is the id of the texture we just generated.
Now we have an empty texture sitting in the `GL_TEXTURE_2D` target of our state machine. Let's fill it with our image data!
After binding our texture in `initializeGL`, load our `m_image` data into `m_kitten_texture`. You may find the following useful:
- `target`: This indicates the type of texture we are binding. For now, use `GL_TEXTURE_2D`.
- `level`: This corresponds to the level-of-detail of the image. Since we do not wish to produce a mipmap (scaled texture) at the moment, use 0 for this parameter.
- `internalformat`: This indicates the color format that will be contained within the OpenGL texture object. This is different from the `format` parameter seen later! For now, use `GL_RGBA`.
- `width`: This is the width of the desired OpenGL texture. For our image, this is the width of `m_image`.
- `height`: This is the height of the desired OpenGL texture. For our image, this is the height of `m_image`.
- `border`: This indicates whether or not the image should have a border. For our purposes, we will use the value 0 to indicate no border.
- `format`: This indicates the color format of the input pixel data. In our case, the sample image uses `GL_RGBA`.
- `type`: This indicates the data type of the input pixel data. We will use `GL_UNSIGNED_BYTE`, since it assigns 8 bits per component of R, G, B, and A.
- `data`: This is the pointer to our pixel data. We can get this from our `QImage` via its `bits()` accessor.
Before we use the texture, we need to specify some of its behavior, in particular what happens if the image needs to be scaled up or down. Consider the situation where our fragment lies between two pixels in our texture: which color should it output? These are parameters we can control, and in our case we will ask OpenGL to linearly interpolate between the nearby pixels.
After adding our image data in `initializeGL`, set the minify and magnify filters to use linear interpolation. You will find the following function useful:
- `target`: This indicates the target we bound our texture to. For our purposes, use `GL_TEXTURE_2D`.
- `parametername`: This is an enum for the parameter we wish to set. Hint: look at the linked documentation to find the relevant parameter names ;)
- `parametervalue`: This is the value we wish to set for our chosen parameter. See the documentation for a list of parameters and their possible values.
After setting our parameters in `initializeGL`, unbind our texture from the `GL_TEXTURE_2D` target. We can do this by binding texture id 0.
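Putting the steps above together, here is a minimal sketch of what the texture setup in `initializeGL` might look like. The parameter choices mirror the descriptions above; treat this as a reference to check your own code against, not a drop-in implementation:

```cpp
// Load and format the image (kitten_filepath is the path provided in the project).
m_image = QImage(kitten_filepath);
m_image = m_image.convertToFormat(QImage::Format_RGBA8888).mirrored();

// Generate and bind a texture object.
glGenTextures(1, &m_kitten_texture);
glBindTexture(GL_TEXTURE_2D, m_kitten_texture);

// Upload the QImage's pixel data into the bound texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_image.width(), m_image.height(),
             0, GL_RGBA, GL_UNSIGNED_BYTE, m_image.bits());

// Linearly interpolate when the texture is minified or magnified.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Return to default state.
glBindTexture(GL_TEXTURE_2D, 0);
```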
Now, how do we work with the texture we just created? We can create a uniform variable for it just how we did in the shaders lab for different data types!
Let's begin by creating a uniform variable in our shader that will hold our texture. The GLSL data type for a 2D texture is `sampler2D`. Add a `sampler2D` uniform variable to the `texture.frag` shader file.
Now, how do we set this variable? Textures in OpenGL are accessed through a concept called texture slots.
So far, we have told you that you can just bind a texture using `glBindTexture`. This is actually only half correct. There is an additional piece of state that determines which textures are available on the GPU: the texture slot. Most devices support at least 32 different texture slots, and by default, slot 0 is bound. So when we called `glBindTexture`, we were actually binding our texture object to slot 0. The reason we have multiple slots is so that a shader can sample from multiple textures at the same time.
To set our uniform, we first need to load our texture into a texture slot, and then indicate which slot index should be sampled in our shader.
To load a texture into a texture slot, the steps are:
- Set the current active texture slot
- Bind the texture
The first call is `glActiveTexture`:
- `texture`: This is an enum that represents the texture slot. They are in the format `GL_TEXTUREi`, where `i` is an integer representing the slot number. For example, `GL_TEXTURE3` is texture slot 3.
The second call is the same `glBindTexture` call we have seen before.
In `initializeGL`, just before where we previously bound our texture, explicitly set the active texture slot to slot 0.
The uniform value is represented by an int (hint: think `glUniform1i`) that corresponds to the texture slot we want to sample from. That is, if we bind our texture to slot 0, we should set our uniform value to 0, and so on.
In `initializeGL`, set the uniform value for the `sampler2D` you created in your fragment shader to the same texture slot number we bound our texture to. Remember to call `glUseProgram` and get the variable location before setting it! Also be sure to return to the default state of program 0 afterwards.
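As a sketch, the slot and uniform bookkeeping might look like the following. The uniform name `u_texture` is a hypothetical placeholder; use whatever name you gave your `sampler2D` in `texture.frag`:

```cpp
// Tell the sampler2D uniform to read from texture slot 0.
glUseProgram(m_texture_shader);
// "u_texture" is a hypothetical name; match it to your texture.frag uniform.
GLint loc = glGetUniformLocation(m_texture_shader, "u_texture");
glUniform1i(loc, 0);
glUseProgram(0);

// When binding the texture, make slot 0 active first.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_kitten_texture);
```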
In `paintTexture`, before calling `glDrawArrays`, be sure to bind the input texture parameter to texture slot 0 (consider using `glActiveTexture` and `glBindTexture`).
At this point, we have set our uniform variable for our texture, and are going to use it within our shader program. However, we currently have no triangles to draw our textures on.
If you recall the OpenGL coordinate system, the limits of the screen in both x and y range from -1 to 1.
What we can do with this is use it to construct what is known as a fullscreen quad. Think of a fullscreen quad as a projector screen that we drape down in our scene that happens to be just the right size to cover the entire screen so that we can't see behind it, but also can see it in its entirety.
In `initializeGL`, notice the fullscreen quad VBO data in `fullscreen_quad_data`. Edit it so that the quad covers the entire screen.
Mess around with `fullscreen_quad_data`! What happens when you change the z coordinate? What happens when you only change a single vertex?
Great! Now we have the shape onto which we'll plaster our texture, but how do we (texture-)map the image to the surface? Enter a new vertex attribute: UV coordinates!
The UV coordinate attribute tells us at what point in the sampled texture should each vertex correspond to. The lower left corner is set to be (0, 0) and the upper right corner to be (1, 1) as in the following image:
Ahead is a small chunk of tasks for adding UV coordinates to your fullscreen quad. Do not fret! Most of these calls are reminiscent of ones you have seen before.
Pick 6 corresponding UV coordinates to pair with each vertex position in `fullscreen_quad_data`. Make sure the bottom left corner of the fullscreen quad corresponds with the bottom left corner of the texture, and the upper right corner of the fullscreen quad with the upper right corner of the texture!
Hint: This should be the same process as adding color to your triangle data from lab 9!
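As one possible layout (assuming interleaved x, y, z, u, v vertices, which is an assumption rather than a requirement), the quad data could look like this:

```cpp
#include <vector>

// Interleaved (x, y, z, u, v) data for a fullscreen quad: two triangles
// covering NDC [-1, 1] in x and y, with UVs mapping the bottom-left corner
// of the quad to (0, 0) and the top-right corner to (1, 1).
std::vector<float> fullscreen_quad_data = {
    //  x     y    z     u    v
    -1.f,  1.f, 0.f,  0.f, 1.f,  // top left
    -1.f, -1.f, 0.f,  0.f, 0.f,  // bottom left
     1.f, -1.f, 0.f,  1.f, 0.f,  // bottom right
    -1.f,  1.f, 0.f,  0.f, 1.f,  // top left
     1.f, -1.f, 0.f,  1.f, 0.f,  // bottom right
     1.f,  1.f, 0.f,  1.f, 1.f,  // top right
};
```

Both triangles are wound counter-clockwise, which matches OpenGL's default front-face convention.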
In `initializeGL`, update the VAO attribute information to include the newly added UV coordinates. Make sure to update the position attribute as well, since its stride has changed.
Add the UV attribute as a layout variable in `texture.vert`. Remember that the attribute index you use in `initializeGL` for the VAO should match that of the layout variable in the shader!
Create an in/out variable pair to pass the UV coordinates from `texture.vert` to `texture.frag`. Also be sure to set the out variable of `texture.vert` equal to the layout input variable we created in the last step!
Finally, now that we have structured and funneled the appropriate data all the way down to the fragment shader, we can color our pixels with our texture.
In `texture.frag`, set the fragment color based on the texture color at the fragment's UV coordinate. You can sample a `sampler2D` uniform with the following function:
- `sampler`: This is the name of the `sampler2D` we wish to sample from.
- `P`: This is the UV coordinate at which we wish to sample the texture.
In GLSL, there are special vector types for ints, unsigned ints, doubles, and floats, given by `ivec4`, `uvec4`, `dvec4`, and `vec4` respectively. The same goes for `sampler2D`s. The g in `gsampler2D` is just a placeholder for any one of these types.
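Tying the last few tasks together, a minimal sketch of the two shaders might look like the following. The variable names (`uv_in`, `uv`, `u_texture`) and the attribute indices are hypothetical placeholders; match them to your own code:

```glsl
// texture.vert (sketch)
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec2 uv_in;
out vec2 uv;

void main() {
    uv = uv_in;                       // pass the UV attribute through
    gl_Position = vec4(position, 1.0);
}

// texture.frag (sketch)
#version 330 core
in vec2 uv;
uniform sampler2D u_texture;          // hypothetical uniform name
out vec4 fragColor;

void main() {
    fragColor = texture(u_texture, uv);
}
```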
Uh Oh 😦
If you run the program at this stage, only half of the cat is visible! Check `paintTexture` to see if you can find the bug!
Hint: It looks like only half of the fullscreen quad's vertices were drawn.
With the bug fixed, you should see something cute like this on your screen!
Congrats on making it through textures in OpenGL! Feel free to treat this point as a stopping point for week 1 of this lab. Take a well deserved break and come back for the finale: framebuffers!
Framebuffer: A portion of memory containing bitmaps that can drive displays.
Fancy language aside, let's break down the term (framebuffer object) into its components:
- Frame: Canvas or screen space, as in the "next/current frame"
- Buffer: Data storage
- Object: Something, in this case a container that holds other information
Framebuffers contain things in OpenGL known as attachments. These include color buffers, depth buffers, and stencil buffers, which you will learn more about shortly. Each buffer is represented by a sub-object, which is either:
- A Texture
- A Renderbuffer
So far you have already been working with textures. As you may recall, textures are usually 2D objects which have both read and write capabilities.
This is where renderbuffers differ! A renderbuffer can serve almost the same purpose as a texture, except that it can only be written to.
You can remember this functionality by thinking of the name. Renderbuffer refers to an object that can be rendered or written to.
Unknowingly, we have actually used framebuffers before: the default framebuffer happens to be the application window!
Alongside this, we have also seen one of the attachments before. So far in the fragment shader, we have explicitly written to `fragColor`, which sets pixels in the color buffer to whatever we choose!
The ones we have not directly seen are depth and stencil buffers.
Depth buffers contain information about how far away a specific pixel is from the camera.
We can enable depth testing by calling `glEnable(GL_DEPTH_TEST)`. This allows OpenGL to store depth information in the default framebuffer's depth buffer, so that if we draw two triangles on top of one another, OpenGL can tell which should display on top.
Stencil buffers contain information that is generated by special masks that enables certain pixels to be drawn or not. This is not important at the moment, but if you wish to draw something like outlines or recursive portals, this comes in handy.
Next, we will generate and bind our own FBO.
In `makeFBO`, generate an FBO and store its id in `m_fbo`, then bind it. You may find the following functions useful:
- `n`: This indicates the number of framebuffer objects to create.
- `framebuffers`: This is the pointer this function will fill in with an id for the generated FBO, so you can refer to it later by the same stored id. Multiple FBOs can be filled in by passing a pointer to the first element of an array of sufficient size.
- `target`: This indicates the type of framebuffer we wish to bind. For our purposes, use `GL_FRAMEBUFFER`.
- `framebuffer`: This is the id of the FBO we wish to bind.
Before we configure our FBO, we need to generate containers for its attachments. As stated previously, these can be either Textures or Renderbuffers. Let's try using a Texture for our color attachment.
In `makeFBO`, generate an empty texture in `m_fbo_texture` and bind it to texture slot 0. This will be used to store our color buffer.
Why are we binding to texture slot 0?
You may recall that earlier we already bound one texture to slot 0, so why are we overwriting that slot with a new texture? We could use a separate texture slot and have both textures actively bound, but this is typically only necessary when we actually need both textures at the same time, for example when a shader samples from multiple textures at once. However, we will only ever be using one texture in our shader at a time.
To generate an empty texture, pass in a `nullptr` for the data when calling `glTexImage2D`. Be sure to set the same minify and magnify parameters, and return to default state by unbinding the texture once we have set all its parameters. When setting the width and height, use `m_fbo_width` and `m_fbo_height`.
What are `m_fbo_width` and the other size members initialized with?
For high density displays such as retina displays on many Mac computers, there is a discrepancy between the physical pixels on the screen and the device-independent pixels in a window. This parameter will take into account this ratio and generate the proper width and height of the window in pixels. You can read more here.
Next, let's instead use a Renderbuffer to store our depth and stencil attachments.
Why not use another Texture?
If you recall, as opposed to Textures, Renderbuffers can only be written to. This allows OpenGL to make some behind-the-scenes optimizations. For depth information, we do not need to sample it like we do the colors of a texture. Later on, we will redraw our FBO using its color attachment texture, just as we used `m_kitten_texture` to draw back to the screen. But depth calculations are entirely self-contained during the render process, and we do not need to sample them for our purposes.
In `makeFBO`, generate a renderbuffer in `m_fbo_renderbuffer`, bind it, then set its configuration using `glRenderbufferStorage`. You may find the following functions useful:
- `n`: This indicates the number of renderbuffer objects to create.
- `renderbuffers`: This is the pointer this function will fill in with an id for the generated RBO, so you can refer to it later by the same stored id. Multiple RBOs can be filled in by passing a pointer to the first element of an array of sufficient size.
- `target`: This indicates the type of renderbuffer we wish to bind. For our purposes, use `GL_RENDERBUFFER`.
- `renderbuffer`: This is the id of the RBO we wish to bind.
- `target`: This indicates the renderbuffer target we wish to work with. For our purposes, use `GL_RENDERBUFFER`.
- `internalformat`: This indicates the pixel data format that will be contained within the renderbuffer. We will be using `GL_DEPTH24_STENCIL8`, which indicates 24 bits for the depth component and 8 for the stencil component.
- `width`: This is the width of the desired renderbuffer. For our purposes, use `m_fbo_width`.
- `height`: This is the height of the desired renderbuffer. For our purposes, use `m_fbo_height`.
Be sure to unbind our Renderbuffer once we have set its configuration!
Now we can attach both of our attachments to the FBO we generated.
In `makeFBO`, after binding our FBO, attach both the color and depth/stencil attachments. You may find the following functions useful:
- `target`: This indicates the type of framebuffer we are working with. Just like in `glBindFramebuffer`, use `GL_FRAMEBUFFER`.
- `attachment`: This is the specific attachment we wish to add. Since we are attaching a color buffer, use `GL_COLOR_ATTACHMENT0`.
- `textarget`: This indicates the type of texture we are using. In our case, use `GL_TEXTURE_2D`.
- `texture`: Here we place the texture object we wish to use. In our case, this is `m_fbo_texture`.
- `level`: This specifies which mipmap or level-of-detail texture we want to use. We want full detail, so use 0.
- `target`: This indicates the type of framebuffer we are working with. Just like in `glBindFramebuffer`, use `GL_FRAMEBUFFER`.
- `attachment`: This is the specific attachment we wish to add. Since we are attaching a depth and stencil buffer, use `GL_DEPTH_STENCIL_ATTACHMENT`.
- `renderbuffertarget`: This indicates the type of renderbuffer we are using. In our case, use `GL_RENDERBUFFER`.
- `renderbuffer`: Here we place the renderbuffer object we wish to use. In our case, this is `m_fbo_renderbuffer`.
In `makeFBO`, return to default state by unbinding our FBO. Do this by binding the FBO with id `m_defaultFBO`. This is a `const GLuint` currently set to 0, which may not work properly, as you will find out later.
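Assembling all of the `makeFBO` tasks, a sketch of the whole function might look like the following. The class name `GLRenderer` is an assumption based on the file names used later in this lab; verify each call against the documentation:

```cpp
void GLRenderer::makeFBO() {
    // Color attachment: an empty texture we can later sample from.
    glGenTextures(1, &m_fbo_texture);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_fbo_texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_fbo_width, m_fbo_height,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);

    // Depth/stencil attachment: a write-only renderbuffer.
    glGenRenderbuffers(1, &m_fbo_renderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, m_fbo_renderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8,
                          m_fbo_width, m_fbo_height);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);

    // The FBO itself, with both attachments hooked up.
    glGenFramebuffers(1, &m_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, m_fbo_texture, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, m_fbo_renderbuffer);

    // Return to default state.
    glBindFramebuffer(GL_FRAMEBUFFER, m_defaultFBO);
}
```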
Let's see what happens when we draw to our FBO! But first, let's get our sphere from lab 10 showing on the screen again.
In `paintGL`, comment out your current code and uncomment the segment labeled for this task, which should get us back to the sphere from the previous lab.
At this point, you should see the same sphere you saw in lab 10 on your screen:
Next, let's try drawing to our custom framebuffer! In OpenGL, `glDrawArrays` will draw into whichever framebuffer is currently bound, so let's change the current FBO.
In `paintGL`, before `glClear` is called, bind our FBO.
If you run the program, you should see a black screen:
Instead of drawing to the screen, we drew to an offscreen framebuffer. So where is our output image stored now? In `m_fbo_texture`! This is the texture that we set up to store our color attachment.
To fix this, let's create a way to draw our color buffer back onto the screen. Luckily, this is the exact same operation we have been doing in `paintTexture`!
Next, let's make sure we draw to the default framebuffer after we have drawn offscreen. In `paintGL`, after `paintExampleGeometry` is called, bind the default framebuffer by binding the `m_defaultFBO` value, which is declared near the top of the file with a value of 0. This may need to change in a little bit, so be sure to bind the variable rather than the plain value of 0!
Next, we can call `glClear` once more to clear our screen. In `paintGL`, after binding the default framebuffer, clear the screen using `glClear`. Be sure to clear both the color and depth bits!
Now we want to draw our fbo color attachment onto the screen as a texture. This should be the same process we used to draw the kitten to our screen earlier!
In `paintGL`, after clearing the screen, call `paintTexture` with `m_fbo_texture` to draw our FBO's color attachment onto our previous fullscreen quad.
If you run the program now, you still may see a black screen. Why is this? In Qt, the default framebuffer does not actually correspond to id 0 :(. Instead, the default begins at a value of 1; then, for each FBO that is created, the default FBO handle increments by 1. That is, since we created 1 FBO of our own, the default FBO will have a handle value of 2. Near the top of `initializeGL`, change `m_defaultFBO` from a value of 0 to 2. If still nothing shows up, please let a TA know.
On high pixel density displays such as that of retina displays of Mac laptops, you may see the image appear in the bottom left quadrant rather than the full screen. Why is this?
This is because when we drew to our custom framebuffer, we never told OpenGL what size of "screen" it was drawing to! As we have seen before, OpenGL space is represented by a box with x and y dimensions ranging from -1 to 1. What OpenGL needs to know is how to map the pixel space on the screen, say 600 x 400, to the OpenGL space just defined, with limits of -1 to 1 on both x and y. As best practice, whenever switching between framebuffers of differing sizes, it is important to call `glViewport`.
In `paintGL`, after binding `m_fbo`, set the viewport to the appropriate size. After binding `m_defaultFBO`, also set the viewport to the screen size. The following function may be useful:
- `x`: This is the x coordinate of the lower left corner of our viewport. For our purposes, this is 0.
- `y`: This is the y coordinate of the lower left corner of our viewport. For our purposes, this is also 0.
- `width`: This is the width of the FBO we wish to draw to. For our FBO, we can use `m_fbo_width`, which should match what we used to generate our FBO attachments. For our screen, use the corresponding screen-width member.
- `height`: Similarly to width, this specifies the height of the FBO we are drawing to. For our FBO, we can use `m_fbo_height`, which should match what we used to generate our FBO attachments. For our screen, use the corresponding screen-height member.
Why are x and y both 0?
You can think of the screen as a giant texture with its origin at the bottom left corner. Because of this, in the above call we are specifying which pixel coordinates we wish to draw to. In general, we want pixel (0, 0) to map to the lower left corner of the framebuffer we are drawing to.
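Putting the draw loop together, a sketch of `paintGL` might look like the following. `m_screen_width` and `m_screen_height` are hypothetical names for your screen-size members, and the class name `GLRenderer` is also an assumption:

```cpp
void GLRenderer::paintGL() {
    // 1. Draw the scene offscreen into our custom FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);
    glViewport(0, 0, m_fbo_width, m_fbo_height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    paintExampleGeometry();

    // 2. Draw the FBO's color texture onto the screen via the fullscreen quad.
    glBindFramebuffer(GL_FRAMEBUFFER, m_defaultFBO);
    glViewport(0, 0, m_screen_width, m_screen_height); // hypothetical members
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    paintTexture(m_fbo_texture);
}
```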
At this stage, you should finally be back to seeing the same sphere you saw a few steps ago!
If you are still having issues with your image being shrunk or in the wrong location, it is important first to determine where this is happening: in the draw to the custom framebuffer, or in the draw to the screen. One way to differentiate the two is in our drawing loop: set the clear color to red before clearing the screen the first time, and to blue before clearing the second time. Then you will be able to tell which step is not drawing to the region you hoped. Here is an example of this scheme when we set the viewport the second time around to half the desired size (with the lower left corner still at (0, 0)):
And here is an example when the viewport the first time around is set to half the desired size (with the lower left corner still at (0, 0)):
So far, our post-processing shader has done nothing but redraw our scene as-is. Let's take advantage of having the pixels at our disposal and make some alterations. First, let's add some customizability to our texture shader program.
In `texture.frag`, add a uniform bool which will be used to toggle post-processing on and off.
In `glRenderer.h`, change the function signature of `paintTexture` to include an extra boolean parameter indicating whether or not to post-process the texture. In `glRenderer.cpp`, correct the function signature of `paintTexture` to match what you wrote in `glRenderer.h`, and be sure to update any previous calls to `paintTexture` to take this into account. In particular, in `paintGL`, update your call to `paintTexture` to turn filtering on.
A bool uniform can be set using the `glUniform1i` function. In `paintTexture`, use `glUniform1i` to set your custom bool uniform in `m_texture_shader` depending on the input bool parameter you added to `paintTexture`.
If we wish to invert our colors, since each color value in OpenGL is on a 0 to 1 scale, we can simply take any color channel value and subtract it from 1 to get the inverse of that channel.
In `texture.frag`, if your custom boolean is set to true, invert the channels of `fragColor`.
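As a sketch, the inversion in `texture.frag` might look like this, where `u_texture` and `u_postProcess` are hypothetical names for your sampler and toggle uniforms:

```glsl
in vec2 uv;
uniform sampler2D u_texture;   // hypothetical name
uniform bool u_postProcess;    // hypothetical name
out vec4 fragColor;

void main() {
    fragColor = texture(u_texture, uv);
    if (u_postProcess) {
        // Each channel is in [0, 1], so 1 - channel gives the inverse color.
        fragColor = vec4(vec3(1.0) - fragColor.rgb, 1.0);
    }
}
```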
If you run the program, you should see your sphere has been warped into an inverse shadow realm!
There are many cool effects you can apply using framebuffers, including kernel-based image filtering (which you will do in Project 6: Action!), particle effects, or even portals using the FBO's stencil attachment!
If you resize the screen, you will notice some bizarre behavior. Why is this? Well, while we resized our default framebuffer, our custom offscreen framebuffer hasn't changed at all. Unfortunately, it is not easy to resize the FBO attachments we generated individually. However, we can do something simple: delete and remake our FBO! Because we have a dedicated `makeFBO` function, remaking it is a lot simpler.
In `resizeGL`, delete our FBO attachments and the FBO itself using the corresponding `glDelete*` functions, then use `makeFBO` to regenerate a new framebuffer.
Note that we have already updated `m_fbo_width` and `m_fbo_height` for you in `resizeGL`.
Before we exit, it is important to delete any memory we allocated on the GPU. In `finish`, delete our kitten texture, FBO texture, FBO renderbuffer, and the FBO itself.
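A sketch of the cleanup in `finish` might look like this, again assuming the member names used throughout the lab and a `GLRenderer` class name:

```cpp
void GLRenderer::finish() {
    // Release everything we allocated on the GPU, in the reverse spirit
    // of how it was created.
    glDeleteTextures(1, &m_kitten_texture);
    glDeleteTextures(1, &m_fbo_texture);
    glDeleteRenderbuffers(1, &m_fbo_renderbuffer);
    glDeleteFramebuffers(1, &m_fbo);
}
```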
Submit your GitHub link and commit ID to the "Lab 11: Textures & FBOs" assignment on Gradescope, then get checked off by a TA at hours.
Reference the GitHub + Gradescope Guide here.