


TCS Tutorial: Graphics Shaders Basics


Introduction

The awesome graphics of today’s video games are made possible by two processors working together in our computers: a Central Processing Unit (CPU) for general data and a dedicated, high-speed Graphics Processing Unit (GPU) for graphics data.

A Graphics Shader is a piece of code that is executed repeatedly on a video card. Game engines pass 2D and 3D content to Graphics Shaders. There, the content can be combined in nearly limitless ways before it is rendered to a 2D plane (your screen) on a per-pixel basis.

Post-processing steps involve taking these rendered 2D images and passing them again through the video card in additional Graphics Shaders to apply various effects like glow and blur.

Researchers have come up with many algorithms that attempt to mimic how light bounces off or passes through objects, how shadows are cast, how particulate matter in the air is rendered, how fluids move, and all of the other physics systems that make up our reality.

Many of these systems are faked in one way or another to improve performance. The more visually impressive methods often come with huge processing overhead. Movie studios have the advantage of rendering high-quality graphics on supercomputing server farms, and even then, a single frame could take days to render.

We need to render our frames in basically the same way, but much faster and on far less capable hardware. Rather than taking days on a single frame, we render multiple copies of our 3D world to individual images, combine them in post-processing shaders, and present the finished frame. And we need more than 60 of these complete frames, effects and all, rendered every second. Because of this requirement, we are forced to find a balance between hardware capabilities and graphical realism by using simpler methods that are very fast and by using visual tricks to either distract from or emphasize certain parts of our scene.

A common method used to improve performance without sacrificing much quality is to create a very high-detail 3D model, slowly render high-quality images from it, and then wrap those images around a simplified version of the model to give the appearance of high detail. This is called texture baking.


What exactly does a Graphics Shader do?

A single 3D model is made up of one or more triangles and some data describing what color each triangle should be. A single triangle consists of three points (Vertices or Vertex Points) and the lines between them. When we create and save a 3D model using our favorite modeling editor, we get a list of all these triangle points in the form of 3D coordinates which are relative to the center of the object.
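As a rough illustration, a single triangle’s vertex positions could be written out in HLSL like this (the names and values here are invented for this example):

// Three vertex positions for one triangle, each relative to
// the center of the object (a hypothetical example).
static const float3 TrianglePoints[3] =
{
    float3(-1.0, 0.0, 0.0), // bottom-left point
    float3( 1.0, 0.0, 0.0), // bottom-right point
    float3( 0.0, 1.0, 0.0)  // top point
};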

A Graphics Shader takes this 3D point data, calculates where it sits in our 3D world, and then combines it with the color data, all to determine the eventual color of individual pixels. A typical Graphics Shader has both a Vertex Shader and a Pixel Shader component. The Vertex-specific part deals only with the 3D triangle points that make up our geometry, while the Pixel-specific part runs, for each model drawn, once for every screen pixel that the model covers.

In early computer graphics, color data was on a per-triangle basis and there was no concept of lights or reflection. Nowadays, we take 2D images (Textures) and wrap them around our models to skin them. This is done with what is called UVW coordinates. We already have data for points that make up our triangles in 3D space and UVW coordinates allow us to map parts of our 2D images to our models in 3D space. UVW coordinates must be calculated in the Vertex-specific parts of our Model Shaders to take into account animations and other geometry deformations.

Every time we draw a model, we pass geometry and texture data to the graphics card over a data pipe using several “channels”. The data is then processed directly on the video card in whatever way we see fit.

We use the High Level Shader Language (HLSL) to write Graphics Shaders for Touch Control System. There are a few different HLSL versions to choose from. Individual graphics cards will support up to a specific shader version and each version adds additional capabilities and room for higher numbers of programmed instructions. This page highlights the differences between HLSL shader versions. You should always try to target the lowest possible shader version in an effort to be inclusive of older devices and their chipsets.


Techniques and Passes

You can put quite a bit of code into a single shader file. The use of multiple Techniques in a single shader allows you to basically combine multiple shaders into one file. Each Technique can render the same or different sets of geometry in entirely different ways. One Technique could draw models normally while a second Technique could draw models upside down, backward, and covered with what appears to be dog fur.
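For illustration, a single shader file with two Techniques might be laid out like this (the function names are invented; the full Technique syntax is covered in the example later on):

Technique DrawNormally
{
    Pass
    {
        VertexShader = compile vs_1_1 NormalVertexShader();
        PixelShader = compile ps_2_0 NormalPixelShader();
    }
}
Technique DrawWithFur
{
    Pass
    {
        VertexShader = compile vs_1_1 FurVertexShader();
        PixelShader = compile ps_2_0 FurPixelShader();
    }
}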

For each Technique, we can have multiple Passes. This allows us to apply the effects of multiple lights, or to do things like add additional iterations of blurring code. Limits on the number of programmed instructions in the different shader versions can make it necessary to split up code so that the desired effect is achieved, cumulatively, over two or more Passes. The use of multiple Passes is generally not recommended because you’ll have to pay close attention to Blend States, which dictate how the RGB and Alpha components output by the Pixel Shader get combined with what has already been rendered.
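As a sketch of what a multi-Pass Technique could look like, assuming hypothetical horizontal and vertical blur functions, with the second Pass blended over the first:

Technique TwoPassBlur
{
    Pass HorizontalPass
    {
        VertexShader = compile vs_1_1 BlurVertexShader();
        PixelShader = compile ps_2_0 HorizontalBlurPixelShader();
    }
    Pass VerticalPass
    {
        // Blend State: combine this Pass's output with the previous Pass.
        AlphaBlendEnable = true;
        SrcBlend = SrcAlpha;
        DestBlend = InvSrcAlpha;
        VertexShader = compile vs_1_1 BlurVertexShader();
        PixelShader = compile ps_2_0 VerticalBlurPixelShader();
    }
}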

Multiple scenes, models, textures, transformation matrices, lights, and an enormous number of steps needed for basic and post-processing visual effects come together like a jigsaw puzzle. Knowing how the data flows is the first step to understanding the rules of the game. Once you understand the rules, you can break them like an artist. All of this is often a very thought-provoking and creative process. Achieving desired results with limited hardware will force you to think outside of the box to create novel and elegant solutions.


Matrices

Before we can start drawing 3D models, we need to define an imaginary mathematical world where we can determine the relationship between our 3D models, the camera position, and the focus of our camera lens. We do this with what we call a Matrix: a multi-dimensional array of data that represents position data within a 3D coordinate system. Multiple Matrices (the plural of Matrix) are used to create our imaginary 3D worlds.

The Model Matrix defines vertex point positions relative to the center of the model.

The World Matrix redefines the model’s vertex data so that multiple models are positioned in World space, relative to the center of the world.

The View Matrix positions and orients the camera, converting vertex data from world space into the camera’s view space.

The Projection Matrix defines the area in 3D space that is captured by the camera, including near and far clip distances and the field of view angle, and projects that view onto the 2D plane of the screen.

The beautiful thing about these multi-dimensional data arrays is that we can easily combine and transform entire sets of data using multiple Matrices and mathematical operators. Transformations of geometry by some animation systems are automatically factored into these Matrices by the game engine before they are presented to the Graphics Shaders.
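Conceptually, a single vertex travels through these Matrices one step at a time. Here is a minimal HLSL sketch, assuming World, View, and Projection Matrices are declared as globals (the names are illustrative, and in practice the Matrices are usually pre-multiplied, as shown in the next section):

// Illustrative only: step-by-step transformation of one vertex position.
float4 TransformVertex(float4 localPos)
{
    float4 worldPos = mul(localPos, World);      // placed in the shared 3D world
    float4 viewPos  = mul(worldPos, View);       // positioned relative to the camera
    float4 clipPos  = mul(viewPos, Projection);  // projected toward the 2D screen
    return clipPos;
}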


A Basic Graphics Shader

Here is a very basic Graphics Shader that contains code to process both Vertex and Pixel data:

float4x4 WorldViewProj;

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

texture2D Texture;
sampler2D TextureSampler = sampler_state
{
    Texture = <Texture>;
    MinFilter = linear;
    MagFilter = linear;
    MipFilter = linear;
};

VertexShaderOutput SimpleVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, WorldViewProj);
    output.TexCoord = input.TexCoord;
    return output;
}

float4 SimplePixelShader(VertexShaderOutput input) : COLOR0
{
    float4 color;
    color = tex2D(TextureSampler, input.TexCoord);
    return color;
}

Technique SimpleTechnique
{
    Pass
    {
        VertexShader = compile vs_1_1 SimpleVertexShader();
        PixelShader = compile ps_2_0 SimplePixelShader();
    }
}

The first line defines a single Matrix that is a result of multiplying the World, View, and Projection Matrices together:

float4x4 WorldViewProj;

The multiplication of these Matrices is done beforehand on the CPU side. It can also be performed on the GPU using the HLSL “mul” function. If you had separate Matrices for your World, View, and Projection, you could multiply them together in HLSL like this:

float4x4 World;
float4x4 View;
float4x4 Projection;

float4x4 worldView = mul(World, View);
float4x4 WorldViewProjection = mul(worldView, Projection);

The next few lines in this shader define a couple of structures for holding data as it gets passed to the Vertex Shader and from there to the Pixel Shader:

struct VertexShaderInput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

These two data structures contain the same two data channels and could be used interchangeably. The channels that these structures contain will change once we start adding bones for skeletal animations, lights, and additional sets of geometry. So, for now, we’ll keep them separate.
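As a hypothetical example of how these structures might grow, an input structure with channels for lighting and skeletal animation could look like this (the semantics are standard HLSL; the structure itself is invented for illustration):

struct ExtendedVertexShaderInput
{
    float4 Position : POSITION0;      // geometry position
    float2 TexCoord : TEXCOORD0;      // texture coordinates
    float3 Normal   : NORMAL0;        // surface direction, for lighting
    float4 Weights  : BLENDWEIGHT0;   // bone influence, for skeletal animation
    float4 Indices  : BLENDINDICES0;  // bone indices, for skeletal animation
};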

The POSITION0 and TEXCOORD0 words are semantics that help define what the data channels will be used for. Different shader versions will allow for different sets of these semantics and there are also limits on how many can be used at once.

In this example, POSITION0 contains geometry position data while TEXCOORD0 contains texture coordinates. We use a Texture Sampler to get the color of a specific pixel on the 2D image that is wrapped around our model by passing the texture coordinate to the Texture Sampler.

Setting up the Texture Sampler:

texture2D Texture;
sampler2D TextureSampler = sampler_state
{
    Texture = <Texture>;
    MinFilter = linear;
    MagFilter = linear;
    MipFilter = linear;
};

Using the Texture Sampler:

float4 color = tex2D(TextureSampler, input.TexCoord);

At the end of our Graphics Shader, we create a Technique that contains a Pass with references to the functions that do the actual work:

Technique SimpleTechnique
{
    Pass
    {
        VertexShader = compile vs_1_1 SimpleVertexShader();
        PixelShader = compile ps_2_0 SimplePixelShader();
    }
}

Here, we specify our VertexShader and PixelShader functions. The shader versions are set using vs_x_x or ps_x_x.

Our Vertex Shader function, SimpleVertexShader(), looks like this:

VertexShaderOutput SimpleVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(input.Position, WorldViewProj);
    output.TexCoord = input.TexCoord;
    return output;
}

We are using the VertexShaderInput structure to pass in data to the “input” variable and we specify the VertexShaderOutput structure as the final result of this function.

First, we create a new VertexShaderOutput variable to store the finalized data. Next, we take the initial geometry position and multiply it by the WorldViewProj Matrix to convert the data into the 2D space of our screen.

The third line in our function simply passes the Texture Coordinates through to the Pixel Shader without modifying them. If we had skeletal animations, we would use bone index and weight data from two additional channels to modify our POSITION0 data in the Vertex Shader.
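A rough sketch of what that skinning step could look like, assuming a hypothetical BoneTransforms array along with the bone weight and index channels mentioned above (this is illustrative, not TCS’s actual skinning code):

float4x4 BoneTransforms[59]; // one Matrix per bone, set by the engine each frame

float4 SkinPosition(float4 position, float4 weights, float4 indices)
{
    // Blend the vertex position across up to four influencing bones.
    float4 result = 0;
    result += mul(position, BoneTransforms[(int)indices.x]) * weights.x;
    result += mul(position, BoneTransforms[(int)indices.y]) * weights.y;
    result += mul(position, BoneTransforms[(int)indices.z]) * weights.z;
    result += mul(position, BoneTransforms[(int)indices.w]) * weights.w;
    return result;
}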

The final line in the Vertex Shader returns our finished VertexShaderOutput object, which is then passed as an input to the Pixel Shader function:

float4 SimplePixelShader(VertexShaderOutput input) : COLOR0
{
    float4 color;
    color = tex2D(TextureSampler, input.TexCoord);
    return color;
}

This Pixel Shader function now has, as an input, the finished output of our Vertex Shader. The flow looks like this:

3D Model > Vertex Shader > Pixel Shader > Monitor

This Pixel Shader function outputs a single float4 variable that contains R, G, B, and A components to represent the color and alpha of a single screen pixel. We get this color value by passing in the TEXCOORD0 data to our Texture Sampler which looks at our model’s texture and returns a color value from the image at a specific 2D coordinate.

Typically, when you deal with RGBA components, you work with Byte variables, setting a value of 0–255 for each channel. Float variables range from 0 to 1 instead of 0 to 255, so a value of 1 is equal to 100%.

Byte                   Float
RGB(128, 128, 128)     RGB(0.5, 0.5, 0.5)
RGB(255, 255, 255)     RGB(1, 1, 1)
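For example, halving each color component in a Pixel Shader darkens the pixel by 50% (the function name here is invented for illustration):

float4 Darken(float4 color)
{
    color.rgb *= 0.5; // halve the Red, Green, and Blue components
    return color;     // RGB(1, 1, 1) in becomes RGB(0.5, 0.5, 0.5) out
}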


This is the first part of an ongoing series of tutorials that cover Graphics Shaders and how they can be used within TCS. Stay tuned for more!


