Rather than having to explicitly provide a texture coordinate for each vertex, we can use texture coordinate generation (texgen) functions to have OpenGL automatically compute texture coordinates.
This uses the glTexGen family of functions, together with the glEnable capabilities GL_TEXTURE_GEN_S and GL_TEXTURE_GEN_T.
    planeCoefficients = [1, 0, 0, 0]
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR)
    glTexGenfv(GL_S, GL_OBJECT_PLANE, planeCoefficients)
    glEnable(GL_TEXTURE_GEN_S)

    glBegin(GL_QUADS)
    glVertex3f(-3.25, -1, 0)
    glVertex3f(-1.25, -1, 0)
    glVertex3f(-1.25, 1, 0)
    glVertex3f(-3.25, 1, 0)
    glEnd()
Texgen modes GL_OBJECT_LINEAR and GL_EYE_LINEAR generate texture coordinates based on the distance of each vertex from a plane.
A plane in 3 dimensions can be defined by the equation:
Ax + By + Cz + D = 0
This is an implicit formula - it defines a plane as the set of points that satisfy the equation. In other words, it serves as a test to determine whether a particular point [x y z] is on the plane.
Another interpretation is that the value of
Ax + By + Cz + D
is proportional to the distance of the point [x y z] from the plane.
The vector [A B C] is normal (perpendicular) to the plane.
Thus, it defines the plane's orientation.
e.g.: A=1, B=0, C=0
defines a plane perpendicular to the X axis, or parallel to the
plane of the Y & Z axes.
The plane equation in this case would reduce to x = -D.
D controls the distance of the plane from the origin. If [A B C] is a unit vector, then the absolute value of D is equal to the distance from the plane to the origin.
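The plane equation can be checked in plain Python, no OpenGL required:

```python
# Evaluate Ax + By + Cz + D for a point [x y z].
def plane_value(coeffs, point):
    A, B, C, D = coeffs
    x, y, z = point
    return A * x + B * y + C * z + D

# Plane perpendicular to the X axis, passing through x = 2 (x - 2 = 0):
plane = (1, 0, 0, -2)

print(plane_value(plane, (2, 5, -3)))   # 0: on the plane
print(plane_value(plane, (3, 0, 0)))    # 1: one unit to one side
print(plane_value(plane, (-1, 0, 0)))   # -3: three units to the other side
```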
Note that if, for a given [x y z], Ax + By + Cz + D = 0, then 2Ax + 2By + 2Cz + 2D = 0, and in general:
NAx + NBy + NCz + ND = 0
In other words, multiplying the four coefficients by the same constant N will still define the same plane.
However, the value of NAx + NBy + NCz + ND will be N times the value of Ax + By + Cz + D. This is useful in texture coordinate generation.
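The effect of scaling all four coefficients by N can be verified directly:

```python
def plane_value(coeffs, point):
    A, B, C, D = coeffs
    x, y, z = point
    return A * x + B * y + C * z + D

p = (4.0, 1.0, 2.0)
base = (1, 0, 0, -2)      # x - 2 = 0
scaled = (3, 0, 0, -6)    # same plane, coefficients scaled by N = 3

print(plane_value(base, p))    # 2.0
print(plane_value(scaled, p))  # 6.0 -- same plane, but N times the value
```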
The texture generation mode can be one of GL_OBJECT_LINEAR, GL_EYE_LINEAR, or GL_SPHERE_MAP.
When the mode is GL_OBJECT_LINEAR, the texture coordinate is calculated using the plane-equation coefficients passed via glTexGenfv(GL_S, GL_OBJECT_PLANE, planeCoefficients).
The coordinate (S in this case) is computed as:
S = Ax + By + Cz + D
using the [x y z] coordinates of the vertex (the values passed to glVertex).
If [A B C] is a unit vector, this means that the texture coordinate S is equal to the distance of the vertex from the texgen plane.
We can change how frequently the texture repeats (i.e. scale the texture) by multiplying the texgen coefficients by different amounts.
e.g., changing the coefficients from [1 0 0 0] to [3.5 0 0 0] changes
the results of contour.cpp
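To see this numerically, we can compute S by hand for the quad from the earlier snippet (a plain-Python sketch of the GL_OBJECT_LINEAR computation; no OpenGL required):

```python
# S = Ax + By + Cz + D, applied to raw glVertex coordinates.
def object_linear_s(coeffs, vertex):
    A, B, C, D = coeffs
    x, y, z = vertex
    return A * x + B * y + C * z + D

quad = [(-3.25, -1, 0), (-1.25, -1, 0), (-1.25, 1, 0), (-3.25, 1, 0)]

# With [1 0 0 0], S spans -3.25 .. -1.25: the texture repeats 2 times.
print([object_linear_s((1, 0, 0, 0), v) for v in quad])

# With [3.5 0 0 0], S spans -11.375 .. -4.375: the texture repeats 7 times.
print([object_linear_s((3.5, 0, 0, 0), v) for v in quad])
```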
GL_EYE_LINEAR works similarly, with the coefficients passed via glTexGenfv(GL_S, GL_EYE_PLANE, planeCoefficients).
The difference is that GL_OBJECT_LINEAR operates in "object coordinates", while GL_EYE_LINEAR works in "eye coordinates".
This means that the texture coordinates computed with GL_OBJECT_LINEAR depend solely on the values passed to glVertex, and do not change as the object (or camera) moves via transformations.
The texture coordinates computed with GL_EYE_LINEAR use the positions of vertices after the modeling & viewing transformations - i.e. the vertex positions relative to the camera, which change as the object or camera moves.
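The distinction can be illustrated with a toy translation in plain Python (the translate function below is a stand-in for a modelview translation, not actual OpenGL):

```python
def linear_coord(coeffs, v):
    A, B, C, D = coeffs
    return A * v[0] + B * v[1] + C * v[2] + D

def translate(v, tx, ty, tz):
    # Stand-in for a modelview translation applied to the vertex.
    return (v[0] + tx, v[1] + ty, v[2] + tz)

plane = (1, 0, 0, 0)
vertex = (2.0, 0.0, 0.0)

# GL_OBJECT_LINEAR: uses the raw glVertex coordinates --
# moving the object does not change S.
s_object = linear_coord(plane, vertex)

# GL_EYE_LINEAR: uses the vertex after the modelview transform --
# S changes as the object (or camera) moves.
s_eye = linear_coord(plane, translate(vertex, 5, 0, 0))

print(s_object, s_eye)   # 2.0 7.0
```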
Both S & T coordinates can use texgen, to apply 2D textures with automatic coordinates.
This is done by making similar glTexGen calls for the T coordinate (but using a different plane).
    SplaneCoefficients = [1, 0, 0, 0]
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR)
    glTexGenfv(GL_S, GL_OBJECT_PLANE, SplaneCoefficients)
    glEnable(GL_TEXTURE_GEN_S)

    TplaneCoefficients = [0, 1, 0, 0]
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR)
    glTexGenfv(GL_T, GL_OBJECT_PLANE, TplaneCoefficients)
    glEnable(GL_TEXTURE_GEN_T)
Sphere mapping, a.k.a. reflection mapping or environment mapping, is a texgen mode that simulates a reflective surface.
It is enabled by setting the texgen mode to GL_SPHERE_MAP.
There are no additional parameters.
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP)
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP)
    glEnable(GL_TEXTURE_GEN_S)
    glEnable(GL_TEXTURE_GEN_T)
A sphere-mapped texture coordinate is computed by taking the view vector from the eye to the vertex and reflecting it about the surface's normal vector. The resulting direction is then used to look up a point in the texture. The texture image is assumed to be warped to provide a 360 degree view of the environment being reflected.
If you don't care about realistically exact reflections, though, any texture can be used.
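The reflection and lookup can be sketched in plain Python; the mapping below follows the sphere-map formula from the OpenGL specification (s = rx/m + 1/2, t = ry/m + 1/2, where m = 2*sqrt(rx^2 + ry^2 + (rz + 1)^2)):

```python
import math

def reflect(u, n):
    # r = u - 2 n (n . u)
    d = sum(ui * ni for ui, ni in zip(u, n))
    return tuple(ui - 2 * ni * d for ui, ni in zip(u, n))

def sphere_map(u, n):
    """u: unit vector from the eye to the vertex; n: unit surface normal."""
    rx, ry, rz = reflect(u, n)
    m = 2 * math.sqrt(rx * rx + ry * ry + (rz + 1) ** 2)
    return rx / m + 0.5, ry / m + 0.5

# A surface facing the viewer head-on reflects the eye ray straight back,
# which lands in the center of the sphere map:
s, t = sphere_map((0, 0, -1), (0, 0, 1))
print(s, t)   # 0.5 0.5
```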
Lightmapping is a technique for producing advanced lighting effects using textures, rather than (or in addition to) ordinary OpenGL lighting.
OpenGL lighting is only calculated at vertices. The results are interpolated
across a polygon's face.
For large polygons, this can yield visibly wrong results.
Also, OpenGL lighting does not produce shadows.
A simple way to do this is to include shadows & lighting in the texture applied to an object.
Some modeling packages can do this automatically (called "baking" the shadows into the textures, in at least one case).
In the GL_MODULATE texture environment, texture and color/lighting information are multiplied together.
Normally, we have the texture provide RGB information and the
lighting control intensity.
This can be changed, however.
A lightmap is a greyscale texture, which can change the intensity of a material or glColor.
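Numerically, GL_MODULATE is a per-channel multiply, so a greyscale lightmap texel simply scales the incoming color:

```python
def modulate(color, texel):
    # GL_MODULATE: incoming fragment color times texel, per channel.
    return tuple(c * t for c, t in zip(color, texel))

surface_color = (0.8, 0.2, 0.2)     # from glColor / lighting
lightmap_texel = (0.5, 0.5, 0.5)    # greyscale: 50% intensity

print(modulate(surface_color, lightmap_texel))   # (0.4, 0.1, 0.1)
```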
Dynamic lightmapping can be created using texgen.
The lightmap can then be applied to a scene 'externally' - without the individual objects having texture coordinates or otherwise expecting a texture.
Generating S & T texture coordinates via
    S = x
    T = z
will cause the light map to lie in the X/Z plane, as if it were projected straight down the Y axis.
If the light is currently located at (LX, LY, LZ), then
    S = x - LX
    T = z - LZ
will cause the texture to move with the light. The (0,0) texture coordinate will be located directly below the light.
Offsetting the coordinates by 0.5:
    S = x - LX + 0.5
    T = z - LZ + 0.5
will center the lightmap - texture coordinate (0.5, 0.5) will be directly below the light.
    S = 0.1 * x - 0.1 * LX + 0.5
    T = 0.1 * z - 0.1 * LZ + 0.5
will change the lightmap's size - it will cover a 10 x 10 unit area of the scene, rather than a 1 x 1 unit area.
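These coordinate formulas can be checked in plain Python (the scale and offset below match the coefficients above):

```python
def lightmap_coords(x, z, lx, lz, scale=0.1):
    # S = scale*x - scale*LX + 0.5, T = scale*z - scale*LZ + 0.5
    s = scale * (x - lx) + 0.5
    t = scale * (z - lz) + 0.5
    return s, t

light = (3.0, 5.0, -2.0)   # (LX, LY, LZ); LY is unused by the projection

# A point directly below the light maps to the center of the lightmap:
print(lightmap_coords(3.0, -2.0, light[0], light[2]))   # (0.5, 0.5)

# A point 5 units away in x reaches the edge of the 10 x 10 covered area:
print(lightmap_coords(8.0, -2.0, light[0], light[2]))   # (1.0, 0.5)
```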
We can expand to even more rendering passes to combine a large number of lightmaps.
In this case, we first render just the depth buffer information, to be used by all the subsequent passes.
Then, we render the lightmaps, adding them together using glBlendFunc(GL_ONE, GL_ONE).
Finally, the scene's color information is applied, multiplying it by the accumulated lightmap data, by using glBlendFunc(GL_DST_COLOR, GL_ZERO).
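The per-pixel arithmetic of these passes can be simulated in plain Python (a sketch; in real code each step is a rendering pass over the geometry using the stated blend function):

```python
def add_blend(dst, src):
    # glBlendFunc(GL_ONE, GL_ONE): dst + src, clamped to [0, 1]
    return tuple(min(d + s, 1.0) for d, s in zip(dst, src))

def modulate_blend(dst, src):
    # glBlendFunc(GL_DST_COLOR, GL_ZERO): src * dst
    return tuple(d * s for d, s in zip(dst, src))

# Pass 1 renders only depth; the color buffer starts black.
framebuffer = (0.0, 0.0, 0.0)

# Pass 2: accumulate two lightmaps additively.
framebuffer = add_blend(framebuffer, (0.25, 0.25, 0.25))
framebuffer = add_blend(framebuffer, (0.5, 0.5, 0.5))

# Pass 3: multiply the scene's color by the accumulated light.
framebuffer = modulate_blend(framebuffer, (1.0, 0.5, 0.0))

print(framebuffer)   # (0.75, 0.375, 0.0)
```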