Lesson 110 - Getting Under the Hood - Part 3
The transformation process accounts for how geometric objects are moved (translated), rotated and scaled within a scene. In the end, it's a matter of converting all the vertices of each object from their original values based on a local coordinate system to corresponding values based on the world coordinate system. The transformation of vertices is one of the earliest and most important steps in the rendering process.
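The scale-rotate-translate conversion from local to world coordinates can be sketched in a few lines. This is a minimal illustration, not the article's own code; the function name and the single-axis (Y) rotation are simplifying assumptions — a full transform handles rotation about all three axes, usually via a matrix.

```python
import math

def transform_vertex(v, scale, rot_y_deg, translate):
    """Scale, rotate about the Y axis, then translate a local-space vertex
    into world space (one common order of operations)."""
    x, y, z = (c * scale for c in v)
    a = math.radians(rot_y_deg)
    # Rotation about Y: x' = x*cos(a) + z*sin(a), z' = -x*sin(a) + z*cos(a)
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    tx, ty, tz = translate
    return (xr + tx, y + ty, zr + tz)

# A cube corner at local (1, 1, 1), scaled 2x, rotated 90 degrees about Y,
# and moved to world position (10, 0, 5):
world = transform_vertex((1.0, 1.0, 1.0), 2.0, 90.0, (10.0, 0.0, 5.0))
print(world)  # approximately (12.0, 2.0, 3.0)
```

In practice a renderer applies the same transform to every vertex of the object's mesh, which is why this step dominates the early part of the pipeline.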
The rendering process is often logically divided into geometry processing and subsequent rasterization into image pixels. The former step involves transforming all local coordinate values into world coordinate values, and then converting them to a coordinate system based on the position and orientation of the camera (a camera coordinate system). Thus geometry processing is a matter of mathematically setting up the scene so that all locations (coordinates) in space are specified using a single measurement system centered on the camera. Once again, I encourage you to think through and visualize this process carefully. It's not a matter of words, but of visual and geometric ideas that only come to make sense over time.
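The world-to-camera conversion described above can be visualized with a small sketch. This is an illustrative assumption, not the article's code: the camera here has only a position and a yaw (rotation about the world Y axis); a real camera transform also accounts for pitch and roll.

```python
import math

def world_to_camera(p, cam_pos, cam_yaw_deg):
    """Re-express a world-space point relative to the camera: translate so
    the camera sits at the origin, then rotate by the inverse of the
    camera's own rotation."""
    # Step 1: translate so the camera is at the origin.
    x = p[0] - cam_pos[0]
    y = p[1] - cam_pos[1]
    z = p[2] - cam_pos[2]
    # Step 2: undo the camera's rotation (note the negated angle).
    a = math.radians(-cam_yaw_deg)
    xc = x * math.cos(a) + z * math.sin(a)
    zc = -x * math.sin(a) + z * math.cos(a)
    return (xc, y, zc)

# A point at the world origin, seen by an unrotated camera at (0, 0, -10),
# ends up 10 units down the camera's own Z axis:
cam_space = world_to_camera((0.0, 0.0, 0.0), (0.0, 0.0, -10.0), 0.0)
print(cam_space)  # (0.0, 0.0, 10.0)
```

Once every coordinate is expressed this way, the scene is "set up" around the camera, and rasterization can proceed from a single consistent viewpoint.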
Thus far, we've considered only the shape of geometric objects. But the scene description must also specify how the surfaces of objects should render. This is, at bottom, a question of color. The words material and appearance are often used to describe the way in which surfaces should render, especially in response to light sources in the scene. The term texture should not be misused as a synonym. A material refers to qualities that are constant over an entire surface. For example, the diffuse color element of a material definition applies a basic color to the whole surface. If we want to vary the diffuse color over the surface, we apply a texture that overrides the uniform diffuse color.
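The material-versus-texture distinction can be made concrete with a small sketch. The names `Material` and `sample_diffuse` are hypothetical illustrations, not any particular package's API: the constant diffuse color applies everywhere unless a texture is present to override it point by point.

```python
class Material:
    """A surface description: a constant diffuse RGB color, plus an
    optional texture, modeled here as a function (u, v) -> RGB."""
    def __init__(self, diffuse, texture=None):
        self.diffuse = diffuse
        self.texture = texture

def sample_diffuse(material, u, v):
    """Diffuse color at one surface point: the texture, when present,
    overrides the material's constant diffuse color."""
    if material.texture is not None:
        return material.texture(u, v)
    return material.diffuse

plain = Material(diffuse=(0.8, 0.1, 0.1))
checker = Material(diffuse=(0.8, 0.1, 0.1),
                   texture=lambda u, v: (1.0, 1.0, 1.0)
                   if (int(u * 8) + int(v * 8)) % 2 == 0
                   else (0.0, 0.0, 0.0))

print(sample_diffuse(plain, 0.5, 0.5))    # (0.8, 0.1, 0.1) everywhere
print(sample_diffuse(checker, 0.5, 0.5))  # varies with (u, v)
```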
If a material definition includes a texture, there must be a way of determining how to distribute the varying color values over the surface of the object. This is achieved by texture coordinates. Each vertex on the mesh is assigned a corresponding point in the "space" of the texture. In the typical case, the texture is a 2D image, and thus each vertex on the mesh is associated with a location on the image. The result is that each polygon of the mesh is assigned a corresponding polygonal region of the 2D image, which is "mapped" to it.
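For a point inside a polygon (rather than at a vertex), the texture coordinates are blended from the vertices. A minimal sketch, assuming the standard barycentric-weight approach for a triangle; the function name is an illustration:

```python
def interpolate_uv(bary, uv0, uv1, uv2):
    """Blend the (u, v) texture coordinates assigned to a triangle's three
    vertices, using barycentric weights (w0, w1, w2) that sum to 1."""
    w0, w1, w2 = bary
    u = w0 * uv0[0] + w1 * uv1[0] + w2 * uv2[0]
    v = w0 * uv0[1] + w1 * uv1[1] + w2 * uv2[1]
    return (u, v)

# A triangle mapped to one corner of the image, with vertices at image
# locations (0,0), (1,0), and (0,1).  The triangle's centroid (equal
# weights) lands one third of the way into the image on each axis:
uv = interpolate_uv((1/3, 1/3, 1/3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(uv)  # approximately (0.333, 0.333)
```

This per-point lookup is what lets a flat 2D image "wrap" a 3D mesh: each polygon pulls its colors from its assigned region of the image.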
That's more than enough to chew on for now. We'll finish up next time.
Created: January 30, 2001
Revised: January 30, 2001