3D Animation Workshop, Lesson 109: Understanding 3D Technology, Part 3
It seems that almost every week I'm exposed to some emerging 3D technology that boasts a fair shot at "changing everything." Every person with a serious stake in 3D must be prepared to bet their energies on some assessment of the future. But you can't pretend to make an informed decision without a firm grasp of the basics of the technology in a way that transcends the mere operation of the user interfaces of the standard software packages. Unfortunately, this kind of education is very difficult to find, especially as directed to people whose fundamental background is as graphic artists.
I'm going to venture a bit here. In the next couple of columns, I'll introduce some underlying principles of 3D computer graphics that I consider essential to any serious 3D artist. We won't worry about how these concepts are implemented in code, though any practical work with them at that level obviously requires programming skills.
We can begin by considering things in the most general terms.
Imagine placing a camera in a room and taking a photograph. The view through the lens of the camera is captured as a 2D image on the surface of the film. Light reflecting off objects in the room reaches the surface of the film only if those objects fall within the camera's field of view.
The rendering process in 3D computer graphics may seem to be a virtual analogy to our photograph. This is true to an extent, but the process of "seeing" (which is what rendering amounts to) requires quite a bit of careful definition. First of all, the objects in the 3D scene are necessarily defined by locations in a 3D measurement system that we call a coordinate space. The location of each light source and each vertex on each polygonal mesh must be specified in some coordinate system common to the whole scene. The camera itself has a position in this coordinate system, which we call world space, and can therefore be assigned a precise location in (x,y,z).
In order to "see" through the virtual camera, we must treat the location of the camera as the center of the coordinate system to be used for rendering. That means transforming the location of every coordinate in world space to a value based on its position relative to the camera. The current position of the camera becomes (0,0,0) in this camera space used to perform the rendering.
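The idea can be sketched in a few lines of code. This is a minimal illustration of my own (the function names are not from any particular package), handling only a camera that has been translated and then rotated about the vertical axis; a full renderer would build a 4x4 matrix from the camera's complete orientation.

```python
import math

def world_to_camera(point, cam_pos, cam_yaw):
    """Transform a world-space point into camera space.

    Sketch only: the camera may be translated anywhere and rotated
    about the vertical (y) axis by cam_yaw radians.
    """
    # 1. Translate so the camera sits at the origin (0, 0, 0).
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    # 2. Rotate by the inverse of the camera's own rotation.
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    return (c * x + s * z, y, -s * x + c * z)

# The camera's own world-space position maps to the origin of camera space:
print(world_to_camera((2.0, 1.0, 5.0), cam_pos=(2.0, 1.0, 5.0), cam_yaw=0.3))
# -> (0.0, 0.0, 0.0)
```

Note that the same transformation is applied to every vertex and every light source in the scene, which is why the camera position can simply be treated as (0,0,0) from then on.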
With every vertex and light source properly transformed to camera space, we need to determine what objects are visible to the camera. This raises two distinct issues. A surface may not be visible because it is not within the camera's field of view. Imagine a pyramid-shaped volume extending forward from the location of the camera. If a surface of an object is not within this viewing volume, it is not seen by the camera.
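The pyramid test can also be sketched. Assume camera space looks down the +z axis and, for simplicity, that the pyramid is symmetric with the same angular spread in x and y; a real renderer tests against six frustum planes and allows different horizontal and vertical fields of view.

```python
import math

def in_view_volume(point, fov_degrees, near, far):
    """Test whether a camera-space point lies inside a simple symmetric
    viewing pyramid extending forward from the camera along +z."""
    x, y, z = point
    if not (near <= z <= far):
        return False                 # behind the camera, or beyond the far limit
    # The pyramid widens with distance: at depth z its half-width is
    # z * tan(half the field-of-view angle).
    half = z * math.tan(math.radians(fov_degrees) / 2.0)
    return abs(x) <= half and abs(y) <= half

print(in_view_volume((0.0, 0.0, 5.0), fov_degrees=60.0, near=0.1, far=100.0))   # True
print(in_view_volume((0.0, 0.0, -5.0), fov_degrees=60.0, near=0.1, far=100.0))  # False
```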
But a surface could be within the viewing volume and still not be seen by the camera because it is blocked by another surface closer to the camera. Thus any rendering process must be able to figure out what geometric objects are within the viewing volume, and must also be able to figure out whether they are obscured by other surfaces within that volume. Only then can we know precisely what surfaces are exposed to the rendering eye.
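One common way renderers settle this "who is in front" question is a depth buffer (z-buffer): for each pixel, keep only the surface nearest the camera. The toy version below is my own illustration, using named surfaces with a single depth value and a set of covered pixels as a stand-in for real rasterized polygons.

```python
def visible_surfaces(surfaces):
    """A toy z-buffer. `surfaces` is a list of (name, depth, pixels)
    tuples, where pixels is a set of (x, y) pixel coordinates the
    surface covers. Returns, per pixel, the name of the surface that
    actually shows there."""
    zbuf = {}
    for name, depth, pixels in surfaces:
        for p in pixels:
            # A surface wins a pixel only if it is closer than whatever
            # has been drawn there so far.
            if p not in zbuf or depth < zbuf[p][0]:
                zbuf[p] = (depth, name)
    return {p: name for p, (depth, name) in zbuf.items()}

visible = visible_surfaces([
    ("far wall", 10.0, {(1, 1), (2, 1)}),
    ("near box",  3.0, {(1, 1)}),        # overlaps the wall at pixel (1, 1)
])
print(visible[(1, 1)])  # near box
print(visible[(2, 1)])  # far wall
```

Where the two surfaces overlap, the nearer one obscures the farther one, which is exactly the determination the rendering process must make before any shading can begin.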
We'll pick up from here next time.
Created: January 2, 2001
Revised: January 2, 2001