
    Object Space Rejection



©1998 Codex Software

Speed is an obvious concern when writing a 3D engine, particularly a software-based engine. While speed-ups come in many forms, the most obvious and greatest gain comes from rejecting points and polygons that aren't needed, as early as possible.




Typically this job is left to backface culling, which can remove approximately half the workload from all your 3D routines by cutting your object in half and removing all those polygons which are invisible (see my backface culling article for more information). However, this method assumes that you've already transformed all your points into world space, since you must cull against the camera's position, which is specified in world space.




What if it were possible to backface cull in object space? Not only could you skip transforming all your points into world space just to end up rejecting half of them, but you would also be able to shade your object in object space, removing the need to transform the many polygon and vertex normals. This is all possible through a simple technique: transforming the camera into object space.




    The Bounding Sphere Test




Before going through with all this, however, it makes sense to check whether the object will even be visible in the long run. This can be done with a bounding sphere test. Your bounding sphere is a sphere which encompasses the entire object. You can calculate it by storing the highest x, y and z values of your object as you load it, then converting them into a distance (the bounding sphere's radius):

    radius = sqrt( highest_x * highest_x + highest_y * highest_y + highest_z * highest_z );
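
In practice this means tracking the largest coordinate values as the model loads. Here is a minimal sketch of that, assuming an illustrative Vertex struct and function name; absolute values are used so that negative extents count too, and the model is assumed to be centred on its own origin:

    #include <math.h>

    struct Vertex { float x, y, z; };

    // Track the largest absolute x, y and z seen in the model, then turn
    // them into a conservative bounding sphere radius
    float calc_bounding_radius(const Vertex *verts, int count)
    {
        float highest_x = 0, highest_y = 0, highest_z = 0;
        for (int i = 0; i < count; i++) {
            if (fabsf(verts[i].x) > highest_x) highest_x = fabsf(verts[i].x);
            if (fabsf(verts[i].y) > highest_y) highest_y = fabsf(verts[i].y);
            if (fabsf(verts[i].z) > highest_z) highest_z = fabsf(verts[i].z);
        }
        return sqrtf(highest_x * highest_x +
                     highest_y * highest_y +
                     highest_z * highest_z);
    }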



Now that you have a bounding sphere definition (all you need is its radius) you can use it to test whether the object will be visible to the camera. This involves transforming the object's center position (which would be (0,0,0)) into camera space (the space the camera sees, sometimes called view space),

    object_center = new Vertex(0,0,0);
    object_center = object_center * to_view_space_matrix;



and checking the bounding sphere against the clip planes of the frustum. The frustum has six clipping planes, but some may choose to ignore the front and back planes. That is up to you, of course. The planes are described as:

    x = ±(h / d) * z
    y = ±(h / d) * z
    z = d
    z = f
    Where:
      2h is the view plane window dimension
      d is the view plane distance
      f is the far plane distance



If your object is visible, it will be within these boundaries. If not, it stands to reason it will be invisible to the camera and can be rejected. A whole object is then removed from your workload with a minimal number of calculations. Note that at the sphere's depth the frustum's half-width and half-height work out to (h / d) * z, which is exactly what clip_x and clip_y compute below. I do the bounding sphere test as follows:

    center = new Vertex(0,0,0);
    center = center * to_view_space;
    // Frustum half-width and half-height at the sphere's depth
    clip_x = (view->xoff * center->z) / view->d;
    clip_y = (view->yoff * center->z) / view->d;
    // Test the extremities of the bounding sphere against each plane;
    // each test counts 1 only if the sphere lies completely outside that plane
    outside  = (center->x + radius < -clip_x);
    outside += (center->x - radius >  clip_x);
    outside += (center->y + radius < -clip_y);
    outside += (center->y - radius >  clip_y);
    outside += (center->z + radius <  view->d);
    outside += (center->z - radius >  view->f);
    // if the sphere is outside any plane, the object is invisible; simply quit
    if(outside > 0) return;



The process is very simple and requires only a couple of multiplies and divides, plus a matrix multiplication, to potentially reject an entire object.




    Object Space Rejection




Now for the actual topic of this article: object space rejection. If you've decided, by means of the bounding sphere test, that your object is going to be visible, it is now time to move the camera into object space. This is actually a fairly simple maneuver, considering what you have at your disposal.




Your object will, or should, have a matrix to transform points from object space to world space. If we take the inverse of this matrix we can then transform points from world space back into object space. This is perfect, since the camera is in world space and we want it in the object's space. We just have to act as though the camera is actually part of the object (after all, it is the object's matrix we're using, and this matrix is very object-specific). To do this, merely specify the camera's position as an offset from the object's position in world space. Because the rotation part of the matrix is orthonormal (assuming the object isn't scaled), its inverse is simply its transpose, which makes the inversion trivial. I do all the above as follows:

    // here's the inverse matrix calculation -- just the transpose of the rotation part
    to_object_space.matrix[0][0] = to_world_space.matrix[0][0];
    to_object_space.matrix[0][1] = to_world_space.matrix[1][0];
    to_object_space.matrix[0][2] = to_world_space.matrix[2][0];
    to_object_space.matrix[0][3] = 0;
    to_object_space.matrix[1][0] = to_world_space.matrix[0][1];
    to_object_space.matrix[1][1] = to_world_space.matrix[1][1];
    to_object_space.matrix[1][2] = to_world_space.matrix[2][1];
    to_object_space.matrix[1][3] = 0;
    to_object_space.matrix[2][0] = to_world_space.matrix[0][2];
    to_object_space.matrix[2][1] = to_world_space.matrix[1][2];
    to_object_space.matrix[2][2] = to_world_space.matrix[2][2];
    to_object_space.matrix[2][3] = 0;
    to_object_space.matrix[3][0] = 0; to_object_space.matrix[3][1] = 0;
    to_object_space.matrix[3][2] = 0; to_object_space.matrix[3][3] = 1;
    // Now calculate the offset between the camera's position and the object's position
    camera.x = view->camera->matrix[0][3] - to_world_space.matrix[0][3];
    camera.y = view->camera->matrix[1][3] - to_world_space.matrix[1][3];
    camera.z = view->camera->matrix[2][3] - to_world_space.matrix[2][3];



As you can see, I've inverted the to_world_space matrix and stored it in to_object_space, and I've also stored the camera's position as an offset from the object's position in a vertex. From here on in it's all downhill; the tough part is over. To transform the camera into object space, merely multiply the above vertex by the above matrix:




    camera = camera * to_object_space;
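
If your Vertex class doesn't overload that multiplication, the operation simply applies the upper-left 3x3 (rotation) part of the matrix to the offset vector. Here is a minimal sketch, assuming the layout implied by the code above (translation stored in matrix[i][3], points treated as column vectors) and an illustrative Matrix struct:

    struct Vertex { float x, y, z; };
    struct Matrix { float matrix[4][4]; };

    // Apply the rotation part of m to v; the translation column of
    // to_object_space is zero, so only the upper-left 3x3 matters here
    Vertex rotate(const Vertex &v, const Matrix &m)
    {
        Vertex out;
        out.x = v.x * m.matrix[0][0] + v.y * m.matrix[0][1] + v.z * m.matrix[0][2];
        out.y = v.x * m.matrix[1][0] + v.y * m.matrix[1][1] + v.z * m.matrix[1][2];
        out.z = v.x * m.matrix[2][0] + v.y * m.matrix[2][1] + v.z * m.matrix[2][2];
        return out;
    }

Calling rotate(camera, to_object_space) is then equivalent to the multiplication above, leaving the camera's position expressed in object space.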



You can now use the resulting vertex to perform backface culling in object space. The only difference is that when you find a polygon is visible, you must then transform it into world space to use it. It would be very wise, of course, to keep track of which vertices you've already transformed (one vertex can be shared by many polygons, and you only need to transform it once). I do this by keeping a large array of 32-bit integers, one for each vertex. If the value in the array for the point in question is equal to the current frame number (I keep a running total of how many frames have been displayed so far) then the point has already been transformed; otherwise I transform it and set the array value equal to the frame number.
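
Put together, the per-polygon loop might look something like the sketch below. Everything here is illustrative rather than lifted from any particular engine: the Polygon layout, the array names and the transform_to_world() call are assumptions, and camera is the object-space camera position computed above.

    struct Vertex  { float x, y, z; };
    struct Polygon { Vertex normal; int num_points; int index[4]; };

    Vertex transform_to_world(const Vertex &v);  // your object-to-world transform

    // polys/verts are the object's data, world_verts receives the transformed
    // vertices, transformed_frame holds one 32-bit marker per vertex and
    // current_frame is the running frame counter
    void cull_and_transform(const Polygon *polys, int poly_count,
                            const Vertex *verts, Vertex *world_verts,
                            unsigned int *transformed_frame,
                            unsigned int current_frame,
                            const Vertex &camera)          // camera in object space
    {
        for (int p = 0; p < poly_count; p++) {
            const Polygon *poly = &polys[p];
            // vector from one of the polygon's vertices to the camera
            const Vertex *v0 = &verts[poly->index[0]];
            float dx = camera.x - v0->x;
            float dy = camera.y - v0->y;
            float dz = camera.z - v0->z;
            // backface cull: the polygon faces away from the camera when the
            // dot product of its normal with that vector is not positive
            float dot = poly->normal.x * dx + poly->normal.y * dy + poly->normal.z * dz;
            if (dot <= 0) continue;
            // front-facing: transform its vertices into world space, but only
            // those that haven't already been transformed this frame
            for (int i = 0; i < poly->num_points; i++) {
                int idx = poly->index[i];
                if (transformed_frame[idx] != current_frame) {
                    world_verts[idx] = transform_to_world(verts[idx]);
                    transformed_frame[idx] = current_frame;
                }
            }
        }
    }

The marker array only needs to be cleared once at start-up; after that it never needs resetting, since a stale frame number simply reads as "not yet transformed this frame".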




That about covers it. It is a fairly simple process, but the advantages are many. I would suggest doing your shading in object space as well, because then you don't need to transform the vertex normals into world space at all. I would not recommend clipping in object space, however. Yes, it is possible to transform the view frustum into object space and clip there, but it is certainly not worth it. Think of it this way: most objects are made up of 3- or 4-point polygons, and clipping almost always produces polygons with more points than the original, which just means more work when transforming to world space.