1.
This is going to be the angle between the viewer's view direction and the vector from the viewer to the object's center, minus the angle between that vector and the vector from the viewer to the side of the object closest to the view cone, minus the angle between the view direction and the edge of the view cone (in other words, half the cone's full angle).
The angle between the view direction and the vector to the object's center can be found through a simple application of the dot product: it is the arccosine of the dot product of the viewer's (unit-length) view direction with the normalized vector from the viewer to the object's center.
With a little thought, it becomes obvious that the first angle we must subtract is the arcsine of the object's radius divided by its distance from the viewer. (Hint: draw a right triangle with the hypotenuse along the vector from the viewer to the object's center and the base along the vector from the viewer to the side of the object closest to the view cone.)
The angle we seek is therefore the dot-product angle, minus the arcsine term, minus half the view cone's angle.
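As a concrete sketch of this, writing d for the viewer's (unit-length) view direction, v for the vector from the viewer to the object's center, r for the object's radius, and fov for the cone's full angle -- all illustrative names, with the object assumed to be a sphere -- the angle is arccos(dot(d, v) / |v|) - arcsin(r / |v|) - fov / 2:

    import math

    def angle_to_view_cone(viewer_pos, view_dir, obj_center, obj_radius, cone_angle):
        # view_dir is assumed to be unit length; cone_angle is the cone's full angle.
        v = tuple(obj_center[i] - viewer_pos[i] for i in range(3))   # viewer -> object center
        dist = math.sqrt(sum(c * c for c in v))
        # Angle between the view direction and the vector to the object's center.
        cos_t = sum(view_dir[i] * v[i] for i in range(3)) / dist
        theta = math.acos(max(-1.0, min(1.0, cos_t)))
        # Angle subtended by the near edge of the object: right triangle with
        # hypotenuse of length dist and opposite side of length obj_radius.
        edge = math.asin(min(1.0, obj_radius / dist))
        # Finally subtract the half-angle of the view cone.
        return theta - edge - cone_angle / 2.0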
2.
The minimum number of seconds is found by dividing the angle calculated in the previous exercise by the rate of rotation, in radians per second.
3.
The algorithms are complicated, but they all involve basic intersection calculations and the clipping of one polygon against another.
4.
Calculate the angle subtended at the point by each pair of adjacent triangle vertices, and add the angles together. If they sum to 360 degrees, the point is inside the triangle. If they sum to less than 360 degrees, the point is outside the triangle.
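A minimal sketch of that test, assuming the point lies in the triangle's plane and using an illustrative tolerance for the floating-point comparison:

    import math

    def point_in_triangle(p, a, b, c, eps=1e-5):
        # Sum the angles subtended at p by each pair of adjacent vertices.
        total = 0.0
        for v0, v1 in ((a, b), (b, c), (c, a)):
            u = tuple(v0[i] - p[i] for i in range(3))
            w = tuple(v1[i] - p[i] for i in range(3))
            lu = math.sqrt(sum(x * x for x in u))
            lw = math.sqrt(sum(x * x for x in w))
            if lu == 0.0 or lw == 0.0:
                return True                    # p coincides with a vertex
            cos_a = sum(u[i] * w[i] for i in range(3)) / (lu * lw)
            total += math.acos(max(-1.0, min(1.0, cos_a)))
        # For a point in the triangle's plane, the angles sum to 360 degrees
        # (2*pi radians) exactly when the point is inside.
        return abs(total - 2.0 * math.pi) < eps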
5.
The rays will intersect the planes of polygons on their way from the character to the player -- but just because they intersect the planes does not mean they intersect the polygons themselves. So compute the intersection of each ray with these planes, and then check whether the intersection points lie within the polygons (this is essentially a 2D problem once the polygon and the intersection point are projected onto a plane, such as one perpendicular to the ray).
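A sketch of the per-polygon test, assuming each polygon is planar and convex and is given as a list of 3D vertices; the helper names and the convex same-side check (used here in place of an explicit 2D projection) are illustrative:

    def sub(a, b):  return tuple(a[i] - b[i] for i in range(3))
    def dot(a, b):  return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    def ray_hits_polygon(origin, direction, verts):
        # Plane of the polygon, taken from its first three vertices.
        n = cross(sub(verts[1], verts[0]), sub(verts[2], verts[0]))
        denom = dot(n, direction)
        if abs(denom) < 1e-9:
            return False                       # ray is parallel to the plane
        t = dot(n, sub(verts[0], origin)) / denom
        if t < 0.0:
            return False                       # the plane lies behind the ray
        hit = tuple(origin[i] + t * direction[i] for i in range(3))
        # Point-in-polygon test for a convex polygon: the hit point must lie on
        # the same side of every edge when viewed along the polygon's normal.
        sign = 0.0
        for i in range(len(verts)):
            edge = sub(verts[(i + 1) % len(verts)], verts[i])
            s = dot(cross(edge, sub(hit, verts[i])), n)
            if abs(s) < 1e-12:
                continue                       # hit point lies on this edge
            if sign == 0.0:
                sign = s
            elif s * sign < 0.0:
                return False
        return True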
6.
Generate three projections of the box, along the x-, y-, and z-axes. Then check whether all of the points of the polygon lie within the three 2D rectangles generated by the projections. If they do, the polygon is completely inside the box. Otherwise, it is not.
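A sketch of this test, assuming the box is axis-aligned and given by its minimum and maximum corners (illustrative names). Because the box is convex, it is enough to test the polygon's vertices:

    def polygon_inside_box(verts, box_min, box_max):
        # Project onto the planes perpendicular to the x-, y-, and z-axes in
        # turn (i.e., drop one coordinate) and test the resulting 2D rectangles.
        for axis in range(3):
            u, v = [i for i in range(3) if i != axis]
            for p in verts:
                if not (box_min[u] <= p[u] <= box_max[u] and
                        box_min[v] <= p[v] <= box_max[v]):
                    return False
        return True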
7.
First, render the scene from the viewer's point of view, storing depth information in a depth buffer. Then render the scene from the light's point of view, storing depth information in another depth buffer.
Suppose one transformation brings a point from world space to a point in the viewer's screen space, and another brings a point from world space to a point in the light's screen space. Apply the inverse of the light's transformation to all the points in the light's depth buffer (the transformation is invertible because you have the depth of each pixel). The resulting points are in world space; these are the only points that are lit -- all others are in shadow. Take these lit points and transform them by the viewer's transformation, storing their depths in an off-screen buffer. This brings the points into the viewer's screen space. If the depth value of any of these points is equal to the depth value in the viewer's depth buffer at the corresponding location, then the point should be lit. Otherwise, it should be shadowed. A code sketch of this reprojection step appears at the end of this answer.
Here's a non-technical summary of the approach: take all the points the light sees, and find out where they are located in world space. Then transform those world space points into the viewer's screen space. Some of these points will fall outside the extents of what the viewer can see. These represent points that are lit, but that aren't currently visible to the user. Some of the points will fall inside but will be located further away from the viewer than the points rendered. These correspond to points that are lit, but that are located somewhere behind the polygons that are visible to the viewer. Finally, some of the points will fall inside and will have the same depth value as the rendered points. These points should be lit. All other points should be shadowed.
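The following is a rough sketch of the reprojection step described above (not a full renderer). The helpers unproject_light (applying the inverse of the light's transformation to a light-buffer pixel) and project_view (applying the viewer's transformation), along with the depth tolerance, are assumptions supplied by the surrounding rendering code:

    def build_lit_mask(light_depth, view_depth, unproject_light, project_view, eps=1e-3):
        # light_depth / view_depth are 2D arrays (row-major lists) of depth values.
        h, w = len(view_depth), len(view_depth[0])
        lit = [[False] * w for _ in range(h)]
        for ly, row in enumerate(light_depth):
            for lx, ld in enumerate(row):
                world = unproject_light(lx, ly, ld)   # inverse light transform -> world space
                sx, sy, sz = project_view(world)      # viewer transform -> screen x, y, depth
                x, y = int(sx), int(sy)
                if 0 <= x < w and 0 <= y < h:
                    # Same depth as the surface the viewer rendered -> this pixel is lit.
                    if abs(sz - view_depth[y][x]) < eps:
                        lit[y][x] = True
        return lit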