Month: April 2008

Shadow mapping effect in place

I have finally formalised shadow mapping as an effect within the Vistas framework. The first implementation supports only directional and spotlight shadows, with a point light version still in the works. Adding shadows is now as trivial as placing shadow casters and receivers under an effect node and attaching a shadow mapping effect to the node; the rest is handled by Vistas.
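
As a rough illustration, the workflow amounts to something like the sketch below; every class and method name here is a hypothetical stand-in rather than the actual Vistas API.

    // Hypothetical sketch of the attach-an-effect workflow; these identifiers
    // are illustrative stand-ins, not the actual Vistas classes.
    struct Spatial { virtual ~Spatial() {} };
    struct Effect  { virtual ~Effect() {} };
    struct Light   { };

    struct EffectNode : Spatial {
        void AttachChild(Spatial* /*child*/)  { /* add to the child list */ }
        void AttachEffect(Effect* /*effect*/) { /* effect drives extra passes */ }
    };

    struct ShadowMappingEffect : Effect {
        explicit ShadowMappingEffect(Light* /*source*/) { /* store the light */ }
    };

    void BuildShadowedSubtree(EffectNode* node, Spatial* caster,
                              Spatial* receiver, Light* spot)
    {
        node->AttachChild(caster);     // shadow caster lives under the effect node
        node->AttachChild(receiver);   // so does the shadow receiver
        node->AttachEffect(new ShadowMappingEffect(spot));  // the rest is automatic
    }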

Shadow mapping is essentially a two-pass algorithm: during the first pass, a depth buffer is generated by rendering the scene from the point of view of the light source. During the actual rendering of the scene to the framebuffer, which occurs in the second pass, this depth information is used to determine whether a fragment is occluded from said light source or not. The depth buffer is projected as a texture map over shadow receivers and, for every fragment drawn, the fragment’s distance from the light source is compared to the corresponding value in the projected depth buffer. If the fragment’s depth is less than or equal to the projected value, it is the surface nearest the light and hence lit; if, on the other hand, its depth is greater than the projected value, the fragment is occluded by some closer object and thus receives no direct light from the light source.
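
Expressed in code, the per-fragment test boils down to a single comparison, shown here as plain C++ for clarity; in practice this lives in the fragment shader, usually with a small depth bias added to avoid self-shadowing artefacts.

    // Per-fragment shadow test as described above (plain C++ restatement).
    // fragmentDepth: the fragment's distance from the light source, expressed
    //                in the same normalised space as the shadow map.
    // storedDepth:   the value sampled from the projected depth buffer.
    // Returns true when the fragment is lit, false when it is in shadow.
    bool IsLit(float fragmentDepth, float storedDepth)
    {
        return fragmentDepth <= storedDepth;
    }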

Effects in Vistas can propagate rendering techniques (state information and shader programs) to their children. These, in turn, accumulate techniques and use them during rendering to generate the final image. Besides propagating techniques, effects can also specify a separate culling-sorting-drawing pass; this pass is applied either before the siblings are rendered or, in deferred fashion, right after they have been drawn.
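
Roughly speaking, such an interface could be sketched as follows; the names and signatures below are illustrative assumptions rather than the actual Vistas declarations.

    // Illustrative sketch of an effect base class along the lines described
    // above; names and signatures are assumptions, not the Vistas interface.
    #include <vector>

    class Technique;   // bundle of render state and shader programs
    class Culler;
    class Sorter;
    class Renderer;

    class Effect
    {
    public:
        virtual ~Effect() {}

        // Techniques propagated to the children of the effect node; the
        // children accumulate them and apply them when drawing.
        virtual void GetTechniques(std::vector<Technique*>& techniques) const = 0;

        // Optional culling-sorting-drawing pass owned entirely by the effect.
        virtual bool HasSeparatePass() const { return false; }
        virtual void DoSeparatePass(Culler& culler, Sorter& sorter,
                                    Renderer& renderer) {}

        // Whether the separate pass runs before the subtree is drawn or is
        // deferred until after it has been drawn.
        enum PassOrder { PRE_DRAW, POST_DRAW };
        virtual PassOrder GetPassOrder() const { return PRE_DRAW; }
    };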

The first pass explained above was implemented through the separate culling-sorting-drawing pass; this is required because there is no guarantee that the objects visible to the active camera are the same as those contained within the frustum of the light whose shadows we need to draw. Once the visibility set is generated and sorted, a simple shader is applied to render depth information to an offscreen texture. This first pass is performed before the geometry in the effect subtree is drawn. The second pass is carried out simply by propagating a technique which “darkens” the areas in shadow for a given shadow receiver. Although propagating this technique forces at least two drawing passes, the advantage is that the effect works perfectly well with object materials whose shaders are shadow-map-agnostic. Of course, nothing prevents one from implementing a single-pass version of the shadow mapping effect; the framework fully supports it.
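
Building on the sketch above, the shadow mapping effect maps onto that interface more or less as follows; again this is hypothetical, with the offscreen render target and the depth-only shader details omitted.

    // Hypothetical continuation of the Effect sketch above: the shadow mapping
    // effect renders depth in its separate (pre-draw) pass and darkens shadow
    // receivers through a propagated technique.
    class ShadowMappingEffect : public Effect
    {
    public:
        virtual void GetTechniques(std::vector<Technique*>& techniques) const
        {
            // Projects the depth map over receivers and darkens occluded areas;
            // works alongside materials whose shaders know nothing about shadows.
            techniques.push_back(m_darkenTechnique);
        }

        virtual bool HasSeparatePass() const { return true; }
        virtual PassOrder GetPassOrder() const { return PRE_DRAW; }

        virtual void DoSeparatePass(Culler& culler, Sorter& sorter, Renderer& renderer)
        {
            // Cull and sort against the light frustum rather than the main
            // camera, then render the visible casters with a simple depth-only
            // shader into an offscreen texture (render target binding omitted).
        }

    private:
        Technique* m_darkenTechnique;
    };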


The effects system: Take 3?

I made a U-turn and completely scrapped the idea of having an effect expose a collection of cameras in order to have visibility sets generated for additional views. Instead, I opted for an effect interface which supports both external and internal potential visibility set generation; external generation reuses the visibility information produced when the effect subtree is culled against the main camera. Internal generation allows an effect to manage its own culling strategy and come up with its own visibility set(s), given an instance of the currently active culler. In doing so, the determination of visible elements for additional drawing passes (such as shadow map generation) is fully customisable and active only within the scope of the effect. Moreover, the main culling process, whose job is to reject objects outside the main camera frustum, need not be concerned or involved with specialised culling instances. While pondering how to arrive at an all-encompassing design, it struck me that some effects may actually be optimised through the provision of precomputed visibility sets. Being in control of culling, I can now apply local optimisations instead of having sets generated for me by a delegate which is potentially ignorant of any such optimisations.
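
In sketch form, the revised interface reads something like the following; the names are hypothetical, not the actual Vistas code.

    // Hypothetical sketch of the revised effect interface: an effect either
    // reuses the externally generated visibility set or builds its own from
    // the active culler; names are illustrative only.
    #include <vector>

    class Culler;
    class VisibilitySet;

    class Effect
    {
    public:
        virtual ~Effect() {}

        // External generation: reuse the set produced when the effect subtree
        // was culled against the main camera.
        virtual bool UsesExternalVisibility() const = 0;

        // Internal generation: the effect drives culling itself, given the
        // active culler, and may substitute local optimisations such as
        // precomputed visibility sets.
        virtual void GenerateVisibility(Culler& activeCuller,
                                        std::vector<VisibilitySet*>& sets) = 0;
    };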

This new scheme nevertheless brought its fair share of woes. Since internal culling is now managed by the effect rather than externally, sorting the visibility sets no longer comes for free. Being stuck with an unsorted set, I chose to provide the effects framework with an interface through which sorting can be managed internally too. Again, although a set can be sorted straight away by a call to the currently active sorter instance, an effect can also take advantage of local optimisations that a generalised solution cannot provide.
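
The sorting hook extends the same sketch, again as an assumption rather than the actual declaration.

    // Hypothetical extension to the revised Effect sketch above: internally
    // generated sets are sorted by the effect itself, either by delegating to
    // the active sorter or through a local, specialised strategy.
    class Sorter;
    class VisibilitySet;

    class SortableEffect /* : public Effect, continuing the sketch above */
    {
    public:
        virtual ~SortableEffect() {}

        // A typical implementation would simply hand the set to the currently
        // active sorter; specialised effects can instead exploit local
        // knowledge (e.g. skipping material sorting for a depth-only pass).
        virtual void SortVisibility(Sorter& activeSorter, VisibilitySet& set) = 0;
    };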

This new setup provides a flexible and extensible foundation for effects. Of course, the proof of the pudding is in the eating: I will only be able to fully assess the ramifications of the current design once I have actually built some complex effects and integrated them within the scene graph (the first of these, shadow mapping, is what I'm off to build next).

The return of the laptop

I have most of the effects system in place, but there are still a few details which need to be reconciled with the rest of the framework. For instance, say the scene graph contains a subtree onto which shadow mapping must be applied. For a directional or a spotlight source, the shadow map must be rendered in a separate pass, which may or may not contain the same geometry as that visible within the frustum of the main camera. Therefore, a separate culling pass is required, with the frustum adjusted to match the position and direction of the light source. With a point light source, light is emitted equally in all directions; this requires more shadow maps and thus more culling passes. The situation is much the same for dynamic environment maps and mirrors, so I thought of extending the Effect interface to allow access to a set of cameras; these cameras are queried during the culling stage by the active culler, and the potential visibility sets for the respective frustums are generated and stored. The culling need not occur every frame, of course, and the interface will provide a means of specifying a caching strategy and the expiration of a visibility set. This also avoids multiple culling calls within recursive effects during the same frame.
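
As a sketch, the camera-set idea amounts to something like this; the names are purely illustrative, and this is the approach that “The effects system: Take 3?” above eventually scraps.

    // Hypothetical sketch of exposing a set of cameras on the Effect interface
    // so the active culler can build and cache additional visibility sets;
    // all names are illustrative.
    #include <vector>

    class Camera;
    class VisibilitySet;

    class Effect
    {
    public:
        virtual ~Effect() {}

        // Cameras for the extra views: one for a directional or spotlight
        // shadow map, several for a point light, an environment map or a mirror.
        virtual void GetCullingCameras(std::vector<Camera*>& cameras) const = 0;

        // Number of frames a cached visibility set remains valid; this also
        // guards against repeated culling of recursive effects within a frame.
        virtual unsigned int GetVisibilityExpiration() const { return 1; }

        // Called by the culling stage to store the set generated for a camera.
        virtual void StoreVisibility(const Camera& camera, VisibilitySet* set) = 0;
    };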