Image-based Lighting

In the last few days I’ve been working on integrating image-based lighting into the Illumina renderer. At the moment, I have a preliminary setup running on light probes from low-dynamic-range (LDR) images. The light probes are sampled using a simple cosine-weighted distribution, which is far from optimal: at low sampling rates, the output suffers from excessive variance. The next step is to implement a better sampling strategy…
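
For the curious, here is a minimal sketch of the mapping behind that distribution (illustrative C++, not the Illumina implementation; the Vector3 type and the orientation about +Z are my assumptions):

```cpp
#include <algorithm>
#include <cmath>

struct Vector3 { float x, y, z; };

// Malley's method: pick a point uniformly on the unit disc, then project it
// up onto the hemisphere; the resulting directions about +Z follow a
// cosine-weighted density, pdf(w) = cos(theta) / pi.
Vector3 CosineSampleHemisphere(float u1, float u2)
{
    const float pi  = 3.14159265f;
    const float r   = std::sqrt(u1);            // disc radius
    const float phi = 2.0f * pi * u2;           // disc angle

    Vector3 d;
    d.x = r * std::cos(phi);
    d.y = r * std::sin(phi);
    d.z = std::sqrt(std::max(0.0f, 1.0f - u1)); // cos(theta)
    return d;
}
```

Since the density already contains the cosine factor of the rendering equation, the Monte Carlo estimator simplifies nicely; the variance problem stems from the fact that the distribution completely ignores where the probe’s energy is actually concentrated.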

Panagia Angeloktisti Church (Kiti) model courtesy of Jassim Happa, University of Warwick.

Sandbox applications

For the last few days we’ve all been working on wrapping up the projects and writing the dissertations, which are due in three weeks’ time. I have created an application object for Vistas, allowing users to create and set up applications easily and quickly. On top of the application object I then built a basic sandbox, which takes the onus of additional setup and world manipulation off the user’s shoulders.
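
To give an idea of the intent, a sandbox session might boil down to something like the sketch below (the names are purely illustrative, not the actual Vistas API):

```cpp
// Purely illustrative names; not the actual Vistas API.
class VistasApplication
{
public:
    virtual ~VistasApplication() {}
    virtual void OnInitialise() {}          // populate the world
    virtual void OnUpdate(float elapsed) {} // per-frame world manipulation
    void Run() { /* window, device and main loop handled by the framework */ }
};

class MySandbox : public VistasApplication
{
public:
    void OnUpdate(float elapsed) { /* move objects, react to input, ... */ }
};

int main()
{
    MySandbox sandbox;
    sandbox.Run(); // additional setup happens behind the scenes
    return 0;
}
```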

Colin has been building his own Gravitas sandbox (based on the Vistas sandbox), which additionally allows one to load physics scenes from files in a custom text format. The loader uses the MeshForge object to create Vistas representations of their physics counterparts.

The video below shows the first fruits of the simulation platform:

And here are some screenshots…

Final touches

More than two months have gone by since I last updated the Vistas progress log, and during this time, besides sitting for my exams (and passing all of them, thank God), I’ve started closing off the first iteration of this project. I am pretty happy with the outcome, and more so considering the constraints within which research, design and development took place.

Since the last update, I’ve created a runtime typing system for better scene management and extension, modified the high-level rendering pipeline, and re-written the effect system for the umpteenth time. Although the previous effect system was good, it was way too complex. The new version is simpler, but no less powerful.

I have also added an application framework for effortlessly setting up Vistas-based applications; now I’m working with Colin and Gordon to prepare some sort of sandbox within which we can have a unified graphics-physics-scripting system to meddle with quickly and easily. The reason we’re working on a sandbox at this stage is simply that we need a showcase that is hassle-free to set up.

Anyway, I’ll try to post in somewhat more detail next time. Before I leave, I think credit is due to Mr. Paul Bourke for the cube map used in the screenshots above. As for the Audi model, I don’t know who the author is – Ben, my brother, sent it to me, but he said he didn’t make it… oh, and thanks to Colin for UV-mapping the model; it would have taken me ages to do.

Shadow mapping effect in place

I have finally formalised shadow mapping as an effect within the Vistas framework. The first implementation supports only directional and spotlight shadows, with a point light version still in the works. Adding shadows is now as trivial as placing shadow casters and receivers under an effect node and attaching a shadow mapping effect to the node; the rest is handled by Vistas.
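
To illustrate (the identifiers below are hypothetical, not the actual Vistas API):

```cpp
// Hypothetical usage sketch; identifiers do not reflect the real Vistas API.
EffectNode* shadowNode = new EffectNode();
shadowNode->AttachEffect(new ShadowMappingEffect(spotLight));

shadowNode->AttachChild(casterGeometry);   // shadow casters
shadowNode->AttachChild(receiverGeometry); // shadow receivers

sceneRoot->AttachChild(shadowNode);        // the rest is handled by Vistas
```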

Shadow mapping is basically a two-pass algorithm: during the first pass, a depth buffer is generated by rendering the scene from the point of view of the light source. During the actual rendering of the scene to the framebuffer, which occurs in a second pass, this depth information is used to determine whether a fragment is occluded from that light source or not. The depth buffer is projected as a texture map over shadow receivers and, for every fragment drawn, its distance from the light source is compared to the corresponding value in the projected depth buffer. If the fragment’s depth is less than the projected value, it is closer to the light source and hence lit; if, on the other hand, its depth is greater than the projected value, the fragment is occluded by some closer object and thus receives no direct light from the light source.
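
The comparison at the heart of the second pass can be sketched as follows (illustrative C++; in practice it runs in a fragment shader, and the bias term is a common addition to combat shadow acne rather than part of the description above):

```cpp
// Returns 1 if the fragment receives direct light, 0 if it is in shadow.
float ShadowFactor(float fragmentDepth,  // fragment depth in light space
                   float projectedDepth, // depth fetched from the shadow map
                   float bias = 0.0015f) // offset fighting self-shadowing
{
    // Nearer than the stored depth: lit; farther: occluded by a closer object.
    return (fragmentDepth - bias <= projectedDepth) ? 1.0f : 0.0f;
}
```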

Effects in Vistas can propagate rendering techniques (state information and shader programs) to their children, which in turn accumulate these techniques and use them during rendering to generate the final image. Besides propagating techniques, effects can also specify a separate culling-sorting-drawing pass; this pass is applied either before the siblings are rendered or, in deferred fashion, right after they are drawn.
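
A speculative sketch of that interface (the real Vistas classes surely differ in naming and detail):

```cpp
class TechniqueStack;
class Renderer;
class Culler;

// Speculative sketch; not the actual Vistas interface.
class Effect
{
public:
    virtual ~Effect() {}

    // Techniques (render state and shader programs) pushed down to child
    // geometry, which accumulates and applies them at draw time.
    virtual void PropagateTechniques(TechniqueStack& techniques) = 0;

    // Optional self-contained culling-sorting-drawing pass.
    virtual bool HasSeparatePass() const { return false; }

    // false: run the pass before the siblings are rendered;
    // true:  run it in deferred fashion, right after they are drawn.
    virtual bool IsDeferredPass() const { return false; }

    virtual void DoSeparatePass(Renderer&, Culler&) {}
};
```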

The first pass explained above was implemented through the separate culling-sorting-drawing pass; this is required because there is no guarantee that the objects visible to the active camera are the same as those contained within the frustum of the light whose shadows we need to draw. Once the visibility set is generated and sorted, a simple shader renders depth information to an offscreen texture. This pass is performed before the geometry in the effect subtree is drawn. The second pass is carried out simply by propagating a technique which “darkens” the areas in shadow for a given shadow receiver. Although propagating this technique forces at least two drawing passes, the advantage lies in the fact that the effect works perfectly well with object materials whose shaders are shadow-map-agnostic. Of course, nothing prevents one from implementing a single-pass version of the shadow map effect; the framework fully supports it.
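
Continuing the sketch above, the shadow map effect might then read as follows; every type and method used here is hypothetical:

```cpp
class VisibleSet;
class Frustum;
class Texture;
class Shader;
class Technique;

// Hypothetical shadow map effect built on the Effect sketch above.
class ShadowMapEffect : public Effect
{
public:
    bool HasSeparatePass() const { return true; }
    bool IsDeferredPass() const { return false; } // depth pass runs first

    // First pass: cull against the light frustum (which generally differs
    // from the active camera's) and render depth to an offscreen texture.
    void DoSeparatePass(Renderer& renderer, Culler& culler)
    {
        VisibleSet* visible = culler.Cull(m_lightFrustum);
        renderer.SetRenderTarget(m_depthTexture);
        renderer.Draw(visible, m_depthShader);
        renderer.RestoreRenderTarget();
    }

    // Second pass: propagate a technique that darkens shadowed areas; it
    // composes with material shaders that know nothing about shadow maps.
    void PropagateTechniques(TechniqueStack& techniques)
    {
        techniques.Push(m_darkeningTechnique);
    }

private:
    Frustum*   m_lightFrustum;       // placeholders for whatever
    Texture*   m_depthTexture;       // Vistas actually uses
    Shader*    m_depthShader;
    Technique* m_darkeningTechnique;
};
```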

The effects system: Take 3?

I made a U-turn and totally scrapped the idea of having an effect expose a collection of cameras in order to have visibility sets generated for additional views. Instead, I opted for an effect interface which supports both external and internal potential visibility set generation. External generation reuses the visibility information from culling the effect subtree against the main camera; internal generation allows an effect to manage its own culling strategy and come up with its own visibility set(s), given an instance of the currently active culler. In doing so, the determination of visible elements for additional drawing passes (such as shadow map generation) is fully customisable and active only within the scope of the effect. Moreover, the main culling process, which is concerned with rejecting objects outside the main camera frustum, need not be concerned or involved with specialised culling instances.

While pondering how to arrive at an all-encompassing design, it struck me that some effects may actually be optimised through the provision of precomputed visibility sets. Being in control of the culling process, I can now apply local optimisations to it, instead of having sets generated for me by a delegate which is potentially ignorant of any such optimisations.

This new scheme brought its fair share of woes, nevertheless. Since internal culling is now managed by the effect rather than externally, sorting visibility sets no longer comes for free. Being stuck with an unsorted set, I chose to extend the effects framework with an interface by which sorting can be managed internally too. Again, although the set can be sorted straight away by a call to the currently active sorter instance, an effect can also apply local optimisations which a generalised solution cannot provide.
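
For concreteness, here is a speculative sketch of the resulting interface (all names are illustrative, not the actual Vistas API):

```cpp
class VisibleSet;
class Culler;
class Sorter;

// Speculative sketch of the culling/sorting hooks; not the real interface.
class Effect
{
public:
    virtual ~Effect() {}

    // External generation: reuse the visibility set already produced by
    // culling the effect subtree against the main camera.
    virtual void SetVisibleSet(VisibleSet& mainCameraSet) = 0;

    // Internal generation: the effect drives its own culling, given the
    // currently active culler, and may substitute precomputed sets.
    virtual void GenerateVisibleSets(Culler& activeCuller) = 0;

    // Sorting no longer comes for free for internally generated sets:
    // either defer to the active sorter or apply a cheaper local ordering.
    virtual void SortVisibleSets(Sorter& activeSorter) = 0;
};
```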

This new setup provides a flexible and extensible foundation for effects. Of course, the proof is in the pudding; I will be able to fully assess the ramifications of the current design only once I’ve actually built some complex effects and integrated them within the scene graph (and the first one I’m off to build is, effectively, shadow mapping).