Image-based Lighting

Over the last few days I’ve been working on integrating image-based lighting into the Illumina renderer. At the moment I have a preliminary setup running on light probes built from low-dynamic-range (LDR) images. The probes are sampled using a simple cosine-weighted distribution, which is far from optimal: at low sampling rates, the output suffers from excessive variance. The next step is to implement a better sampling strategy…
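For reference, cosine-weighted hemisphere sampling maps two uniform random numbers onto a direction whose density is proportional to cos θ (Malley’s method: sample the unit disc uniformly, then project up onto the hemisphere). The sketch below is my own illustration of the idea, not Illumina’s actual code:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map two uniform random numbers u1, u2 in [0, 1) to a direction on the
// hemisphere about +Z with density proportional to cos(theta).
// Names and conventions here are illustrative, not taken from Illumina.
Vec3 sampleCosineHemisphere(float u1, float u2)
{
    const float kPi = 3.14159265358979f;

    // sqrt(u1) with a uniform angle distributes samples uniformly over
    // the unit disc; lifting the disc onto the hemisphere yields the
    // cosine weighting (Malley's method).
    float r   = std::sqrt(u1);
    float phi = 2.0f * kPi * u2;

    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // z = cos(theta)

    return { x, y, z };
}
```

Because the sample density cancels the cosine factor of the rendering equation, the remaining variance comes entirely from the probe’s radiance; bright, localised features in the probe are exactly what makes low sample counts noisy, which is why a strategy that importance-samples the probe itself is a natural candidate.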

Panagia Angeloktisti Church (Kiti) model courtesy of Jassim Happa, University of Warwick.

Sandbox applications

For the last few days we’ve all been working on wrapping up our projects and writing the dissertations, which are due in three weeks’ time. I have created an application object for Vistas, allowing users to create and set up applications quickly and easily. On top of the application object I then built a basic sandbox which takes the onus of additional setup and world manipulation off the user’s shoulders.
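To give an idea of the intent, a Vistas-based application could reduce to something like the sketch below; every class and method name here is a hypothetical stand-in, not the actual Vistas API:

```cpp
#include <iostream>

// Hypothetical stand-ins for the Vistas application object; all names
// are illustrative rather than taken from the actual framework.
class Application
{
public:
    virtual ~Application() = default;
    virtual void onInitialise() = 0; // user hook: populate the world

    int run()
    {
        onInitialise();
        // Window creation, device setup and the main loop would be
        // driven by the framework here, out of the user's way.
        return 0;
    }
};

class SandboxApplication : public Application
{
public:
    void onInitialise() override
    {
        // With setup handled by the application object, user code is
        // reduced to loading and manipulating world content.
        std::cout << "Loading demo scene...\n";
    }
};

int main()
{
    SandboxApplication app;
    return app.run();
}
```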

Colin has been building his own Gravitas sandbox (based on the Vistas Sandbox), which additionally allows one to load physics scenes from files in a custom text format. The loader uses the MeshForge object to create Vistas-side representations of the physics objects.

The video below shows the first fruits of the simulation platform:

And here are some screen shots…

Final touches

More than two months have gone by since I last updated the Vistas progress log. During this time, besides sitting for my exams (and passing all of them, thank God), I’ve started closing off the first iteration of this project. I am pretty happy with the outcome, even more so considering the constraints within which research, design and development took place.

Since the last update, I’ve created a runtime typing system for better scene management and extension, modified the high-level rendering pipeline, and rewritten the effect system for the umpteenth time. Although the former effect system was good, it was way too complex. The new version is simpler, but no less powerful.

I have also added an application framework for effortlessly setting up Vistas-based applications; now I’m working with Colin and Gordon on a sandbox that gives us a unified graphics-physics-scripting system we can meddle with quickly and easily. The reason we’re working on a sandbox at this stage is simply that we need a showcase that is hassle-free to set up.

Anyway, I’ll try to post in somewhat more detail next time. Before I leave, I think credit is due to Mr. Paul Bourke for the cube map used in the screen shots above. As for the Audi model, I don’t know who the author is: Ben, my brother, sent it to me, but he said he didn’t do it… oh, and thanks to Colin for UV mapping the model; it would have taken me ages to do.

Shadow mapping effect in place

I have finally formalised shadow mapping as an effect within the Vistas framework. The first implementation supports only directional and spotlight shadows, with a point light version still in the works. Adding shadows is now as trivial as placing shadow casters and receivers under an effect node and attaching a shadow mapping effect to the node; the rest is handled by Vistas.
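In scene-graph terms, the setup described above might look roughly like the sketch below; `EffectNode`, `ShadowMappingEffect` and the rest are illustrative stand-ins rather than the actual Vistas classes:

```cpp
#include <memory>
#include <vector>

// Illustrative scene-graph stand-ins, not the actual Vistas types.
struct Node
{
    std::vector<std::shared_ptr<Node>> children;
    void attachChild(std::shared_ptr<Node> child) { children.push_back(std::move(child)); }
};

struct GeometryNode : Node { };

struct Effect { virtual ~Effect() = default; };
struct ShadowMappingEffect : Effect { /* light, shadow map resolution, ... */ };

struct EffectNode : Node
{
    std::shared_ptr<Effect> effect;
    void setEffect(std::shared_ptr<Effect> e) { effect = std::move(e); }
};

int main()
{
    // Group shadow casters and receivers under a single effect node...
    auto shadowed = std::make_shared<EffectNode>();
    shadowed->attachChild(std::make_shared<GeometryNode>()); // caster
    shadowed->attachChild(std::make_shared<GeometryNode>()); // receiver

    // ...attach the shadow mapping effect, and the framework does the rest.
    shadowed->setEffect(std::make_shared<ShadowMappingEffect>());
    return 0;
}
```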

Shadow mapping is basically a two-pass algorithm. During the first pass, a depth buffer is generated by rendering the scene from the point of view of the light source. During the actual rendering of the scene to the framebuffer, which occurs in the second pass, this depth information is used to determine whether a fragment is occluded from said light source or not. The depth buffer is projected as a texture map over shadow receivers, and for every fragment drawn, its distance from the light source is compared to the respective value in the projected depth buffer. If the fragment’s depth is less than or equal to the projected value, nothing lies between it and the light source and it is hence lit; if, on the other hand, its depth is greater than the projected value, the fragment is occluded by some closer object and is thus not receiving direct light from the light source.
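The comparison at the heart of that test is a one-liner; here it is sketched on the CPU in C++ for clarity (in practice it runs in a fragment shader, and the names and bias value below are illustrative):

```cpp
// Per-fragment shadow test, sketched on the CPU for clarity; in practice
// this comparison runs in a fragment shader against the projected depth
// map. A small bias offsets the comparison to avoid self-shadowing
// artefacts ("shadow acne") caused by limited depth precision.
bool isLit(float fragmentDepthFromLight, float occluderDepth, float bias = 0.0005f)
{
    // occluderDepth stores the distance of the closest surface along this
    // light ray; any fragment farther away than that is in shadow.
    return fragmentDepthFromLight - bias <= occluderDepth;
}
```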

Effects in Vistas can propagate rendering techniques (state information and shader programs) to their children. These, in turn, accumulate techniques and use them during rendering to generate the final image. Besides propagating techniques, effects can also specify a separate culling-sorting-drawing pass; this pass is applied either before the siblings are rendered or, in deferred fashion, right after they are drawn.
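A skeletal interface capturing that contract might look as follows; the type and method names are my own illustrative choices, not the real Vistas signatures:

```cpp
#include <vector>

// Illustrative stand-ins for technique and render-context types.
struct Technique { /* render states, shader programs */ };
struct RenderContext { /* active camera, render targets, ... */ };

// Sketch of the contract described above: an effect may push techniques
// down to the nodes beneath it, and may optionally run its own
// culling-sorting-drawing pass before or after the geometry is drawn.
class Effect
{
public:
    virtual ~Effect() = default;

    // Techniques accumulated by nodes and applied at draw time.
    virtual void propagateTechniques(std::vector<Technique>& out) { }

    // Optional separate pass, run before the effect's geometry is
    // rendered (e.g. to build a shadow map) or deferred until after.
    virtual void preDraw(RenderContext& context)  { }
    virtual void postDraw(RenderContext& context) { }
};
```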

The first pass explained above is implemented through the separate culling-sorting-drawing pass; this is required because there is no guarantee that the objects visible to the active camera are the same as those contained within the frustum of the light whose shadows we are required to draw. Once the visibility set is generated and sorted, a simple shader renders the depth information to an offscreen texture. This first pass is performed before the geometry in the effect subtree is drawn. The second pass is carried out by propagating a technique which “darkens” the areas in shadow for a given shadow receiver. Although propagating this technique forces at least two drawing passes, the advantage is that the effect works perfectly well with object materials whose shaders are shadow-map-agnostic. Of course, nothing prevents one from implementing a single-pass version of the shadow map effect; the framework fully supports it.
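Against the skeletal Effect interface sketched earlier, and reusing its types, the two passes would slot in roughly as follows (again, the names are hypothetical, not Vistas’ actual implementation):

```cpp
// Builds on the illustrative Effect/Technique/RenderContext sketch above;
// none of these names are taken from the actual Vistas implementation.
class ShadowMappingEffect : public Effect
{
public:
    void preDraw(RenderContext& context) override
    {
        // Pass 1: cull and sort against the light's frustum, which may
        // differ from the active camera's visibility set, then render
        // depth to an offscreen texture with a simple depth-only shader.
    }

    void propagateTechniques(std::vector<Technique>& out) override
    {
        // Pass 2: hand shadow receivers a "darkening" technique. This
        // costs at least one extra drawing pass, but composes cleanly
        // with materials whose shaders know nothing about shadow maps.
        out.push_back(Technique{});
    }
};
```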