Renderman Learning on a Saturday
I went back to work today to read up on lighting and rendering tutorials. We are working with Pixar Renderman in our pipeline.
Some of my peers have already been doing lighting while I was away on reservist. There was some training given to us by our lighting lead and a visiting CG supervisor.
I've recently learnt more about Ambient Occlusion. I learnt that the typical ambient occlusion pass I've been doing all this while is termed (a new term for me) NDC (Normalized Device Coordinates) or screen-space occlusion.
This means that the occlusion pixels are in screen space; the occlusion values are relative to your camera view. If your camera moves, the position of the scene through your viewport changes, and the occlusion pixel values change with it.
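To convince myself why a screen-space pass is camera-dependent, here is a minimal sketch in plain Python (not RenderMan): the same world-space surface point projects to different NDC coordinates once the camera moves, so an occlusion value stored per pixel no longer corresponds to the same surface point. The pinhole camera, positions, and focal length are all made up for illustration.

```python
def world_to_ndc(point, camera_pos, focal=1.0):
    """Project a world-space point to NDC via a pinhole camera at
    camera_pos looking down the -Z axis (no rotation, for simplicity)."""
    x, y, z = (p - c for p, c in zip(point, camera_pos))
    depth = -z  # perspective divide uses distance along the view axis
    return (focal * x / depth, focal * y / depth)

surface_point = (1.0, 0.5, -5.0)  # a fixed point in world space

ndc_a = world_to_ndc(surface_point, camera_pos=(0.0, 0.0, 0.0))
ndc_b = world_to_ndc(surface_point, camera_pos=(2.0, 0.0, 0.0))

# The same surface point lands on different pixels after the camera
# moves, so per-pixel (screen-space) occlusion values go stale.
print(ndc_a)  # (0.2, 0.1)
print(ndc_b)  # (-0.2, 0.1)
```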
I am now exposed to a new alternative for creating occlusion renders. We can bake the occlusion as point cloud values in world space (instead of screen space), and during rendering the renderer looks up these values according to which point the renderer is sampling in world space.
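The world-space lookup idea can be sketched in plain Python too (this is not RenderMan's actual point-cloud tooling; the cloud, positions, and values below are invented). Occlusion is baked once per world-space point, and at render time the shader looks up the nearest baked point, so the result depends only on the shading position, not on the camera.

```python
import math

# A toy baked point cloud: (world-space position, baked occlusion value).
baked_cloud = [
    ((0.0, 0.0, 0.0), 0.8),
    ((1.0, 0.0, 0.0), 0.3),
    ((0.0, 1.0, 0.0), 0.5),
]

def lookup_occlusion(shade_point, cloud):
    """Return the baked occlusion of the nearest cloud point.
    A real renderer would filter/blend several nearby points instead
    of snapping to a single nearest neighbour."""
    return min(cloud, key=lambda entry: math.dist(entry[0], shade_point))[1]

# The lookup keys off the world-space shading position, so moving the
# camera does not change the value returned for this surface point.
print(lookup_occlusion((0.9, 0.1, 0.0), baked_cloud))  # 0.3
```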
Apparently there are pros and cons, advantages and trade-offs, in choosing one method over the other.
Another type of occlusion is Reflection Occlusion. Until the middle of last year, I did not really understand what it was. Here is a link I came across while researching this:
Ambient / Reflection Occlusion Tutorial (in pdf format) from LAMRUG.org.
More on that later when I learn more in-depth.