+1 tag for Google+

Wednesday, 10 December 2014

McDonald's Ebi TVC 2014


Here's a TV commercial I was part of. Most of my work involved rotoscoping the girls out of the background in their original footage, and removing portions of clips visible under their hair accessories.

Monday, 8 December 2014

Jewel Changi Airport Brand Story


Here's a TV branding project that Iceberg Design undertook, and I was involved in.

My shots involved creating, laying out and rendering the coloured-ball shots where the lady touches her lips to the balls against the black background (0:29 - 0:30).

In the ending shot in the background where lines appear and form the final logo (1:09 - 1:15), I generated the lines and refined them under the art direction of the creative director.


Saturday, 6 December 2014

My Answer on Quora: What is the standard lens length in Maya 3D?

I chanced upon a question on Quora:

Question: 
What is the standard lens length in Maya 3D?

I decided to answer it, and here is my reply. I hope it sheds some light for people wanting to know how to establish a relationship between real-world camera settings and their virtual counterparts in Maya.

Answer: 

Indeed, many artists with no experience in photography will not be able to easily understand the relationships between focal length and sensor sizes / ratios.

Having worked in the "camera department" on Hollywood films, I have gained great insight into the relationship between the physical camera and the virtual one in Maya.

Maya does not make it easier. What we know in the VFX/Film world as "sensor size" / "film back" / "film format", is known in Maya as "Camera Aperture". This attribute is found in any Maya camera's shape node. In the attribute editor under the Film Back section you will see: Film Gate, Camera Aperture, Film Aspect Ratio, and Lens Squeeze Ratio.

As you said, there is a relationship between focal length and sensor size.

Back in the Attribute Editor for the Maya camera, the way we get the correct sensor size or film format for our camera gate is to set the Film Gate attribute to "35mm Full Aperture". This is actually a preset that sets the "Camera Aperture" attribute to 0.980 x 0.735.

From the Wikipedia page for the Super 35 film format (4th paragraph), I quote: "If using 4-perf, the Super 35 camera aperture is 24.89 mm × 18.66 mm (0.980 in × 0.735 in)".

That is exactly what Maya is giving us (in inches, which is frustrating when we are describing 35mm film in millimeters).
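As a quick sanity check (plain Python, outside Maya), converting Maya's inch values back to millimeters recovers the Super 35 dimensions quoted above:

```python
# Maya's "Camera Aperture" for the 35mm Full Aperture preset, in inches
aperture_in = (0.980, 0.735)

# 1 inch = 25.4 mm exactly
aperture_mm = tuple(round(v * 25.4, 2) for v in aperture_in)

print(aperture_mm)  # (24.89, 18.67)
```

The 0.01 mm difference on the height (18.67 vs the quoted 18.66) is just rounding in the inch figures.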

From the same Wikipedia article we also learn the film dimensions of the 35mm Academy format, which is 21.95 mm × 16.00 mm (0.864 in × 0.630 in). That is what the Maya Film Gate preset gives us if we switch to 35mm Academy.

However, we are not limited to the presets found in the Film Gate attribute. Knowing that Maya is just filling in the film back dimensions enables us to input measurements from our own camera sensors, even if those measurements are non-standard.

In this context, whatever focal length you now set will give you the actual framing of a real-world camera with the same sensor size / film back.

We do all this to make sure that the numbers all make sense: the dimensions of the sensor size and the focal length.
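To put a number on the focal length / sensor size relationship, here is a small plain-Python sketch (the standard pinhole-camera formula, nothing Maya-specific) that computes the horizontal angle of view from a focal length and a film back width:

```python
import math

def horizontal_fov(focal_mm, aperture_width_mm):
    """Horizontal angle of view (degrees) for a given focal length
    and film back / sensor width, using the pinhole camera formula."""
    return math.degrees(2.0 * math.atan(aperture_width_mm / (2.0 * focal_mm)))

# A 35mm lens on a Super 35 film back (24.89 mm wide)
print(round(horizontal_fov(35.0, 24.89), 1))  # 39.1
```

The same focal length on a smaller film back yields a narrower angle of view, which is why matching the real camera's sensor dimensions matters before you trust the framing.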

Moving forward, we are faced with two settings, and two different framings when you look at the viewport: one is our camera's film back / sensor ratio, and the other is the render resolution. Maya gives us the flexibility to have both.

However, it becomes wildly confusing if we do not know what we are looking at. Without guides, we cannot tell which framing the viewport is showing us. Even when you render, you are only seeing the framing of your render resolution, not the framing of your film gate.

To see both, this is my standard workflow for any camera I want to look through and eventually render:

In the camera's Attribute Editor:
- set "Fit Resolution Gate" to "Overscan"

Under the Display Options section:
- turn on "Display Film Gate". This displays our film back / sensor boundary.
- turn on "Display Resolution". This displays our rendering resolution boundary.
- turn off "Display Gate Mask"
- set "Overscan" to 1.05

All these settings ensure you see two boundary boxes: one drawn with a solid line and the other with a dotted line. The dotted box defines the film gate / sensor bounds, and the solid box defines your rendering boundary, i.e. the resolution of your render.
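The checklist above can also be applied with a few lines of script. Here is a sketch using maya.cmds ("cameraShape1" is a placeholder for your own camera shape node):

```python
import maya.cmds as cmds

cam = 'cameraShape1'  # placeholder: substitute your own camera shape node

cmds.setAttr(cam + '.filmFit', 3)            # Fit Resolution Gate: Overscan
cmds.setAttr(cam + '.displayFilmGate', 1)    # show film back / sensor boundary
cmds.setAttr(cam + '.displayResolution', 1)  # show render resolution boundary
cmds.setAttr(cam + '.displayGateMask', 0)    # hide the gate mask overlay
cmds.setAttr(cam + '.overscan', 1.05)        # pad the view slightly
```

This only configures scene attributes, so it is safe to re-run on the same camera.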

I hope this helps.


Tuesday, 25 November 2014

Distance Between 2 Positions in 3D Space

The ability to find the distance between 2 points in 3D space is a really important function I need to use time and again.

I found this website that is really helpful with clear explanations on the use of the formula.

http://darkvertex.com/wp/2010/06/05/python-distance-between-2-vectors/

From the webpage I learnt the formula for the distance d between points A = (Ax, Ay, Az) and B = (Bx, By, Bz):

d = √((Ax − Bx)² + (Ay − By)² + (Az − Bz)²)

I was going to use the distance function in an expression in Maya, so I had to write it with MEL commands:

vector $a = `xform -q -ws -a -rp "objA"`;
vector $b = `xform -q -ws -a -rp "objB"`;
// doing the subtractions and squaring first
float $myDist = `pow ($a.x-$b.x) 2` + `pow ($a.y-$b.y) 2` + `pow ($a.z-$b.z) 2`;
// applying the square root
$myDist = `sqrt $myDist`;

I hope it helps if you are looking for the same information.

Additional notes:
I am writing this a few days after my post because I found a more efficient way to represent the formula.

Instead of using backticks "`" for pow and sqrt, I found that Maya expressions support the function equivalents of these commands: pow( ) and sqrt( ).

So the shortened single-line expression would be:
$myDist = sqrt(pow(($a.x-$b.x), 2) + pow(($a.y-$b.y), 2) + pow(($a.z-$b.z), 2));
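For completeness, the same formula in plain Python (the point values here are just hypothetical examples; any two 3-component sequences will work):

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points a and b."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# a classic 3-4-5 triangle as a sanity check
print(distance((0.0, 0.0, 0.0), (3.0, 4.0, 0.0)))  # 5.0
```
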

Saturday, 22 November 2014

Adding ObjectID Pass to Render Output in Maya Part 2: The Script

In my previous post, Autodesk's article showed us how to create an ObjectId pass in our scene.

In the last post I also posted a snippet of PyMel script that creates a unique Id number for each object so each one will turn up a uniquely different colour from its neighbour in the objectId pass.

However, the process still required us to manually create a Mental Ray Output Pass in the camera's Mental Ray section.

Since then I have gone on to write a PyMel script to automate the process. So here is the script.

# -- code start --
from pymel.core import *
def uniqObjIdAssign():
    # Written by Patrick Woo
    # usage:   
    # - make sure mental ray is loaded, and set as your current renderer
    # - select the camera
    # - then select all the objects to give unique objectIDs to, and run this script
    # for more info go to:
    # http://patrickvfx.blogspot.com/2014/11/adding-objectid-pass-to-render-output.html
    sel = ls(sl=True)
    if not sel or not sel[0].getShape() or sel[0].getShape().nodeType() != 'camera':
        print 'first object selected must be a camera. script aborted.'
        return
    camShape = sel[0].getShape()
    if ls('*miOutputPass_ObjId', type='mentalrayOutputPass'):
        # user has run this before
        select(ls('*miOutputPass_ObjId', type='mentalrayOutputPass'), replace=True)
        print 'an existing mentalrayOutputPass already exists. delete it, or rename it, then run the script again'
        return
    sel = ls(sl=True)[1:]
    opPassNode = createNode("mentalrayOutputPass", name='miOutputPass_ObjId')
    opPassNode.dataType.set(11) # set frame buffer type to "label(integer)1x32bit"
    opPassNode.fileMode.set(1) # sets the outputPass node to output to a file upon rendering
    opPassNode.fileName.set("_objId")
    connectTriesCounter = 0
    connectedFlag = False
    while not connectedFlag:
        try:
            connectAttr(opPassNode.message, camShape.miOutputShaderList[connectTriesCounter], f=False)
            connectedFlag = True        
        except:
            connectTriesCounter += 1
        if connectTriesCounter > 20:
            print 'too many tries to connect %s to %s, aborting.'%(opPassNode.name(), camShape.name())
            print "check the connections on %s's mental ray -> output shaders before running again"%camShape.name()
            return
    
    counter = 1 # 0 is black in colour 
    for x in sel:
        if x.getShape():
            # makes sure that the transform node isn't just an empty group 
            if "miLabel" not in str(x.listAttr()):
                addAttr (x, ln='miLabel', at='long', k=True);
            x.miLabel.set(counter) 
            print '%s.miLabel set to %i'%(x,x.miLabel.get())
            counter += 1
    return
uniqObjIdAssign()
# -- code end --

These are the steps to take before running the script:
- set your renderer to Mental Ray (the Mental Ray plug-in must be loaded)
- select your camera first
- then select the rest of the objects in the scene you wish to assign objectIds to

The script does the following:
- get your camera's shape node
- create a mentalrayOutputPass node
- set the mentalrayOutputPass node's frame buffer type to "Label(Integer)1x32bit"
- set the mentalrayOutputPass node to output to a separate file when rendering
- set an "_objId" suffix for the output file name (so it does not clash with the file name of your main render)
- connect it to an empty entry in your camera shape's list of output shaders
- then go through the rest of the non-camera selection, and
  - add the "miLabel" attribute if it does not exist
  - give a unique integer to each object's "miLabel" attribute

On top of all these the script does some checks:
- to make sure the first selection is a camera
- to make sure the mentalrayOutputPass node with a name "miOutputPass_ObjId" does not already exist in the scene (I do not want the user to end up with a heap of mentalrayOutputPass nodes when he/she runs the script multiple times)

Additional tips and controls for the script:
The order of your selection matters.

If you do not like the colours given to some of the objects, you can deselect those objects and re-add them to your existing selection in a different order, then run the script again (just make sure the first selected object is your camera).

If you want to re-run the script, make sure to delete the mentalrayOutputPass node connected to the camera, and you will be able to run the script again. 

If you'd like to keep an existing mentalrayOutputPass node, you can rename it with a "01" at the end, and the script will then run, adding yet another mentalrayOutputPass node which you can manually delete once the new node is created. Ideally you should have only one mentalrayOutputPass node attached to your camera, since each additional node will make mental ray re-render the same objectId pass and save the same images to the same filenames, wasting render time.

I hope you find this useful. 

Drop me a note if it has helped you, or if you have ideas on how to improve it. :)