
Monday, 24 April 2017

pymel.core Versus maya.cmds

Is PyMel slower than Maya's cmds module?

PyMel is an alternative to Maya's native maya.cmds. It was developed and is maintained by Luma Pictures. It is open source, and lives on GitHub here: https://github.com/LumaPictures/pymel

Pymel proved so intuitive and successful that Autodesk now ships pymel alongside maya.cmds.

Today I stumbled across a discussion:

Someone was asking if pymel performs as quickly as native maya.cmds. Another person referenced this article: http://www.macaronikazoo.com/?p=271

The author (Hamish McKenzie) did a comparison with the following code:

import time
import maya.cmds as cmd

MAX = 1000
start = time.clock()
for n in xrange( MAX ):
    cmd.ls()
print 'time taken %0.3f' % (time.clock()-start)

from pymel.core import ls
start = time.clock()
for n in xrange( MAX ):
    ls()  # NOTE: this is using pymel's wrapping of the ls command
print 'time taken %0.3f' % (time.clock()-start)

When I run the code I get the following print-out.
time taken 3.983
time taken 77.059

The code lists all nodes in the current scene 1000 times, first with maya.cmds and then with pymel, and prints the time each method takes. The article was written in 2010, and Hamish reports a speed difference of 350 times! It is now 2017, and the pymel library has presumably undergone several rounds of improvement and optimisation since then. From the figures above, the execution time difference on my work machine is 19.34 times.
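Most of that gap comes from pymel wrapping every returned node name in a PyNode object, which costs a constructor call per node. The pattern can be simulated outside Maya with plain Python; the Wrapper class and the scene list below are hypothetical stand-ins for PyNode and the Maya scene, not real Maya API:

```python
import time

class Wrapper(object):
    """Hypothetical stand-in for a PyNode-style object: it stores
    the name and does a little extra parsing work on construction."""
    def __init__(self, name):
        self.name = name
        self._parts = name.split('|')  # simulated parse cost

def list_raw(scene):
    # analogous to maya.cmds.ls(): returns plain strings
    return list(scene)

def list_wrapped(scene):
    # analogous to pymel's ls(): wraps every result in an object
    return [Wrapper(n) for n in list_raw(scene)]

# a fake scene of 5000 node names
scene = ['|group%d|cube%d' % (i, i) for i in range(5000)]

start = time.time()
for _ in range(100):
    list_raw(scene)
raw_time = time.time() - start

start = time.time()
for _ in range(100):
    list_wrapped(scene)
wrapped_time = time.time() - start

print('wrapped is %.1f times slower than raw' % (wrapped_time / raw_time))
```

The exact ratio depends on how much work the wrapper's constructor does, but since the object creation happens once per node per call, the overhead scales with scene size, which is consistent with the measurements above.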

After reading that post, I modified his code to include a few more actions: create a cube, rename it, and list all objects in the scene, repeating for a user-specified number of iterations, then delete all the cubes.

The code now reads like this:

import time
import maya.cmds as mc
from pymel.core import *

def testMayaCmdsPymel(numIterList):
    '''a function to compare the time taken to perform a similar
    set of actions using maya.cmds and pymel.
    numIterList - a list consisting of the numbers of iterations
                  to run the comparison tests on'''
    def testCmds(numIter):
        MAX = numIter
        print '-- maya.cmds --'
        start = time.clock()
        objList = []
        for n in xrange( MAX ):
            obj = mc.polyCube()[0]
            objList.append(mc.rename(obj, 'cube1'))
            mc.ls()
        mc.delete(objList)
        timeCmds = time.clock()-start
        print 'time taken %0.3f' % (timeCmds)
        return timeCmds

    def testPymel(numIter):
        MAX = numIter
        print '-- pymel --'
        start = time.clock()
        objList = []
        for n in xrange( MAX ):
            # using pymel's wrapping of the polyCube() command
            obj = polyCube()[0]
            # using pymel's wrapping of the rename() command
            objList.append(rename(obj.name(), 'cube1'))
            ls()  # using pymel's wrapping of the ls() command
        delete(objList)  # using pymel's wrapping of the delete() command
        timePymel = time.clock()-start
        print 'time taken %0.3f' % (timePymel)
        return timePymel

    timeDiffDict = {}
    mainStart = time.clock()
    for x in numIterList:
        print 'duplicating %i objects' % (x)
        timeCmds = testCmds(x)
        timePymel = testPymel(x)
        timeDiff = float(timePymel)/float(timeCmds)
        timeDiffDict[x] = [timeCmds, timePymel, timeDiff]
        print 'difference: %0.3f times\n' % (timeDiff)

    print 'Results of test series:'
    print 'num iterations\ttime cmds\ttime pymel\ttime difference'
    print '--------------\t---------\t----------\t---------------'
    for x in sorted(timeDiffDict.keys()):
        print '%04i objects\t\t%0.3f s\t\t%0.3f s\t\t%0.3f times' % \
                (x, timeDiffDict[x][0], timeDiffDict[x][1], timeDiffDict[x][2])
    print '\ntotal time taken:', time.clock()-mainStart

Invoking the function with the code:
testMayaCmdsPymel([10, 15, 20])

I get the following print-out:
Results of test series:
num iterations    time cmds    time pymel    time difference
--------------    ---------    ----------    ---------------
0010 objects      0.084 s      0.837 s        9.999 times
0015 objects      0.088 s      1.248 s       14.216 times
0020 objects      0.115 s      1.730 s       15.005 times

total time taken: 4.12330471431

Invoking the function with the code:
testMayaCmdsPymel([10, 50, 100, 200, 300, 400, 500, 1000])

I get the following output:
Results of test series:
num iterations    time cmds    time pymel    time difference
--------------    ---------    ----------    ---------------
0010 objects      0.085 s      0.847 s        9.980 times
0050 objects      0.294 s      4.299 s       14.628 times
0100 objects      0.610 s      9.391 s       15.391 times
0200 objects      1.280 s     20.519 s       16.030 times
0300 objects      1.963 s     34.734 s       17.692 times
0400 objects      2.617 s     50.107 s       19.150 times
0500 objects      3.484 s     68.700 s       19.720 times
1000 objects      8.033 s    190.390 s       23.700 times

total time taken: 397.417362303

I see that the execution time difference generally increases with the object count, even though the rate of increase starts to slow down after 300 objects. Either way, the speedup of maya.cmds over pymel is at least 9.9 times, and in the worst case 23.7 times!

I know it sounds really late to be discovering this, but it is never too late to switch. I really like how pymel wraps maya.cmds in a more pythonic, readable and consistent interface.

Now I am deliberating whether to continue with pymel or jump back to maya.cmds completely. My inclination is to use both: native maya.cmds for operations I know will iterate across a huge number of objects, and pymel for other, less intensive tasks.
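The mixed approach works best when the boundary between the two is explicit: heavy loops operate on plain string names, and anything coming in as a node object is converted first. The helper below is a standalone sketch of that idea; FakeNode and as_name are hypothetical names, not pymel API (in pymel itself you would call node.name() to get the string, and PyNode(name) to go the other way):

```python
class FakeNode(object):
    """Hypothetical stand-in for a PyNode-like object."""
    def __init__(self, name):
        self._name = name

    def name(self):
        return self._name

def as_name(node):
    """Normalise either a plain string (cmds-style) or a node
    object (pymel-style) to its string name, so string-based
    heavy loops can consume results from either API."""
    if hasattr(node, 'name'):
        return node.name()
    return node

# both call styles funnel down to the same string
print(as_name('pCube1'))
print(as_name(FakeNode('pCube2')))
```

With a helper like this at the entry point of a performance-critical function, the function body can stay pure maya.cmds while callers remain free to pass pymel objects.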

1 comment:

  1. I still use maya.cmds for most things but occasionally think of using pymel b/c of the class structure, then find a blog post like this which forces me back to maya.cmds.

    There's a lot of overhead in creating a pymel object from every node in the scene. One can imagine this might be sped up if the pymel ls command were multithreaded.

    Looking at the pymel.core source code, the ls command is relatively simple. While it does a lot of argument parsing, for the most part it's just:

    def ls(*args, **kwargs):
        res = mc.ls(*args, **kwargs)
        return map(PyNode, res)

    I haven't tried myself, but one could conceivably leverage python's threading or multiprocessing module to break the ls up into n number of chunks and feed it to multiple threads.

    It would still be a lot slower than a simple mc.ls, but you could conceivably improve the results of the pymel version by multithreading.
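The chunk-and-thread idea above can be sketched outside Maya as follows. The wrap function is a hypothetical stand-in for PyNode, and a caveat applies: CPython's GIL means threads give little or no speedup for pure-Python work like object construction, so real gains inside Maya are doubtful without a process-based or C-level approach:

```python
import threading

def wrap(name):
    # hypothetical stand-in for pymel's PyNode(name)
    return {'name': name}

def chunked(seq, n):
    """Split seq into n roughly equal, order-preserving chunks."""
    size, rem = divmod(len(seq), n)
    chunks, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        chunks.append(seq[start:end])
        start = end
    return chunks

def parallel_wrap(names, workers=4):
    """Wrap every name, spreading the work across worker threads."""
    results = [None] * workers

    def work(i, chunk):
        results[i] = [wrap(n) for n in chunk]

    threads = [threading.Thread(target=work, args=(i, c))
               for i, c in enumerate(chunked(names, workers))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # flatten the per-thread results, preserving the original order
    return [node for chunk in results for node in chunk]

names = ['node%d' % i for i in range(10)]
print(parallel_wrap(names)[0]['name'])
```

The chunking and recombination logic is the reusable part; swapping the thread pool for multiprocessing would sidestep the GIL, at the cost of having to pickle the data across process boundaries, which Maya node objects generally do not support.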