16 September, 2012

Resetting skin deformation for joints

There are times when you need to update the bind transforms of joints after the skinCluster deformer has been applied. Maya provides the "Move Skinned Joints Tool" for this purpose. It works great if you want to move joints that are not constrained, but if you want to move a skinned joint that is constrained to a surface, the tool does not work well. Here is a small function that resets the skin deformation for the given skinned joints. First move/rotate all the skinned joints into the desired position, then run this function. It will reset the skinned mesh to its non-deformed state while keeping the joints in place.

import maya.cmds as mc

def resetSkinnedJoints(joints):
    """ Reset skin deformation for the given joint(s) """
    for joint in joints:
        # Get connected skinCluster plugs (e.g. "skinCluster1.matrix[0]")
        skinClusterPlugs = mc.listConnections(joint + ".worldMatrix[0]", type="skinCluster", p=1)
        if skinClusterPlugs:
            # For each skinCluster connection
            for skinClstPlug in skinClusterPlugs:
                index = skinClstPlug[ skinClstPlug.index("[")+1 : -1 ]
                skinCluster = skinClstPlug[ : skinClstPlug.index(".") ]
                # Set the bindPreMatrix to the joint's current worldInverseMatrix
                curInvMat = mc.getAttr(joint + ".worldInverseMatrix[0]")
                mc.setAttr( skinCluster + ".bindPreMatrix[{0}]".format(index), type="matrix", *curInvMat )
        else:
            print "No skinCluster attached to {0}".format(joint)

# Usage: move/rotate the skinned joints first, then run
# resetSkinnedJoints(mc.ls(sl=True, type="joint"))

09 September, 2012

Muscle using single chain stretchy IK

One way to create a simplified muscle movement is to have two joints in a single chain act as the origin and insertion points. These joints are constrained such that they always aim at each other along a fixed axis. So when we move either of the joints, the chain should stretch/squash and the joints should orient themselves pointing at each other. One way to achieve this is with a single chain IK solver. Here is a script to do that.

import maya.cmds as mc

def makeSingleChainStretchyIK(prefix, joint1, joint2):
    """ Create a single chain IK from joint1 to joint2 that stretches to follow the handle """
    ikHandle, effector = mc.ikHandle(n=prefix+"_ikHandle", sj=joint1, ee=joint2, sol="ikSCsolver")
    # Measure the world-space distance between the start joint and the ikHandle
    distNode = mc.createNode("distanceBetween", n=prefix+"_jntDist")
    mc.connectAttr(joint1+".worldMatrix[0]", distNode+".inMatrix1")
    mc.connectAttr(ikHandle+".worldMatrix[0]", distNode+".inMatrix2")
    # Drive the end joint's translateX so the chain stretches/squashes along its aim axis
    mc.connectAttr(distNode+".distance", joint2+".tx")
    return ikHandle

06 September, 2012

Bottom-up rigging approach & Decoupling rigs

The general approach for rigging a character is to start by building the skeleton, define how it will move (control system) and then dive into deformation (enveloping process). This approach assumes that the bind skeleton driven by the control rig is going to produce good deformation. Usually one should do some deformation tests first to make sure the bind skeleton will deform the character nicely, and then build the control rig. If this is not done thoughtfully, you could get stuck with a rig that does not deform well. Of course we have tools at our disposal to correct the deformation if required, but relying on them can significantly lengthen the skinning process and make it tedious.

Another approach is to start by thinking about deformation: do the skinning first, then build the control rig. I am working on a realistic character model for studying deformation based on musculature, so the focus is on getting the deformation looking as good as possible. The first stage is to build the bind skeleton and add most of the secondary deformation joints along with the enveloping process. This should give a better idea of how the skeleton will deform the mesh. We can keep refining the bind skeleton and deformation weights until we have a solid base. My goal here is actually to get the skinning close to its final look and hook up the control rig afterwards, built on top of the bind skeleton.

Regardless of the approach we take, the goal should be to focus on both the deformation and control rigs while keeping them independent processes. By keeping the control rig and deformation rig separate we can focus on one task without worrying about the other, and work on them in parallel. Generally there will be some dependency between the two rigs, for example matching the pivots for movement and deformation, or a joint driven by a surface that is deforming the mesh. But we can still decouple the two systems to a good extent. It is also possible to keep pose-based deformation independent of the control rig, reducing the coupling further.

In programming it's good practice to keep the objects of a system independent so that changing one does not require changing the others. This applies to rigging just as well. If you are into programming you should read Design Patterns by the Gang of Four. It's an excellent read.

24 June, 2012

GroupConstraint Intro Demo!

Finally, here is a demo of the dynamic multi-way constraint system that I have been working on.

Following are some of the features of this constraint system:

  1. Multi-way constraint allows invertible relationship without cycle issues.
  2. The driver of the system can be animated.
  3. Only one keyframe required for switching the driver.
  4. All the follower nodes can be animated while constrained by driver of the system.
  5. Auto matching for drivers at switching frames to avoid a popping effect.
  6. Current driver indicator.

Your feedback is welcome! :)

GroupConstraint Intro Demo from Maulik Kataria on Vimeo.

22 May, 2012

No-pop implementation challenges

One of the key features of the constraint system I am developing is auto correction of the "pop effect" when the animation changes. This problem is quite complex due to the many different scenarios involved and the way Maya's architecture is designed.

It is very easy to calculate and store an offset when a constraint switches between spaces to avoid a pop. That's the general idea; however, maintaining this offset while the animation is constantly changing is difficult, for some of the reasons mentioned below:

  • The user can update keys in the graph editor at any time value; this does not trigger a compute, since you might be on a frame unrelated to the changes. Hence we have to detect this with an animation callback.
  • Maya's animation callbacks are not very helpful in reporting which keyframe was deleted, so in this case we have to update all the key offsets from the beginning to make sure we resolve the pop. Deleting a driver key means the space will change, and so will the offset at the next nearest keyframe.
  • A space can be animated not only through its own transform but through any parent in the hierarchy, or through a constraint. The constraint driver (master) itself might be animated by a transform higher in the hierarchy.
  • This means that when the animation for a space changes in the scene, we first have to detect whether the change affects the constraint. In addition to using generic animation callbacks, we also have to look at the input/output graph and the hierarchy to detect whether our constraint is affected.
  • What if drivers are not keyed exactly on the switching keyframes? Then updating animation curve handles will change the driver transform at the switch and hence create a pop.
  • One change on an earlier keyframe can require updating the offsets on all subsequent keyframes to remove the pop effect. To remove the pop we need to update the drivers, either by updating the offset or by adding an extra transform at the affected switching keys; however, that can invalidate the offset/positioning at the next switching frame and create a pop there. This can lead to a cascading effect, where fixing the pop at one frame creates a pop at the next. This is especially true in my case, since all the driven nodes are free to move while constrained to the driver of the system. I have some ideas to avoid this cascading effect, but I still need to test them.
  • To remove the pop we can match the current and previous drivers of the system at the switching keyframe, but there are a few tricky situations to watch for:
    • What if both the current and the previous driver are updated at the switch? Which one should be matched to the other?
    • What if a driver has incoming connections and its transform cannot be keyed directly?
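The offset bookkeeping behind the first idea above, stripped down to translation only, can be sketched in a few lines of plain Python (rotation and the real matrix math are omitted for brevity, and all names here are illustrative, not the constraint's actual API):

```python
def compute_offset(driven_world, driver_world):
    # At the switch frame, record where the driven node sits
    # relative to the incoming driver (translation only).
    return tuple(d - m for d, m in zip(driven_world, driver_world))

def apply_driver(driver_world, offset):
    # After the switch, the driven node follows driver + offset,
    # so it starts exactly where it was: no pop at the switch.
    return tuple(m + o for m, o in zip(driver_world, offset))

driven = (5.0, 2.0, 0.0)
driver = (3.0, 0.0, 0.0)
offset = compute_offset(driven, driver)
assert apply_driver(driver, offset) == driven  # no pop at the switch frame
```

The hard part, as described above, is not this calculation; it is keeping the stored offsets valid while keys are added, moved and deleted.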

Below is a simple example:
We have two objects, A (bluish) and B (reddish). They are animated as shown in the image below.

Think of the following scenarios after switching at frame 3:
  1. If B, the current driver, moves at frame 3
    This means that A will move as well since B is the current driver
  2. If A, the previous driver, moves at frame 3
    In this case B does not move since A is not the current driver

As for the status of GroupConstraint, the basics work quite nicely with animation. The problem arises when a change to the animation creates a pop. The constraint handles the pop in certain cases, but not all. I have finished writing the logic for detecting keyframe changes and checking whether they affect the constraint output, based on graph connections as well as the DAG hierarchy. I also have all the functions ready to update offset information at any switch keyframe. The last (hopefully last) part is to make sure the auto pop-correction works when changing existing animation in different cases. I still have to look at a few more scenarios, since I am also working hard to make switching possible with only one keyframe instead of two consecutive keyframes. And the possibility of animating the followers independently while they are driven by the driver of the system adds to the difficulty.

10 March, 2012

Group Constraint Update

It has been a great learning experience building this constraint system. As I posted before, I have been working on a constraint that allows a dynamic relationship between a group of transforms; hence I have called it a "Group Constraint". I think this constraint will be very useful in rigs where the driver of the group needs to be switched during animation. One example is a foot rig where multiple contact points are required. This constraint should allow bi-directionality (or multi-way directionality), so that among two or more transforms, any one can be the driver at any time during the animation.

The first step was to find out how to calculate the final output when switching the driver transform in the group. I suspected I would need some complex calculations; however, after doing some research and learning a great deal about visualizing matrix multiplication, I found that I just needed a simple calculation. Having a proper attribute structure also helped in simplifying the calculations. First I used Python to experiment and confirm that my logic would work the way I wanted, and then I started building the plugin in C++. I finished the basics first: building the input-output attribute structure, using setDependentsDirty to establish the relationships, and reading the attributes from the DataBlock. When I tested this code I always got zero values for all the attributes in the compute method. Later I found what the problem was and how to fix it (read about it here).

Now I have basic switching between drivers working, and all the offsets are maintained during the switch. One advantage of the method I am using is that all the followers are free to move while being controlled by the driver node, which allows more freedom in animating. My main goal is to build this system such that there is no pop when switching between drivers, even when animation keys are updated. I am also looking for a way to avoid having to key the driver attribute twice on consecutive frames to make a switch. The enum attribute type gives stepped keys as its default behavior; the challenge is to calculate the output at the switch point. The way Group Constraint stores the offsets and calculates the final output should allow for this feature, but I have to test whether it works in all cases.

The biggest challenge, and what will actually be the backbone for many of the features, is detecting animation events. Exo Switch Constraint also utilizes callback functions to detect keying on the driver (master) node. However, I intend to use them for different purposes, and they will allow me to correct any pop at the switching points when the animation is updated. Maya offers two options for listening to animation update events:
  1. Listen for changes in all animation curves and in the callback check if animation change is related to the constraint node.
  2. Listen for only particular animation curves that are linked to input transforms of the constraint.
The first option is simpler in that I don't have to keep track of the animation nodes connected to the constraint's inputs. However, it can affect performance when a lot of attributes are being keyed, which I do frequently while animating and I am sure many animators do too. So I have decided to take the second option, but it comes at the price of complexity.

The second approach requires detecting when animation nodes get connected to (or disconnected from) each individual element of a transform's translate, rotate and scale attributes. When one is connected we need to add a callback for that animation node, and when the animation is removed each such callback must be removed for that constraint input. The callback function should know which input the call belongs to and process the event accordingly; depending on the number of inputs there can be 30-40 callbacks, so I need to generalize the callback function. To do so I decided to pass some specific information (metadata) to this function through the (void*) clientData pointer when the callback is added. However, I need to allocate memory for this data dynamically so that it persists beyond its local scope, and the allocated memory must be freed when the callback is removed, otherwise we get a memory leak. So it gets a little complicated to pass info through pointers during event calls and to free the memory when the callbacks are no longer needed, but once done it is a flexible way to manage multiple callbacks.
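The bookkeeping described above (one callback per animated input, with per-input metadata, and cleanup on removal) can be sketched in plain Python. In the C++ plugin the metadata travels through the void* clientData pointer and needs explicit allocation and deallocation, but the structure is the same. All names here are illustrative:

```python
class CallbackManager(object):
    """Track one callback id and its metadata per constraint input plug."""

    def __init__(self):
        self._callbacks = {}  # plug name -> (callback id, metadata)
        self._next_id = 0

    def add(self, plug, metadata):
        # In the plugin this is where the Maya API callback would be
        # registered; keeping the metadata in the dict is the analogue
        # of allocating clientData that outlives the local scope.
        cb_id = self._next_id
        self._next_id += 1
        self._callbacks[plug] = (cb_id, metadata)
        return cb_id

    def remove(self, plug):
        # Removing the entry also releases the metadata: the analogue
        # of freeing the clientData allocation to avoid a memory leak.
        self._callbacks.pop(plug, None)

    def on_event(self, plug):
        # The generalized callback: look up which input the event
        # belongs to and return its metadata (or None if unregistered).
        entry = self._callbacks.get(plug)
        return entry[1] if entry else None
```

This is only a shape sketch; the real complexity lies in wiring these add/remove calls to connection-made and connection-broken events on each translate/rotate/scale element.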

I haven't gotten around to creating a demo of the first stage yet, but I would really like to share it soon to get some feedback. Let me know if you have any particular feature in mind for such a constraint.

25 January, 2012

Notes: Building a multi-way (group) constraint

I have been highly inspired by the work that Andrea Maiolo and Tim Naylor did on a multi-way constraint system, particularly what they presented at Siggraph 2006 on "Bi-Directional Constraining". I think I first saw their Siggraph work when I was at school in 2008, and that's what got me into understanding Maya's DG and the matrix mathematics of 3D transformations. Since then I have tried to build this kind of constraint system from time to time, and I always got stuck because I didn't have enough knowledge at the time. Between 2008 and 2011 I learned many things by challenging myself with new problems and from the inspirational work of talented technical artists. So I have picked this project up again, and I think I have made some progress compared to my previous failed attempts. However, the progress is not hurdle-free. Here are some notes on the issues I have encountered.

The challenges:
The first challenge we encounter when building a two-way constraint is cyclic dependency. However, that's not really the main, or the only, issue: it is possible to create connections and do the calculations in such a way that the cycle is avoided. The real challenge is that we can't really have a true bi-directional constraint (I say "true" for lack of a better word). When I started, my idea of a bi-directional constraint was that both nodes involved are masters and affect each other at the same time, without any switching. Under this definition the main question is how to interpolate from one state to the other when both objects are moving each other. This leads to another question: what should be the sequence of operations when calculating the goal state? This question is important because the final state is the sum of the transformations of both nodes.

Sequence dependency:
One of the most important properties of rotational transformations is that the operations are non-commutative; we already know this from rotation orders. Let's take an example: say we have two transform nodes A and B in their initial state, set apart by some distance, with zero rotations on both, and assume that A and B affect each other (bi-directionality). Now we apply a 90-degree rotation about the z-axis to both nodes. Depending on the sequence of operations we get different end locations, as illustrated below.
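The sequence dependency can be verified with a few lines of plain Python, using a 2D 90-degree rotation about a pivot (the z-axis rotation from the example, viewed in the xy-plane):

```python
def rot90_about(point, pivot):
    """Rotate a 2D point 90 degrees counter-clockwise around a pivot."""
    rx, ry = point[0] - pivot[0], point[1] - pivot[1]
    # (x, y) -> (-y, x) relative to the pivot
    return (pivot[0] - ry, pivot[1] + rx)

A, B = (0.0, 0.0), (2.0, 0.0)

# Sequence 1: A rotates B first, then B (at its new position) rotates A.
B1 = rot90_about(B, A)   # B moves to (0, 2)
A1 = rot90_about(A, B1)  # A moves to (2, 2)

# Sequence 2: B rotates A first, then A (at its new position) rotates B.
A2 = rot90_about(A, B)   # A moves to (2, -2)
B2 = rot90_about(B, A2)  # B moves to (0, -2)

assert (A1, B1) != (A2, B2)  # the two orders give different end states
```

The same rotation applied to both nodes lands them in completely different places depending on which node "goes first", which is exactly the problem with treating both nodes as masters simultaneously.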

The main problem here is that we are trying to treat both nodes A and B as masters at the same time. If we consider only one node as the master at any given time, we avoid the sequence dependency, since only one node affects the others at a time. ExoSwitch constraint uses the concept of driver and driven nodes: one driver is assigned for the constraint system at a given time and drives all the other nodes. So ExoSwitch constraint does not have the problem of finding the right sequence at a single point in time.

I can't think of any simple way to record the sequence in which a user manipulates the nodes involved in a multi-way constraint. However, I think it should be possible to implement a system where all the nodes are treated as masters (or drivers) at the same time. One idea would be an iteration-based solver that calculates the interpolation to reach the goal state when all the nodes are driving one another. Maya's FBIK comes to mind, but it takes a somewhat different approach: it solves the system when the user moves one of the nodes (effectors) and updates all the nodes with their final coordinates. When you animate these coordinates on the effectors, each node is interpolated independently. Even though this works for FBIK, the behavior is not quite desirable for a multi-way constraint system.

There is still a long way to go to finish a working multi-way constraint. I always get more hopeful when I solve a problem along the way. But I should look forward to the next problems on my path and be ready to give my small brain some exercise :)

12 January, 2012

Mysterious "geometry" attribute and Geometry Constraint

The geometry constraint is a bit special because it does not connect its output to the transform attributes of the constrained node, leaving them free to be keyed. Out of curiosity I looked at the connections in the Hypergraph, and the output of the constraint node is connected to the "geometry" attribute of the constrained node (any transform node). One would think this attribute has something to do with geometric data for display or deformation; however, the docs say "Geometry attribute used for positional constraints". Strange!

Here is an example of this connection used by geometry constraint:
locator1_geoConstraint.constraintGeometry -> locator1.geometry

We know following things about this "geometry" attribute based on the documentation:
  • It's a generic type attribute (it takes nurbsCurve, nurbsSurface, mesh etc. as input)
  • It affects translation attribute of a transform node (we can use it to control position)

Based on above information let's do some experiments and see what happens.
  • Create a nurbs sphere
  • Create a locator
  • Connect nurbsSphereShape1.worldSpace[0] -> locator1.geometry
And locator1 sticks to the sphere! What happened here? My guess is that this particular attribute calculates/updates the translation value based on the given geometric data. Now, if you move locator1 you will see another surprise: locator1 is constrained to the sphere! And this works without a constraint! You can move the sphere around or change its shape and locator1 will still follow it. So this is almost a geometry constraint, but without the constraint node. The one thing that will not work is grouping the locator and moving the group, i.e. moving the parent of the locator. That is what the geometry constraint handles, and that's where the locator1.parentInverseMatrix attribute comes into the picture: the constraint uses it to compensate for any transformation coming from locator1's parent hierarchy.

So this makes me think the "geometry" attribute was added along with the geometry constraint. Who knows!

05 January, 2012

MPoint '=' assignment operator and float[3]

Most 3D packages define a 3D point as (x, y, z, w). The last component, 'w', comes from how homogeneous coordinates work, and it needs to be 1 when we do point calculations. Maya implements this 3D point in the MPoint class. The class also defines an '=' operator that copies the values of a float[3] into the point. But it's really not a good idea to do the following.

float myPt[3] = {1,1,1};
MPoint mayaPt = myPt;

The problem here is that Maya assigns x, y, z from myPt but leaves 'w' at 0. That is an invalid homogeneous point, since w=0 means the point is at infinity! So keep in mind that 'w' is important :)
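Why does w matter? With homogeneous coordinates, translation only applies when w = 1; with w = 0 the point behaves like a direction vector and ignores translation entirely. A plain-Python illustration of that rule (not Maya API code):

```python
def translate_point(point, translation):
    """Apply a pure translation to a homogeneous point (x, y, z, w).

    For a 4x4 translation matrix the result reduces to p + w * t,
    so the translation is scaled by the w component.
    """
    x, y, z, w = point
    tx, ty, tz = translation
    return (x + w * tx, y + w * ty, z + w * tz, w)

good = translate_point((1.0, 1.0, 1.0, 1.0), (0.0, 5.0, 0.0))
bad = translate_point((1.0, 1.0, 1.0, 0.0), (0.0, 5.0, 0.0))
assert good == (1.0, 6.0, 1.0, 1.0)  # w=1: the point moves
assert bad == (1.0, 1.0, 1.0, 0.0)   # w=0: translation silently ignored
```

So a point that silently ends up with w=0 won't error out; it will just stop responding to translations, which can be a confusing bug to track down.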

04 January, 2012

Freedom of a constrained node

When you constrain a transform node using Maya's constraints, it stops inheriting transformations from its parent hierarchy. However, if you just connect (for example) the translation attributes directly, or through simple arithmetic nodes, the connected node continues to inherit transformations from its parent. The difference comes from how a constraint calculates its final output. Let's do a simple experiment.
  1. Create a nurbsCircle and a locator.
  2. Group the locator (group1->locator1).
  3. Now point constrain locator1 by nurbsCircle1 (keep offset option off).

locator1 should now be stuck to nurbsCircle1. Notice that locator1 does not move if you move its parent (group1). Now zero them out, so everything is at the origin, and move group1 2 units in y. Check the position values of locator1: they changed to (0, -2, 0)! So the constraint recalculates the position of locator1 to keep it locked to nurbsCircle1. To do this, the constraint node uses the transformations of nurbsCircle1 and group1 (the parent of locator1): the position of locator1 is calculated by converting the world-space coordinates of nurbsCircle1 to local coordinates under group1. This calculation uses the 'worldInverseMatrix' of group1, which is the same as the 'parentInverseMatrix' of locator1. To get a better idea of this, have a look at this nice article by Hamish Mckenzie.
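Here is the world-to-local calculation reduced to translation only, as a minimal plain-Python sketch (a real constraint multiplies full 4x4 matrices, but for pure translation the parent's inverse matrix amounts to a subtraction):

```python
def world_to_local(target_world, parent_world):
    """Convert a world-space position to local space under a parent.

    Translation-only version of multiplying by the parent's
    worldInverseMatrix: subtract the parent's world position.
    """
    return tuple(t - p for t, p in zip(target_world, parent_world))

circle_world = (0.0, 0.0, 0.0)  # nurbsCircle1 sits at the origin
group1_world = (0.0, 2.0, 0.0)  # group1 moved 2 units in y

# The constraint gives locator1 local values that cancel the parent's
# move, keeping it locked to the circle in world space.
assert world_to_local(circle_world, group1_world) == (0.0, -2.0, 0.0)
```

This is why locator1's translate reads (0, -2, 0) after group1 moves up by 2: the local values compensate exactly for the parent's motion.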

Now let's change this behavior. Disconnect the connection locator1.parentInverseMatrix[0] -> locator1_pointConstraint1.constraintParentInverseMatrix and move group1; you should see that locator1 now moves with its parent! Both group1 and nurbsCircle1 affect the position of locator1. By disconnecting parentInverseMatrix we stopped the constraint node from compensating for the movement of group1 (the parent of the constrained node). In other words, the space in which the final coordinates for locator1 are calculated is now fixed and no longer affected by locator1's parent.

That explains why a node stops inheriting transform values from its parent hierarchy once it is constrained.

To make things interesting, let's try the following:
  1. Zero out all the transforms.
  2. Group "group1" (we get group2->group1->locator1).
  3. Connect "group2.worldInverseMatrix[0]" to "locator1_pointConstraint1.constraintParentInverseMatrix".

What we did here is make the point constraint treat group2, instead of group1, as the parent of locator1. So when the constraint calculates the position of locator1, it compensates for group2 but not for group1. This means that if you move group2, locator1 will not move, but if you move group1, locator1 will move!

Here is a diagram I put together that might be useful in understanding what I wrote. It actually repeats the same thing, but in the different ways I came to understand it.