25 January, 2012

Notes: Building a multi-way (group) constraint

I have been highly inspired by the work that Andrea Maiolo and Tim Naylor did on multi-way constraint systems, particularly what they presented at SIGGRAPH 2006 on "Bi-Directional Constraining". I think I first saw their SIGGRAPH work when I was at school in 2008, and that is what got me into understanding Maya's DG and matrix mathematics for 3D transformations. Since then I have tried to solve this kind of constraint system from time to time, and I always got stuck because I didn't have enough knowledge at the time. Between 2008 and 2011 I learned many things by challenging myself with new problems and from the inspirational work of talented technical artists. So I have picked up this project again, and I think I have made some progress compared to my previous failed attempts. However, the progress is not hurdle-free. Here are some notes on the issues I have encountered.

The challenges:
The first challenge we encounter when building a two-way constraint is cyclic dependency. However, that is not the only issue, nor the main one. It is possible to create connections and order the calculations in such a way that we avoid the cycle. The real challenge is that we can't really have a true bi-directional constraint (I say "true" for lack of a better word). When I started, my idea of a bi-directional constraint was that both nodes involved are masters and both affect each other at the same time, without any switching. Based on this definition, the main question is how to interpolate from one state to the other when both objects are moving each other. This leads to another question: what should the sequence of operations be when calculating the goal state? This question matters all the more because the final state is the sum of the transformations of both nodes.

Sequence dependency:
One of the most important properties of rotational transformations is that they are non-commutative; we already know this from rotation orders. Let's take an example. Say we have two transform nodes A and B in their initial state, set apart by some distance, with zero rotations on both. Assume that both A and B affect each other (bi-directionality). Now we apply a 90-degree rotation about the z-axis to both nodes. Depending on the sequence of operations, we get different final locations, as illustrated below.
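A tiny numerical sketch makes the sequence dependency concrete. This is plain Python, not Maya code, and the setup is an assumption made purely for illustration: A at the origin, B two units away, and "A drives B" taken to mean B orbits A's pivot (and vice versa).

```python
import math

def rotate_about(point, pivot, degrees):
    """Rotate a 2D point (the xy-plane, i.e. about the z-axis) around a pivot."""
    rad = math.radians(degrees)
    c, s = math.cos(rad), math.sin(rad)
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    return (pivot[0] + c * x - s * y, pivot[1] + s * x + c * y)

# Assumed setup: A at the origin, B two units away along x.
A, B = (0.0, 0.0), (2.0, 0.0)

# Sequence 1: A drives B first, then B drives A.
B1 = rotate_about(B, A, 90)       # B orbits A's pivot
A1 = rotate_about(A, B1, 90)      # A orbits B's new position

# Sequence 2: B drives A first, then A drives B.
A2 = rotate_about(A, B, 90)
B2 = rotate_about(B, A2, 90)

print(A1, B1)   # sequence 1 result
print(A2, B2)   # sequence 2 result -- a different pair of positions
```

Sequence 1 leaves the pair around (2, 2) and (0, 2), while sequence 2 leaves them around (2, -2) and (0, -2): the same two rotations, two different end states.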


Solution?
The main problem here is that we are trying to treat both nodes A and B as masters at the same time. If we consider only one node to be the master at a given point in time, we avoid the sequence dependency, since only one node affects the others at a time. ExoSwitch constraint uses the concept of driver and driven nodes. With this concept, we assign one driver for the constraint system at a given point in time to drive all the other nodes, so ExoSwitch constraint does not have the problem of finding the right sequence at a single point in time.

Alternatives?
I can't think of any simple way to record the sequence in which a user manipulates the nodes involved in a multi-way constraint. However, I think it should be possible to implement a system where all the nodes are treated as masters (or drivers) at the same time. One idea would be some kind of iteration-based solver that calculates the interpolation to reach the goal state while all the nodes are driving one another. Maya's FBIK comes to mind, but it seems to take a slightly different approach: it solves the system when the user moves one of the nodes (effectors) and updates all the nodes with their final coordinates. When you animate these coordinates on effectors, each node is interpolated independently. Even though this works for FBIK, the behavior is not quite desirable for a multi-way constraint system.
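Just to make the idea concrete, here is a toy, translation-only, one-dimensional sketch of what an iteration-based relaxation could look like. Everything in it (the half-step blending, the fixed offset, the function name `relax`) is an assumption invented for illustration; it is not how Maya or ExoSwitch works.

```python
def relax(a, b, offset, a_goal, iterations=50):
    """Toy 1D iterative solve: the user drags A toward a_goal while B tries
    to keep a fixed offset from A; both act as 'drivers' every iteration."""
    for _ in range(iterations):
        a = 0.5 * (a + a_goal)          # A relaxes toward the user's goal
        b = 0.5 * (b + (a + offset))    # B relaxes toward A + offset
    return a, b

# A starts at 0, B at 1 (rest offset of 1); the user drags A toward 5.
a, b = relax(0.0, 1.0, offset=1.0, a_goal=5.0)
print(a, b)  # converges to roughly (5, 6)
```

The appeal of something like this is that no node is singled out as the master; the cost is that the in-between states depend on the iteration scheme, which is exactly the interpolation question raised above.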

Still a long way to go to finish a working multi-way constraint. I always get more hopeful when I solve a problem along the way. But I think I should look forward to the next problems on my path and be ready to give my small brain some exercise :)

12 January, 2012

Mysterious "geometry" attribute and Geometry Constraint

Geometry constraint is a bit special because it does not connect its output to the transform attributes of the constrained node, which leaves them free to be keyed. Out of curiosity I looked at the connections in the Hypergraph, and the output of the constraint node is connected to the "geometry" attribute of the constrained node (any transform node). One would think this attribute has something to do with geometric data for display or deformation. However, the docs say "Geometry attribute used for positional constraints". Strange!

Here is an example of this connection used by geometry constraint:
locator1_geoConstraint.constraintGeometry -> locator1.geometry

We know following things about this "geometry" attribute based on the documentation:
  • It's a generic type attribute (it takes nurbsCurve, nurbsSurface, mesh etc. as input)
  • It affects the translate attributes of a transform node (we can use it to control position)

Based on the above information, let's do some experiments and see what happens.
  • Create a nurbs sphere
  • Create a locator
  • Connect nurbsSphereShape1.worldSpace[0] -> locator1.geometry
And locator1 sticks to the sphere! What happened here? My guess is that this attribute calculates/updates the translate values based on the given geometric data. Now, if you move locator1 you will see another surprise: locator1 is constrained to the sphere! And this works without a constraint! You can move the sphere around, or change its shape, and locator1 will still follow it. So this is almost a geometry constraint, but without creating the constraint node. The only thing that does not work here is moving the parent of the locator, i.e. grouping the locator and moving the group. That is what the geometry constraint node handles, and that is where the locator1.parentInverseMatrix attribute comes into the picture: the constraint uses it to compensate for any transformation coming from locator1's parent hierarchy.
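If my guess is right, the snapping behaves like a closest-point projection onto the connected geometry. For a sphere that projection is easy to write down; this plain-Python sketch (function name and setup are mine, just for illustration) only demonstrates the geometric idea, not Maya's actual implementation:

```python
import math

def closest_point_on_sphere(p, center, radius):
    """Project point p onto a sphere: step radius units from the center
    along the direction toward p."""
    d = [p[i] - center[i] for i in range(3)]
    length = math.sqrt(sum(x * x for x in d))
    return [center[i] + radius * d[i] / length for i in range(3)]

# A unit sphere at the origin; a point out on the x-axis snaps to (1, 0, 0).
print(closest_point_on_sphere((5.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))
```

Moving either the query point or the sphere changes the projected result, which matches the "locator follows the sphere" behavior observed above.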

So this makes me think that the "geometry" attribute was added when the geometry constraint was added. Who knows!

05 January, 2012

MPoint '=' assignment operator and float[3]

Most 3D packages define a 3D point as (x, y, z, w). The last component, 'w', comes from how homogeneous coordinates work, and for a point 'w' needs to be 1 when we do calculations. Maya's implementation of a 3D point is the MPoint class. This class also defines an '=' operator which copies the values of a float[3] into the point. But it's really not a good idea to do the following.

float myPt[3] = {1, 1, 1};
MPoint mayaPt = myPt;   // copies x, y, z -- but w stays 0!

The problem here is that Maya will assign the x, y, z values from myPt but not 'w', leaving it at 0. That is an invalid homogeneous coordinate for a point, because w=0 means the point is at infinity! So keep in mind that 'w' is important :)
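You can see why w matters with plain 4x4 matrix math (row vectors and translation in the bottom row, the convention Maya uses). A point with w=1 picks up the translation; with w=0 it behaves like a direction and the translation is silently lost:

```python
def transform(matrix, p4):
    """Multiply a row-vector homogeneous point by a 4x4 matrix: p' = p * M."""
    return [sum(p4[k] * matrix[k][j] for k in range(4)) for j in range(4)]

# A 4x4 matrix translating by (5, 0, 0); translation sits in the bottom row.
T = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [5, 0, 0, 1]]

good = transform(T, [1, 1, 1, 1])   # w = 1: a proper point
bad  = transform(T, [1, 1, 1, 0])   # w = 0: a direction at infinity

print(good)  # [6, 1, 1, 1]
print(bad)   # [1, 1, 1, 0] -- the translation never happened
```

So an MPoint left with w=0 will quietly ignore every translation applied to it, which is exactly the kind of bug that is hard to spot later.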

04 January, 2012

Freedom of a constrained node

When you constrain a transform node using Maya's constraints, it stops inheriting transformations from the hierarchy above it. However, if you just connect (for example) the translate attributes directly, or through simple arithmetic nodes, the connected node continues to inherit transformations from its parent. The difference comes from how a constraint calculates its final output. Let's do a simple experiment.
  1. Create a nurbsCircle and a locator.
  2. Group the locator (group1->locator1).
  3. Now point constrain locator1 to nurbsCircle1 (with the "maintain offset" option off).

locator1 should be stuck to nurbsCircle1. Notice that locator1 does not move if you move its parent (group1). Now zero everything out so it all sits at the origin, then move group1 2 units in the y-direction. Check the translate values of locator1: they changed to (0, -2, 0)! The constraint recalculates the position of locator1 to keep it locked to nurbsCircle1. To do this, the constraint node uses the transformations of nurbsCircle1 and group1 (the parent of locator1): the position of locator1 is calculated by converting the world-space coordinates of nurbsCircle1 into local coordinates under group1. This calculation uses the 'worldInverseMatrix' of group1, which is the same as the 'parentInverseMatrix' of locator1. To get a better idea of this, have a look at this nice article by Hamish McKenzie.
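The constraint's calculation can be reproduced with plain matrix math (row vectors, p' = p * M, translation in the bottom row, as in Maya). Here group1 is moved 2 units in Y, nurbsCircle1 sits at the origin, and multiplying the circle's world position by group1's worldInverseMatrix gives exactly the (0, -2, 0) we saw on locator1:

```python
def mat_mul_point(p, m):
    """Multiply a 3D point (as a row vector with w = 1) by a 4x4 matrix."""
    p4 = [p[0], p[1], p[2], 1.0]
    out = [sum(p4[k] * m[k][j] for k in range(4)) for j in range(4)]
    return out[:3]

def translation_matrix(tx, ty, tz):
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [tx, ty, tz, 1]]

group1_world = translation_matrix(0, 2, 0)
group1_world_inverse = translation_matrix(0, -2, 0)  # inverse of a pure translation

circle_world_pos = (0.0, 0.0, 0.0)   # nurbsCircle1 at the origin
locator_local = mat_mul_point(circle_world_pos, group1_world_inverse)
print(locator_local)  # [0.0, -2.0, 0.0] -- matches locator1's translate values
```

World position of the target multiplied by the parent's worldInverseMatrix: that single product is the heart of what the point constraint computes.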

Now let's change this behavior. Disconnect the connection locator1.parentInverseMatrix[0] -> locator1_pointConstraint1.constraintParentInverseMatrix and move group1; you should see that locator1 now moves with its parent! Both group1 and nurbsCircle1 affect the position of locator1. By disconnecting parentInverseMatrix we stopped the constraint node from compensating for the movement of group1 (the parent of the constrained node). In other words, the space in which the final coordinates of locator1 are calculated is now fixed and no longer affected by locator1's parent.

That explains why a node stops inheriting transform values from the hierarchy above it once it is constrained.

To make things interesting, let's try the following:
  1. Zero out all the transforms.
  2. Group "group1" (we get group2->group1->locator1).
  3. Connect "group2.worldInverseMatrix[0]" to "locator1_pointConstraint1.constraintParentInverseMatrix"

What we did here is make the point constraint consider group2, instead of group1, as the parent of locator1. So when the constraint calculates the position of locator1, it compensates for group2 but not for group1. This means that if you move group2, locator1 will not move, but if you move group1, locator1 will move!
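For pure translations the matrix products reduce to vector addition, so the swapped compensation is easy to verify in plain Python (the specific translate values below are assumptions chosen for illustration):

```python
def solve_world(circle_world, group1_t, group2_t):
    """Constraint output compensates for group2 only; the locator's world
    position then picks up both parents: local -> group1 -> group2."""
    local = [circle_world[i] - group2_t[i] for i in range(3)]
    return [local[i] + group1_t[i] + group2_t[i] for i in range(3)]

origin = (0.0, 0.0, 0.0)   # nurbsCircle1 at the origin
print(solve_world(origin, (0.0, 2.0, 0.0), (3.0, 0.0, 0.0)))  # [0.0, 2.0, 0.0]
print(solve_world(origin, (0.0, 2.0, 0.0), (7.0, 0.0, 0.0)))  # [0.0, 2.0, 0.0] -- moving group2 has no effect
print(solve_world(origin, (0.0, 5.0, 0.0), (3.0, 0.0, 0.0)))  # [0.0, 5.0, 0.0] -- moving group1 moves the locator
```

group2's contribution cancels against its own inverse, while group1's passes straight through: the node whose inverse matrix feeds the constraint is the one that gets compensated away.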

Here is a diagram I put together that might be useful in understanding what I wrote. It actually just repeats the same thing, but in the different ways in which I understood it.