Motion capture has been an involved task ever since its inception. Traditionally, it required an array of expensive, specialized cameras set up around a designated volume, all calibrated and aligned just so. Developers have been streamlining this with algorithms that use multiple digital cameras, with and without tracking markers. Others use sensors equipped with magnetometers and accelerometers for camera-less motion capture.
Xsens is in the latter group, using MEMS inertial sensor technology. MEMS stands for Micro-Electro-Mechanical Systems: tiny machines with components between 1 and 100 micrometers in size. The whole system fits into a sensor slightly smaller than a matchbox that measures inertia and orientation. Combine a bunch of those, placed at key spots on the body, and you have the motion of a skeleton, captured via either a self-contained suit with embedded sensors (MVN Link) or a lighter strap-based system with a fancy Lycra shirt (MVN Awinda).
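To give a rough sense of how inertial mocap turns raw sensor readings into orientation, here is a minimal complementary-filter sketch: the gyroscope integrates smoothly but drifts over time, while the accelerometer is noisy but drift-free, so blending the two stabilizes the estimate. This is a textbook illustration of the general idea, not Xsens's actual (far more sophisticated) fusion algorithm, and all numbers are made up.

```python
def complementary_filter(angle_prev, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer angle estimate (degrees).

    The gyro term integrates angular velocity from the previous estimate;
    the accelerometer term pulls the result back toward a drift-free
    reference. alpha controls how much we trust the gyro.
    Illustrative only -- real IMU fusion uses Kalman-style filtering.
    """
    gyro_angle = angle_prev + gyro_rate * dt     # integrate angular velocity
    return alpha * gyro_angle + (1 - alpha) * accel_angle

# One 60 Hz update: previous estimate 10.0 deg, gyro reads 5 deg/s,
# accelerometer suggests 10.2 deg.
angle = complementary_filter(10.0, 5.0, 10.2, dt=1 / 60)
```

Run per sensor at the capture rate, the stream of orientations for 17 such sensors is what the software solves into a full-body pose.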
I got to try out the MVN Awinda. The whole kit fits in a backpack. There are straps to go around your arms, legs and head, plus shirts in multiple sizes. Seventeen sensors slide into small pockets on the straps, and chargers that fit all the sensors let you recharge after a long six-hour day of motion capturing. Setup was fast and generally trouble-free. A USB receiver locates the sensors and provides realtime feedback in the Xsens software.
I got solid data with very little drift, and the strength of the signal allowed me to walk from one end of the house to the other and back again.
And finally, at GDC this year, Xsens announced an accessible price structure for education, small businesses and indie projects. You invest in the hardware; the software is provided at no extra cost. To qualify, your business needs to make less than three-quarters of a million dollars. I had been looking into the Perception Neuron last year, but this time I was checking out its integration with Reallusion's CrazyTalk Animator 3.
For those of you new to the Neuron, it was originally funded through a Kickstarter, and has since caught on as a lightweight, portable system for grabbing motion with very little prep time. The pro system I had a chance to play with has 32 sensors including ones for finger movement (a feature currently lacking in the Xsens solutions). Like the Xsens Awinda, the Neuron is strap-based and fits into a case the size of a lunchbox. Because of the number and minuscule size of the sensors, setup is slightly slower, and the Neuron needs a bit more love than the Xsens to get everything calibrated. The sensors are incredibly sensitive to magnetic fields, so it’s smart to break everything down after a session and get the sensors back into their protective carrying case.
That said, once you get everything up and running (support is very attentive to customers), you have realtime feedback in the packaged software — or you can tie it into Reallusion’s iClone and feed the data directly into characters for previs, games or full animation.
What you may not realize is that Reallusion's CrazyTalk Animator 3 Pipeline Edition can also use the data and provide realtime feedback on 2D characters. Because the bone-hierarchy system CTA3 uses to deform images mimics the structure 3D animation programs use to deform meshes, the data is transferable to the 2D realm. So whether you are making cartoons or sprite-based video games, the motion-capture route is a quick way to prototype, or potentially even to produce final animation.
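The reason that transfer works is that both 2D and 3D rigs are driven by the same parent/child logic: each bone's rotation is applied on top of its parent's. Here is a hypothetical forward-kinematics sketch of that idea flattened to the image plane; the function name and setup are mine, not anything from CTA3 or iClone, whose retargeting is internal to Reallusion's tools.

```python
import math

def forward_kinematics_2d(bone_lengths, joint_angles):
    """Compute 2D joint positions from per-joint rotations (radians).

    Each bone rotates relative to its parent, the same hierarchy a
    3D rig uses, which is why mocap joint rotations can drive a 2D
    cutout character. Illustrative sketch only.
    """
    positions = [(0.0, 0.0)]           # root joint at the origin
    x, y, total = 0.0, 0.0, 0.0
    for length, angle in zip(bone_lengths, joint_angles):
        total += angle                 # accumulate rotation down the chain
        x += length * math.cos(total)
        y += length * math.sin(total)
        positions.append((x, y))
    return positions

# A two-bone "arm": upper arm rotated 90 degrees up, forearm bent
# another 90 degrees, giving joints at (0,1) and (-1,1).
arm = forward_kinematics_2d([1.0, 1.0], [math.pi / 2, math.pi / 2])
```

Feeding a stream of captured joint angles through a chain like this, frame by frame, is essentially what "realtime feedback on 2D characters" amounts to.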
Together, the Perception Neuron and Reallusion's iClone and/or CrazyTalk Animator 3 hit performance and price points that give even the most ambitious animators and storytellers the tools to start bringing their ideas to life, without necessarily needing a studio behind them.

Todd Sheridan Perry is a visual-effects supervisor and digital artist who has worked on features including The Lord of the Rings: The Two Towers, Speed Racer, 2012, Final Destination 5 and Avengers: Age of Ultron. You can reach him at firstname.lastname@example.org.