Performer: the machine-learning software behind the facial animation of Framestore’s Smart Hulk

We were thrilled to learn that our facial mocap technology had been used by Framestore on Marvel’s latest film, Avengers: Endgame.
Along with ILM, Framestore was responsible for creating Smart Hulk. The studio delivered 300 VFX shots of this never-seen-before Bruce Banner/Hulk hybrid.
Building on ILM’s first developments, Framestore worked on its own muscle system, added to the face shapes, and built its own face rig. Hulk’s face rig was one of the most complex rigs Framestore had ever created: Smart Hulk needed to be able to express a wide range of emotions, from comedy to seriousness.

Dynamixyz’s software Performer was trained to achieve certain keyframes in specific shots.
“That allowed us to generate a quick first-pass animation for the entire sequence,” explains Stuart Penn, Visual Effects Supervisor at Framestore. “We would quickly render it out and send it back to the client so they had something quickly for their edit. We then moved to keyframe animation on the facial stuff to get what we weren’t getting from the solves. The solves also gave us a micro-movement pass that we could extract from the machine-learning data.”
Filmmakers could use those first solves to decide on the edit and begin the process of refining and adding face shapes to use in the rig. That is clearly illustrated in the VFX breakdown video by Framestore above, released last September.
This feedback on their facial animation pipeline echoes the workflow of Unit Image, which used Performer to animate Tom and Greta’s faces in the Beyond the Aquila Rift episode of the Love, Death & Robots anthology series.
The big green guy has now joined Tom and Greta in our updated demo reel. Check it out!
