This week we focused on lip syncing our face model and animating the face fully alongside audio. For the audio, we chose the final scene of Blade Runner (1982), as it is a monologue focusing on one character and includes detailed expressions we could work with.

We took this YouTube clip and screen recorded it through OBS Studio. We then imported the recording into Adobe After Effects so that we could extract audio that would work in Maya, and rendered it out as a JPEG image sequence plus a .wav file for the audio. Then in Maya it was easy to right-click on the timeline, choose Insert Audio, and select the .wav file we had created.
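As a side note, this step can also be scripted. Here is a rough sketch using Maya's Python commands; the file path and node names are just placeholders rather than our actual project files:

```python
# Minimal sketch: load a .wav into Maya and show it on the timeline.
# The file path and node names are placeholders, not our actual project files.
import maya.cmds as cmds
import maya.mel as mel

# Create a sound node from the exported .wav, starting at frame 1
sound_node = cmds.sound(file="C:/project/audio/monologue.wav",
                        name="monologueAudio", offset=1)

# Attach the sound to the main playback slider so the waveform is visible
playback_slider = mel.eval("$tmpVar = $gPlayBackSlider;")
cmds.timeControl(playback_slider, edit=True,
                 sound=sound_node, displaySound=True)
```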

We then worked first on getting the lip sync to match the words being said. I did this by using a combination of adjusting the jaw joint with rotation, using the blend shapes we had made previously, and creating new blend shapes for the specific shapes a mouth makes on certain letters. For example, when we say ‘P’ our lips go inwards, and when we say ‘O’ we make a round shape with our mouth. I used myself as reference for a lot of these words and letters, and also referred to images of mouth shapes such as the one below.
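For anyone curious how a new phoneme shape like this can be hooked up through a script, here is a small sketch using Maya's Python commands; all of the node names are assumed for illustration rather than taken from our scene:

```python
# Sketch: add a sculpted phoneme shape (e.g. the 'P' lips-in pose) as a new
# target on an existing blendShape node. All node names here are assumptions.
import maya.cmds as cmds

base_mesh = "faceMesh"               # the face model being animated
blendshape_node = "face_blendShape"  # blendShape node created earlier
p_shape = "faceMesh_P_shape"         # duplicated and sculpted 'P' mouth pose

# Find the next free target index, then register the sculpt as a new target
next_index = cmds.blendShape(blendshape_node, query=True, weightCount=True)
cmds.blendShape(blendshape_node, edit=True,
                target=(base_mesh, next_index, p_shape, 1.0))
```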

Once I had created the appropriate blend shapes, I keyed the jaw movements, which set the basic outline for when the mouth opens and closes during the speech. At each word I opened the mouth partially or fully, depending on which shape I would need to make with the mouth next.
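This jaw blocking could also be keyed through a script. Below is a minimal sketch; the joint name, rotation axis, frames and angles are all assumptions for illustration rather than our actual values:

```python
# Sketch: block in the jaw open/close by keying the jaw joint's rotation.
# The joint name, rotation axis, frames and angles are assumptions.
import maya.cmds as cmds

jaw_joint = "jaw_jnt"

# (frame, rotateX in degrees): closed, partially open, wide open, closed again
jaw_keys = [(10, 0.0), (14, 8.0), (18, 20.0), (24, 0.0)]

for frame, angle in jaw_keys:
    cmds.setKeyframe(jaw_joint, attribute="rotateX",
                     time=frame, value=angle)
```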

When the basic opening and closing was done, I started adding the blend shapes I had created for the mouth, matching the letters being spoken at each moment. I keyed the blend shapes to appear when they needed to and adjusted how strongly they showed at certain points.
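The same idea applies to the blend shape weights themselves. The sketch below shows roughly how those keys could be set through a script; the blendShape node and target names, and the frames and values, are assumed for illustration:

```python
# Sketch: key how strongly each phoneme blend shape shows over time.
# The blendShape node, target names, frames and values are assumptions.
import maya.cmds as cmds

blendshape_node = "face_blendShape"

# (target weight attribute, frame, value between 0 and 1)
phoneme_keys = [
    ("P_shape", 14, 0.0), ("P_shape", 16, 1.0), ("P_shape", 19, 0.0),
    ("O_shape", 20, 0.0), ("O_shape", 22, 0.8), ("O_shape", 26, 0.0),
]

for target, frame, value in phoneme_keys:
    cmds.setKeyframe("{}.{}".format(blendshape_node, target),
                     time=frame, value=value)
```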

Work from home
At home, I worked on creating overall expressions on the face to accompany the lip sync. To do this, I used the main head joint to tilt and dip the head at certain points, keying these head movements to align with the words he was saying and with what looked natural during speech. I again used blend shapes I had created previously, such as the frown and the closing eyes, and I created new ones for the other minor expressions, such as blinking, frowning slightly and smirking where these would fit. I also had to add corrective blend shapes to adjust some geometry that didn’t fit some of the blend shapes, and I kept these corrections consistent.
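As with the mouth, these head tilts and blinks come down to keyframes on a joint and on blend shape weights. Here is a rough sketch of that kind of keying; the joint, node and target names, plus the frames and angles, are assumptions rather than our actual animation data:

```python
# Sketch: key a head dip and a blink alongside the lip sync.
# Joint, blendShape and target names, plus frames/angles, are assumptions.
import maya.cmds as cmds

head_joint = "head_jnt"
blendshape_node = "face_blendShape"

# Dip the head slightly on an emphasised word, then return to neutral
for frame, tilt in [(30, 0.0), (36, -6.0), (44, 0.0)]:
    cmds.setKeyframe(head_joint, attribute="rotateX", time=frame, value=tilt)

# A quick blink: the closed-eyes target goes 0 -> 1 -> 0 over a few frames
for frame, value in [(38, 0.0), (40, 1.0), (43, 0.0)]:
    cmds.setKeyframe("{}.blink_shape".format(blendshape_node),
                     time=frame, value=value)
```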

I made sure the animation looked good in the Arnold render view. In the render view it looks much more realistic than the model in the viewport: everything blends together better, and the shadows on the mouth and teeth look much more real.

I then recorded the animation to include here to show my progress.