Providing VFX work on over 350 shots, Framestore used machine learning and detailed environment work to create a photorealistic baby and stunning scenes for the offbeat HBO show.
In this eight-episode dark comedy, viewers are pulled into the world of protagonist Natasha, who acquires a baby she does not want. On top of this, the infant holds dark powers, which lead to destruction and torment not only for Natasha but for those around her. Tasked with helping build the world of The Baby, Framestore drew on its expertise in digi-doubles, face replacement and invisible environment effects.
The main question was: if one of the main cast members is a baby, how can we use VFX to help the show work, with all the limitations that entails? As a base, we had to approach things as practically as possible, leaning on VFX only when needed.
Owen Braekke-Carroll
With babies involved on set, things were never predictable and shoot days could change rapidly at any point. At times there could be more than five babies and a dog present, so the team built a bespoke, adaptable pipeline to complement the creative vision for the show.
Because we wouldn’t know how these performances would link up, we had to treat the baby like a CG creature in every take; there was always the potential that any shot could become a VFX shot. In post it really became about getting the best outcome based on each individual shot’s needs, and we’d end up with different workflows per shot and across each sequence. For one shot we might have a partially CG body over a prosthetic baby torso with a plate baby’s head, and for the next only replace the arm and add a machine learning face replacement. It was never really a consistent process; it was always about getting the best outcome from the inputs that were available.
Owen Braekke-Carroll
The script often called for the baby to be asleep, which turned out to be one of the biggest challenges the team faced. “You can’t shoot a baby asleep on a film set with a guarantee of the correct lighting and camera time. It might come together for some scenes, but you can’t rely on a whole shoot day of that.”
Owen Braekke-Carroll
To approach this problem, the babies on set were photographed at high frame rates in story context and the footage was played back at a slower speed. “We then took those plates and digitally closed their eyes, before using a machine learning layer on top to give them a realistic look. This was trained on a closed-eye data set, and that created a very effective and photographic output to add.”
Owen Braekke-Carroll
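As a rough illustration of the retiming idea described above (this is not Framestore's actual toolset; the file names and frame rates below are hypothetical), over-cranked footage reads as slow motion once it is simply conformed to a slower playback rate:

```python
# Minimal sketch: conforming a high-frame-rate plate to a slower playback rate.
# Frame rates and file names are hypothetical placeholders.
import cv2

CAPTURE_FPS = 96.0   # assumed shooting frame rate
PLAYBACK_FPS = 24.0  # assumed delivery/timeline frame rate

reader = cv2.VideoCapture("baby_plate_96fps.mov")
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writing every captured frame into a 24 fps file stretches the action out:
# 96 / 24 = 4x slower on playback, which reads as calm, sleep-like movement.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("baby_plate_retimed.mp4", fourcc, PLAYBACK_FPS, (width, height))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)

reader.release()
writer.release()

print(f"Playback is {CAPTURE_FPS / PLAYBACK_FPS:.1f}x slower than real time")
```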
The ethos behind their approach, however, remained the same: to support the story invisibly rather than being the point of it. The team tried not to replace the baby completely unless they had to, leaning on real performance wherever possible and replacing only where it was necessary or where the script called for something eccentric or unconventional. They used a lot of reference, performance photography and video, with the intent of creating additional material to use after the fact whilst leveraging machine learning.
Keeping control of the facial elements was difficult because the team had multiple shots of the babies, who were also changing and growing over the course of the shoot. The same baby often looked different from shot to shot, let alone when the team were trying to replace them or change their likeness or performance. Consequently, they had to work with ever-shifting boundaries around what a likeness actually was.
While it wasn’t always the solution, machine learning was crucial. It worked really well, especially when combined with more traditional methods, to help sell a photographic likeness and performance. On set we captured specific elements for machine learning to drive those models, in addition to the regular capture of 3D scans and other usual reference. It was very exciting to integrate new and developing tools into the Framestore pipeline.
Owen Braekke-Carroll
The team also made good use of KeenTools, effectively a set of smart tools for VFX and 3D artists, as head-tracking software to control facial performances such as closed mouths or smiles.
In the penultimate sequence of the series the Baby is seen drifting underwater. To achieve this safely with the cast, the VFX team used a combination of dry-for-wet photography, an underwater shoot and CG digi-doubles.
With the mother holding the baby, we had a blue screen stage which we then lit with polarised lights, cross-polarised with the camera. This enabled us to remove much of the specularity from the face. We then shot at a high frame rate with a watery effect light, and that gave us the performance of the baby under the correct lighting. We used fans so their hair and clothing moved, making it look like they were underwater. We then took the head from that performance and placed it on our digi body. The performances from the cast babies were so fantastic, and it was a great way to keep them and still get the shots.
Owen Braekke-Carroll
The show also involved detailed environment work, including a digital ocean, digital matte-painted skies and split-location work on a dilapidated cottage.
We had a recurring story location involving an impossibly large cliff on the southeast coast of England, with a dilapidated cottage at the bottom of it fronting straight onto the water. This location can’t exist in reality, so we used a secondary location in southeast Kent for the cottage set, which we combined with CG, DMP and plate photography to build it into the seafront location.
Owen Braekke-Carroll
It was a great experience working with Sister Productions and HBO. The show felt fresh and exciting from pre-production through to post, and I hope that comes through in our work. At every level there was a great deal of passion behind the project, and the VFX team worked incredibly hard throughout to help bring this story to life.
Owen Braekke-Carroll