Production of the remaining shots of Ara’s Tale is now running at full steam.
Now that the dragon is finished, it’s fun again to do the lighting and compositing for the shots.
I already had the tricks and workflow for all the shots where Ara and the environment are shown; close-ups, for example, have a completely different setup than wide shots. For the dragon I had yet to find a good workflow and best practices.
This led to a revision of the linked dragon assets. I now have 3 versions of the dragon, one for each level of detail. Depending on the shot, I use the most appropriate one. I do this to cut down render time and, most importantly, memory consumption. With complex shots at 1080p I currently reach a peak of ~7 GB. I am running 3 frames in parallel on my render machine, which has 24 GB, so there is not much room left to play with.
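As a quick sanity check on that headroom, here is the arithmetic (a back-of-the-envelope sketch using the figures above; the 7 GB number is a per-frame peak, so actual usage varies from shot to shot):

```python
# Rough memory-headroom estimate for parallel rendering.
peak_per_frame_gb = 7   # observed peak for a complex 1080p frame
parallel_frames = 3     # frames rendered concurrently
total_ram_gb = 24       # RAM on the render machine

used_gb = peak_per_frame_gb * parallel_frames  # worst case: 21 GB
headroom_gb = total_ram_gb - used_gb           # only 3 GB left
print(used_gb, headroom_gb)  # 21 3
```

With only ~3 GB to spare, one more complex shot pushing past 8 GB per frame would force dropping to 2 parallel frames, which is exactly why the per-shot level-of-detail versions of the dragon matter.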
The latest shot, which I finished just today, was by far the most complex one. And I guess this will hold true for the rest of the shots.
It’s a shot with a long camera move covering almost all of the environment, partly quite close, and also showing Ara and the dragon. That meant throwing in all the effects the environment needs (clouds, mist, dynamics, sense of scale) and additionally balancing this out to highlight the characters in the scene, which demand their own special treatment.
Up to now I was always fine with the 20 layers Blender provides for organizing all the objects in a scene. For this shot it soon became clear that this would not be enough if I wanted to stick to my tested workflow.
The solution was to split the shot into 2 Blender scenes (one for the environment and one for the characters) and render a multilayer file for each scene. With this I actually have 40 layers at hand, which turned out to be plenty.
The total number of render layers in the combined multilayer files is 21, each one including color, Z, alpha, shadow/AO, and sometimes index and vector channels.
Putting this all together in the compositor is a tedious and very iterative task. Moreover, to get the best visual result, different parts of the shot require slightly different compositing values. This is where the everything-is-animatable paradigm of the new Blender series comes in very handy.
My workflow for this is to render out key frames (5 out of 360 in this case) and tune the compositing node setup to the best result for these frames. This in turn triggers changes in the general lighting setup or maybe demands additional render layers (special masks, for example). So it’s a constant back and forth.
When I am satisfied, I turn on the render server and let it render out all of the frames. After that I let it render out the composite. That’s the first time I actually see the shot in full color and lighting. This typically reveals lighting and compositing problems very quickly, and another round is due.
If I am lucky, no changes to the render setup have to be made, which is true 90% of the time. Rendering out just the composite is very fast (relatively speaking 🙂) and the turnaround times are very low. Just as an example: the 360 frames took ~6 hrs to render, whereas a full compositing run is finished after 5 mins.
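To put a number on that turnaround difference, here is a small illustrative calculation based on the figures above (a sketch, not exact timings — per-frame render time varies a lot with shot complexity):

```python
# Compare a full render pass against a compositor-only re-run.
frames = 360
render_hours = 6         # full render of all frames
composite_minutes = 5    # compositor-only pass over the same frames

minutes_per_frame = render_hours * 60 / frames       # average render cost
speedup = render_hours * 60 / composite_minutes      # composite vs. render
print(minutes_per_frame, speedup)  # 1.0 72.0
```

A compositing iteration is roughly 70× cheaper than re-rendering, which is why keeping all the lighting information in multilayer files pays off so well.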
And this shows very dramatically why I love this type of workflow (separating all the lighting information into layers written to multilayer files). The final lighting tweaks and polishing can all be done in the compositor at almost no cost, which invites experimenting and trying different things.
And again, after so much dry talk, here are 3 frames taken directly from the movie to show you a bit of the state of production. (The last two frames are from the complex shot mentioned above.)
This latest shot took me almost a week to complete. The next ones are simpler again and progress should be faster.
BTW, the total amount of time spent is now at 1406 hrs, and I am starting to feel the wish to finally bring this whole project to completion 🙂
And finally I want to put an almost nostalgic bit at the end of this post.
Looking through the old and very rough concept sketches for Ara’s Tale, I found this one, which dates back to July 30, 2009. It was this image, planned as a still-image project, that triggered the whole movie. And if you compare it with the second of the frames shown above, you should see some resemblance 😉