Sometimes it seems that the sole purpose of this project is to test my patience, motivation, and capacity for suffering.
Setting up a workflow for offline compositing (i.e. compositing not directly after rendering, but from saved multilayer EXR files) sent me on a journey through storage demands, SSS deficiencies, the current state of Blender 2.5, back to 2.49, down into the source code of compositor nodes, and finally to a somewhat shakily working solution.
My idea was always to render each shot into multiple layers, save them to a multilayer EXR file, and use a separate compositing setup to compose these layers into a final frame/clip.
Using this approach allows me to work relatively fast on balancing the lighting, doing color correction, and adding effects such as DoF, glare, motion blur, etc. I can identify the needed lights and layers, render the shot once (which can take quite a while), and then have quick turnaround cycles working offline with the compositor. With this I am able to produce preview clips in a fraction of the time needed for rendering.
I found that I have to see a frame in the context of the whole shot to decide whether the lighting is correct or the DoF is working the way I want. And if attributes have to change over time, that has to be checked in the context of the shot too.
E.g. the shot I showed in my last post takes about 1 min/frame. With 285 frames in total this gives me roughly 4.75 hrs of render time. My current compositing node setup needs ~3 sec to create a frame, which gives me a full shot after ~15 minutes. That's a turnaround I consider manageable enough to stay open to testing and experimenting on a whole shot.
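The arithmetic behind these numbers is simple, but worth writing down; a quick sketch in plain Python (figures from above, the names are my own):

```python
# Back-of-the-envelope numbers for one shot, taken from the post:
# 285 frames, ~60 s/frame to render, ~3 s/frame to composite offline.

FRAMES = 285
RENDER_SEC = 60   # seconds per frame, full render
COMP_SEC = 3      # seconds per frame, offline compositing

def shot_minutes(frames, sec_per_frame):
    """Total processing time for a whole shot, in minutes."""
    return frames * sec_per_frame / 60

render_min = shot_minutes(FRAMES, RENDER_SEC)  # 285 min, i.e. 4.75 hrs
comp_min = shot_minutes(FRAMES, COMP_SEC)      # 14.25 min, i.e. ~15 min
speedup = render_min / comp_min                # 20x faster turnaround

print(f"render {render_min / 60:.2f} h, composite {comp_min:.2f} min, {speedup:.0f}x")
```

A 20x faster iteration loop is what makes experimenting on a whole shot feasible at all.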
For each light class (ambient, key light, fill light, rim light …) I render out a layer with only this light affecting the objects.
I do this separately for the background, the foreground (Ara), and the cage. (The lights in the cage will be yet another layer, but they will use some effects, so this is postponed a bit.) This approach currently gives me 8 layers to use in the compositing stage.
The First Obstacle
First compositing tests revealed a major shortcoming of the SSS implementation in Blender:
The SSS preprocessing stage takes all lights from the active layers in the current scene (regardless of whether they are layer-only lights or not) and uses them to create a light map.
An SSS render of a single layer with one light will therefore be too bright, and wrongly lit as well, since you will see the influence of all lights where there should be only one. I haven't found this documented officially, but the source code is quite clear in this regard.
The solution here is to use separate scenes as a sort of super-layer for the individual SSS passes needed. It turned out that I only needed the key light pass with SSS activated. These extra scenes are then written to separate EXR files, because I am not able to include them in the multilayer EXR file.
This means more files and a higher probability of messing something up.
And after rendering out a single frame in Full HD, I realized that I was in dire need of a storage upgrade. One frame amounts to ~170 MB of raw data to be fed into the compositor, resulting in a total storage demand of ~1.9 TB for 11,000 frames.
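The storage estimate checks out the same way (plain Python, figures from above):

```python
# Storage demand for the whole short film, figures from the post:
# ~170 MB of raw EXR data per frame, ~11000 frames in total.

MB_PER_FRAME = 170
TOTAL_FRAMES = 11_000

total_mb = MB_PER_FRAME * TOTAL_FRAMES
total_tb = total_mb / 1_000_000   # decimal TB, as disk vendors count

print(f"{total_tb:.2f} TB")       # 1.87 TB: a single 2 TB disk is already tight
```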
Needless to say, I now have a 2 TB external hard disk and am already thinking about purchasing another one.
My idea was to use the current Blender 2.5x SVN versions to do the actual compositing. With the possibility of using all of my cores, and with the very useful new node additions, this promised to be a really productive tool. And since the compositor was heavily used during Sintel, it should be in a usable state.
I started out with a single frame as input to test the node setup. I was quite pleased with how well the new color balance node worked, and my idea of controlling the lights in post-processing proved to work extremely well.
The Next Obstacle
I then tried to use the multilayer files in a sequence, to see how navigating through frames worked out.
This clearly showed that multilayer files, especially in sequence mode, do not really work. The problems ranged from frequent crashes to wrongly displayed images. Additionally, the object ID information from multilayer files is not extracted correctly. This resulted in me reporting 3 bugs to the Blender bug tracker within 3 hours.
The current state of Blender does not allow me to use it in my compositing workflow. I will try it again from time to time to see how it develops, but for the time being I am going back to 2.49b.
I went back to 2.49 and tried to recreate the node setup I had in 2.54. Once you have used a color balance node, you never want to go back.
Well, I had to go back, and that led to so much frustration that I dove into the source code and had a look at the color balance implementation in Blender 2.54.
I tried a quick backport to 2.49 and have it running, though the UI is ugly and not as nice to use as the one in 2.54. I have yet to decide whether I want to pursue this direction further. The cool thing is that Blender 2.54 correctly imports a file created with the backported color balance node, so the solution would be transferable once I switch to 2.54.
I will try the current scene setup both ways, the 2.49b way and the new backported color balance way. Then I will decide whether I dare to go down the road of a branched Blender version.
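For anyone wondering what that node actually does: at its core, color balance is a per-channel lift/gamma/gain correction. Here is my own simplified sketch of that idea in plain Python (explicitly not the actual 2.54 node code):

```python
def color_balance(value, lift=0.0, gamma=1.0, gain=1.0):
    """Lift/gamma/gain grading of one channel value in [0, 1].

    A common simplified formulation, not a copy of Blender's node code:
    lift raises the blacks, gain scales the whites, and gamma bends
    the midtones.
    """
    graded = value * gain + lift * (1.0 - value)  # linear ramp from lift to gain
    graded = max(graded, 0.0)                     # clamp before the power curve
    return graded ** (1.0 / gamma)

print(color_balance(0.5))              # neutral settings: 0.5 stays 0.5
print(color_balance(0.5, gain=1.4))    # brighter highlights: 0.7
print(color_balance(0.25, gamma=2.0))  # lifted midtones: 0.5
```

Three sliders per channel, and you can push blacks, mids, and whites independently; that is exactly the kind of control the 2.49 node setup was missing.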
The Latest (not last) Obstacle
The DoF node is quite handy. It ties in closely with the camera settings, and thus gives good results and responds to the animation as well.
The bad thing is that it is tied closely to the camera.
If you are using multilayer EXR files as input, all of the initial camera information is lost, as it is not stored in the files. Another journey into the source code revealed that the camera settings are taken from the current active camera of the current scene.
The following comment in the source code by Ton Roosendaal states it plainly (CMP_defocus, line 255):
// get some required params from the current scene camera
// (ton) this is wrong, needs fixed
This just nicely sums it all up.
Next I had to find a solution to this problem. I came up with the idea of using Blender's asset linking to link the camera, its rig, and the associated action from the shot file into the compositing work file. In the end I created an extra scene in the shot file itself. This extra scene is empty except for the camera and its rig.
There are a lot of tiny details to get this approach working correctly. I was almost at the point of abandoning my offline workflow idea just to get the short done.
It's not enough to just add a camera to your scene. You have to somehow trigger the evaluation of the action IPOs to update the camera position and focus point.
I eventually achieved this by adding an empty render layer of the current scene to my node setup (this triggers an evaluation when you actually render it), and I had to include a tiny 3D viewport in my compositing screen to have it update as soon as I change the current frame. Only then do you get a correct and consistent camera setup, matching the previously rendered multilayer files and their Z information.
If you are still with me after so much rambling, I will finally show you a current breakdown of the compositing of my first shot.
Let's start with the raw render. Here all the lights are already set up to have roughly the light distribution they should have.
In the compositor I have 3 layers for the background: ambient, moon, and cage light. Each of these layers is imported and given a basic correction:
The next step is to combine these 3 layers and create the desired lighting balance.
The same steps are done for Ara. First, in this case, the 4 layers:
Again, combining and balancing all these layers gives you this:
The next step is to combine background and foreground, again balancing the intensity of both.
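Per pixel, those two combine steps are simple math: the light layers add up (each with its own gain), and the foreground goes over the background with an alpha-over. A minimal sketch in plain Python, with made-up pixel and gain values for illustration:

```python
# Minimal per-pixel sketch of the two combine steps described above:
# light layers are summed with per-layer gains, then the foreground is
# composited over the background with an alpha-over. Works on single
# RGB tuples; the sample values and gains are invented for illustration.

def combine_lights(layers, gains):
    """Additively mix light-layer pixels, each scaled by its gain."""
    return tuple(
        sum(g * px[c] for px, g in zip(layers, gains))
        for c in range(3)
    )

def alpha_over(fg, fg_alpha, bg):
    """Place a premultiplied foreground pixel over a background pixel."""
    return tuple(fg[c] + (1.0 - fg_alpha) * bg[c] for c in range(3))

# Hypothetical ambient / moon / cage-light samples for one pixel:
background = combine_lights(
    [(0.10, 0.10, 0.12), (0.05, 0.06, 0.10), (0.20, 0.15, 0.05)],
    gains=[1.0, 1.5, 0.8],
)
foreground = (0.30, 0.25, 0.20)   # premultiplied Ara pixel
final = alpha_over(foreground, 0.9, background)
print(final)
```

Balancing the lighting in post then simply means turning the gain knobs instead of re-rendering the shot.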
The next step is to add defocus and motion blur (not very visible in this case).
And finally adding some glow and glare to the dress and cage
This all sounds simple, but it is actually quite a bit of work and continuous fiddling with all the knobs in the node setup.
The result, rendered out as a clip, can be seen here. The resolution is only 50% of the final Full HD one, and OSA is only set to 5 in the original render. With these settings it took my quad-core system ~1 min to render one frame.
And for the interested, here are the actual statistics on the hours spent on this project so far:
  5:34  01-Script
 43:54  02-Storyboard
 73:54  03-Concept Art
 14:48  04-Animatics
133:20  05-Modeling
 76:04  06-Rigging
172:17  07-Animation
123:41  08-Texturing
 64:51  09-Simulation
 10:37  10-Project Management
108:54  11-Research
 14:51  12-Lighting/Compositing
----------------------------------------------
842:45  Total