In this post I describe what I have learned while experimenting with combining image layers that carry both transparency information (alpha) and depth information (Z).
The starting point for me was quite simple. Up to now I had done the compositing work for the background, Ara, and the cage. As I had not yet settled on the exact look of my cage lights, I decided to add them later. This shouldn’t be so hard, after all.
Well, somehow it is.
I created the cage lights using particle systems and effectors and rendered out a 250-frame cyclic animation. My idea was to use this animation as an alpha-mapped texture on simple planes that always face the camera (very similar to billboards, but without the particle system).
A frame from this animation looks like this:
All the black parts are actually transparent.
In my shot I rendered the cage lights on an extra layer to have control over the color and brightness of the lights. Now, as the light orbs are inside the cage, I used the Z-Combine node to composite cage and orbs together. Doing this left me with the following result:
As one can clearly see, my planes show up as filled circles: Z-Combine does not take the alpha information into account. This was the first indication that Z and alpha do not work well together. For this problem there are two solutions:
- patch the z-combine node
- render the orbs with either All Z enabled, or enable the cage layer as a zmask for the orb layer, and then use a normal alpha operation to blend them together.
Actually I did both; the result for solution 1 is shown below.
I have the following simple setup:
where the gray and green planes are on one layer and the red plane on a second one. The red plane has horizontal red lines over transparency and the green plane has vertical green lines over transparency. As you can see, the planes slightly overlap. This is a very simplified version of my orbs-in-the-cage setup. Rendering these two layers and z-combining them gives you the following result:
As with the orbs above, the alpha information is simply not taken into account.
Now the same but with a slightly patched z-combine node:
It looks as one would expect. As the z-combine node was not changed in Blender 2.5, I guess I will try to get feedback on the patch and see if it can be integrated into the official repository.
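To make the idea behind the patch concrete, here is a rough per-pixel sketch of what an alpha-aware z-combine has to do. This is illustrative Python using my own invented pixel structure (a dict with `rgb`, `a`, `z`), not the actual Blender source or patch:

```python
def z_combine_alpha(fg, bg):
    """Alpha-aware z-combine of two pixels (illustrative sketch).

    Each pixel is a dict: {"rgb": (r, g, b), "a": alpha, "z": depth}.
    Plain z-combine would copy whichever pixel is nearer; here the
    nearer pixel is instead alpha-blended over the farther one.
    """
    # Pick whichever input is nearer to the camera (smaller z).
    near, far = (fg, bg) if fg["z"] <= bg["z"] else (bg, fg)
    a = near["a"]
    # Blend the near pixel over the far one by its alpha, instead of
    # taking it outright; a fully transparent near pixel vanishes.
    rgb = tuple(a * n + (1.0 - a) * f for n, f in zip(near["rgb"], far["rgb"]))
    # Keep the near z only where the near pixel actually contributes.
    z = near["z"] if a > 0.0 else far["z"]
    return {"rgb": rgb, "a": max(a, far["a"]), "z": z}
```

With a fully transparent near pixel this hands the color and depth of the far layer straight through, which is exactly the behaviour the stock node lacks.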
Now for solution 2, the zmask plus alpha operation. Using this I get the following result with the cage and orb:
This looks good, but note that for the actual compositing no direct Z information was used: all masking was already done during the render phase. (By the way, I get the same result with my patched z-combine node.)
With this problem solved, life could be so simple. And yet there’s more to come.
Of course further processing follows, most prominently the defocus node. And I should have guessed that, since this node needs a Z value as input, there would be problems along the way.
So I had my combined layers (done with AlphaOver) and had to supply a combined Z to the defocus node. AlphaOver does not provide one (as z-combine would have done), so I set up a quick Math node with both Z channels as input, used the Minimum function to get a combined Z, and fed that into the defocus node, with this result:
Now this is odd and, more importantly, plainly wrong (look around the orb at the lower right). Again the Z value of the orb’s plane is taken fully into account when doing the blurring, which leaves a full-circle area of reduced blurring (on the cage) around the orb. A quick look at the defocus node’s source code convinced me that a type-1 solution (patching the node) was not something I was going to attempt.
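In per-pixel terms, the Minimum trick, and the flaw that produces this artifact, can be sketched like this (plain Python, my own simplified model of the Math node, not Blender code):

```python
def min_z(z_a, z_b):
    """Combine two z-buffers by taking the per-pixel minimum,
    mirroring a Math node set to Minimum.

    The flaw: a fully transparent pixel on the orb's billboard plane
    still carries that plane's (near) z, so it wins the minimum
    regardless of alpha and drives the defocus blur for the whole
    circle of the plane, not just the visible orb.
    """
    return [min(a, b) for a, b in zip(z_a, z_b)]
```

The defocus node then sees the billboard’s depth everywhere inside that circle, alpha or not, which is exactly the halo of wrong blur around the orb.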
So the idea then was to apply the defocus to each layer separately before combining them. Doing this doubled the defocus nodes, and thus the render time, and got me this:
This is almost perfect, but if you look again at the lower right orb, where the cage rod is obscuring it, you will notice that said rod has very sharp edges. This is fine for all shots where the cage is not shown in closeup, but for those closeup shots it is not acceptable and actually distracting. Since I have some (quite important) closeups in my short, I had to find a solution (again).
As with any endeavour, the last few percent of the work takes most of the time, and that is especially true here. I tried several approaches and searched Google and the BlenderArtists forums, where I found references to the Z/alpha topic, especially the trick by endi of using the mist buffer instead of the z-buffer so that transparency is taken into account. Unfortunately this does not work very well (when tried with z-combine) and gives plain garbage results with the defocus node.
In the end I found a solution. It is neither elegant nor simple: it adds two more defocus nodes and with them eats up render time and resources. Luckily I have both in abundance right now. I will explain my approach below, but if anyone reading this has an idea of how to do this in a simpler way, I am all ears.
This is the complete node setup:
The basic idea is to mix an image of the blurred cage on top of the image composited so far, but only in those areas where the rods obscure the orbs. All of this has to be done with masks.
To get a starting mask, I extract the alpha channel from the combined cage + orb image.
To get rid of the sharp edges, I blur this mask and screen the blurred version on top of the original. Now we have to somehow invert the mask and make the rod area as big as the blurred cage rods are in the defocused result.
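For reference, the Screen blend used here has a simple closed form, and screening a blurred mask over the original is a cheap way to feather its edges. A minimal sketch (plain Python; the helper names are my own):

```python
def screen(a, b):
    # Screen blend mode: 1 - (1 - a) * (1 - b).
    # It only ever brightens, so the mask can grow but never shrink.
    return 1.0 - (1.0 - a) * (1.0 - b)

def soften_mask(mask, blurred_mask):
    # Screening the blurred copy over the original keeps the solid
    # core of the mask while feathering its hard edges outward.
    return [screen(m, b) for m, b in zip(mask, blurred_mask)]
```

Anywhere the original mask is fully white it stays white; the blurred halo around it is lifted instead of averaged down, which is why Screen works better here than a plain Mix.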
Let’s start with a zmask for the cage, with all the orb planes excluded:
For the mask to cover the same area as the defocused cage, we simply apply the defocus node to this mask, using the Z information from the cage alone.
If we now multiply this zmask with the previously created alpha mask for the orbs, we get the following result:
Using a color ramp we can enhance and tweak the mask’s blending behaviour. This very mask is then used as the factor input of a Mix node that puts a cage-only defocused image on top of the original combined cage/orb image. This essentially adds the missing blurred parts of the rods.
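Put together, the whole trick reduces to a multiply, a ramp, and a lerp per pixel. A minimal sketch (plain Python; the linear `ramp` default is a hypothetical stand-in for whatever color ramp curve you actually dial in):

```python
def final_mix(base, cage_blur, cage_zmask, orb_alpha_mask,
              ramp=lambda v: min(1.0, 2.0 * v)):
    """Mix the defocused cage over the base image, driven by the
    product of the defocused cage zmask and the orb alpha mask.

    All arguments are flat lists of per-pixel floats in 0..1
    (a toy model of the node buffers, not Blender's data layout).
    """
    out = []
    for b, c, z, m in zip(base, cage_blur, cage_zmask, orb_alpha_mask):
        fac = ramp(z * m)                       # multiply masks, shape with "color ramp"
        out.append((1.0 - fac) * b + fac * c)   # Mix node: lerp base -> blurred cage
    return out
```

Where either mask is zero the base image passes through untouched; only where a blurred rod overlaps an orb does the cage-only defocused image bleed in.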
The final result can be seen as the image on top of this post.
Phew, finished!
I really hope there is a better solution, but for the time being I will use this setup, and only for the closeup shots. For all the others, the simple combination of the two defocused images will do just fine.
For the interested, here is a blend file with the setup shown above and, as a bonus, one frame with almost all layers of the first shot at 50% resolution (~29 MB).
And now I am going to set up my two 12-core (+12 hyperthreading) machines to act as my little render farm.