While working on my current project exploring the differences between luxrender and blender, I found myself setting up a linear workflow.
Until now I knew what it was but never found it necessary to test or use it in one of my projects. The steps involved in using it in blender, especially under linux, seemed too tedious.
As this project was aiming at photorealism, it was a good opportunity to actually try a linear workflow.
And I have to say I am a convert now. I won't do any serious project without applying a linear workflow.
Introduction
I will not go into detail about what a linear workflow actually is, as there are a lot of very good articles out there; instead I will concentrate on the practical issues of using it with blender … especially under linux.
See the following links for a more in depth discussion on the theory and inner workings of a linear workflow:
- Linear Workflow Tutorial by Yves Poissant; very thorough and informative reading
- General site about using linear workflow
- Monitor setup article by Norman Koren; more targeted to the photographer but all the wisdom digital photography has to offer is mostly applicable to CG as well. Norman Koren’s site is a treasure chest with all sorts of useful information.
Some of the main advantages are:
- better and more natural tonal ranges
  - fewer burnt out and clipped highlights
  - dark areas aren't too dark anymore
  - a more natural look overall
  - as calculations in the linear domain do not need any 'tricks' and correspond more closely to the physical world, the resulting images reflect that fact
- less fiddling with the light setup
  - in the current project I started without a linear workflow and had to use 5 lights, whereas the actual setup now uses just 2; and I remember getting the light setup right was always time consuming and sometimes frustrating work
- an absolute must when mixing with live action footage
  - to get natural lighting resembling that of the live action footage you need a calculation model which lets you do this without too much hassle
Nothing's for free, so you pay for these benefits with some additional workload: more planning in advance, some additional conversion steps for textures and colors, and with that generally longer turnaround cycles.
Overview
If you have read the above mentioned articles you will see that there are some key points which have to be implemented for a linear workflow.
All color input data has to be converted into linear space; that includes all diffuse textures as well as any colors chosen directly in blender's material system. All lights which offer an attenuation property should be set to inverse square, as this is how light behaves in the real world.
Once all the input data is linearized, the render algorithm can act properly on it and produce an image which is of course in linear space with the full range of exposure information, so the output is a high dynamic range (HDR) image. As the exposure information stored in such an image is too high to be displayed on a monitor, you have to apply a tone mapping and a conversion back to a non linear space suitable for viewing.
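As a command line illustration of this last conversion step, here is a sketch using ImageMagick's convert (assuming ImageMagick was built with OpenEXR support; render.exr is a hypothetical file name):

# encode a linear render back to display gamma 2.2
convert render.exr -gamma 2.2 -depth 16 render_display.png

Note that a plain gamma only handles the conversion back to non linear space; the actual compression of the tonal range is a separate step (I do it in blender's compositor, see below).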
To get the most out of the process, the conversion steps applied to the input data should not destroy information, so ideally the input data would be an HDR image as well, or at least have 16 bit color depth.
If you work under windows and have the money, the way to go for these conversions is photoshop. With its support for various bit depths, color management, image creation, manipulation and non destructive editing it is a dream to work with.
If you are using linux (as I do) and can't afford photoshop and/or don't want to run it under wine (as I do), your options are rather limited and none is perfect. The higher bit depths especially pose a problem.
Available Tools
Let's have a look at some of the input data, the associated tools available, and their possible use in a linear workflow.
- Photographic textures
If you are creating your own textures I highly recommend shooting in RAW mode to get the full exposure information the camera's sensor can deliver. As a nice side effect, RAW images are already in linear space.
To work with RAW images under linux, some tools you can use are:
- digikam (gpl): Besides being a full blown photo management tool and editor it also supports color management and RAW handling. The RAW handling core is Dave Coffin's dcraw.
- UFRaw (gpl): A gtk based tool solely targeted at handling RAW images. This software is also based on dcraw.
- LightZone (commercial): A full blown RAW image editor aimed at the professional photographer. It's implemented in java and has a native linux port. There used to be a free linux version available (version 2.1) but unfortunately this is no longer available.
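Since both digikam and UFRaw are built on dcraw, you can also call dcraw directly when you want a scriptable path from RAW to a linear 16 bit image. A sketch (photo.nef stands for whatever RAW format your camera produces):

# -4 writes linear 16 bit output (no gamma curve, no auto brightening),
# -T writes a TIFF instead of a PPM
dcraw -4 -T photo.nef

This yields a photo.tiff which is already in linear space and can go straight into the texture pipeline.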
If you are using textures in LDR (low dynamic range) format, i.e. all formats with 8 bit colour depth (most notably jpeg), the same options apply, most notably digikam.
- Painted textures
Here lies the real shortcoming in tools for linux. You have to create content by actually painting, so good tool support with higher colour depths is desirable.
- gimp (gpl): Often dubbed the photoshop for linux, and I really have to say that's quite a flattering statement. The lack of higher color depths prohibits its use in high quality image editing. The painting system is not its main focus and I surely miss non destructive editing. Higher color depths and non destructive editing are now technically possible with the integration of GEGL into gimp, but this has yet to be put to use. There is no roadmap available and one has to hope for the best.
- cinepaint (gpl): Originally a fork of gimp to provide high color depths and color management for use in post production in the movie industry, it now has impressive support for these features but really lacks the wealth of editing features or even painting functions gimp has to offer right now.
- krita (gpl): On paper it would deliver all the features wanted. In practice it is unusable for serious work: slow and often crashing as of version 1.6.x. It remains to be seen if the new version 2.0, right now in the last stages of development, offers a better experience. I really would love to see it come to life.
With this limited set of tools I am currently creating my own photographic textures using RAW images and digikam, and for all the other textures I use gimp.
Once the input material is created it has to be converted to linear space. In most cases we have LDR images, and before converting them we have to increase the colour depth to 16 bits as otherwise we lose too much information.
Tools for linearizing are all tools able to apply a gamma correction to 16 bit images, namely digikam, cinepaint, convert (part of ImageMagick) and blender. As I usually create a lot of textures I prefer a scriptable approach and therefore use convert:
convert <inputfile> -depth 16 -gamma 0.45 <outputfile>
This will take an input image, increase its color depth to 16 bit and apply a gamma of 0.45 (roughly 1/2.2), resulting in a linearized image. Keep in mind that you have to choose an output format which is capable of handling higher color depths. My preferred format is PNG.
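To verify that the output really has the higher depth, ImageMagick's identify can print it (linear/wood_color.png is a hypothetical output file):

# %z prints the image depth in bits
identify -format "%z bit\n" linear/wood_color.png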
The conversion to a 16 bit color depth does of course not increase the information in the image. This can be seen if you look at the histogram:
What can be seen is that there are a lot of missing values, giving a quite quantized histogram. If nothing is done to get rid of this you may get noise and artifacts in the final rendered image. I do not have the deeper theoretical knowledge on how to best smooth out the histogram, but I found a feasible solution (see below).
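If you want to inspect the quantization without opening an editor, convert can also dump the histogram as text; each distinct value is listed with its pixel count, so the gaps show up as missing entries (again using the hypothetical file from above):

convert linear/wood_color.png -format %c histogram:info:-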
With all these options present I settled on the following workflow.
Workflow
Planning and Preparation
I always keep all my textures in a separate directory. For the sake of clarity I create a subdirectory beneath the texture dir which I call linear. This will be filled with all the linearized diffuse maps. To be able to automatically do the conversion I use a naming scheme for my diffuse maps of *color.png. With this in place I use a simple script to convert all diffuse color maps in one batch.
#!/bin/sh
# linearize all diffuse color maps (*color.png) in one batch
mkdir -p linear
for file in *color.png; do
    echo "linearizing $file ..."
    # work in 16 bit so the gamma operation does not lose information;
    # the output stays 16 bit, which PNG can hold
    convert "$file" -depth 16 -gamma 0.4545 "linear/$file"
done
echo done
For the final tonemapping I use the compositor in blender.
The first RGB curves node compresses the tonal range of the HDR image and the second one is just the conversion back to gamma 2.2. The final tonal tweaking can be done with an additional RGB curves node.
Whenever I create or change a diffuse color texture I run the script to populate the linear directory. Consequently I only use image textures from this directory. During development of the project I do not correct the quantized histograms in the converted textures, as I do not yet have a reliable scriptable solution. For the final image I use digikam's restoration tool with the preset 'remove uniform noise' to get a smooth histogram without disturbing the image.
This is the smoothed histogram from the previous example. It's not perfect but quite decent.
Lamps
The only lamp types which provide inverse square attenuation (intensity falling off with 1/d², as light does in the real world) are the spot and the lamp. The types sun and hemi do not have any attenuation at all, so they are fine as they are. The only problem for me seems to be the area light, but I have not investigated it further. I can always use a spot with soft raytraced shadows to get a similar effect with proper attenuation.
Colors and procedural textures
I have to admit that this is an area where one has to do the most tweaking. If I have just a plain color I can easily correct it the following way.
- choose the desired color in blender
- use the color picker in gimp to grab the selected color.
- fill a dummy image with this color and do a gamma correction of 0.45 with the levels tool.
- now use the color picker in blender to set the corrected color.
This is tedious, and often it's not necessary as I will adjust the color by evaluating the result.
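The gimp round trip can also be scripted with convert: fill a one pixel canvas with the color picked in blender, apply the gamma, and print the corrected value (a sketch; #8a6b4c is just an example value):

# print the linearized version of a single color
convert xc:"#8a6b4c" -depth 16 -gamma 0.4545 -format "%[pixel:p{0,0}]" info:

The printed value can then be entered directly in blender's color picker.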
You can't do this with procedural textures as there is also a dynamic range involved. My solution right now is to use the material node system to compensate. One has to be sure to correct only the diffuse part and to take care not to convert already linearized diffuse textures again.
[edit: corrected screen shot; see comment by N30N]
This is suboptimal but will have to do for the moment.
Not working
Any previews in blender itself, such as the render and material previews, will show you the image as it exists in linear space and therefore too dark on the monitor. For quick setups I use the preview as a rough guide but always resort to the final render to tweak the materials. Alas, this is far from optimal and one hopes for gamma corrected previews …
Conclusion
With all this said you may ask 'is it really worth it?'.
And I have to answer: 'Yes, every drop of sweat of it!'
Comments

The free version of LightZone is still floating around the internet (or at least my server); google for LightZone-rev.8224.tar.gz. There's also Rawstudio (GPL) and RawTherapee (freeware) which are worth trying.
Regarding gamma correcting your material: it should be done to the colour input, not the output, otherwise it will affect the shading and not just the colour. If you really want correct material previews it can be done via pynodes: http://blenderartists.org/forum/showthread.php?t=136555
Hey N30N
Thanks for the information.
You are right of course about the color correction on the node. I will have to have a look at your pynode solution. The mentioned thread is an interesting read.
loramel
Thanks a ton for the info. I have a pretty good understanding of colorspace on a theoretical level and your article has filled in most of the practical gaps.
You are likely aware, but if not: qtpfsgui and the qtpfs cli apps would probably be very handy for producing linear space textures from photos.
cheers
Thanks ocular0b !
I discovered qtpfsgui and related apps quite recently and used them mainly for tonemapping the luxrender outputs, but yes, they should work great for producing HDR images from photos.
I intend to create some real HDR angular env maps from an exposure series of RAW photos I will shoot.
Great work !!!
I love it.
Thanks so much!
On the high color depth front, you might also take a look at nip2 (a companion tool for the vips image manipulation library).
Nip2 is a strange program where you arrange widgets in a workflow to manipulate images. The result is like a spreadsheet for working with images, or imagemagick with visual feedback. Once you have put together a "workspace" with one image, you can use it in scripts to apply the same manipulations to other images.
PROS: It is much faster than imagemagick, and it gives you feedback along the way. No more trial-and-error guessing the right values for convert. It supports almost every bit depth and color space you can think of.
CONS: It can be very finicky about what inputs are in what format. Sometimes you are forced to use 8-bit inputs for certain operations. It has only the most rudimentary gimp/photoshop-like editing facilities, so you are pretty much working with entire images.
Kind of a strange experience, and a bit quirky in its implementation (despite being at version 7), but it definitely has a place in putting together an automated workflow.
Alex, thanks for the pointer. I didn't know about this tool, but from the looks of it, it certainly has its place in a linear workflow.