appear much cleaner if computed at a higher bit depth. Even something as simple
as a blur can introduce artifacts (particularly banding) if it is computed at only
8 bits per channel. In such a situation, you should be able to specify that the 8-
bit image be temporarily converted to 16 bits for the purpose of the calculation,
even if it will eventually be stored back to an 8-bit image. As usual, this sort of
decision is something that you will need to deal with on a case-by-case basis.
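As a sketch of this sort of temporary promotion, consider a deliberately naive one-dimensional box blur written with NumPy. The function names and the blur itself are hypothetical illustrations, not any particular system's implementation; the point is only where the conversion to a wider data type happens.

```python
import numpy as np

def blur_8bit(image_u8, radius=8):
    """Naive box blur computed entirely in 8-bit integers.

    Each pass truncates fractional values, and the accumulated
    rounding is the kind of thing that shows up as banding.
    """
    out = image_u8.copy()
    for _ in range(radius):
        # Averaging neighbors in uint8 discards the fraction every pass.
        out = (out[:-2] // 2) + (out[2:] // 2)
        out = np.pad(out, 1, mode="edge")
    return out

def blur_high_precision(image_u8, radius=8):
    """The same blur, temporarily promoted to a wider type."""
    work = image_u8.astype(np.float32)  # temporary promotion
    for _ in range(radius):
        work = (work[:-2] + work[2:]) / 2.0
        work = np.pad(work, 1, mode="edge")
    # Quantize back to 8 bits only once, at the very end.
    return np.round(work).astype(np.uint8)
```

Both functions accept and return 8-bit images; the difference is that the second one pays the rounding cost a single time instead of once per pass.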
Compositing systems are not necessarily limited to working at only 16 bits of
accuracy or less. Some newer systems can actually go beyond this, representing
the data that is being processed with up to 32 bits per channel. Unlike the jump
between 8 and 16 bits, using 32 bits per channel is not done simply to increase
the number of colors that can be represented. Rather, 32-bit systems are designed
so that they also no longer need to clip image data that moves outside the range
of 0 to 1. These 32-bit systems are usually referred to as “floating-point systems.”
At first, the ability to represent data outside the range of 0 to 1 may not seem to
be terribly worthwhile. But the implications are huge, in that it allows images to
behave more like they would in the real world, where there is no upper limit on
the brightness a scene can have. Even though we will probably still want to
eventually produce an image that is normalized between 0 and 1, we can work
with intermediate images with far less danger of extreme data loss. For instance,
we can now double the brightness of an image and not lose all the data that was
in the upper half of the image. A pixel that started with a value of 0.8 would
simply be represented with a value of 1.6. For purposes of viewing such an image,
we will still usually represent anything above 1.0 as pure white, but there will
actually be data that is stored above that threshold. At a later time, we could
apply another operator that decreases the overall brightness of the image and
brings these superwhite pixels back into the “visible” range.
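A tiny numerical sketch makes the difference concrete. The “clipped” branch below mimics what a fixed-point system must do when a value moves above 1.0; the floating-point branch simply stores the superwhite value:

```python
pixel = 0.8

# A fixed-point pipeline must clip anything above 1.0 ...
doubled_clipped = min(pixel * 2.0, 1.0)   # stored as 1.0; the 1.6 is gone
restored_clipped = doubled_clipped * 0.5  # 0.5 -- the original 0.8 is lost

# ... while a floating-point pipeline keeps the out-of-range value.
doubled_float = pixel * 2.0               # stored as 1.6
restored_float = doubled_float * 0.5      # 0.8 -- fully recovered

print(restored_clipped, restored_float)   # prints 0.5 0.8
```

Multiplying by 2.0 and then by 0.5 is lossless for the floating-point pixel, but the clipped pipeline has irrevocably discarded everything above its ceiling.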
The ability to calculate in a floating-point mode is usually an optional setting,
and it should be used judiciously. Running in this mode will double the amount
of data needed to represent each frame relative to a 16-bit image (and quadruple
it relative to an 8-bit image), with a corresponding increase in the processing
power required.
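To put rough numbers on that cost, consider a 2048 × 1556 four-channel frame (a common film-scan resolution; the figures here are purely illustrative):

```python
# Approximate per-frame storage for a 2048 x 1556, 4-channel image.
width, height, channels = 2048, 1556, 4
samples = width * height * channels  # one sample per channel per pixel

for bits in (8, 16, 32):
    megabytes = samples * (bits // 8) / (1024 * 1024)
    print(f"{bits:2d} bits/channel: {megabytes:.1f} MB per frame")
```

The 32-bit frame weighs in at roughly four times the 8-bit frame, and a sequence of thousands of such frames multiplies that difference accordingly.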
There are almost always ways to produce images of identical quality without
resorting to floating-point calculations, so a system’s floating-point
capabilities should be reserved for situations in which other methods are not practical.
There may be an effect that can only be achieved in floating-point mode, or you
may simply be confronted with a script that is so complex that you would spend
more time trying to find a data-clipping problem than the computer would spend
computing the results in floating-point mode.
Although compositing systems that support floating-point calculations are still
fairly rare, they will almost certainly become the standard in a few more years
as memory and CPUs become faster and cheaper. Eventually we may even have
a scenario in which all compositing is done without bothering to normalize values