
device used to digitize the signal. Even though solid
state sensors are used in digital cameras, they pro-
duce an analog video signal. As a consequence, the
captured image resolution strongly depends on
the sampling frequency of the digitization device.
Other factors affecting the image resolution are the
standard file format adopted for image storage
and the image-processing application used to
postprocess the face images.
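For an analog video signal, the number of samples per line is set by the digitizer rather than by the sensor; a minimal sketch, assuming illustrative ITU-R BT.601-style timing and sampling figures:

```python
# Minimal sketch: the horizontal resolution of a digitized analog video
# line is set by the digitizer's sampling frequency, not by the sensor.
# Timing and rate below are illustrative (ITU-R BT.601-style figures).

def horizontal_samples(active_line_time_s: float, sampling_rate_hz: float) -> int:
    """Pixels produced per line by the digitization device."""
    return round(active_line_time_s * sampling_rate_hz)

# 52 us of active line sampled at 13.5 MHz
print(horizontal_samples(52e-6, 13.5e6))  # 702 samples per line
```

Doubling the sampling rate would double the captured horizontal resolution without any change to the sensor itself.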
2. Responsivity. The amount of signal the sensor
delivers per unit of input optical energy. CMOS
imagers are marginally superior to CCDs, in gen-
eral, because gain elements are easier to place on
a CMOS image sensor. This affects the illumina-
tion level required to capture a face image with a
sufficient contrast level.
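Responsivity determines how much optical exposure is needed to reach a given output level; a toy calculation, with all figures assumed for illustration:

```python
# Toy sketch of responsivity: output signal per unit of input optical
# energy. Units and figures are assumed, not taken from any datasheet.

def exposure_needed(target_output_v: float, responsivity_v_per_unit: float) -> float:
    """Optical exposure required to reach a target output level."""
    return target_output_v / responsivity_v_per_unit

# A more responsive imager (e.g. CMOS with in-pixel gain) needs less
# light, and hence less scene illumination, for the same face contrast.
print(exposure_needed(0.5, 2.0))  # 0.25 energy units
print(exposure_needed(0.5, 1.0))  # 0.5 energy units
```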
3. Dynamic range. The ratio of a pixel’s saturation
level to its signal threshold. CCD sensors are
much better than CMOS in this regard. Some
CMOS sensors deliver 8-bit pixel intensities in
which, because of the limited dynamic range, only
about 128 levels correspond to real intensity
variations. As a consequence, the information
content in the image features is half of what is
expected. A
higher dynamic range implies a higher image con-
trast even at low illumination levels and the possi-
bility to grab finer details. A gray level quantization
of 8 bit per pixel is generally sufficient for capturing
good quality face images. The sensor dynamic range
can be crucial when acquiring color images. In this
case, the color quantization may influence the in-
formation content in the face image itself, especially
if a low bit rate (with less than 8 bit per color
channel) is used for color coding.
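The saturation-to-threshold ratio described above can be put into numbers; the full-well and noise-floor figures below are illustrative assumptions, not datasheet values:

```python
import math

def dynamic_range_db(saturation: float, threshold: float) -> float:
    """Dynamic range: ratio of a pixel's saturation level to its
    signal threshold, expressed in dB."""
    return 20.0 * math.log10(saturation / threshold)

def distinguishable_levels(saturation: float, threshold: float) -> int:
    """Intensity steps the sensor can actually resolve."""
    return int(saturation / threshold)

# Assumed figures: the same 20,000 e- full well, but a CCD noise floor
# of 15 e- versus a noisier CMOS signal threshold of 150 e-.
print(round(dynamic_range_db(20000, 15), 1))  # 62.5 dB
print(distinguishable_levels(20000, 150))     # 133 levels, ~7 useful bits
```

Under these assumed numbers the noisier sensor resolves only about 133 levels, matching the situation described above in which an 8-bit output carries roughly half its nominal information.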
4. Sensitivity to noise (signal to noise ratio – SNR).
The three primary broad components of noise in a
CCD imaging system are photon noise (results
from the inherent statistical variation in the
arrival rate of photons incident on the CCD), dark
noise (arises from statistical variation in the num-
ber of electrons thermally generated within the
silicon structure of the CCD), and read noise (a
combination of system noise components inher-
ent to the process of converting CCD charge car-
riers into a voltage signal for quantification, and
the subsequent processing including the analog-
to-digital (A/D) conversion). A further useful
classification distinguishes noise sources on the
basis of whether they are temporal or spatial.
CCDs still enjoy significant noise advantages
over CMOS imagers because of quieter sensor
substrates (less on-chip circuitry), inherent toler-
ance to bus capacitance variations, and common
output amplifiers with transistor geometries that
can be easily adapted for minimal noise.
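The quadrature combination of these three noise components can be sketched as follows; the electron counts are illustrative assumptions:

```python
import math

def ccd_snr(signal_e: float, dark_e: float, read_noise_e: float) -> float:
    """SNR with the three noise components combined in quadrature:
    photon shot noise sqrt(S), dark shot noise sqrt(D), read noise R."""
    return signal_e / math.sqrt(signal_e + dark_e + read_noise_e ** 2)

# Assumed electron counts for one pixel in one exposure:
# 10,000 signal electrons, 100 dark electrons, 10 e- read noise.
print(round(ccd_snr(10000, 100, 10), 1))  # 99.0 (about 40 dB)
```

At high signal levels photon noise dominates (SNR grows as the square root of the signal), while at low light the dark and read terms set the floor.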
5. Uniformity. The consistency of response for differ-
ent pixels under identical illumination conditions.
Spatial wafer processing variations, particulate
defects, and amplifier variations create nonuni-
formities in light responses. It is important to
make a distinction between uniformity under illu-
mination and uniformity at or near dark. CMOS
imagers were traditionally much worse than CCDs
under both regimes. New on-chip amplifiers have
made the illuminated uniformity of some CMOS
imagers closer to that of CCDs, sustainable as
geometries shrink. This is a significant issue in
high-speed applications, where limited signal
levels mean that dark nonuniformities contribute
significantly to overall image degradation.
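Illuminated and dark nonuniformity are commonly summarized as the spatial spread of pixel responses relative to their mean; a minimal sketch over made-up flat-field data:

```python
# Hedged sketch: quantifying pixel nonuniformity from flat-field frames.
# Response under uniform illumination gives illuminated nonuniformity;
# a near-dark frame gives dark nonuniformity. The tiny "frames" below
# are made-up illustrative data, not real sensor measurements.

from statistics import mean, pstdev

def nonuniformity_pct(pixels: list[float]) -> float:
    """Spatial standard deviation as a percentage of the mean response."""
    return 100.0 * pstdev(pixels) / mean(pixels)

illuminated = [200.0, 202.0, 198.0, 201.0]  # flat-field responses
dark = [2.0, 2.2, 1.8, 2.1]                 # near-dark responses

print(round(nonuniformity_pct(illuminated), 2))
print(round(nonuniformity_pct(dark), 2))
```

The same absolute pixel-to-pixel spread is a far larger fraction of the signal in the dark frame, which is why dark nonuniformities dominate image degradation when signal levels are limited.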
6. Shuttering. The ability to start and stop exposure
arbitrarily. It is a standard feature of virtually all
consumer and most industrial CCDs, especially
interline transfer devices, and it is particularly
important in machine vision applications. CCDs
can deliver superior electronic shuttering, with
little fill-factor compromise, even in small-pixel
image sensors. Implementing uniform electronic
shuttering in CMOS imagers requires a number
of transistors in each pixel. In line-scan CMOS
imagers, electronic shuttering does not compro-
mise fill factor, because shutter transistors can be
placed adjacent to the active area of each pixel.
In area-scan (matrix) imagers, uniform electronic
shuttering comes at the expense of fill factor,
because the opaque shutter transistors must be
placed in what would otherwise be an optically
sensitive area of each pixel. A uniform synchro-
nous shutter, sometimes called a nonrolling shut-
ter, exposes all pixels of the array at the same time.
Object motion stops with no distortion, but this
approach reduces the pixel area because it requires
extra transistors in each pixel. Users must choose
either low fill factor and small pixels on a small,
less-expensive image sensor, or large pixels with
much higher fill factor on a larger, more costly
image sensor.
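The distortion difference between a rolling readout and a uniform synchronous (global) shutter can be illustrated with a toy model; the edge position and speed parameters are made up for the example:

```python
# Toy model: a vertical edge moves right at `speed` pixels per
# row-readout interval. With a rolling readout each row is exposed
# slightly later than the one above it, so the edge skews; a uniform
# synchronous shutter exposes all rows at once, so motion is frozen.
# All parameters are illustrative assumptions.

def edge_positions(rows: int, start: int, speed: float, rolling: bool) -> list[int]:
    """Column of the moving edge as captured in each row."""
    if rolling:
        # each row is exposed one readout interval later than the last
        return [round(start + speed * r) for r in range(rows)]
    # uniform synchronous shutter: every row exposed at the same instant
    return [start] * rows

print(edge_positions(5, 10, 0.5, rolling=True))   # [10, 10, 11, 12, 12]
print(edge_positions(5, 10, 0.5, rolling=False))  # [10, 10, 10, 10, 10]
```

The skewed column positions in the rolling case are exactly the distortion that the extra in-pixel shutter transistors, at the cost of fill factor, are there to eliminate.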
7. Sampling speed. This is an area in which CMOS
arguably delivers better performance than CCDs,