the sensor, but remains black. Therefore, if the illumination is narrow-banded, overlapping spectral channels should be used. Airborne and spaceborne cameras for
photogrammetry and remote sensing do not have this problem: the illumination is
broad-banded and we can use narrow-band channels.
If a sensor has N spectral channels with the mid-wavelengths λ1, ..., λN, it generates N measurement values S1, ..., SN (for every pixel). For normal digital cameras (red, green, and blue channels), N = 3. Multispectral sensors typically have N < 10 channels, whereas hyperspectral sensors may have N > 100 spectral channels. From the N measurement values we can generate three colour values X, Y, Z according to

\[
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix}
=
\begin{pmatrix}
a_{1,1} & \cdots & a_{1,N} \\
a_{2,1} & \cdots & a_{2,N} \\
a_{3,1} & \cdots & a_{3,N}
\end{pmatrix}
\cdot
\begin{pmatrix} S_1 \\ \vdots \\ S_N \end{pmatrix}.
\tag{2.7-17}
\]
For certain classes of spectral data, this result can be adapted to the standard values X, Y, Z, which are computed for these data according to (2.7-9).
This approach was successfully implemented, for example, for the ADS40, in
which the panchromatic channel added to the three colour channels R, G, B gives
N = 4 (Pomierski et al., 1998). The procedure may also be applied to ordinary dig-
ital cameras (N = 3; colour calibration) if good true colour capability is required.
This is especially useful if the output medium is calibrated too (for example, sRGB
standard for monitors).
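As a sketch of how the colour transformation (2.7-17) operates, the following NumPy snippet multiplies a 3 × N matrix by an N-vector of channel measurements for one pixel. The coefficient values are made-up placeholders, not a real calibration; a real camera requires a calibrated matrix such as the one derived for the ADS40.

```python
import numpy as np

# Illustrative 3 x N colour-transformation matrix (N = 4, as for the ADS40
# with R, G, B plus a panchromatic channel). The coefficients a_ij below
# are placeholders only.
A = np.array([
    [0.4, 0.3, 0.2, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.4, 0.3],
])

# N measurement values S_1, ..., S_N for one pixel (arbitrary example data)
S = np.array([10.0, 20.0, 15.0, 5.0])

# Equation (2.7-17): (X, Y, Z)^T = A . (S_1, ..., S_N)^T
X, Y, Z = A @ S
print(X, Y, Z)  # 13.5 15.5 12.5
```

With a calibrated matrix A, the same multiplication maps any number of spectral channels onto the three standard colour values.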
2.8 Time Resolution and Related Properties
In this section, we introduce formulae and relations to enable the user of airborne
cameras to estimate the expected
• exposure times (integration times),
• data rates and volumes,
• camera viewing angles (FOV, IFOV) and
• stereo angles.
Figure 2.8-1 shows the array principle. It is assumed that the detector arrays (line
or matrix) have been placed in the image plane of the lens. It is further assumed that
the lens produces ideal geometric images without spherical aberration and, in the
case of sensors with spectral filters, without chromatic aberration.
As shown in Fig. 2.8-1, the derivation can be performed using the theorem on intersecting lines and the ratio of the focal length f of the lens to the pixel size p, which relates the ground sampling distance GSD_y to the flight altitude h_g (see (2.8-1)). For example, when h_g = 1 km, f = 62 mm and p = 6.5 μm, then GSD_y = 10.5 cm.
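This scaling relation can be sketched in a few lines; the function below assumes the distortion-free imaging geometry stated above, GSD_y / h_g = p / f, and the variable names are ours:

```python
def ground_sampling_distance(pixel_size_m: float, altitude_m: float,
                             focal_length_m: float) -> float:
    """Across-track ground sampling distance from the theorem on
    intersecting lines: GSD_y / h_g = p / f."""
    return pixel_size_m * altitude_m / focal_length_m

# Values from the example in the text: h_g = 1 km, f = 62 mm, p = 6.5 um
gsd = ground_sampling_distance(6.5e-6, 1000.0, 62e-3)
print(round(gsd * 100, 1))  # 10.5 (cm)
```

Note that GSD_y grows linearly with flight altitude and shrinks linearly with focal length, which is why long lenses are used for high-altitude, high-resolution imaging.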
The distance GSD_y corresponds to the pixel size on the ground across the direction