SECTION 10.4 APPLICATION: ESTIMATION IN THE VECTOR SPACE MODEL 261
a calibration by moving a spherical marker around in the scene and synchronizing the
cameras to record their various views of it at the corresponding times, as illustrated
in Figure 10.7. The observed data is of course inherently noisy, and you need to do
some processing to determine the “best” estimate for their relative poses. We describe
an algorithm for this, taken from [38], culminating in the programming exercise of
Section 10.7.3. Our source assumes that the cameras have been calibrated internally.
This involves a determination of the parameters of their optics and internal geome-
try, so that we can interpret a pixel in an image as coming from a well-determined
spatial direction relative to its optical axis. Geometrically, it turns the camera into a
measurement instrument for spatial directions.
Let us consider $M + 1$ cameras. We arbitrarily take one of them as our reference camera number 0, and characterize the location of the optical center of camera $j$ relative to the center of camera 0 by the translation vector $\mathbf{t}_j$, and its orientation by the rotor $R_j$. We mostly follow the notation of [38] for easy reference, which uses bold capitals for vectors in the world, and lowercase for vectors within the cameras. We will simplify the situation by assuming that the marker is visible in all cameras at all times (our reference deals with occlusions; this is not hard but leads to extra administration).
The marker is shown $N$ times at different locations $\mathbf{X}_i$ in the real world. Relative to camera $j$, it is seen at the location $\mathbf{X}_{ij}$ given implicitly by
$$\mathbf{X}_i = \mathbf{t}_j + R_j \, \mathbf{X}_{ij} \, \tilde{R}_j.$$
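For concreteness, this forward model can be sketched numerically. In the sketch below the rotor sandwich $R_j \, \mathbf{X}_{ij} \, \tilde{R}_j$ is realized as multiplication by a $3\times 3$ rotation matrix, which acts on vectors in exactly the same way; all variable names are illustrative, not from the source.

```python
import numpy as np

def world_point(t_j, R_j, x_cam):
    """Map a point seen in camera j's frame to world coordinates:
    X_i = t_j + R_j X_ij R~_j, with the rotor sandwich realized
    here as multiplication by a rotation matrix R_j."""
    return t_j + R_j @ x_cam

# Illustrative camera 1: translated along x, rotated 90 degrees about z.
t1 = np.array([2.0, 0.0, 0.0])
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
R1 = np.array([[c, -s, 0.0],
               [s,  c, 0.0],
               [0.0, 0.0, 1.0]])

# A marker at (1, 0, 0) in camera 1's frame sits at (2, 1, 0) in the world.
X = world_point(t1, R1, np.array([1.0, 0.0, 0.0]))
```

A rotation matrix is used purely for brevity; in the book's own framework the same map would be written with a rotor and its reverse.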
However, all that camera $j$ can do is register that it sees the center of the marker in its image, and (using the internal calibration) know that it should be somewhere along the ray in direction $\mathbf{x}_{ij}$ from its optical center. The scaling factor along this ray is $\sigma_{ij}$; if we knew its value, the camera would be a 3-D sensor that would measure $\sigma_{ij} \, \mathbf{x}_{ij} = \mathbf{X}_{ij}$, and then $R_j$ and $\mathbf{t}_j$ could be used to compute the true location of the measured points. But the only data we have are the $\mathbf{x}_{ij}$ for all the cameras. All other parameters must be estimated. This is the external calibration problem.
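The split between what a camera measures and what remains unknown can be made explicit in a small sketch, assuming (as the section states) that internal calibration turns each pixel into a unit direction; the names here are illustrative.

```python
import numpy as np

def ray_direction(X_cam):
    """An internally calibrated camera measures only the unit direction
    of the marker from its optical center; the scale along the ray
    (sigma_ij) is unknown and must be estimated."""
    return X_cam / np.linalg.norm(X_cam)

# True (but unobservable) 3-D position of the marker in camera j's frame:
X_cam = np.array([3.0, 0.0, 4.0])

x_ij = ray_direction(X_cam)        # what the camera actually observes
sigma_ij = np.linalg.norm(X_cam)   # the unknown scale: sigma_ij * x_ij = X_ij
```

The external calibration problem is to recover all the $\sigma_{ij}$ (together with the $R_j$ and $\mathbf{t}_j$) from the directions alone.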
All parameters are estimated well when the reconstructed 3-D points differ little from their actual locations. Measuring this deviation as the sum of squared differences, we want to minimize the scalar quantity
$$\Gamma = \sum_{j=1}^{M} \sum_{i=1}^{N} \left\| \mathbf{X}_i - \mathbf{t}_j - R_j \, \sigma_{ij} \, \mathbf{x}_{ij} \, \tilde{R}_j \right\|^2.$$
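As a concrete check of this cost, it can be evaluated numerically for a candidate set of parameters; in this sketch the rotors are again realized as rotation matrices, and all names and array shapes are illustrative assumptions.

```python
import numpy as np

def gamma(X, t, R, sigma, x):
    """Sum of squared reconstruction residuals,
        Gamma = sum_j sum_i || X_i - t_j - R_j (sigma_ij x_ij) R~_j ||^2,
    with the rotor sandwich realized as the rotation matrix R[j].
    Shapes: X (N, 3), t (M, 3), R (M, 3, 3), sigma (M, N), x (M, N, 3)."""
    total = 0.0
    for j in range(len(t)):
        # Reconstructed world positions of all N markers as seen by camera j.
        pred = (R[j] @ (sigma[j][:, None] * x[j]).T).T + t[j]
        total += np.sum((X - pred) ** 2)
    return total
```

If the directions, scales, rotors, and translations are all exactly consistent with the true marker positions, $\Gamma$ vanishes; any perturbation makes it strictly positive, which is what the minimization exploits.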
Now partial differentiation with respect to the various parameters can be used to derive
partial solutions, assuming other quantities are known. This employs the geometric dif-
ferentiation techniques from Chapter 8. The results are simple to interpret geometrically
and are the estimators you would probably have proposed even without being able to
show that they are the optimal solutions.