9.6 Wiener–Hopf equations I
Now the inverse of a lower triangular matrix is also lower triangular, and so L(x − y)
itself is lower triangular. This means that the function U (x) is zero for negative x, whilst
L(x) is zero when x is positive.
If we can find such a decomposition, then on multiplying both sides by L, Equation
(9.103) becomes
$$
\int_0^x U(x-y)\,u(y)\,dy = h(x), \qquad 0 < x < \infty, \tag{9.107}
$$
where
$$
h(x) \stackrel{\mathrm{def}}{=} \int_x^\infty L(x-y)\,f(y)\,dy, \qquad 0 < x < \infty. \tag{9.108}
$$
These two equations come from the upper half of the full matrix equation represented
in Figure 9.9.
The lower parts of the matrix equation have no influence on (9.107) and (9.108): the
function h(x) depends only on f , and while g(x) should be chosen to give the column of
zeros below h, we do not, in principle, need to know it. This is because we could solve
the Volterra equation Uu = h (9.107) via a Laplace transform. In practice (as we will
see) it is easier to find g(x), and then, knowing the (f , g) column vector, obtain u(x) by
solving (9.105). This we can do by Fourier transform.
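The claim that (9.107) could be solved step by step can be seen directly in a discretization: the kernel only couples u(y) for y ≤ x, so a numerical solution can march forward in x one sample at a time, which is just triangular substitution. Here is a minimal sketch; the kernel U(s) = e^{-s}, the grid, and the manufactured right-hand side h (chosen so that the exact solution is u = 1) are illustrative assumptions, not data from the text:

```python
import numpy as np

# Solve the Volterra equation  int_0^x U(x-y) u(y) dy = h(x)  by forward
# stepping, using a rectangle-rule quadrature on a uniform grid.
U = lambda s: np.exp(-s)        # hypothetical kernel, zero extended for s < 0
n, dx = 200, 0.01
x = dx * np.arange(1, n + 1)
h = 1.0 - np.exp(-x)            # = int_0^x e^{-(x-y)} * 1 dy, so u(y) = 1 exactly

u = np.zeros(n)
for i in range(n):
    # contributions from the already-known samples u[j], y_j < x_i
    s = sum(U(x[i] - x[j]) * u[j] * dx for j in range(i))
    # the diagonal term U(0) * u[i] * dx is all that remains: solve for u[i]
    u[i] = (h[i] - s) / (U(0.0) * dx)
```

The recovered u differs from the exact solution u = 1 only by the O(dx) quadrature error, and each step uses only previously computed values, mirroring the triangular structure of the continuous operator.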
The difficulty lies in finding the LU decomposition. For finite matrices this decompo-
sition is a standard technique in numerical linear algebra. It is equivalent to the method
of Gaussian elimination, which, although we were probably never told its name, is the
strategy taught in high school for solving simultaneous equations. For continuously infi-
nite matrices, however, making such a decomposition demands techniques far beyond
those learned in school. It is a particular case of the scalar Riemann–Hilbert problem,
and its solution requires the use of complex variable methods.
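For a finite matrix, the equivalence between the LU decomposition and Gaussian elimination can be made explicit: the elimination multipliers assemble L while the reduced rows assemble U. A minimal sketch, with no pivoting (safe here because the example matrix is positive definite); the discretized kernel K(s) = e^{-|s|} and the grid are illustrative assumptions:

```python
import numpy as np

def lu_doolittle(A):
    """LU factorization by plain Gaussian elimination (no pivoting):
    returns L (unit lower triangular) and U (upper triangular) with A = L @ U."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]   # subtract multiple of pivot row
    return L, U

# Discretize a difference kernel K(x - y) on a small grid.
x = np.linspace(0.0, 1.0, 6)
A = np.exp(-np.abs(x[:, None] - x[None, :]))  # hypothetical kernel K(s) = e^{-|s|}
L, U = lu_doolittle(A)
```

For matrices this is routine; the point of the passage is that no such finite elimination procedure exists for a continuous kernel, which is why complex-variable methods are needed.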
On taking the Fourier transform of (9.106), we see that we are being asked to factorize
$$
\widetilde{K}(k) = \bigl[\widetilde{L}(k)\bigr]^{-1}\,\widetilde{U}(k). \tag{9.109}
$$
[Figure 9.9: two continuous matrix equations, showing the triangular kernel U acting on the vector u, and the triangular kernel L acting on the vector with components f and g to produce the vector with components h and 0.]
Figure 9.9 Equation (9.107) and the definition (9.108) correspond to the upper half of these two matrix equations.