Astron. Astrophys. 348, 38-42 (1999)
2. A direct method to solve the variational principle
Direct methods in variational problems are well known, especially in
applied mathematics (see, e.g., Gelfand & Fomin 1963). Suppose that one
can find a complete set of functions $\{ \phi_i \}$ on the domain
$\Omega$ (the full definition of "complete" will be given below), so
that any function on $\Omega$ can be represented as a linear
combination of the form
$$\kappa(\vec\theta) = \sum_{i=1}^{\infty} a_i \phi_i(\vec\theta) . \eqno(8)$$
More precisely, we assume that for any function $\kappa$, there is a
choice of the coefficients $a_i$ such that
$$\lim_{n \to \infty} \left\| \kappa - \sum_{i=1}^{n} a_i \phi_i \right\| = 0 .$$
Let us now introduce a sequence of trial mass maps
$$\kappa^{(n)}(\vec\theta) = \sum_{i=1}^{n} a_i^{(n)} \phi_i(\vec\theta) .$$
We further require that the function $\kappa^{(n)}$ minimize the
functional $S$: in other words, the coefficients $a_i^{(n)}$ are chosen
so that the functional $S$ attains its minimum value. This obviously
happens when
$$\frac{\partial S\bigl[\kappa^{(n)}\bigr]}{\partial a_i^{(n)}} = 0 , \qquad i = 1, 2, \ldots, n .$$
Solving this set of $n$ equations, we obtain the $n$ coefficients
$a_i^{(n)}$, and thus the function $\kappa^{(n)}$. By repeating this
operation for a sequence of values of $n$, we find a sequence of
functions $\kappa^{(n)}$. These functions, under suitable assumptions
(verified in our problem), have the following properties (see Gelfand &
Fomin 1963 for a detailed discussion): (i) Let us call $S_n$ the value
of $S$ when $\kappa$ is replaced by the function $\kappa^{(n)}$. Then,
obviously, the sequence $\{ S_n \}$ is non-increasing. (ii) If the set
$\{ \phi_i \}$ is complete, then the functions $\kappa^{(n)}$ converge
to the solution $\kappa$ of the problem. This method thus provides a
way to obtain the function $\kappa$ to the desired accuracy.
The method described here can easily be applied to our problem. In
fact, by expanding $\kappa$ as in Eq. (8), we find
$$\sum_{j=1}^{n} a_j^{(n)} \int_\Omega \nabla\phi_i \cdot \nabla\phi_j \, \mathrm{d}^2\theta = \int_\Omega \vec u \cdot \nabla\phi_i \, \mathrm{d}^2\theta . \eqno(10)$$
The previous equation, for $i = 1, 2, \ldots, n$, represents a linear
system of $n$ equations in the $n$ variables $a_j^{(n)}$. Its solution
is thus the set of coefficients to be used in Eq. (8).
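As an illustration, the assembly and solution of this linear system can be sketched numerically. The following Python fragment is a toy example built on assumed ingredients that are not from the paper: a unit-square domain, a small hand-picked polynomial basis with analytic gradients, uniform grid quadrature, and a synthetic field $\vec u = \nabla\kappa$ chosen to lie exactly in the span of the basis gradients.

```python
import numpy as np

# Hypothetical setup: Omega = unit square, sampled on a grid for quadrature.
N = 200
x, y = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, N), indexing="ij")
dA = (1.0 / (N - 1)) ** 2  # uniform quadrature weight (accuracy is not critical here)

# Analytic gradients of a small, non-orthonormal polynomial basis phi_j.
grads = [
    (np.ones_like(x), np.zeros_like(x)),   # phi_1 = x
    (np.zeros_like(x), np.ones_like(x)),   # phi_2 = y
    (y, x),                                # phi_3 = x*y
    (2 * x, np.zeros_like(x)),             # phi_4 = x^2
    (np.zeros_like(x), 2 * y),             # phi_5 = y^2
]

# Synthetic "observed" field u = grad(kappa_true), kappa_true = 0.5 x^2 + x y.
u = (x + y, x)

# Build the linear system of Eq. (10):
#   A_ij = (grad phi_i, grad phi_j),  b_i = (u, grad phi_i).
n = len(grads)
A = np.empty((n, n))
b = np.empty(n)
for i, (gix, giy) in enumerate(grads):
    b[i] = np.sum(u[0] * gix + u[1] * giy) * dA
    for j, (gjx, gjy) in enumerate(grads):
        A[i, j] = np.sum(gix * gjx + giy * gjy) * dA

a = np.linalg.solve(A, b)
print(np.round(a, 3))  # ~ [0, 0, 1, 0.5, 0], the coefficients of kappa_true
```

Because $\vec u$ here lies exactly in the span of the basis gradients, the solution of the system recovers the true coefficients essentially to machine precision, independently of the quadrature used.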
However, we note that care must be taken in the choice of a complete
set of functions. Let us define, for this purpose, the scalar product
between two generic vector fields $\vec v$ and $\vec w$ as
$$( \vec v, \vec w ) = \int_\Omega \vec v(\vec\theta) \cdot \vec w(\vec\theta) \, \mathrm{d}^2\theta .$$
As our problem involves $\nabla\kappa$, the completeness has to be
referred to the set of the gradients. In other words, the set
$\{ \phi_i \}$ is complete if
$$( \nabla f, \nabla\phi_i ) = 0$$
for every $i$ implies $\nabla f = 0$. It is easy to show that this
condition is equivalent to Eq. (7).
The direct method can be further simplified if a set of functions
$\{ \phi_i \}$ can be taken to satisfy a suitable orthonormality
condition, so that the gradients of the functions satisfy
$$( \nabla\phi_i, \nabla\phi_j ) = \delta_{ij} , \eqno(13)$$
where $\delta_{ij} = 1$ for $i = j$ and 0 otherwise. Then Eq. (10) can
be rewritten simply as
$$a_i = ( \vec u, \nabla\phi_i ) = \int_\Omega \vec u \cdot \nabla\phi_i \, \mathrm{d}^2\theta .$$
Thus, with the use of an orthonormal set of functions we have secured
two important advantages: (i) The linear system (10) has been
diagonalized, so that its solution is trivial. (ii) The coefficients
$a_i$ no longer depend on $n$: that is, the coefficients of the exact
solution are given by $a_i = ( \vec u, \nabla\phi_i )$.
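To make this concrete, consider the assumed toy geometry of a square $\Omega = [0, \pi]^2$ (chosen here purely for illustration, not the paper's own field). The Neumann eigenfunctions of the Laplacian, $\cos(jx)\cos(ky)$, can be normalized so that their gradients satisfy the orthonormality condition (13), after which each coefficient reduces to a single integral:

```python
import numpy as np

# Assumed toy geometry: Omega = [0, pi]^2, with Neumann eigenfunctions of the
# Laplacian, phi_{jk} = c_{jk} cos(j x) cos(k y).  The constant c_{jk} is
# chosen so that (grad phi_i, grad phi_j) = delta_ij.
N = 401
t = np.linspace(0, np.pi, N)
dx = t[1] - t[0]
x, y = np.meshgrid(t, t, indexing="ij")

def integrate(f):
    """Trapezoidal quadrature over the square."""
    w = np.ones_like(f)
    w[0, :] *= 0.5; w[-1, :] *= 0.5
    w[:, 0] *= 0.5; w[:, -1] *= 0.5
    return np.sum(w * f) * dx * dx

def grad_phi(j, k):
    """Analytic gradient of the normalized basis function phi_{jk}."""
    lam = j * j + k * k                    # Neumann eigenvalue of -Laplacian
    norm2 = (np.pi / 2 if j else np.pi) * (np.pi / 2 if k else np.pi)
    c = 1.0 / np.sqrt(lam * norm2)         # makes (grad phi, grad phi) = 1
    gx = -c * j * np.sin(j * x) * np.cos(k * y)
    gy = -c * k * np.cos(j * x) * np.sin(k * y)
    return gx, gy

modes = [(1, 0), (0, 1), (1, 1), (2, 0)]
G = [grad_phi(j, k) for j, k in modes]

# Check the orthonormality condition (13) numerically.
M = np.array([[integrate(gi[0] * gj[0] + gi[1] * gj[1]) for gj in G] for gi in G])
print(np.round(M, 4))  # ~ identity matrix

# With u = grad(kappa_true), each coefficient is a single integral (u, grad phi_i).
kappa_true = np.cos(x) * np.cos(y) + 0.3 * np.cos(2 * x)
ux = -np.sin(x) * np.cos(y) - 0.6 * np.sin(2 * x)
uy = -np.cos(x) * np.sin(y)
a = np.array([integrate(ux * gx + uy * gy) for gx, gy in G])
print(np.round(a, 3))  # ~ [0, 0, 2.221, 1.333]
```

Only the modes actually present in the test map pick up non-zero coefficients, and no linear system ever needs to be solved: this is the diagonalization referred to in point (i) above.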
Because of these advantages, an orthonormal set of functions should be
used whenever possible. We note, however, that the orthonormality
condition (13) depends on the field of observation $\Omega$. Even if
the existence of an orthonormal set of functions is always guaranteed
by the spectral theory of the Laplace operator (see Brezis 1987), for
"strange" geometries it may be non-trivial to find a complete
orthonormal set of functions. In such cases, we need to solve the full
linear system (10).
The direct method described above has several advantages with respect
to the "kernel" method and to the over-relaxation method: (i) The
method is fast in the case where an orthonormal set of functions can be
found. In fact, we need only evaluate one integral for each coefficient
$a_i$ that we want to calculate. (ii) The method does not require a
large amount of memory: we need to retain only the $n$ values of the
coefficients $a_i$. (iii) The precision of the inversion is driven in a
natural way by the value of $n$. Typically, the larger $n$ is, the
smaller the length scale of $\kappa^{(n)}$ that can be resolved (see
below). (iv) In some cases, the decomposition of the mass density
$\kappa$ in terms of the functions $\phi_i$ can be useful.
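Properties (i) and (iii) can be checked numerically on the same assumed square geometry used above (again a toy choice, not the paper's field): with gradient-orthonormal basis functions, $S_n = S_0 - \sum_i a_i^2$, so enlarging the basis can only lower the residual, and the higher modes capture progressively smaller length scales. A sketch with a Gaussian test map:

```python
import numpy as np

# Assumed toy geometry Omega = [0, pi]^2 with gradient-orthonormal cosine modes;
# by orthonormality, S_n = S_0 - sum_i a_i^2, with S_0 = integral of |u|^2.
N = 401
t = np.linspace(0, np.pi, N)
dx = t[1] - t[0]
x, y = np.meshgrid(t, t, indexing="ij")

def integrate(f):
    w = np.ones_like(f)
    w[0, :] *= 0.5; w[-1, :] *= 0.5
    w[:, 0] *= 0.5; w[:, -1] *= 0.5
    return np.sum(w * f) * dx * dx

def grad_phi(j, k):
    lam = j * j + k * k
    norm2 = (np.pi / 2 if j else np.pi) * (np.pi / 2 if k else np.pi)
    c = 1.0 / np.sqrt(lam * norm2)
    return (-c * j * np.sin(j * x) * np.cos(k * y),
            -c * k * np.cos(j * x) * np.sin(k * y))

# Synthetic field u = grad(kappa_true) for a Gaussian bump, which is NOT in the
# span of any finite basis, so the residual S_n stays positive but decreases.
r2 = (x - np.pi / 2) ** 2 + (y - np.pi / 2) ** 2
kappa_true = np.exp(-r2)
ux = -2 * (x - np.pi / 2) * kappa_true
uy = -2 * (y - np.pi / 2) * kappa_true

S0 = integrate(ux ** 2 + uy ** 2)
S = []
for m in (1, 2, 3, 4):
    modes = [(j, k) for j in range(m + 1) for k in range(m + 1) if j or k]
    a2 = 0.0
    for j, k in modes:
        gx, gy = grad_phi(j, k)
        a2 += integrate(ux * gx + uy * gy) ** 2
    S.append(S0 - a2)  # residual S_n with n = (m+1)^2 - 1 basis functions
print([round(s, 4) for s in S])  # a non-increasing sequence
```

Each enlarged basis contains the previous one, so the sequence of residuals is non-increasing by construction, which is exactly property (i) of the direct method.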
© European Southern Observatory (ESO) 1999
Online publication: July 16, 1999