## Appendix A: non-parametric analysis

The non-parametric solutions of Eqs. (3) and (4) are described by their projection onto a complete basis of functions of finite support, chosen here to be cubic B-splines (i.e. the unique function which is a cubic polynomial over four adjacent knot intervals and zero outside, with the extra property that it integrates to unity over that support):
$$ f(r) = \sum_{n} x_n\, B_n(r). $$
The parameters to fit are the weights $x_n$. Calling $\mathbf{x}$ the vector of parameters and $\mathbf{y}$ the vector of measurements, Eqs. (3) and (4) then become formally
$$ \mathbf{y} = \mathbf{A}\cdot\mathbf{x}, $$
where $\mathbf{A}$ is a matrix whose entries $A_{i,n}$ are given by applying the projection of Eq. (3) (resp. Eq. (4)) to the basis function $B_n$ at the $i$-th measurement point.
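As an illustration of the linear model above, the following Python sketch builds a matrix whose columns are cubic B-splines evaluated at the measurement points, and fits the weights by ordinary least squares. Knot positions, array sizes, and the exponential test profile are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the projection y = A @ x onto a cubic B-spline basis,
# with A[i, n] = B_n(r_i). All sizes and knots are illustrative.
import numpy as np
from scipy.interpolate import BSpline

k = 3                                              # cubic B-splines
knots = np.linspace(0.0, 1.0, 8)                   # knots may be placed freely
t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]  # clamped knot vector
n_basis = len(t) - k - 1

r = np.linspace(0.0, 1.0, 50)                      # measurement radii
A = np.empty((r.size, n_basis))
for n in range(n_basis):
    c = np.zeros(n_basis)
    c[n] = 1.0                                     # switch on one basis function
    A[:, n] = BSpline(t, c, k)(r)

# Fit the weights x to a noisy exponential test profile (no penalty yet)
rng = np.random.default_rng(0)
y = np.exp(-3.0 * r) + 1e-3 * rng.standard_normal(r.size)
x, *_ = np.linalg.lstsq(A, y, rcond=None)
print(A.shape, x.shape)
```

Because the knots may be placed arbitrarily, refining `knots` where the profile varies quickly directly controls the resolution of the reconstruction.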
The projection and the self-consistent potential of B-splines can be computed analytically. Their knots can be placed arbitrarily, in order to resolve high frequencies in the profiles which are believed to be signal rather than noise (this is a requirement when using a penalty function which operates on the spline coefficients, since imposing a correlation between these coefficients would truncate the high frequencies). The analytic properties of B-splines and their transforms turn out to be handy, in particular because Taylor expansions are available when dealing with exponential profiles, where the dynamical range is large. Another useful property of B-splines is extrapolation: the correlation between spline coefficients induced by the penalty function yields an estimate of the behaviour of the profile beyond the last measured point; since the Abel transform requires integration to infinity, this estimate corrects in part for the truncation. Note that an explicit analytic continuation of the model can be added to the spline basis if required. Finally, the requirement here is that the reconstructed profile is smooth, which is more stringent than requiring that its projections are smooth.

Assuming that we have access to discrete measurements $\mathbf{y}$ of the profiles (via binning as discussed above), and that the noise in these measurements can be considered to be Normal, we can estimate the error between the measured profiles and the non-parametric B-spline model as
$$ L(\mathbf{x}) = \left(\mathbf{y}-\mathbf{A}\cdot\mathbf{x}\right)^{\mathrm{T}}\cdot\mathbf{W}\cdot\left(\mathbf{y}-\mathbf{A}\cdot\mathbf{x}\right), \tag{A5} $$
where the weight matrix $\mathbf{W}$ is the inverse of the covariance matrix of the data (which is diagonal for uncorrelated noise, with diagonal elements equal to one over the data variance). Linear penalty functions obey
$$ R(\mathbf{x}) = \mathbf{x}^{\mathrm{T}}\cdot\mathbf{K}\cdot\mathbf{x}, \tag{A6} $$
where $\mathbf{K}$ is a positive definite matrix. In practice, we use $\mathbf{K}=\mathbf{D}^{\mathrm{T}}\cdot\mathbf{D}$, where $\mathbf{D}$ is a second-order finite-difference operator, $(\mathbf{D}\cdot\mathbf{x})_n = x_{n-1} - 2x_n + x_{n+1}$. In short, the solution of Eq. (3) (or Eq. (4)) is
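A minimal sketch of the penalty term, assuming the standard three-point second difference for $\mathbf{D}$ (the paper's exact boundary treatment is not reproduced here; the coefficient count is illustrative):

```python
# Sketch of the linear penalty R(x) = x^T K x with K = D^T D,
# where D is a second-order finite-difference operator acting on
# the spline coefficients. Sizes are illustrative.
import numpy as np

n = 10                                   # number of spline coefficients
# (D @ x)[j] = x[j] - 2 x[j+1] + x[j+2]
D = np.zeros((n - 2, n))
for j in range(n - 2):
    D[j, j:j + 3] = [1.0, -2.0, 1.0]
K = D.T @ D          # positive semi-definite: linear trends are unpenalized

x = np.arange(n, dtype=float)            # a linear run of coefficients...
print(x @ K @ x)                         # ...has zero second differences
```

Note that $\mathbf{D}^{\mathrm{T}}\cdot\mathbf{D}$ built this way is positive semi-definite rather than strictly definite (constant and linear coefficient runs cost nothing), which is exactly the behaviour wanted from a smoothness penalty.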
found by minimizing the quantity
$$ Q(\mathbf{x}) = L(\mathbf{x}) + \lambda\, R(\mathbf{x}), $$
where $L(\mathbf{x})$ and $R(\mathbf{x})$ are respectively the likelihood and regularization terms given by Eqs. (A5) and (A6), $\mathbf{x}$ are the (large number of) parameters, and where the Lagrange multiplier $\lambda$ allows us to tune the level of regularization.
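Since both terms of the objective are quadratic in the parameters, the minimization reduces to a single linear solve, $(\mathbf{A}^{\mathrm{T}}\mathbf{W}\mathbf{A} + \lambda\mathbf{K})\,\mathbf{x} = \mathbf{A}^{\mathrm{T}}\mathbf{W}\,\mathbf{y}$. The Python sketch below demonstrates this with random stand-in arrays (not the paper's data):

```python
# Sketch: minimizing Q(x) = L(x) + lam * R(x) is a linear solve,
# (A^T W A + lam K) x = A^T W y, because both terms are quadratic in x.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 10
A = rng.standard_normal((m, n))       # stand-in projection matrix
W = np.eye(m)                         # inverse covariance (uncorrelated, unit variance)
y = rng.standard_normal(m)            # stand-in measurements

D = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
K = D.T @ D
lam = 0.1                             # Lagrange multiplier

lhs = A.T @ W @ A + lam * K
x = np.linalg.solve(lhs, A.T @ W @ y)

# The gradient of Q vanishes at the solution:
grad = 2.0 * (lhs @ x - A.T @ W @ y)
print(np.abs(grad).max())
```

The penalty term also conditions `lhs`, so the solve stays stable even when the number of spline coefficients is large relative to the information in the data.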
The last remaining issue involves setting the level of regularization. The so-called cross-validation method (Wahba 1990) adjusts the value of $\lambda$ so as to minimize the predictive error of the model.

© European Southern Observatory (ESO) 1999. Online publication: March 1, 1999
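A common variant of the cross-validation idea is generalized cross-validation (GCV), sketched below; the score and all arrays are standard textbook stand-ins rather than the paper's exact recipe (unit weights are assumed, i.e. $\mathbf{W}=\mathbf{I}$).

```python
# Sketch of generalized cross-validation (Wahba 1990) for choosing lam:
# minimize GCV(lam) = m * ||y - A x_lam||^2 / tr(I - H_lam)^2,
# where H_lam = A (A^T A + lam K)^{-1} A^T is the influence matrix.
import numpy as np

rng = np.random.default_rng(2)
m, n = 60, 12
A = rng.standard_normal((m, n))             # stand-in projection matrix
D = np.diff(np.eye(n), n=2, axis=0)         # second-difference operator
K = D.T @ D
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = A @ x_true + 0.1 * rng.standard_normal(m)

def gcv(lam):
    """GCV score for one value of the Lagrange multiplier (W = identity)."""
    H = A @ np.linalg.solve(A.T @ A + lam * K, A.T)
    resid = y - H @ y
    return m * (resid @ resid) / np.trace(np.eye(m) - H) ** 2

lams = np.logspace(-4, 2, 25)
best = min(lams, key=gcv)                   # grid search over lam
print(best)
```

Scanning $\lambda$ on a logarithmic grid is usually sufficient, since the GCV score varies slowly over orders of magnitude in the multiplier.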