Astron. Astrophys. 342, 763-772 (1999)
3. Derivation of visual extinction and excess spectral shape
In the last decade, it has been realized that, because of the
excess, the visual extinction cannot be deduced directly from the
colors of T Tauri stars. Hartigan et al. (1991) proposed
estimating the extinction towards T Tauri stars by combining colors
and veiling measurements in the V band. Recently, GHCC extended this
method by using spectrophotometry and a veiling analysis
of a few photospheric absorption lines spread over a large spectral
bandwidth. Then, combining the extinction and the calibrated spectra,
they were able to deduce the excess spectral distribution. We propose
here to generalize the discrete method of GHCC with a "continuous"
approach which uses all the information contained in the
spectra.
In this section, the whole spectrum of our K7V star between
4000 Å and 6800 Å, except for the deep Na doublet around
5893 Å, is used as the reference spectrum. Since the reference
spectrum was first normalized to its local continuum, the excess and
the veiling are represented, without any loss of generality, by the
same quantity.
3.1. The formalism
We assume that the object and the reference spectra,
$F_{\rm obj}(\lambda)$ and $F_{\rm ref}(\lambda)$, are observed at a spectral
resolution R, are already calibrated spectrophotometrically,
and that the reference spectrum has been corrected for extinction. In
the absence of noise, $F_{\rm obj}(\lambda)$ and $F_{\rm ref}(\lambda)$
are strictly related by the following equation (see GHCC):

$$F_{\rm obj}(\lambda) = \alpha\,10^{-0.4\,A_V\,\varphi(\lambda)}\,\left[F_{\rm ref}(\lambda) + E(\lambda)\right] \eqno(13)$$

where $A_V$ is the visual extinction towards the object,
$\varphi(\lambda)$ describes the extinction law, and $E(\lambda)$ is
the continuum excess flux (the other quantities, in particular the
overall scaling factor $\alpha$, are defined in Sect. 2.1).
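To fix ideas, Eq. (13) translates directly into a short numerical
model. The sketch below is ours, not GHCC's code; the array names and
the assumption that $\varphi(\lambda)$ is supplied pre-normalized to
unity in the V band are ours:

```python
import numpy as np

def reddened_model(f_ref, excess, Av, alpha, phi):
    """Eq. (13): object spectrum built from the reference plus excess,
    scaled by alpha and reddened by Av magnitudes of visual extinction.

    f_ref : reference spectrum, corrected for extinction
    excess: continuum excess flux E(lambda) on the same wavelength grid
    phi   : extinction law, normalized so that phi = 1 in the V band
    """
    return alpha * 10.0 ** (-0.4 * Av * phi) * (np.asarray(f_ref) + excess)
```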
It is useful to first examine the approach of GHCC. Let
$N$ be the number of fitted photospheric
lines at the wavelengths $\lambda_i$,
$i = 1, \ldots, N$. The output of their discrete method
consists of a set of local scaling factors $\alpha_i$
and excesses $E_i$. In the absence of noise, each
$\alpha_i$ is related to $A_V$
and to the overall scaling factor $\alpha$ by:

$$\alpha_i = \alpha\,10^{-0.4\,A_V\,\varphi(\lambda_i)} \eqno(14)$$

In practice, $A_V$ and $\alpha$
are obtained by minimizing the set of $N$
relations of Eq. (14), which in
turn allows one to derive the excess by inverting Eq. (13):

$$E(\lambda_i) = \frac{F_{\rm obj}(\lambda_i)}{\alpha\,10^{-0.4\,A_V\,\varphi(\lambda_i)}} - F_{\rm ref}(\lambda_i) \eqno(15)$$
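Since Eq. (14) becomes linear in $\varphi(\lambda_i)$ after taking
logarithms, this discrete step can be sketched as a straight-line fit;
the local scaling factors $\alpha_i$, which GHCC obtain from per-line
veiling fits, are taken as given here:

```python
import numpy as np

def fit_discrete(alpha_i, phi_i):
    """Fit Eq. (14): log10(alpha_i) = log10(alpha) - 0.4 * Av * phi_i.

    A straight-line fit in phi yields the visual extinction A_V
    (from the slope) and the overall scaling factor alpha (intercept).
    """
    slope, intercept = np.polyfit(phi_i, np.log10(alpha_i), 1)
    return -slope / 0.4, 10.0 ** intercept  # A_V, alpha
```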
The method of GHCC is very interesting and its basic idea is
innovative. However, because of its discrete nature, it discards an
important part, perhaps most, of the information contained in the
spectra, in particular their low-frequency structure. In the general
case, it is not possible to solve Eq. (13) directly, because we do not
know the functional dependence of $E(\lambda)$.
What we do know, however, is that the excess is a smooth function of
wavelength, and this information can conveniently be exploited in a
"continuous" modelling.
Let us now introduce our approach. In a given spectral interval
$[\lambda_1, \lambda_2]$, any well-behaved physical function
like the excess can be decomposed into the sum of a straight line
joining its end points and the Fourier series of a periodic function
over $[\lambda_1, \lambda_2]$. The excess being smooth, the
Fourier series can be truncated at a certain maximum order n.
Under these conditions, $E(\lambda)$ can be
expressed in the form:

$$E(\lambda) = a_0 + a_1\,(\lambda - \lambda_1) + \sum_{k=1}^{n}\left[\,b_k\cos\frac{2\pi k(\lambda-\lambda_1)}{\lambda_2-\lambda_1} + c_k\sin\frac{2\pi k(\lambda-\lambda_1)}{\lambda_2-\lambda_1}\right] \eqno(16)$$
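Numerically, Eq. (16) is straightforward to evaluate; the following
sketch (our own notation, with coefficient arrays b and c of length n)
implements the line-plus-truncated-Fourier decomposition:

```python
import numpy as np

def excess_model(wl, a0, a1, b, c):
    """Eq. (16): straight line plus a Fourier series truncated at order n.

    wl    : wavelength grid spanning the interval [wl[0], wl[-1]]
    a0, a1: coefficients of the straight line joining the end points
    b, c  : length-n arrays of cosine and sine Fourier coefficients
    """
    phase = 2.0 * np.pi * (wl - wl[0]) / (wl[-1] - wl[0])
    E = a0 + a1 * (wl - wl[0])
    for k in range(1, len(b) + 1):
        E = E + b[k - 1] * np.cos(k * phase) + c[k - 1] * np.sin(k * phase)
    return E
```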
The problem then reduces to the estimate of the $2n + 4$
parameters $\{A_V, \alpha, a_0, a_1, b_k, c_k\}$, with
$1 \le k \le n$, but the formalism of Sect. 2.1 can
be generalized to any number of parameters. Assuming that the
wavelength is sampled at the Shannon frequency at values
$\lambda_j$, $j = 1, \ldots, M$, and that the noise standard
deviation at $\lambda_j$ is $\sigma_j$, we define the vector
$\mathbf{V}$, of components $V_j$, whose square modulus is to be
minimized, with:

$$V_j = \frac{F_{\rm obj}(\lambda_j) - \widehat{F}_{\rm obj}(\lambda_j)}{\sigma_j} \eqno(17)$$

and:

$$\widehat{F}_{\rm obj}(\lambda_j) = \alpha\,10^{-0.4\,A_V\,\varphi(\lambda_j)}\,\left[F_{\rm ref}(\lambda_j) + E(\lambda_j)\right] \eqno(18)$$
The excess cut-off frequency is controlled by the parameter
n, and the associated spectral resolution $R_E$
is simply given by:

$$R_E = n\,\frac{\bar{\lambda}}{\lambda_2 - \lambda_1} \eqno(19)$$

where $\bar{\lambda}$ is the central wavelength of the interval.
If $R_E$ is smaller than R,
then the number of parameters is smaller than the number of data
points, our problem is well defined,
and the solution of the fit is unique. The excesses determined by GHCC
on 17 T Tauri stars show that this is in practice always the
case. Indeed, they vary very smoothly, with a cut-off frequency smaller
than a few tens, which corresponds to small values of $R_E$,
even at spectral resolutions as low as a few hundreds.
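Putting Eqs. (13), (16), (17) and (18) together, the continuous fit
can be sketched as a standard nonlinear least-squares problem. The
code below reuses the hypothetical reddened_model and excess_model
helpers above and assumes uniform noise (all $\sigma_j$ equal):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, wl, f_obj, f_ref, phi):
    """Components V_j of Eq. (17) for parameters p = (Av, alpha, a0, a1, b, c)."""
    n = (len(p) - 4) // 2
    Av, alpha, a0, a1 = p[:4]
    b, c = p[4:4 + n], p[4 + n:]
    E = excess_model(wl, a0, a1, b, c)
    return f_obj - reddened_model(f_ref, E, Av, alpha, phi)

def fit_continuous(wl, f_obj, f_ref, phi, n):
    """Estimate the 2n+4 parameters of Sect. 3.1 by least squares."""
    p0 = np.concatenate(([0.0, 1.0, 0.0, 0.0], np.zeros(2 * n)))  # neutral start
    sol = least_squares(residuals, p0, args=(wl, f_obj, f_ref, phi))
    return sol.x[0], sol.x[1], sol  # A_V, alpha, full solution object
```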
3.2. How does it work?
To understand how the algorithm works, let us go back to the
discrete approach of GHCC. There, $A_V$ and $\alpha$
are derived from a least-squares fit
of the set of local scaling factors $\alpha_i$.
Assuming, as in Sect. 2, that the noise on the reference spectrum is
negligible, the error on each $\alpha_i$, given
by Eq. (6), is inversely proportional to the spectrum contrast. For
constant veiling and signal to noise ratio,
the procedure will weight the regions of high contrast more than the
regions of low contrast which, incidentally, are also more sensitive
to biases. Now, if instead of fitting individual lines we use the
whole spectrum divided into equal intervals where the extinction and
the excess can be considered constant, the same analysis applies. By
extrapolation, the same argument is valid for our "continuous"
approach, which can be considered as the limit when the length of each
interval tends to zero. As we do not know the excess cut-off frequency
a priori, we have to increase the parameter n step by step,
computing at each excess spectral resolution the outputs of the fit,
until the visual extinction and the overall scaling factor stabilize
within the error bars. This will happen when the true excess cut-off
frequency is reached.
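The stabilization search just described amounts to a simple loop over
n, reusing fit_continuous from the sketch above. The plateau test
(three successive $A_V$ values agreeing within a tolerance) and the
tolerance value are our arbitrary choices:

```python
def find_plateau(wl, f_obj, f_ref, phi, n_max=80, tol=0.05):
    """Increase n until the fitted A_V stabilizes within the tolerance."""
    history = []
    for n in range(1, n_max + 1):
        Av, alpha, _ = fit_continuous(wl, f_obj, f_ref, phi, n)
        history.append(Av)
        # plateau reached: the last three A_V estimates agree within tol
        if len(history) >= 3 and max(history[-3:]) - min(history[-3:]) < tol:
            return n, Av, alpha
    raise RuntimeError("A_V never stabilized: inadequate template or model?")
```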
In order to test the algorithm, we have performed a number of
simulations at low spectral resolution with the reference spectrum of
Fig. 1. The object spectra were generated by combining the reference
spectrum with a large number of "smooth" excess (or veiling) shapes,
visual extinction values, and white noise.
$R_E$ was varied from small values to
150. We find that in all cases, the computed visual extinction and
overall scaling factor stabilize around the correct values, within the
error bars, after a critical spectral resolution has been reached
(which depends on the excess cut-off frequency). The best estimates of
the parameters correspond to those where the errors are the smallest,
i.e. when the parameters begin to stabilize. Figs. 3 and 4 show two
examples of simulations. The extreme excess shape of Fig. 3a (solid
line) was generated from a random function of unit mean and variance,
smoothed by Gaussian filtering at a spectral resolution of 50. The
associated object spectrum of Fig. 3b has a visual extinction of
2 and an average signal to noise ratio of 100. We see in Figs. 3c and
3d that, beyond a critical value of n or, equivalently, of $R_E$,
the computed $A_V$ begins to stabilize at the correct value and the
reduced chi-square reaches 1 and remains constant. The optimum visual
extinction value is taken at the beginning of the plateau. The
corresponding excess, deduced from Eq. (15) (dotted line in Fig. 3a),
shows a quasi-perfect agreement with the input. Fig. 4a represents a
more realistic excess (solid line) of unit mean value, generated by
summing a constant and an exponential function. The associated object
spectrum of Fig. 4b has a visual extinction of 1 and an average signal
to noise ratio of 50. Here too, the computed $A_V$ begins to
stabilize beyond a critical $R_E$. The spectral shape of the
corresponding excess (dotted line in Fig. 4a) shows an excellent
agreement with the input, being only 10% smaller. The reduced
chi-square of Fig. 4d varies from 0.94 to 0.90, clearly showing that
the goodness of the fit should be judged not on the basis of the
reduced chi-square value, but on the existence of a plateau in which
the computed visual extinction remains, within the noise, constant as
a function of $R_E$.
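These simulations can be reproduced schematically as follows; the
smoothing width, random seed, and noise model are illustrative choices
rather than the exact recipe used for Figs. 3 and 4:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def simulate_object(wl, f_ref, phi, Av, snr, smooth_pix=20.0, seed=0):
    """Synthetic object: smooth random excess, reddening, white noise."""
    rng = np.random.default_rng(seed)
    E = gaussian_filter1d(rng.standard_normal(wl.size), smooth_pix)
    E = 1.0 + (E - E.mean()) / E.std()      # unit mean and unit variance
    f_obj = reddened_model(f_ref, E, Av, 1.0, phi)
    f_obj += (f_obj.mean() / snr) * rng.standard_normal(wl.size)
    return f_obj, E
```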
![[FIGURE]](img134.gif)
Fig. 3. a Solid line: model of the excess; dotted line: reconstructed excess. b Object spectrum generated from the excess model and the reference spectrum; the input visual extinction is $A_V = 2$ and the average signal to noise ratio is 100. c Calculated visual extinction as a function of $R_E$: it begins to stabilize, within the noise, at the correct value beyond a critical $R_E$. d Reduced chi-square as a function of $R_E$.
![[FIGURE]](img154.gif)
Fig. 4. a Solid line: model of the excess; dotted line: reconstructed excess. b Object spectrum generated from the excess model and the reference spectrum; the input visual extinction is $A_V = 1$ and the average signal to noise ratio is 50. c Calculated visual extinction as a function of $R_E$: it begins to stabilize, within the noise, at the correct value beyond a critical $R_E$. d Reduced chi-square as a function of $R_E$ (see text).
3.3. Error analysis
In this section we derive a formal expression for the errors on the
visual extinction and the overall scaling factor. A careful
examination of the matrix associated with the discrete case shows
that:

$$\sigma(\alpha) \propto \alpha\,\sigma(A_V) \eqno(20)$$

and that, within a good approximation:

$$\sigma(A_V) \propto \frac{1 + \bar{r}}{\overline{SNR}} \eqno(21)$$

where $\bar{r}$ and $\overline{SNR}$
represent the average veiling and
the average signal to noise ratio on the object spectrum,
respectively.
Eqs. (20) and (21) are also valid in the limit of our "continuous"
approach. To evaluate the coefficient of proportionality in Eq. (21),
we performed a number of simulations with our reference spectrum. The
object spectra were generated by adding a constant veiling to the
reference and introducing various visual extinctions. The spectral
resolutions R and $R_E$ were
controlled by filtering and by varying the number of parameters
n, respectively. Numerical calculations of the diagonal
elements of the corresponding matrix
show that the coefficient of
proportionality is, to a first approximation, independent of the
visual extinction. It increases by only a factor of two when the
extinction increases from 0 to 5, and can be well approximated by
the product of two functions, $g(R)$ and
$h(R_E)$. Hence, the error on the visual
extinction can be written as:

$$\sigma(A_V) = g(R)\,h(R_E)\,\frac{1 + \bar{r}}{\overline{SNR}} \eqno(22)$$
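In the continuous fit itself, the error bar on $A_V$ at a given n can
be estimated directly from the least-squares solution; below is a
sketch using the solution object returned by the hypothetical
fit_continuous above and the usual Gauss-Newton approximation of the
covariance matrix:

```python
import numpy as np

def sigma_Av(sol):
    """1-sigma error on A_V (the first fitted parameter).

    Approximates the covariance by the inverse of J^T J at the solution,
    scaled by the reduced chi-square (sol.cost is half the sum of squares).
    """
    J = sol.jac
    dof = max(J.shape[0] - J.shape[1], 1)   # data points minus parameters
    chi2_red = 2.0 * sol.cost / dof
    cov = np.linalg.inv(J.T @ J) * chi2_red
    return float(np.sqrt(cov[0, 0]))
```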
Figs. 5a and 5b show the functions $g(R)$
and $h(R_E)$. As expected, $g(R)$
is a decreasing function of
R, whereas $h(R_E)$ is an increasing
function of $R_E$, approximately equal
to 1 at low $R_E$. The visual extinction
error is basically controlled by the function
$g(R)$, which depends only on the input
reference spectrum. Comparison of Eqs. (6), (20) and (22) shows that
the higher the reference spectrum contrast, the smaller the function
$g(R)$. Hence, we can easily infer that
for M stars, whose spectra are generally more contrasted than those of
K stars, the corresponding function $g(R)$
is smaller than that shown in Fig. 5a. On the other hand, it will be
larger for G stars or earlier spectral types. GHCC pointed out that,
to work, their method needed absorption lines spanning a large
spectral bandwidth. Likewise, our "continuous" approach needs
several regions of high contrast, otherwise the function $g(R)$
would be prohibitively large.
It is therefore illusory to try to estimate any visual extinction
from a spectrum by using only one absorption line and an arbitrary
amount of continuum.
![[FIGURE]](img172.gif)
Fig. 5. a $g(R)$ as a function of the spectral resolution R; b $h(R_E)$. The error on the visual extinction is proportional to the product of these two functions (see text).
3.4. Discussion and conclusions
A specific problem of the "continuous" approach could be the use of
large spectral bandwidths, because of wavelength calibration errors.
For example, mismatches of spectral lines between the object and the
reference may introduce spurious high-frequency noise in the
calculated excess. The smooth excesses derived by GHCC in a bandwidth
larger than 1000 Å show that, at least there, this does not
seem to be a severe limitation (see in particular the case of
DS Tau). Also, it may happen that the calculated visual extinction
never stabilizes when the excess spectral resolution is increased. For
a statistically correct fit, the absence of convergence means that
either the template is not adequate or the object has a complex
spectrum which cannot be fitted by a simple extinction and excess
model.
Eq. (22) can be used to study the variation of the visual
extinction error as a function of the spectral resolution. Assuming
photon- or background-limited noise, for a given integration time
$\overline{SNR}$ is inversely proportional to the
square root of R. The error $\sigma(A_V)$
is represented in Fig. 6 as a function of R, for zero veiling
and visual extinction, the average signal to noise ratio being fixed
at a reference spectral resolution. It decreases slowly with R, from
0.04 at the lowest resolutions, by about a factor of 2 at intermediate
resolutions, and by a further factor of 4 at the highest resolutions
considered. Taking into account only the
statistical noise, there is thus not much gain at high spectral
resolution when using the whole spectrum in the "continuous" approach
proposed here. Moreover, the factor of 4 quoted above is in fact an
upper limit, because in reality higher resolutions correspond to
smaller optical transmission, due to the more sophisticated
experimental set-up. The error given in Eq. (22) represents, however,
what we can hope for in the absence of systematic errors. In practice,
at high spectral resolution the contrast of the spectra is higher, and
consequently the output parameters are less sensitive to biases than
at low spectral resolution. It is also easier to identify emission
lines and local spectral mismatches, and hence the usable spectral
bandwidth is larger. Therefore, it will probably be difficult to work
at resolutions of a few hundreds for highly veiled T Tauri stars,
or when the excess is dominated by emission lines.
![[FIGURE]](img192.gif)
Fig. 6. Error on the estimated visual extinction for a given integration time, for zero veiling and visual extinction, derived from the spectrum of a K7V star between 4000 Å and 6800 Å, as a function of the spectral resolution R. The excess spectral resolution $R_E$ is held fixed, and the average signal to noise ratio on the object spectrum is specified at a reference spectral resolution. The dominant source of error was assumed to be photon or background noise.
The statistical error given in Eq. (22) is much smaller than the
experimental errors estimated by GHCC, who pointed out that they are
limited by systematic errors. Hence, if the systematic errors are not
coupled to the statistical noise, we conclude that it is in principle
possible to study objects much fainter than those studied so far, by
using the "continuous" approach over a large spectral bandwidth. This
conclusion is further reinforced by the fact that there is no need to
correctly isolate individual photospheric absorption lines for the
veiling calculation, as is the case with the discrete method. This
is an important advantage which can greatly help when working on noisy
spectra. Some questions about our "continuous"
approach remain: What is the best spectral resolution? What is the
sensitivity to systematic errors? How does the proposed method perform
with respect to other methods? All these questions have to be
addressed experimentally.
© European Southern Observatory (ESO) 1999
Online publication: February 23, 1999