## 3. Derivation of visual extinction and excess spectral shape

In the last decade, it has been realized that, because of the
excess, the visual extinction could not be deduced directly from the
colors of T Tauri stars. Hartigan et al. (1991) proposed to
estimate the extinction towards T Tauri stars by combining colors
and veiling measurements in the V band. Recently, GHCC extended this
method by using spectrophotometry and a veiling analysis
of a few photospheric absorption lines spread over a large spectral
bandwidth. Then, combining the extinction and the calibrated spectra,
they were able to deduce the excess spectral distribution. We propose
here to generalize the discrete method of GHCC by a "continuous"
approach which uses the whole spectrum rather than a few selected lines.

In this section, the whole spectrum of our K7V star between 4000 Å and 6800 Å, except the deep Na doublet around 5893 Å, is used as the reference spectrum. As the former was first normalized to its local continuum, the excess and the veiling are represented, without any loss of generality, by the same quantity.

## 3.1. The formalism

We assume that the object and the reference spectra, $O(\lambda)$ and $R(\lambda)$, are observed at a given spectral resolution and are related by:

$$O(\lambda) = \left[ k\,R(\lambda) + E(\lambda) \right] 10^{-0.4\,A_V\,a(\lambda)}, \qquad (13)$$

where $A_V$ is the visual extinction towards the object, $a(\lambda)$ describes the extinction law, and $E(\lambda)$ is the continuum excess flux (the other quantities are defined in Sect. 2.1).

It is useful to first examine the approach of GHCC. Let $p$ be the number of fit photospheric lines at the wavelengths $\lambda_i$, $i = 1, \ldots, p$. The output of their discrete method consists in a set of local scaling factors $\alpha_i$ and excesses $E_i$. In the absence of noise, each $\alpha_i$ is related to $A_V$ and to the overall scaling factor $k$ by:

$$\alpha_i = k\,10^{-0.4\,A_V\,a(\lambda_i)}. \qquad (14)$$

In practice, $A_V$ and $k$ are obtained by minimizing the set of relations of Eq. (14), which in turn allows one to derive the excess by inverting Eq. (13):

$$E(\lambda_i) = O(\lambda_i)\,10^{0.4\,A_V\,a(\lambda_i)} - k\,R(\lambda_i). \qquad (15)$$

The method of GHCC is very interesting and its basic idea is innovative. However, because of its discrete nature, it discards an important part, perhaps most, of the information contained in the spectra, in particular their low-frequency structure. In the general case, it is not possible to solve Eq. (13) directly, because we do not know the functional dependence of $E(\lambda)$. What we do know, however, is that it is a smooth function of wavelength, and this information can conveniently be used in a "continuous" modelling.

Let us now introduce our approach. In a given spectral interval
$[\lambda_1, \lambda_2]$, any well-behaving physical function
like the excess can be decomposed into the sum of a straight line
joining its end points and the Fourier series of a periodic function
over $[\lambda_1, \lambda_2]$. The excess being smooth, the
Fourier series can be truncated at a certain maximum order $N$:

$$E(\lambda) = E_1 + (E_2 - E_1)\,x + \sum_{n=1}^{N} \left[ c_n \cos(2\pi n x) + s_n \sin(2\pi n x) \right], \qquad x = \frac{\lambda - \lambda_1}{\lambda_2 - \lambda_1}.$$

The problem reduces to the estimate of the parameters $A_V$, $k$, and the $2N + 2$ coefficients of the decomposition; the formalism of Sect. 2.1 can be generalized to any number of parameters. Assuming that the wavelength is sampled at the Shannon frequency at $M$ values $\lambda_j$ (for $j = 1, \ldots, M$), we define the vector $V$, whose square modulus is to be minimized, of components:

$$V_j = O(\lambda_j) - \left[ k\,R(\lambda_j) + E(\lambda_j) \right] 10^{-0.4\,A_V\,a(\lambda_j)}.$$

The excess cut-off frequency is controlled by the parameter $N$.
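To make the decomposition concrete, here is a minimal numerical sketch (the grid, the test excess shape, and all names below are illustrative choices, not taken from the paper): a smooth function on an interval is fit as a straight line joining its end points plus a Fourier series truncated at order $N$, and the fit is linear in all $2N + 2$ coefficients.

```python
# Minimal sketch: straight line joining the end points plus a Fourier series
# truncated at order N_max. The test "excess" below is illustrative.
import numpy as np

lam = np.linspace(4000.0, 6800.0, 512)             # wavelength grid (Angstroms)
x = (lam - lam[0]) / (lam[-1] - lam[0])            # normalized coordinate in [0, 1]
excess = 1.0 + 0.5 * np.exp(-2.8 * x)              # a smooth test excess shape

N_max = 4
# Design matrix: the first two columns give the straight line through the
# end points (weights 1 - x and x); the rest are the truncated Fourier terms.
cols = [1.0 - x, x]
for n in range(1, N_max + 1):
    cols.append(np.cos(2.0 * np.pi * n * x))
    cols.append(np.sin(2.0 * np.pi * n * x))
A = np.vstack(cols).T                              # shape (512, 2 + 2*N_max)

coef, *_ = np.linalg.lstsq(A, excess, rcond=None)  # linear least squares
residual = np.max(np.abs(A @ coef - excess))
print("N_max = 4, max |residual| =", residual)
```

Because the straight line absorbs the end-point mismatch, the remaining periodic part is continuous and only a few Fourier terms are needed for a smooth shape.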
## 3.2. How does it work?

To understand how the algorithm works, let us go back to the
discrete approach of GHCC. $A_V$ and $k$
are derived from a least-squares fit
of the set of local scaling factors $\alpha_i$.
Assuming as in Sect. 2 that the noise on the reference spectrum is
negligible, the error on each $\alpha_i$, given
by Eq. (6), is inversely proportional to the spectrum contrast. For
constant veiling and signal-to-noise ratio,
the procedure gives more weight to the
regions of high contrast relative to the regions of low contrast
which, incidentally, are also more sensitive to biases. Now, if
instead of fitting individual lines, we use the whole spectrum divided
into equal intervals where the extinction and the excess can be
considered constant, the same analysis applies. By extrapolation, the
same argument is valid for our "continuous" approach, which can be
considered as the limit when the length of each interval tends to
zero. As we do not know the excess shape in advance, the choice of the cut-off order must be validated by simulation.

In order to test the algorithm, we have performed a number of simulations at low spectral resolution with the reference spectrum of Fig. 1. The object spectra were generated by combining the reference spectrum with a large number of "smooth" excess (or veiling) shapes, visual extinction values, and white noise. The excess spectral resolution was varied from small values to 150. We find that in all cases, the computed visual extinction and overall scaling factor stabilize around the correct ones, within the error bars, after a critical spectral resolution has been reached (which depends on the excess cut-off frequency). The best estimates of the parameters correspond to those where the errors are the smallest, i.e. when the parameters begin to stabilize.

Figs. 3 and 4 show two examples of simulations. The extreme excess shape of Fig. 3a (solid line) was generated from a random function of unit mean and variance, smoothed by Gaussian filtering at a spectral resolution of 50. The associated object spectrum of Fig. 3b exhibits a visual extinction of 2 and an average signal-to-noise ratio of 100. We see in Figs. 3c and 3d that the computed $A_V$ begins to stabilize to the correct value, and that the reduced chi-square value reaches 1 and remains constant, beyond a critical excess spectral resolution. The optimum visual extinction value, taken at the beginning of the plateau, agrees with the input value within the error bars. The corresponding excess, deduced from Eq. (15) (dotted line in Fig. 3a), shows a quasi-perfect agreement with the input.

Fig. 4a represents a more realistic excess (solid line) of unit mean value, generated by summing a constant and an exponential function. The associated object of Fig. 4b exhibits a visual extinction of 1 and an average signal-to-noise ratio of 50. Here too, the computed $A_V$ begins to stabilize beyond a critical excess spectral resolution. The spectral shape of the corresponding excess (dotted line in Fig. 4a) shows an excellent agreement with the input, being only 10% smaller. The reduced chi-square value of Fig. 4d varies from 0.94 to 0.90, clearly showing that the goodness of the fit must not be judged on the basis of the reduced chi-square value alone, but on the existence of a plateau in which the computed visual extinction remains, within the noise, constant as a function of the excess spectral resolution.
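The simulations above can be mimicked with a toy example (the "reference spectrum", the extinction law, and all numbers below are invented stand-ins, not the paper's data, and the brute-force scan over $A_V$ is only one convenient way to carry out the minimization): for a fixed trial $A_V$ the model is linear in $k$ and in the excess coefficients, so a linear least-squares solve per trial $A_V$ followed by a scan recovers the input extinction.

```python
# Toy simulation in the spirit of Sect. 3.2. For each trial A_V the model
# O = [k R + E] * 10^(-0.4 A_V a) is linear in k and in the excess
# coefficients, so we solve a linear least-squares problem per trial A_V
# and keep the value with the smallest chi-square.
import numpy as np

rng = np.random.default_rng(0)
lam = np.linspace(4000.0, 6800.0, 800)
x = (lam - lam[0]) / (lam[-1] - lam[0])

ref = 1.0 + 0.3 * np.sin(2.0 * np.pi * lam / 150.0)    # fake line-rich reference
a = 5500.0 / lam                                        # crude stand-in extinction law

av_true, k_true, snr = 2.0, 1.0, 100.0
excess_true = 1.0 + 0.5 * np.exp(-x / 0.4)              # smooth input excess
obj = (k_true * ref + excess_true) * 10.0 ** (-0.4 * av_true * a)
obj += rng.normal(0.0, obj.mean() / snr, lam.size)      # white noise

# Basis for [k, straight line, truncated Fourier series of the excess]
N_max = 3
cols = [ref, np.ones_like(x), x]
for n in range(1, N_max + 1):
    cols += [np.cos(2.0 * np.pi * n * x), np.sin(2.0 * np.pi * n * x)]
B = np.vstack(cols).T

best_av, best_chi2 = None, np.inf
for av in np.arange(0.0, 4.001, 0.02):
    w = 10.0 ** (-0.4 * av * a)                         # trial reddening factor
    design = B * w[:, None]                             # redden the whole model
    coef, *_ = np.linalg.lstsq(design, obj, rcond=None)
    chi2 = np.sum((design @ coef - obj) ** 2)
    if chi2 < best_chi2:
        best_av, best_chi2 = av, chi2

print("input A_V = 2.0, recovered A_V =", best_av)
```

The extinction is pinned down by the wavelength-dependent scaling of the high-frequency "line" component, which the smooth excess basis cannot absorb.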
## 3.3. Error analysis

In this section we derive a formal expression for the errors on the visual extinction and the overall scaling factor. A careful examination of the matrix associated with the discrete case yields formal error expressions (Eq. 20) which, within a good approximation, reduce to (Eq. 21):

$$\sigma(A_V) \propto \frac{1 + \bar{r}}{\overline{SNR}},$$

where $\bar{r}$ and $\overline{SNR}$ represent the average veiling and the average signal-to-noise ratio on the object spectrum, respectively.

Eqs. (20) and (21) are also valid in the limit of our "continuous"
approach. To evaluate the coefficient of proportionality in Eq. (21),
we performed a number of simulations with our reference spectrum. The
object spectra were generated by adding a constant veiling to the
reference and introducing various visual extinctions, at several spectral
resolutions. Figs. 5a and 5b show the resulting error functions. As expected, the error on $A_V$ is a decreasing function of the spectral resolution.
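The growth of the error with veiling can be checked with a small Monte-Carlo experiment (purely illustrative: a single synthetic absorption line and invented numbers): veiling dilutes the line contrast, so at fixed signal-to-noise ratio the relative error on a fitted local scaling factor grows as $1 + r$.

```python
# Monte-Carlo check of the error scaling: the relative error on a fitted
# local scaling factor grows as (1 + r) at fixed SNR, because veiling
# dilutes the line contrast. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 400)
ref = 1.0 - 0.3 * np.exp(-((x - 0.5) / 0.05) ** 2)   # one line on a unit continuum
dref = ref - ref.mean()                              # mean-subtracted reference

def relative_alpha_error(r, snr, n_trials=400):
    """Relative scatter of the fitted scaling factor for veiling r."""
    veiled = (ref + r) / (1.0 + r)                   # veiled, renormalized spectrum
    alphas = []
    for _ in range(n_trials):
        obs = veiled + rng.normal(0.0, 1.0 / snr, x.size)
        # slope of obs against ref (mean-subtracted linear regression)
        alphas.append(np.dot(obs - obs.mean(), dref) / np.dot(dref, dref))
    return np.std(alphas) / np.mean(alphas)

e0 = relative_alpha_error(r=0.0, snr=100.0)
e3 = relative_alpha_error(r=3.0, snr=100.0)
print("relative error ratio:", e3 / e0, "(expected ~ 1 + 3 = 4)")
```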
## 3.4. Discussion and conclusions

A specific problem of the "continuous" approach could be the use of large spectral bandwidths, because of wavelength calibration errors. For example, mismatches of spectral lines between the object and the reference may introduce spurious high-frequency noise in the calculated excess. The smooth excesses derived by GHCC in a bandwidth larger than 1000 Å show that, at least here, this does not seem to be a severe limitation (see in particular the case of DS Tau). Also, it may occur that the calculated visual extinction never stabilizes when the veiling spectral resolution is increased. For a statistically correct fit, the absence of convergence means that either the template is not adequate, or the object has a complex spectrum which cannot be fit by a simple extinction and excess model.

Eq. (22) can be used to study the variation of the visual
extinction error as a function of the spectral resolution. Assuming
photon- or background-limited noise, for a given integration time
the signal-to-noise ratio is inversely proportional to the
square root of the spectral resolution.
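This scaling is simple photon arithmetic (the numbers below are illustrative): at fixed integration time the detected photons are split among a number of spectral channels proportional to the resolution, and Poisson statistics then give the per-channel signal-to-noise ratio.

```python
# Photon-counting illustration: at fixed integration time, n_tot photons are
# split among R spectral channels; Poisson noise gives SNR = sqrt(n_tot / R)
# per channel, i.e. SNR is inversely proportional to sqrt(R).
import math

n_tot = 1.0e6                                          # total detected photons (illustrative)
snr = {R: math.sqrt(n_tot / R) for R in (50, 200, 800)}
for R, s in snr.items():
    print(f"R = {R:4d}: SNR per channel = {s:.1f}")
# Quadrupling the resolution halves the per-channel SNR:
print("SNR(200) / SNR(800) =", snr[200] / snr[800])
```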
The statistical error given in Eq. (22) is much smaller than the experimental errors estimated by GHCC, who pointed out that they are limited by systematic errors. Hence, if the systematic errors are not coupled to the statistical noise, we conclude that it is in principle possible to study objects much fainter than those studied so far, by using the "continuous" approach over a large spectral bandwidth. This conclusion is further reinforced by the fact that there is no need to isolate individual photospheric absorption lines for the veiling calculation, as is the case with the discrete method. This is an important advantage which can greatly help when working on noisy spectra.

Some questions about our "continuous" approach remain: What is the best spectral resolution? What is the sensitivity to systematic errors? What is the performance of the proposed method with respect to other methods? All these questions have to be addressed experimentally.

© European Southern Observatory (ESO) 1999

Online publication: February 23, 1999