Astron. Astrophys. 325, 1115-1124 (1997)
3. Stellar surface flux densities
3.1. Ca II H&K flux densities
The surface flux density in the cores of the
Ca II H&K lines, $F_{\rm HK}$, has been
derived from the $S$-values following Rutten (1984), using
his `arbitrary' units:

$$F_{\rm HK} \;=\; S \, C_{\rm cf} \, T_{\rm eff}^{4} \, 10^{-14},$$
where the conversion factor $C_{\rm cf}$ depends on
$B-V$ and luminosity class (Rutten 1984), and
$T_{\rm eff}$ has been taken from Flower (1977) for
giants in part of the colour range, and from
Böhm-Vitense (1981) for all other stars.
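As a minimal sketch of this step (in Python), assuming the conversion factor $C_{\rm cf}$ is looked up from the tabulation in Rutten (1984); the numerical values in the example are placeholders, not values from this paper:

```python
def f_hk(S, C_cf, T_eff):
    """Ca II H&K surface flux density in Rutten's (1984) arbitrary units:
    F_HK = S * C_cf * T_eff**4 * 1e-14, with C_cf depending on B-V and
    luminosity class (to be taken from the tabulation in Rutten 1984)."""
    return S * C_cf * T_eff**4 * 1.0e-14

# Illustrative call with placeholder numbers (not from the paper):
print(f_hk(S=0.20, C_cf=1.0, T_eff=5800.0))
```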
The flux density in the Ca II line cores only partially
originates in the active chromosphere. The other part, the so-called
minimum flux, is of photospheric (line wings) and of basal
(possibly acoustic) origin (see, e.g., Schrijver 1987). By
subtracting this (colour-dependent) minimum flux component, we derive
the excess flux density, which is listed in Table 2,
column 2. For this minimum flux we have used the empirical
relation derived by Rutten (1987b) from a large sample of
stars with luminosity classes between II-III and V.
Rutten (1986) shows that this minimum flux is similar to the sum
of two theoretically expected contributions: a line-wing contribution
and a minimum line-core contribution, both depending predominantly on
effective temperature. Therefore, the minimum flux is taken to be the same
for dwarfs and giants, in spite of the fact that the lowest observed
fluxes for dwarfs of certain colours are higher than this
minimum flux.
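A minimal sketch of this subtraction, assuming the measured line-core flux has already been converted to the 1Å scale of the minimum flux (this conversion is described below), with Rutten's (1987b) empirical minimum flux represented by a placeholder function:

```python
def excess_flux(F_1A, B_V, minimum_flux):
    """Excess flux density: measured 1 Angstrom line-core flux minus the
    colour-dependent empirical minimum flux of Rutten (1987b).
    `minimum_flux` is a callable F_min(B-V); the same relation is used
    for dwarfs and giants, as discussed above."""
    return F_1A - minimum_flux(B_V)

# Example with a purely illustrative (hypothetical) minimum-flux relation:
print(excess_flux(F_1A=2.0e6, B_V=0.65, minimum_flux=lambda bv: 1.0e6))
```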
The minimum flux is given for the 1Å passband, so we
have converted the $F_{\rm HK}$-values to
$F_{\rm 1\AA}$-values using the relations
$F_{\rm 1\AA} = a\,F_{\rm HK} + b$, where $a$ and $b$
are given in Table 3 as a function of
colour and luminosity class LC. We have derived
these relations from a sample of stars for which both
$F_{\rm HK}$ and $F_{\rm 1\AA}$ values have been
measured (data listed by Rutten 1987a). The scatter
about the relationships is listed in
Table 3; this scatter has been taken into account as an
additional uncertainty in the $F_{\rm 1\AA}$ value, caused by
the conversion. The conversion for the hotter stars
is only slightly different from the conversion for the cooler stars, but
still results in a difference of about 20% at the lowest activity
levels, and after subtraction of the minimum
flux density this can lead to large differences in the excess flux
density.
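A sketch of this conversion and of how the scatter enters the error budget, assuming the Table 3 relation has the linear form $F_{\rm 1\AA} = a\,F_{\rm HK} + b$ quoted above; the coefficients and scatter would be taken from the table for the appropriate colour and luminosity class:

```python
import math

def to_one_angstrom(F_HK, dF_HK, a, b, scatter):
    """Convert an F_HK value to the 1 Angstrom passband, F_1A = a*F_HK + b,
    and add the scatter about the relation (Table 3) in quadrature as an
    additional uncertainty caused by the conversion."""
    F_1A = a * F_HK + b
    dF_1A = math.sqrt((a * dF_HK) ** 2 + scatter ** 2)
    return F_1A, dF_1A
```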
Table 3. Conversion from the $F_{\rm HK}$ value to the $F_{\rm 1\AA}$ value using the relation $F_{\rm 1\AA} = a\,F_{\rm HK} + b$. Listed are $a$ (left) and $b$ (middle) with their uncertainties (between parentheses). Also listed are the number of data points $n$ which define the relationship, and the scatter around the relationship.
The conversion depends on the profile of the line core emission
(basal and magnetic) and on the photospheric absorption profile. These
profiles depend strongly on colour and luminosity class, so it is not
surprising that we find different relations for the conversion of
$F_{\rm HK}$ to $F_{\rm 1\AA}$. However, we do not
find a significant difference between the conversion for giants (III)
and dwarfs (V). In trying to understand the different relationships,
we describe the $S$-value as a sum of two different parts: a
minimum (photospheric and basal) component and a magnetic emission
component. Two stars with different activity levels, but otherwise
identical, will only show a difference in the amount of magnetic
emission. Both stars have the same relative transmission of the
magnetic emission component through the 1Å passband, as
long as the width of the magnetic emission profile does not change
with activity level (which is valid for active regions on the Sun,
Oranje 1983). The difference between the
$F_{\rm 1\AA}$-values of both stars is then equal to the difference between the
$F_{\rm HK}$-values scaled with the 1Å passband
transmission factor and with the ratio of the (constant) normalisation
factors of $F_{\rm 1\AA}$ and $F_{\rm HK}$. The slope $a$
of the relations in Table 3 is the product
of these two scaling factors, and not the transmission factor alone,
as Schrijver et al. (1992) suggested.
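Written out, with $t$ denoting the 1Å passband transmission of the magnetic emission component and $k_{\rm 1\AA}$, $k_{\rm HK}$ the constant normalisation factors of the two flux scales (symbols introduced here only to make the argument explicit), the reasoning reads:

$$S = S_{\rm min} + S_{\rm mag}, \qquad
\Delta F_{\rm 1\AA} \;=\; t\,\frac{k_{\rm 1\AA}}{k_{\rm HK}}\,\Delta F_{\rm HK}
\;\;\Longrightarrow\;\; a \;=\; t\,\frac{k_{\rm 1\AA}}{k_{\rm HK}}.$$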
Wilson and Bappu (1957) showed that the width of the line core emission peak
depends mainly on luminosity: about 0.5Å for dwarfs,
1Å for giants, and 2Å for supergiants. This
means that the transmission through the 1Å passband is
significantly smaller for bright giants than for giants, but the
difference between the transmission for giants and dwarfs is not so
pronounced, explaining the change in the slope of the conversion
relationship around luminosity class II. It is remarkable that
relatively cool stars appear to have a larger
transmission than relatively hot stars. This implies that the width of
the line-core emission is much larger (about a factor of 2 to 3) for
the hotter stars than for the cooler stars. This effect
could partly be caused by rotational broadening in the (on average)
faster rotating early-type stars, which has the effect of moving part
of the line-wing contribution into the line core. However, if we divide
the sample of hotter stars into two groups
according to their rotational velocities, the difference in slope
between the slowly and the rapidly rotating stars is not very significant.
Wilson and Bappu (1957) found that the
width of the line core emission peak does not depend on effective
temperature. However, the stars they used for their study are
relatively cool, so they could not have
noticed the dependence we find here. For the cooler giants and dwarfs
we do not find a change in the slope $a$
with colour either, consistent with the
findings of Wilson and Bappu (1957).
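As an illustration of the transmission argument, the following sketch assumes a Gaussian emission profile centred in a rectangular 1Å passband (both simplifications; the true profile and the instrumental passband shape differ) and uses the Wilson-Bappu widths quoted above as the profile FWHM:

```python
import math

def transmission(fwhm, passband=1.0):
    """Fraction of a Gaussian emission profile of the given FWHM (Angstrom)
    that falls inside a rectangular passband (width in Angstrom) centred
    on the line.  Illustrative geometry only."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.erf(passband / (2.0 * math.sqrt(2.0) * sigma))

# Approximate Wilson-Bappu widths: dwarfs ~0.5 A, giants ~1 A, supergiants ~2 A
for fwhm in (0.5, 1.0, 2.0):
    print(f"FWHM = {fwhm} A  ->  transmission = {transmission(fwhm):.2f}")
```

Under these illustrative assumptions the drop in transmission from giants to supergiants is indeed larger than that from dwarfs to giants, in line with the qualitative argument above.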
3.2. X-ray flux densities
For each star detected in the ROSAT survey we derived the X-ray
flux density at the stellar surface, $F_{\rm X}$, from the
flux density on the detector, $f_{\rm X}$, following Oranje
et al. (1982):

$$F_{\rm X} \;=\; f_{\rm X}\left(\frac{d}{R_*}\right)^{2},$$

where $d$ is the distance and $R_*$ the stellar radius; the dilution
factor $(d/R_*)^2$ follows from the apparent visual magnitude, the
intrinsic colour index and the bolometric correction.
The intrinsic colour index $(B-V)_0$ is from
Fitzgerald (1970), bolometric corrections from
Johnson (1966) for dwarfs and from Flower (1977) for giants.
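A minimal sketch of this scaling, assuming the dilution factor $(d/R_*)^2$ is computed from the apparent bolometric flux (via $V_0$ and the bolometric correction BC) and $\sigma T_{\rm eff}^4$; the zero-point constant below is an approximate generic value, not necessarily the one adopted in the paper:

```python
SIGMA_SB = 5.67e-5     # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
F_MBOL_ZERO = 2.5e-5   # approximate flux at Earth of a star with m_bol = 0 [erg cm^-2 s^-1]

def surface_flux(f_earth, V0, BC, T_eff):
    """Scale a flux density measured at Earth to the stellar surface,
    F = f_earth * (d/R)^2, with (d/R)^2 = sigma*T_eff^4 / f_bol and the
    apparent bolometric flux f_bol derived from m_bol = V0 + BC."""
    f_bol = F_MBOL_ZERO * 10.0 ** (-0.4 * (V0 + BC))
    dilution = SIGMA_SB * T_eff ** 4 / f_bol   # (d/R)^2
    return f_earth * dilution
```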
The surface flux densities are listed in
Table 2, column 5. The uncertainties in the surface flux
densities are dominated by the uncertainties in the source count rate
and in the hydrogen column density
$N_{\rm H}$, the latter being caused by the relatively
large uncertainties in the distance and in the adopted interstellar
hydrogen density.