2. Data reduction
2.1. Source detection
We investigated the ROSAT PSPC pointings in the direction of star No. 3 of the catalogue of UV Cet type stars and related objects of the solar vicinity (Gershberg et al. 1998), namely the ROSAT pointings 150026p and 700101p. First, the data were reduced in a manner similar to that described in detail by Neuhäuser et al. (1995). The X-ray sources in these two ROSAT PSPC pointings were identified with the LDETECT, MDETECT and MAXLIK algorithms of EXSAS, the Extended Scientific Analysis Software System (Zimmermann et al. 1998), which runs under the European Southern Observatory Munich Image Data Analysis System (ESO-MIDAS). In the LDETECT (local detect) algorithm a window is slid over the image; the source counts are determined within the window and the background counts from an area surrounding it. The MDETECT (map detect) algorithm instead takes the background from a smoothed background map, created by masking out of the image the sources found in the previous step. In both procedures the probability of source existence is computed from the probability that the counts detected within the window and the corresponding background counts come from Poisson distributions with the same expectation value, whatever that value is. The merged source list derived from these two algorithms is then used in the maximum likelihood (MAXLIK) analysis for source detection, which works by varying the parameters specifying source counts, extent and position until the likelihood is maximized. More detailed descriptions of these algorithms can be found in Cruddace et al. (1988) and Zimmermann et al. (1998). The final result of this spatial analysis is a list of X-ray sources having a maximum likelihood (ML) of existence larger than some threshold (the maximum likelihood can be converted into a probability of existence through $P = 1 - e^{-ML}$).
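As an illustration of the quantities involved, the following stdlib-only Python sketch converts a maximum likelihood of existence into a probability via $P = 1 - e^{-ML}$, and evaluates the Poisson tail probability underlying the sliding-window detection step. The function names are ours, not EXSAS routines.

```python
import math

def ml_to_probability(ml):
    """Convert a maximum likelihood of existence ML into a
    probability of existence, P = 1 - exp(-ML)."""
    return 1.0 - math.exp(-ml)

def poisson_tail(n, mu):
    """P(X >= n) for X ~ Poisson(mu): the chance of detecting at
    least n counts in the window if only background (expectation mu)
    is present."""
    # sum P(X = k) for k < n, then take the complement
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))
    return 1.0 - cdf
```

For example, a threshold of ML = 10 corresponds to a probability of existence of about 0.99995, and 20 counts in a window with an expected background of 5 counts would be an extremely unlikely background fluctuation.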
2.2. Variability testing
As mentioned in the introduction, for flare event detection we have used a method developed by Scargle (1998) based on Bayesian statistics. The method is applicable to data that are known to be from a nearly ideal Poisson process, i.e. a class of independent, identically distributed processes, having zero lengths of dead time.
The ROSAT instrumentation (telescope + detector) records the arrival time of each individual X-ray photon to within the spacecraft clock resolution, an accuracy much smaller than the shortest time scales considered responsible for flare events on stars. Another departure from an ideal Poisson distribution of counts is due to the detector's finite dead time (see Scargle & Bapu 1998 for a discussion).
Nevertheless, the arrival times of photons registered by ROSAT and stored in the so-called Photon Events Tables can in this context be considered close to a Poisson process, i.e. the arrival of a photon in any interval is independent of that in any other non-overlapping interval.
Scargle's (1998) method is designed for application to photon counting data and decomposes them into a piecewise constant Poisson process. For example, let us assume that during a continuous observational interval of length T, consisting of m discrete moments in time at which it was possible to make measurements (the spacecraft's "clock ticks" $\delta t$), a set of photon arrival times $D$ ($t_1, t_2, \ldots, t_N$) is registered. Suppose now that we want to use these data to compare two competing hypotheses: the first is that the data are generated by a constant rate Poisson process (model $M_1$), the second that they come from a two-rate Poisson process (model $M_2$). Evidently, model $M_1$ is described by the single parameter $\lambda$ of a one-rate Poisson process, while model $M_2$ is described by the parameters $\lambda_1$ and $\lambda_2$, describing two different parts of the dataset D divided at any point $\tau$ of the observational interval T (with lengths $T_1$ and $T_2$) at which the Poisson process switches from count rate $\lambda_1$ to $\lambda_2$.
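To make the two competing models concrete, the sketch below (our illustration, not part of any ROSAT software) simulates a two-rate Poisson process on a discrete spacecraft clock, with at most one photon per clock tick:

```python
import random

def simulate_two_rate(m_ticks, tau, lam1, lam2, seed=1):
    """Draw photon arrival ticks from a piecewise-constant Poisson
    process: rate lam1 (expected counts per tick) before the change
    point tau, lam2 after it.  Returns the sorted list of ticks at
    which a photon arrived."""
    rng = random.Random(seed)
    arrivals = []
    for tick in range(m_ticks):
        lam = lam1 if tick < tau else lam2
        # Bernoulli approximation to Poisson, valid for lam << 1:
        # at most one photon can arrive per clock tick
        if rng.random() < lam:
            arrivals.append(tick)
    return arrivals
```

A dataset generated with, say, `lam2` five times `lam1` is the kind of abrupt rate change that model $M_2$ describes and model $M_1$ cannot.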
By taking as background information (I) the proposition that one of the models under consideration is true, and by using Bayes' theorem, we can calculate the posterior probability of each model, i.e. the probability that $M_i$ ($i = 1, 2$) is the correct model (see, e.g., Jaynes 1997):

$P(M_i \mid D, I) = \frac{P(D \mid M_i, I)\, P(M_i \mid I)}{P(D \mid I)} , \qquad (1)$
where $P(D \mid M_i, I)$ is the (marginal) probability of the data given $M_i$, and $P(M_i \mid I)$ is the prior probability of model $M_i$. The term $P(D \mid I)$ in the denominator is a normalization constant, and we may eliminate it by calculating the ratio of the posterior probabilities instead of the probabilities directly. Indeed, the extent to which the data support model $M_2$ over $M_1$ is measured by the ratio of their posterior probabilities and is called the posterior odds ratio:

$O_{21} = \frac{P(M_2 \mid D, I)}{P(M_1 \mid D, I)} = \left[\frac{P(D \mid M_2, I)}{P(D \mid M_1, I)}\right] \left[\frac{P(M_2 \mid I)}{P(M_1 \mid I)}\right] . \qquad (2)$
The first factor on the right-hand side of Eq. (2) is the ratio of the integrated or global likelihoods of the two models and is called the Bayes factor for $M_2$ against $M_1$, denoted $B_{21}$. The global likelihood of each model can be evaluated by integrating over its nuisance parameters, and the final result for discrete Poisson events can be represented by (see Scargle 1998 for details)

$B_{21} = \frac{1}{B(N+1,\, M-N+1)} \sum_{j} \frac{\Delta t_j}{T}\; B(N_1+1,\, M_1-N_1+1)\; B(N_2+1,\, M_2-N_2+1) , \qquad (3)$
where B is the beta function, $N_1$ and $N_2$ are the numbers of recorded photons, and $M_1$ and $M_2$ the numbers of "clock ticks", in the observational intervals of lengths $T_1$ and $T_2$, respectively. $\Delta t_j$ is the time interval between successive photons, and the sum is over the photon index j.
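A minimal numerical version of this Bayes factor for a single, fixed change point can be written with log-gamma functions for numerical stability. Note that Eq. (3) additionally sums over all candidate change points, weighted by the inter-photon intervals; this illustrative sketch (our own, not Scargle's code) omits that sum:

```python
import math

def log_beta(a, b):
    # ln B(a, b) via log-gamma, stable for large arguments
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_bayes_factor(counts, cp):
    """log of the Bayes factor B_21 for one candidate change point cp:
    counts is a per-tick 0/1 photon list, split into [0, cp) and
    [cp, end).  Positive values favor the two-rate model M_2."""
    m = len(counts)
    n = sum(counts)
    n1 = sum(counts[:cp])       # photons before the change point
    n2 = n - n1                 # photons after it
    m1, m2 = cp, m - cp         # clock ticks in each part
    return (log_beta(n1 + 1, m1 - n1 + 1)
            + log_beta(n2 + 1, m2 - n2 + 1)
            - log_beta(n + 1, m - n + 1))
```

A series that jumps from an empty half to a saturated half yields a strongly positive log Bayes factor, while a steady alternating series yields a negative one, as expected.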
The second factor on the right-hand side of Eq. (2) is the prior odds ratio, which will often be equal to 1 (see below), representing the absence of a priori preference for either model.
It follows that the Bayes factor is equal to the posterior odds when the prior odds are equal to 1. When $B_{21} > 1$, the data favor $M_2$ over $M_1$, and when $B_{21} < 1$ the data favor $M_1$.
Applying this approach iteratively to the observational data set, the Scargle (1998) method returns an array of rates, $\lambda_1, \lambda_2, \ldots, \lambda_k$, and a set of so called "change points" $\tau_1, \tau_2, \ldots, \tau_{k-1}$, giving the times at which an abrupt change in the rate is determined, i.e. a significant variation. This is the most probable partitioning of the observational interval into segments during which the photon arrival rate was discernibly constant, i.e. had no statistically significant variations. Unlike most methods, this one does not stipulate or predetermine time bins - instead the data themselves determine an effective, non-uniform binning in time. Therefore the data analysis procedure does not itself impose a lower limit on the time scale on which variability can be detected. There are two free parameters in the method that are used to halt the segmentation process: the first is the minimum number of events allowed in a block (we have chosen two) and the second is a prior odds ratio that may be applied to disfavor segmentation. The prior odds ratio (the second factor on the right-hand side of Eq. (2)) represents the relative likelihood assigned to the two models before the data are considered. Although this would appear to warrant a value of unity, in practice a larger value is used to prevent the method from incorrectly deciding to segment when the two models are almost equally likely.
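The iterative segmentation with its two halting parameters can be sketched as a greedy recursive bisection (a simplified illustration assuming at most one photon per clock tick, not Scargle's implementation):

```python
import math

def log_beta(a, b):
    # ln B(a, b) via log-gamma for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def change_points(counts, log_prior_odds, min_events=2, start=0):
    """Recursively split a per-tick 0/1 photon series wherever the
    best single change point beats the constant-rate model by more
    than the prior odds disfavoring segmentation; returns the sorted
    list of change-point ticks."""
    m, n = len(counts), sum(counts)
    best_cp, best_log_bf = None, -math.inf
    for cp in range(1, m):
        n1 = sum(counts[:cp])
        n2 = n - n1
        if n1 < min_events or n2 < min_events:
            continue                    # halting rule 1: block too small
        log_bf = (log_beta(n1 + 1, cp - n1 + 1)
                  + log_beta(n2 + 1, m - cp - n2 + 1)
                  - log_beta(n + 1, m - n + 1))
        if log_bf > best_log_bf:
            best_cp, best_log_bf = cp, log_bf
    if best_cp is None or best_log_bf <= log_prior_odds:
        return []                       # halting rule 2: prior odds not beaten
    return (change_points(counts[:best_cp], log_prior_odds, min_events, start)
            + [start + best_cp]
            + change_points(counts[best_cp:], log_prior_odds, min_events,
                            start + best_cp))
```

A series whose rate jumps by a factor of ten yields a single change point at the jump, while a steady series yields none.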
To have strong evidence in favor of segmenting, Scargle (1998) suggests using as a prior odds ratio the ratio of the length of the observational interval to the desired time resolution of the data. For interpreting the Bayes factors $B_{21}$ considered above, there are rules of thumb for the evidence of model $M_2$ against $M_1$ (Jeffreys 1961, Raftery 1994). It should be noted, however, that this point is not crucial: we find very similar results independent of the choice of the prior odds ratio.
As is well known, owing to the spacecraft orbits, ROSAT pointing observations contain data gaps. The integrated likelihoods used to decide between the above mentioned models $M_1$ and $M_2$ do not depend on the photon arrival times themselves, only on the number of observing time resolution elements (the number of ROSAT "clock ticks") and the number of registered photons in the observational interval (see Eq. (3)). Therefore, in this context, the correct treatment of data gaps is to ensure that the gap length is not counted in the number of "clock ticks" (the numbers $M_1$ and $M_2$ in Eq. (3)). In the case of ROSAT pointed observations with the PSPC or HRI, we were able to take this into account with a special descriptor available in the Photon Events Tables (see Zimmermann et al. 1998).
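The gap bookkeeping reduces to counting clock ticks only inside the intervals actually observed; a hypothetical helper (our naming, not a ROSAT descriptor) might look like:

```python
def ticks_in_gtis(gtis, clock_tick):
    """Number of observing time resolution elements M: sum the lengths
    of the good-time intervals (start, stop) in seconds and divide by
    the clock tick, so that gap lengths are not counted."""
    live_time = sum(stop - start for start, stop in gtis)
    return int(round(live_time / clock_tick))
```

For example, two good-time intervals of 10 s each with a 0.1 s clock tick contribute 200 ticks, regardless of how long the gap between them is.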
A number of X-ray sources, having a maximum likelihood of existence above the detection threshold, were chosen for variability testing. For this purpose we extracted from the original observational data the part corresponding to a given source as follows. Around the center of each source, photon events are selected within a circle of radius 2.5 times the Full Width at Half Maximum (FWHM) of the detected source, available from the source detection algorithms; this radius extracts the overwhelming majority of the photon events belonging to the source. For the corresponding background, an annulus is used with the same inner radius and an outer radius $\sqrt{2}$ times the inner one, so that it covers the same area on the sky as the source circle. If the area used for the background contains any source with a maximum likelihood of existence above the threshold, the background is instead taken from a sector on the opposite side of that source, with the same inner and outer radii. Finally, the background region adopted for a given source contains no such source and covers the same area on the sky as the circle enclosing the overwhelming majority of the photon events coming from the source.
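The equal-area condition fixes the outer radius of the background annulus: $\pi(r_{out}^2 - r_{in}^2) = \pi r_{src}^2$ with $r_{in} = r_{src}$ gives $r_{out} = \sqrt{2}\, r_{in}$. A small sketch of the extraction geometry (illustrative names, not EXSAS routines):

```python
import math

def extraction_regions(fwhm):
    """Source circle of radius 2.5 * FWHM, plus a background annulus
    with the same inner radius and equal sky area, which requires
    r_out = sqrt(2) * r_in."""
    r_src = 2.5 * fwhm
    r_in = r_src
    r_out = math.sqrt(2.0) * r_in
    return r_src, r_in, r_out

def in_circle(x, y, cx, cy, r):
    """True if the photon at (x, y) falls inside the circle of radius
    r centered on (cx, cy)."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= r * r
```

The annulus area then equals the source-circle area by construction, so source and background photon counts can be compared directly without an area correction.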
© European Southern Observatory (ESO) 1999
Online publication: April 12, 1999