

Astron. Astrophys. 342, 687-703 (1999)


3. Data analysis

3.1. Flux measurements

The individual CCD frames are reduced using standard IRAF software procedures, by subtracting the bias frame and by flat-fielding with the median sky exposures. We choose at least three comparison stars with about the same brightness as the galaxy in the CCD frame. Faint sources in their neighbourhood and in the vicinity of the galaxy are subtracted and replaced by the median value measured in surrounding annuli. We then use circular apertures to measure the fluxes of the comparison stars. For galaxies, we use circular or elliptical apertures depending on the size and shape of the galaxy. For large galaxies, such as NGC 4051 or NGC 4151, we used two apertures: a first one to fit the background of the image at the galaxy position (the background fitting aperture), and a second, smaller one (the photometric aperture) to measure the flux of the central nucleus (see Fig. 1). In the most common case, for star-like galaxies, these two apertures coincide and are circular.
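As an illustration, the flux measurement within a circular aperture can be sketched as follows. This is a minimal Python sketch, not the authors' actual reduction (which uses IRAF procedures); the function name and the pixel-centre convention are our own assumptions.

```python
import numpy as np

def circular_aperture_flux(image, xc, yc, radius):
    """Sum the pixel values inside a circular aperture centred on (xc, yc).

    Illustrative sketch of the aperture photometry step; sky subtraction
    is handled separately (see the background fitting described below).
    """
    yy, xx = np.indices(image.shape)
    mask = (xx - xc) ** 2 + (yy - yc) ** 2 <= radius ** 2
    return image[mask].sum()

# Toy frame: flat sky of 10 counts/pixel plus a bright "star" pixel.
frame = np.full((51, 51), 10.0)
frame[25, 25] += 500.0
flux = circular_aperture_flux(frame, 25, 25, 5)
```

An elliptical aperture would simply replace the mask by the corresponding ellipse equation.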

[FIGURE] Fig. 1. Different apertures used to measure the central flux of a galaxy (see text). We show the case of NGC 4051, the largest galaxy in our sample.

In order to fit the sky background in each aperture, we extract a subimage centred on each object, the size of this subimage being four times the radius of the background fitting aperture. We fit this subimage line by line and column by column with a third-degree polynomial, using only points outside the aperture. We take the average of the line-by-line and column-by-column fits to estimate the background flux. This flux is subtracted from the total flux measured within the photometric aperture to obtain the intrinsic flux of the star or galaxy. We repeat this procedure for each image of the run; the images are recentered, if necessary, onto a reference image to compensate for telescope drifts.
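The background fitting scheme described above can be sketched as follows. This is an assumption-laden sketch (the original IRAF-based implementation is not reproduced here): every row and every column of the subimage is fitted with a third-degree polynomial using only pixels outside the background fitting aperture, and the two models are averaged.

```python
import numpy as np

def fit_background(sub, xc, yc, r_fit):
    """Estimate the sky background under an object by fitting each row
    and each column of the subimage with a third-degree polynomial,
    using only pixels outside the background fitting aperture, then
    averaging the row-wise and column-wise models."""
    ny, nx = sub.shape
    yy, xx = np.indices(sub.shape)
    outside = (xx - xc) ** 2 + (yy - yc) ** 2 > r_fit ** 2

    def fit_1d(data, mask, axis_len):
        # Fit each 1-d cut with a cubic, evaluated over the full cut.
        model = np.empty_like(data, dtype=float)
        t = np.arange(axis_len)
        for j in range(data.shape[0]):
            good = mask[j]
            coeffs = np.polyfit(t[good], data[j, good], 3)
            model[j] = np.polyval(coeffs, t)
        return model

    rows = fit_1d(sub, outside, nx)        # line-by-line fit
    cols = fit_1d(sub.T, outside.T, ny).T  # column-by-column fit
    return 0.5 * (rows + cols)

# Toy subimage: smooth sky gradient plus a "galaxy" inside the aperture.
yy0, xx0 = np.indices((41, 41))
sub = 5.0 + 0.1 * xx0 + 0.05 * yy0
sub[18:23, 18:23] += 100.0
bg = fit_background(sub, xc=20, yc=20, r_fit=8)
```

Because the fit uses only pixels outside the aperture, the object itself does not bias the background model.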

3.2. Treatment and light curve construction

3.2.1. General case

Our treatment rests on the small probability that two stars of a given image vary intrinsically by the same amount from their average behaviour. If this is the case, the variation is assumed to be due to an extrinsic perturbation such as scintillation, seeing, or atmospheric extinction, and all objects in the field of view are affected in the same way by this perturbation. It follows that, in this image, the two stars can play the role of standard stars. In practice, owing to the various electronic and statistical noises, we never detect stars varying in exactly the same manner. We therefore use a minimization method where the function to minimize, for a number [FORMULA] of comparison stars in the CCD field, can be expressed as follows (we minimize with respect to the variable N, which plays the role of a normalized flux):

[EQUATION]

where

[EQUATION]

In Eq. (2), [FORMULA] and [FORMULA] are, respectively, the relative flux (i.e. normalized to the average flux of star k over all the images of the run) and the corresponding relative noise of comparison star k in image i. The noise includes the photon and read-out noises, and is usually dominated by the former.

3.2.2. Differences from standard [FORMULA] reduction

To see the advantage of our approach, let us consider a situation where at least two stars are not variable while all the others vary independently. Neglecting, for the moment, the statistical noise, the algorithm will then naturally choose, for the normalization factor N, the common relative flux value [FORMULA] of all non-variable stars, which makes the [FORMULA] function vanish. This is clearly different from the classical minimization of the [FORMULA] function, which would give some weight to all stars, variable or not. However, due to the statistical noise, any weighted algorithm will tend to favor the brightest source. This is most apparent in the [FORMULA] case, where the [FORMULA] function reduces to the [FORMULA] function and the two methods thus become identical. For [FORMULA], however, they can give quite different results. We illustrate this with a simple model: we assume that we measure 5 stars, one of which (called star 1) is three times as luminous as each of the 4 others. We assume that star 1 is also intrinsically variable. We simulate the light curve of each star, taking into account the statistical noise and the intrinsic variability of star 1. We then apply the [FORMULA] method and our method to the simulated data. The standard deviations of each light curve computed by the two methods are plotted in Fig. 2 as functions of the amplitude of the intrinsic variability of star 1.

[FIGURE] Fig. 2. Plots of the relative standard deviations [FORMULA] of the simulated light curves of 5 stars, obtained with our method and the [FORMULA] one, as a function of the variability amplitude [FORMULA] of star 1. The other stars are marred only by statistical noise. The horizontal straight line in each plot gives the mean value of the real noise of stars 2-5. The inclined straight line represents the [FORMULA] curve, which star 1 should normally follow. This is indeed the case with our reduction method, contrary to the standard [FORMULA] one.

Clearly, both methods are indistinguishable when the intrinsic variability is much lower than the mean statistical noise of stars 2-5. However, as soon as the variability is comparable to this value, the [FORMULA] method tends to underpredict the variability of star 1, because of its high statistical weight in the normalization, and to overpredict the variability of stars 2-5. On the other hand, our method gives very good approximations of the standard deviations of all stars. In practice, to make the best use of our method, we choose the largest possible number of comparison stars with approximately the same brightness (the relative brightness of each object can be deduced from its relative noise [FORMULA] reported in Table 2). We found at least 3 comparison stars for all galaxies except NGC 4051 (only 2) and NGC 4151 (only 1, see below).
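The bias of a weighted normalization toward a bright variable star can be reproduced with a toy simulation. The sketch below does not implement the exact minimized function of Eq. (1) (which is not reproduced here); it compares the inverse-variance weighted mean (the [FORMULA]-style normalization) with a plain median, used purely as an illustrative robust stand-in, on 5 simulated stars of which the brightest is intrinsically variable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, amp = 2000, 0.05                   # images in the run, amplitude of star 1
noise = np.array([0.003] + [0.01] * 4)    # star 1 is ~3x brighter, hence less noisy

# Relative fluxes: star 1 carries an intrinsic sinusoidal variation,
# stars 2-5 are constant; all are marred by Gaussian measurement noise.
f = 1.0 + rng.normal(0.0, noise, (n_img, 5))
f[:, 0] += amp * np.sin(np.linspace(0.0, 20.0 * np.pi, n_img))

# Chi-square-style normalization: inverse-variance weighted mean,
# dominated by the bright (and here variable) star 1.
w = 1.0 / noise ** 2
N_chi2 = (f * w).sum(axis=1) / w.sum()

# Robust stand-in (median), which latches on to the common value
# of the non-variable stars instead of the bright one.
N_rob = np.median(f, axis=1)

sd_chi2 = (f[:, 0] / N_chi2).std()   # variability of star 1, chi2-normalized
sd_rob = (f[:, 0] / N_rob).std()     # variability of star 1, robustly normalized
```

Because the weighted mean absorbs a large fraction of star 1's variation into the normalization factor, `sd_chi2` comes out well below the input amplitude, while the robust normalization recovers it, which is the effect shown in Fig. 2.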


[TABLE]

Table 2. Relative values of [FORMULA], [FORMULA] and [FORMULA] for each galaxy and its comparison stars


For each image, the value of [FORMULA] thus represents the relative flux of a "virtual" standard star. We finally obtain the light curve of an object by dividing its relative flux by [FORMULA].

3.2.3. The particular case of NGC 4151

For this object, there is only one comparison star in the CCD field with about the same brightness as the galaxy. We obtain another comparison object by measuring the flux of the diffuse component of NGC 4151, excluding the central region. We have to use a large aperture and, for the same flux as the comparison star, the photon noise is 3 times larger because of the sky background.

3.3. Error measurements

The variance of the light curve of an object obviously depends on the method of treatment used and can be expressed, in the most general case, as the sum of two terms:

[EQUATION]

In this expression, [FORMULA] would be the value of [FORMULA] obtained if the object were really non-variable and marred only by photon statistics. On the other hand, [FORMULA] represents a supplementary noise which can include a variable component or any artefact of the light curve due to the observations or the treatment. An estimate of [FORMULA] thus gives an estimate of, or an upper limit on, the variability of the object. We assess [FORMULA] indirectly by evaluating [FORMULA]. In this way we simulate new sets of data, where the flux of each star s in each image i takes the following value:

[EQUATION]

In this expression, [FORMULA] denotes the average flux of a star over all the images of the run and [FORMULA] denotes the average flux over all the stars of an image. The second term on the right-hand side of Eq. (4) takes into account global flux variations from image to image, due for example to small clouds crossing the field. Finally, we add Poissonian noise to each simulated value. We then treat the data with the same algorithm as described above. The standard deviation of the light curves therefore gives an estimate of [FORMULA] and thus, from Eq. (3), of [FORMULA]. Due to the limited number of images, there is a statistical inaccuracy in this estimate, which we improve by repeating the simulation many times and taking the average.
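One realization of such a synthetic, intrinsically non-variable data set can be sketched as follows. Since the exact form of Eq. (4) is not reproduced here, the multiplicative global image factor used below is an assumption of this sketch; the principle (run-averaged star fluxes, a global per-image factor for clouds or extinction, plus Poisson noise) follows the description above.

```python
import numpy as np

def simulate_nonvariable_run(flux, rng):
    """Build one synthetic non-variable run from a measured one.

    flux: (n_images, n_stars) array of measured counts.
    Each star keeps its run-averaged flux, modulated by a global
    image-to-image factor (clouds, extinction), plus Poisson noise.
    """
    star_mean = flux.mean(axis=0)           # average flux of each star over the run
    image_mean = flux.mean(axis=1)          # average flux over the stars of each image
    global_factor = image_mean / image_mean.mean()
    expected = np.outer(global_factor, star_mean)
    return rng.poisson(expected).astype(float)

# Toy "measured" run: 50 images of 3 stars with 10% global transmission
# fluctuations and Poisson noise.
rng = np.random.default_rng(1)
true = np.outer(1.0 + 0.1 * rng.standard_normal(50),
                [20000.0, 18000.0, 21000.0])
measured = rng.poisson(true).astype(float)
sim = simulate_nonvariable_run(measured, rng)
```

Running the full reduction on many such realizations and averaging the resulting standard deviations yields the estimate of [FORMULA] described above.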

The value of [FORMULA] obtained in this manner is very close (within a factor of 2) to the true observational noise (photon noise and read-out noise), which incidentally demonstrates the robustness of the method.

3.4. The structure function

A way to detect a continuous trend in our data is to use the so-called first-order structure function (hereafter simply the "structure function", or "SF"), commonly employed in time-series analysis (Rutman 1978). It was introduced into astronomy by Simonetti et al. (1985; see also Paltani et al. 1997). It is defined, for data with a minimum temporal sampling [FORMULA] between two consecutive images, by:

[EQUATION]

for star k of the run. The brackets indicate that we average over all the images i of the light curve. The main aspects of the structure function can be summarized as follows. For a non-variable object, the SF is constant and gives an estimate of the standard deviation of the white noise introduced by the measurement errors on the fluxes. For light curves with several variable components on different timescales, the SF is more complex, increasing with [FORMULA] until the maximum variability timescale is reached. Obviously, for small samples of images, the form of the structure function at the largest time lags is very noisy, since the average is taken over a very small number of images.
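For an evenly sampled light curve, the first-order structure function can be computed as below. This sketch uses the standard squared-difference definition after Simonetti et al. (1985); since Eq. (5) is not reproduced here, whether the paper plots the squared difference or its square root is not asserted.

```python
import numpy as np

def structure_function(f, max_lag):
    """First-order structure function of an evenly sampled light curve:
    SF(tau) = < [f(t + tau) - f(t)]^2 >, averaged over all available pairs,
    for lags tau = 1 .. max_lag (in units of the sampling interval)."""
    return np.array([np.mean((f[lag:] - f[:-lag]) ** 2)
                     for lag in range((1), max_lag + 1)])

# A non-variable "star": pure white measurement noise of std 0.01.
rng = np.random.default_rng(2)
white = rng.normal(0.0, 0.01, 5000)
sf = structure_function(white, 20)
# For white noise of variance s^2 the SF is flat at the level 2 * s^2,
# which is the constant plateau described in the text.
```

Note that at the largest lags far fewer pairs enter the average, which is why the SF of a short run becomes noisy there.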


© European Southern Observatory (ESO) 1999

Online publication: February 23, 1999