
Astron. Astrophys. 356, 490-500 (2000)


3. Light curve fitting

The basic idea of our eclipse mapping algorithm is to reconstruct the intensity distribution on the accretion stream by comparing and fitting a synthetic light curve to an observed one. The comparison between these light curves is done with a [FORMULA]-minimization, which is modified by means of a maximum entropy method. Sect. 3.1 describes the light curve generation, 3.2 the maximum entropy method, and 3.3 the actual fitting algorithm.

3.1. Light curve generation

In order to generate a light curve from the 3d model, it is necessary to know which surface elements i are visible at a given phase [FORMULA]. We designate the set of visible surface elements [FORMULA].

In general, each of the three components (WD, secondary, accretion stream) may eclipse (parts of) the other two, and the accretion stream may partially eclipse itself. This is a typical hidden-surface problem. However, in contrast to the widespread computer-graphics algorithms, which work in the image space of the selected output device (e.g. a screen or a printer) and provide the information `pixel j shows surface i', we need to work in object space, answering the question `is surface i visible at phase [FORMULA]?'. For a recent review of object-space algorithms see Dorward (1994). Unfortunately, no readily available algorithm fits our needs, so we use a self-designed 3d object-space hidden-surface algorithm. Let N be the number of surface elements of our 3d model. According to Dorward (1994), the time T needed to perform an object-space visibility analysis goes like [FORMULA]. Our algorithm performs its task in [FORMULA], running fastest during the eclipse of the system. It is obviously necessary to optimize the number of surface elements in order to minimize the computation time without making the 3d grid too coarse.
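The object-space visibility test described above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm (which is not listed in the paper): each element is tested for front-facing orientation and then against occluding spheres standing in for the two stellar components. All names (`visible_set`, `occluders`) are invented for this sketch, and a real implementation would also have to test the stream surface against itself.

```python
import numpy as np

def visible_set(centers, normals, los, occluders):
    """Object-space visibility sketch: element i is counted visible if
    (a) it faces the observer and (b) the ray from its centre toward the
    observer misses every occluding sphere.  centers, normals: (N, 3)
    arrays; los: unit vector pointing toward the observer; occluders:
    list of (centre, radius) pairs, e.g. crude spheres for the WD and
    the secondary."""
    vis = normals @ los > 0.0                       # back-face culling
    for c0, radius in occluders:
        d = c0 - centers                            # element centre -> sphere centre
        t = d @ los                                 # distance along the sight line
        miss2 = (d * d).sum(axis=1) - t ** 2        # squared closest-approach distance
        vis &= ~((t > 0.0) & (miss2 < radius ** 2)) # sphere blocks the ray
    return vis
```

The answer is a boolean mask over the elements, i.e. exactly the object-space information `is surface i visible at this phase?'; the partial self-eclipse of the stream, which the paper's algorithm handles, is omitted here.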

Once [FORMULA] has been determined, the angles between the surface normals of [FORMULA] and the line of sight, and the projected areas [FORMULA] of [FORMULA] are computed. Designating the intensity of the surface element i at the wavelength [FORMULA] with [FORMULA], the observed flux [FORMULA] is
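The equation body itself did not survive the extraction. From the quantities just defined (the visible set, the projected areas, and the element intensities), it plausibly has the following form; the symbols S(φ), a_i and I_{λ,i} are assumed names for this reconstruction, not necessarily the authors' original notation:

```latex
F_{\lambda}(\varphi) \;=\; \sum_{i \,\in\, S(\varphi)} I_{\lambda,i}\, a_{i}(\varphi)
```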


Here, two important assumptions are made: (a) the emission from all surface elements is optically thick, and (b) the emission is isotropic, i.e. there is no limb darkening in addition to the foreshortening of the projected area of the surface elements. The computation of a synthetic light curve is straightforward. It suffices to compute [FORMULA] for the desired set of orbital phases.

While the above-mentioned algorithm can produce light curves for all three components, the WD, the secondary, and the accretion stream, we restrict the following light-curve computations to emission from the accretion stream only. The white dwarf and the secondary star are therefore treated as dark, opaque objects that merely screen the accretion stream.

3.2. Constraining the problem: MEM

In the eclipse mapping analysis, the number of free parameters, i.e. the intensity of the N surface elements, is typically much larger than the number of observed data points. Therefore, one has to reduce the degrees of freedom in the fit algorithm in a sensible way. An approach which has proved successful for accretion discs is the maximum entropy method (MEM; Horne 1985). The basic idea is to define an image entropy S which has to be maximized, while the deviation between synthetic and observed light curve, usually measured by [FORMULA], is minimized (n is the number of phase steps or data points). Let [FORMULA] be
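The default-image equation is missing from the extracted text. Following the Gaussian-convolution description given below Eq. (5), a plausible reconstruction (with assumed symbols D_i for the default image, r_i for the element positions, and σ for the Gaussian width) is:

```latex
D_{i} \;=\; \frac{\sum_{j} I_{j}\, \exp\!\big(-(\vec r_{i}-\vec r_{j})^{2}/2\sigma^{2}\big)}
                 {\sum_{j} \exp\!\big(-(\vec r_{i}-\vec r_{j})^{2}/2\sigma^{2}\big)}
```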


the default image for the surface element i. Then the entropy is given by
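The entropy expression is also missing. In Horne's (1985) eclipse-mapping convention, with the default image D_i of Eq. (5), it would read as follows; this is a reconstruction, and the normalised quantities p_i and q_i are assumptions:

```latex
S \;=\; -\sum_{i} p_{i}\,\ln\frac{p_{i}}{q_{i}},
\qquad p_{i} = \frac{I_{i}}{\sum_{j} I_{j}},
\qquad q_{i} = \frac{D_{i}}{\sum_{j} D_{j}}
```

This form gives S = 0 for a contrast-free map and S < 0 otherwise, consistent with the statement below that the ideal entropic image has no contrast.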


In Eq. (5), [FORMULA] and [FORMULA] are the positions of the surface elements i and j. [FORMULA] determines the range of the MEM in that the default image (5) is a convolution of the actual image with a Gaussian of [FORMULA]-width [FORMULA]. Hence, the entropy measures the deviation of the actual image from the default image. An ideal entropic image (with no contrast at all) has [FORMULA]. We use [FORMULA] for our test calculations and for the application to UZ For.
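The default image and the entropy can be sketched numerically as follows. The sketch assumes the normalised Horne-style form S = -Σ p_i ln(p_i/q_i); the paper's exact normalisation may differ, and the names `entropy` and `sigma` are invented here.

```python
import numpy as np

def entropy(I, pos, sigma=0.3):
    """Entropy of an intensity map I relative to its default image,
    built as a Gaussian-weighted local average over the element
    positions pos (shape (N, 3)), as described in the text.  Returns 0
    for a map with no contrast and negative values otherwise."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian weights exp(-(r_i - r_j)^2 / 2 sigma^2)
    D = w @ I / w.sum(axis=1)                # default image: blurred copy of I
    p = I / I.sum()
    q = D / D.sum()
    return float(-np.sum(p * np.log(p / q)))  # -KL(p || q) <= 0, zero iff p == q
```

Maximizing this quantity drives the map toward its own blurred copy, i.e. toward smoothness on the length scale sigma, without forcing it to any globally uniform value.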

The quality of an intensity map is given as
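The quality equation did not survive the extraction. Given that the multiplier is said to be chosen of order 1, the standard combination of the goodness of fit and the entropy S would be the following; this is a reconstruction with an assumed multiplier α:

```latex
Q \;=\; \chi^{2} \;-\; \alpha\, S
```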


where [FORMULA] is chosen to be of order 1. The aim of the fit algorithm is to minimize [FORMULA].

3.3. The fitting algorithm: evolution strategy

Our model involves approximately 250 parameters, the intensities of the surface elements. This large number is not, however, the number of degrees of freedom, which is difficult to define in a MEM strategy. A suitable method for finding a parameter optimum with minimal [FORMULA] and maximal entropy is a simplified imitation of biological evolution, commonly referred to as an `evolution strategy' (Rechenberg 1994). The intensity information of the surface elements i is stored in the intensity vector [FORMULA]. Initially, we choose [FORMULA] for all i.

From this parent intensity map, a number of offspring are created with [FORMULA] randomly changed by a small amount, the so-called mutation. For all offspring, the quality [FORMULA] is calculated. The best offspring is selected to be the parent of the next generation. An important feature of the evolution strategy is that the amount of mutation is itself evolved, just as if it were part of the parameter vector. We use the C program library evoC, developed by K. Trint and U. Utecht from the Technische Universität Berlin, which handles all the important steps (offspring generation, selection, step-width control).
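The generation loop described above, including the self-adapted mutation step width, can be sketched as a simple (1, λ) evolution strategy. This is only an illustration on a toy quadratic quality function, not the evoC library; all names and settings here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve(quality, n_par, n_offspring=40, n_gen=200, step=0.5):
    """(1, lambda) evolution-strategy sketch: the best of n_offspring
    mutated copies replaces the parent, and the mutation step width is
    inherited and mutated along with the parameters (self-adaptation).
    quality(x) is the figure of merit to be minimised."""
    parent = np.full(n_par, 1.0)                  # start from a flat intensity map
    for _ in range(n_gen):
        best_x, best_q, best_s = None, np.inf, step
        for _ in range(n_offspring):
            s = step * np.exp(0.3 * rng.standard_normal())        # mutate step width
            child = np.clip(parent + s * rng.standard_normal(n_par), 0.0, None)
            q = quality(child)
            if q < best_q:
                best_x, best_q, best_s = child, q, s
        parent, step = best_x, best_s             # best offspring becomes next parent
    return parent

# toy problem: recover element intensities from fluxes through known projected areas
A = rng.random((20, 5))                           # mock projected areas at 20 phases
true_I = np.array([0.2, 1.0, 3.0, 0.5, 2.0])
obs = A @ true_I                                  # mock observed light curve
parent = evolve(lambda I: float(np.mean((A @ I - obs) ** 2)), n_par=5)
```

The comma selection (parent always replaced by its best offspring) and the inherited step width mirror the scheme described in the text; a production code would, like evoC, handle step-width control per parameter.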

In contrast to the classical maximum entropy optimisation (Skilling & Bryan 1984), the evolution strategy does not offer a quality parameter that indicates how close the best-fit solution is to the global optimum. In order to test the stability of our method, we run the fit several times, starting from randomly distributed maps. All runs converge to very similar intensity distributions [FORMULA] (see also Figs. 10 and 12). This type of test is common for evolution strategies or genetic algorithms (e.g. Hakala 1995). Even though this approach is not a statistically `clean' test, it leads us to conclude that we find the global optimum. Fastest convergence is achieved with 40 to 100 offspring in each generation. Finding a good fit ([FORMULA]) takes only one tenth to one fifth of the total computation time; the remaining iterations are needed to improve the smoothness of the intensity map, i.e. to maximize S. A hybrid code using a classical optimization algorithm, e.g. Powell's method, may speed up the regularization (Potter et al. 1998).


© European Southern Observatory (ESO) 2000

Online publication: April 10, 2000