

Astron. Astrophys. 327, 365-376 (1997)


3. The Fourier transform

In this section we discuss the effects of data sampling on the Fourier transform. The Fourier transform [FORMULA] of a continuous function [FORMULA] is defined by

[EQUATION]

and its inverse by

[EQUATION]

When the function [FORMULA] consists of N discrete measurements [FORMULA] ([FORMULA]), which constitute an equidistant time series of length T, then the discrete Fourier transform results in N values [FORMULA] ([FORMULA]) for the Fourier components

[EQUATION]

The inverse discrete transform is given by

[EQUATION]

In these expressions [FORMULA] with [FORMULA] so that the time steps are given by [FORMULA]. The coefficients [FORMULA] are related to the frequencies [FORMULA] so that the resolution in frequency is given by [FORMULA]. The highest frequency, corresponding to the Nyquist frequency, is given by [FORMULA]. Note that [FORMULA] corresponds in our case to the total number of photons [FORMULA] detected during time T. We define the power spectrum as

[EQUATION]
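The equation image is not reproduced in this copy, but the normalization described below, in which pure Poisson noise produces powers following a chi-squared distribution with two degrees of freedom and mean 2, is consistent with the Leahy et al. (1983) convention P_j = 2 |a_j|^2 / N_ph. A minimal Python sketch under that assumption (variable names are illustrative):

import numpy as np

def leahy_power_spectrum(counts):
    """Power spectrum of a photon-count series x_k (k = 0..N-1),
    normalized so that pure Poisson noise gives powers distributed as
    chi^2 with two degrees of freedom (mean 2), i.e. the assumed
    Leahy-type convention P_j = 2 |a_j|^2 / N_ph."""
    x = np.asarray(counts, dtype=float)
    n_ph = x.sum()                  # total number of photons (= a_0)
    a = np.fft.rfft(x)              # discrete Fourier components a_j
    return 2.0 * np.abs(a) ** 2 / n_ph

# toy check: for pure Poisson noise the powers at non-zero frequencies
# scatter around a mean value of 2
rng = np.random.default_rng(0)
p = leahy_power_spectrum(rng.poisson(350, size=122))
print(p[1:].mean())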

The relation between the continuous and the discrete Fourier transform for the datasets considered in this paper can be derived as follows. Each dataset consists of N measurements of the photon counts in a specific pixel. Each individual measurement consists of integrating the photon counts over a short time interval of length [FORMULA]. The total length of the time series is T. Let [FORMULA] be the number of photons arriving from the source at time t. Because of the finite integration time [FORMULA], the actual photon flux that is sampled corresponds to a function [FORMULA], with [FORMULA] indicating a (Fourier) convolution and [FORMULA] the binning function

[EQUATION]

The convolution of [FORMULA] and [FORMULA] corresponds to averaging the actual photon counts over a bin of width [FORMULA] around time t so that [FORMULA] is given by

[EQUATION]

The discrete sampling of the data amounts to multiplying the function [FORMULA] with a window function [FORMULA] and with a sampling function [FORMULA]. The window function is given by

[EQUATION]

and the sampling function by

[EQUATION]

The sampling function indicates that function [FORMULA] is discretely sampled at times [FORMULA] while the window function accounts for the finite duration of the time series. Let [FORMULA] indicate a measured time series. From the discussion above it follows that [FORMULA]. Let [FORMULA] denote the Fourier transform and let [FORMULA], [FORMULA] and [FORMULA]. Then the Fourier transform of the measured time series [FORMULA] is given by

[EQUATION]

with

[EQUATION]

From Eq. (9) we find that

[EQUATION]

The relation of the continuous Fourier transform and the discrete transform is established by taking [FORMULA] so that

[EQUATION]

The components of the power spectrum are then given by

[EQUATION]

By studying the power spectrum of the observed counts we obtain information about the function [FORMULA]. Because [FORMULA] we have that [FORMULA] with [FORMULA] and

[EQUATION]
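The structure of the relations above can be summarized schematically as follows (a sketch under the stated definitions, with tildes denoting Fourier transforms; it is not a reproduction of the original equations):

c(t) \;=\; \bigl[(b * n)(t)\bigr]\, w(t)\, s(t)
\qquad\Longrightarrow\qquad
\tilde{c}(f) \;=\; \Bigl( \bigl[\tilde{b}\,\tilde{n}\bigr] * \tilde{w} * \tilde{s} \Bigr)(f),

i.e. the convolution with the binning function becomes a multiplicative (sinc-like) factor in frequency, while the multiplications with the window and sampling functions become convolutions in frequency.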

For the observations described in this paper [FORMULA], [FORMULA], [FORMULA] and [FORMULA]. A sample of 122 data points results in a power spectrum at 62 discrete frequencies (one at zero frequency). The highest frequency, the Nyquist frequency, corresponds to [FORMULA] or a period of [FORMULA]. Due to the discrete sampling any period in the signal shorter than 43 seconds will be aliased. In general the process of measuring the data points automatically suppresses high frequencies, so that aliasing is usually not too serious a problem. In our case, however, [FORMULA] is only 0.128 seconds, so that aliasing is not suppressed by the finite duration of a measurement. This can also be seen in Eq. (15). At the Nyquist frequency [FORMULA] so that [FORMULA] over the whole frequency range considered, and hence no suppression of high frequencies occurs. Care must therefore be taken with regard to possible aliasing.
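As a short numerical check of the last point, assuming that the binning factor in Eq. (15) is the standard sinc term sin(pi f Delta)/(pi f Delta) for a rectangular bin of width Delta (the values Delta = 0.128 s and the 43 s Nyquist period are taken from the text):

import numpy as np

delta = 0.128                 # integration time per measurement [s]
f_nyq = 1.0 / 43.0            # Nyquist frequency quoted above [Hz]

def bin_suppression(f, delta):
    # assumed squared sinc factor of a rectangular bin of width delta;
    # np.sinc(x) = sin(pi x) / (pi x)
    return np.sinc(f * delta) ** 2

print(bin_suppression(f_nyq, delta))   # ~0.99997: essentially no suppression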

A second effect is caused by the data windowing. Function [FORMULA] is a boxcar function of length T. The transform of the signal is convolved with [FORMULA], which has a central peak of width [FORMULA] and side lobes. The effect of windowing is that the power at a given frequency is distributed over neighbouring frequency bins.
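For reference, the transform of a boxcar window running from 0 to T is the standard result (assuming an e^{-2\pi i f t} convention for the forward transform; the modulus does not depend on this choice):

\tilde{w}(f) \;=\; \int_{0}^{T} e^{-2\pi i f t}\, dt
\;=\; T\,\frac{\sin(\pi f T)}{\pi f T}\, e^{-i\pi f T},

so that |\tilde{w}(f)| has a central peak with first nulls at f = ±1/T and side lobes that decrease only slowly with frequency.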

A third effect is due to the use of the discrete Fourier transform. When looking for a periodic signature with frequency [FORMULA], the associated power is only recovered when [FORMULA] coincides exactly with one of the frequencies at which the power spectrum is evaluated. When [FORMULA] falls exactly between two frequency bins, the power is distributed over the neighbouring bins and even over some bins further away. The spread of power over adjacent bins is thus caused by two effects: 1) the finite length of the time series, which results in windowing; 2) the use of the discrete Fourier transform. The effect of windowing can be suppressed by using a window function for the time series which differs from the boxcar (e.g. Welch, Hanning, Parzen, etc.). However, we decided not to use any of these windowing functions, since such a function broadens the secular variations due to a slow increase or decrease in the counts and therefore affects the determination of the variations we seek to analyse.
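The leakage between frequency bins, and the trade-off introduced by tapered windows, can be illustrated with a short Python sketch (the numbers are arbitrary illustration values, not taken from the observations):

import numpy as np

# a sinusoid whose frequency falls exactly midway between two discrete
# Fourier frequencies (bins j = 10 and j = 11)
N, T = 122, 122.0
t = np.arange(N) * (T / N)
x = np.cos(2 * np.pi * (10.5 / T) * t)

def norm_power(y):
    a = np.fft.rfft(y)
    p = np.abs(a) ** 2
    return p / p.max()                  # peak-normalized power

p_box = norm_power(x)                   # boxcar (no taper): broad leakage
p_hann = norm_power(x * np.hanning(N))  # Hann taper: distant leakage reduced,
                                        # but the central peak is broadened
print(np.round(p_box[8:14], 3))
print(np.round(p_hann[8:14], 3))

The reduced distant leakage of the taper comes at the price of a broader central response, which is the reason given above for not tapering the data.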

The time series can be characterized as follows. The average number of photon counts per 0.128 seconds amounts to a few hundred, [FORMULA] 300-400 in most datasets, although, as we will see later, some parts of a dataset can have counts in the 1000-1500 range while other parts are in the 50-100 range. In each dataset there are secular variations due to a slow decrease or increase of the counts, occurring on time scales in the range [FORMULA]. Superimposed on these slow variations are faster variations. Simply looking at the time series already suggests that some variations are (quasi-)periodic. The amplitudes of the variations are larger than expected from pure Poisson statistics (e.g. [FORMULA] or [FORMULA]). Because of the high average counting level and the presence of secular variations it can be anticipated that there will be significant power at low frequencies, say [FORMULA]. This power can be reduced by subtracting some `average' from the observed counts, e.g. a first-order polynomial fit, but there is little to be gained by this procedure.
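The first-order polynomial subtraction mentioned above (and judged here to be of little benefit) would amount to something like the following sketch; names are illustrative:

import numpy as np

def subtract_linear_trend(t, counts):
    # fit and remove a first-order polynomial to reduce the power at
    # low frequencies caused by the secular variations
    coeff = np.polyfit(t, counts, deg=1)
    return counts - np.polyval(coeff, t)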

A description of the statistical properties of the power spectrum can be found in Jenkins and Watts (1968) and Leahy et al. (1983), and is comprehensively summarized in van der Klis (1989). The normalization of the power spectrum (Eq. (5)) is chosen in such a way that if the noise in the data is (only) Poissonian, then the distribution [FORMULA] is given by the [FORMULA] distribution with two degrees of freedom (dof). The probability that [FORMULA] exceeds a threshold power level [FORMULA] is

[EQUATION]

with Q the integral probability of the [FORMULA] distribution

[EQUATION]
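For two dof the integral probability takes the simple exponential form Q(P_d) = exp(-P_d/2), which is easily checked numerically:

import numpy as np
from scipy.stats import chi2

p_d = 20.0
print(chi2.sf(p_d, df=2))     # probability that a noise power exceeds P_d
print(np.exp(-p_d / 2.0))     # identical: exp(-P_d / 2)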

For two dof the standard deviation of the noise powers is equal to their mean value [FORMULA]. This implies that the magnitude of the noise component in the power spectrum is not well defined. There are basically two methods to decrease the noise in the power spectrum. One method is to rebin the power spectrum by averaging W consecutive frequency bins, at the expense of a reduced frequency resolution. The other method, which can be used in combination with the first, is to divide the data into M segments of equal length. For each of the data segments the power spectrum is determined and the resulting power spectra are then averaged. The power distribution of the noise then corresponds to a [FORMULA]-distribution with [FORMULA] dof which is scaled with a factor [FORMULA]. In this case we have that [FORMULA]. The mean of the distribution is still equal to 2, but the standard deviation has been reduced to [FORMULA]. We note that the noise in the power spectrum can effectively only be reduced at the expense of frequency resolution. Increasing the observing time T will not change the mean and the standard deviation of the noise distribution, but in the end longer observing times, in combination with segmenting and/or rebinning, do permit a reduction of the standard deviation of the noise while achieving an improved frequency resolution.
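A minimal Python sketch of the segmenting and rebinning procedure, under the Leahy-type normalization assumed earlier (function and variable names are illustrative):

import numpy as np

def averaged_power_spectrum(counts, m_segments, w_rebin):
    """Average the power spectra of M equal-length segments and rebin
    over W consecutive frequencies.  For pure Poisson noise the result
    has mean ~2 and standard deviation ~2/sqrt(M*W)."""
    x = np.asarray(counts, dtype=float)
    seg_len = len(x) // m_segments
    powers = []
    for i in range(m_segments):
        seg = x[i * seg_len:(i + 1) * seg_len]
        a = np.fft.rfft(seg)
        powers.append(2.0 * np.abs(a) ** 2 / seg.sum())
    p = np.mean(powers, axis=0)[1:]            # drop the zero-frequency bin
    n_out = len(p) // w_rebin
    return p[:n_out * w_rebin].reshape(n_out, w_rebin).mean(axis=1)

rng = np.random.default_rng(1)
p = averaged_power_spectrum(rng.poisson(350, size=4096), m_segments=8, w_rebin=4)
print(p.mean(), p.std())   # ~2 and ~2/sqrt(32) ~ 0.35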

Suppose that we have a power spectrum at N frequencies and want to establish which powers have a low probability of being caused by noise. The power at each of the frequencies can be considered as an independent trial. Define [FORMULA] as the probability that a power [FORMULA] exceeds detection level [FORMULA] and is not caused by noise. For N independent powers this probability is [FORMULA] so that the chance to exceed [FORMULA] and to be caused by noise is [FORMULA] for [FORMULA]. From this it follows that the detection level is given by

[EQUATION]

In this paper we use a confidence level of 99.9% ([FORMULA]) to determine [FORMULA]. For [FORMULA], the detection level is given by [FORMULA]. For [FORMULA], Eq. (17) results in an implicit relation for [FORMULA].
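Assuming, as above, that the averaged noise powers follow a chi-squared distribution with 2MW dof scaled by 1/(MW), the detection level can be obtained numerically from the inverse survival function; for MW = 1 it reduces to the explicit form 2 ln(N / epsilon_N). A sketch (the value N = 61 is merely illustrative):

import numpy as np
from scipy.stats import chi2

def detection_level(n_freq, mw=1, confidence=0.999):
    # per-frequency false-alarm probability, using the small-probability
    # approximation epsilon_N / N from the text
    single_trial = (1.0 - confidence) / n_freq
    # P_d such that Prob(noise power > P_d) = single_trial for powers
    # distributed as chi^2 with 2*M*W dof scaled by 1/(M*W)
    return chi2.isf(single_trial, df=2 * mw) / mw

print(detection_level(61))           # MW = 1
print(2 * np.log(61 / 0.001))        # same number from the explicit formula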


© European Southern Observatory (ESO) 1997

Online publication: April 8, 1998