

Astron. Astrophys. 336, 697-720 (1998)


Appendix A: statistical properties of 1-dimensional random functions: average, variance, autocorrelation function, power spectrum, Allan-variance and [FORMULA]-variance

We briefly summarize the definitions and basic relations between the quantities commonly used to describe the statistical properties of random, or noise, functions. We closely follow the notation used in standard textbooks on the topic, e.g. Davenport & Root (1987) or Bracewell (1986). We also introduce a new quantity, the [FORMULA]-variance, which turns out to be very useful for characterizing the drift behavior and power spectrum in higher dimensions as well.

We consider a 1-dimensional, real valued function [FORMULA], e.g. a time series such as the output voltage of a detector device, that varies randomly but has a well defined average, variance and other statistical properties. Its average over a time interval T centered at time t is [FORMULA]. Its mean value is then

[EQUATION]

and is assumed to be independent of t. To be more precise, the time average [FORMULA] equals the statistical average [FORMULA] provided that [FORMULA] is a stationary random function with a finite time-correlation scale. In the following we assume [FORMULA]. This is not an essential restriction, as any random function with nonzero average can be changed into one with zero average by subtracting the constant average value, [FORMULA] with [FORMULA]. The autocorrelation function is defined as [FORMULA] and the variance is [FORMULA]
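
As a concrete numerical illustration of these estimators, the following sketch (our own, using white noise as arbitrary test data) computes sample estimates of the average, the variance and the autocorrelation function of a zero-mean series.

```python
import numpy as np

# Minimal sketch: sample estimates of the average, the variance and the
# autocorrelation function R(tau) = <s(t) s(t+tau)> of a zero-mean series.
rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)
s = s - s.mean()                        # enforce <s> = 0, as assumed in the text

def autocorrelation(s, lag):
    """Biased sample estimate of R(lag), averaged over the overlapping part."""
    if lag == 0:
        return np.mean(s * s)
    return np.mean(s[:-lag] * s[lag:])

variance = autocorrelation(s, 0)        # R(0) equals the variance
print(variance, autocorrelation(s, 1), autocorrelation(s, 10))
```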

The Fourier transform of [FORMULA] is defined as [FORMULA]. Due to [FORMULA] being real valued, the Fourier transform is hermitian: [FORMULA]. For the convolution of two functions [FORMULA] and [FORMULA], we use the notation [FORMULA]. Though somewhat more cumbersome than the common notation [FORMULA], it avoids inconsistencies inherent in the latter. These arise from the fact that

[EQUATION]

whereas the common notation suggests the incorrect result [FORMULA].

The above definition of the Fourier transform is not fully sufficient as it is not clear that the integral exists. In order to ensure convergence of the Fourier integrals of random functions, one rather defines a truncated function [FORMULA], using the rectangle function [FORMULA]. It approaches [FORMULA] in the limit of infinite T. Its Fourier transform is

[EQUATION]

where the last equality is due to the convolution theorem and uses the fact that the Fourier transform of [FORMULA] is the sinc-function, [FORMULA]. It is, of course, only valid if the Fourier transform [FORMULA] exists. The Fourier transform of the random function [FORMULA] can then be defined "in the limit" (Bracewell 1986) as

[EQUATION]

The second equality again assumes that the Fourier transform [FORMULA] exists. The last equality uses the fact that [FORMULA] approaches the impulse function [FORMULA] in the limit of infinite T; this relation confirms the consistency of the notation in case the Fourier transform exists.

We can now define the normalized autocorrelation function

[EQUATION]

The denominator approaches the variance [FORMULA] in the limit of infinite T, the numerator approaches the autocorrelation function [FORMULA]. Thus,

[EQUATION]

Due to [FORMULA] being real valued, using the convolution theorem and Rayleigh's theorem,

[EQUATION]

the Fourier transform of the normalized autocorrelation function of [FORMULA] is

[EQUATION]

so that we can define the power spectrum [FORMULA] as the Fourier transform of the autocorrelation function

[EQUATION]

As [FORMULA] is real valued, the power spectrum is an even function, [FORMULA]. It is common practice to define, instead of [FORMULA], the power spectrum [FORMULA] for positive frequencies only, and to fold the power at negative frequencies into the positive frequency domain:

[EQUATION]

We can then write the autocorrelation function as

[EQUATION]

and its back transform

[EQUATION]

For the variance we get [FORMULA].
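
A quick numerical check of this relation: the sum of a one-sided power spectrum estimated with an FFT reproduces the variance of the series. The discrete normalization chosen below is our own and only one of several common conventions.

```python
import numpy as np

# Discrete analogue of sigma^2 = integral P(f) df: estimate a one-sided power
# spectrum with an FFT and check that its sum over frequency bins recovers
# the variance of the (zero-mean) series.
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
s = s - s.mean()

S = np.fft.rfft(s)                     # Fourier transform (non-negative frequencies)
P = np.abs(S) ** 2 / len(s) ** 2       # two-sided power per frequency bin
P[1:-1] *= 2                           # fold negative frequencies in (one-sided)

print(np.var(s), P.sum())              # the two numbers agree to rounding error
```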

In order to describe the drift behavior of the random function [FORMULA] one often uses the two point correlation function

[EQUATION]

Another useful quantity, now commonly called the Allan variance, was introduced by Allan (1966) in order to characterize the drift behavior. It is the variance of the difference between subsequent averages over a time interval T. With the average over time T as defined above, which we can also write as the convolution with the appropriately scaled rectangle function,

[EQUATION]

we can write for the difference between subsequent averages

[EQUATION]

where [FORMULA] denotes the odd impulse pair function, and we have introduced the notation

[EQUATION]

for the down-up-rectangle function. The Allan variance is the variance of these differences: [FORMULA], where the factor [FORMULA] is introduced to match the original definition. In the time domain [FORMULA] can be calculated from the autocorrelation function [FORMULA] (Barnes et al. 1971; Schieder, in prep.), i.e.

[EQUATION]

In the Fourier domain it can be expressed as

[EQUATION]

where we have used the fact that the power spectrum of the convolved function [FORMULA] is the product of the power spectra of the functions being convolved, and that the Fourier transform of [FORMULA] is [FORMULA] and the Fourier transform of [FORMULA] is [FORMULA].
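
The defining prescription, half the mean squared difference of subsequent averages over intervals of length T, translates directly into a few lines of code. The sketch below is our own illustrative implementation, not the pipeline used in the paper; for white noise it reproduces the expected 1/T decrease of the Allan variance.

```python
import numpy as np

def allan_variance(s, n_avg):
    """Allan variance for an averaging length of n_avg samples:
    half the mean squared difference between subsequent, non-overlapping
    block averages (the factor 1/2 matches the original definition)."""
    n_blocks = len(s) // n_avg
    block_means = s[:n_blocks * n_avg].reshape(n_blocks, n_avg).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)

# White noise of unit variance: the Allan variance should scale as 1/n_avg.
rng = np.random.default_rng(2)
noise = rng.standard_normal(200_000)
for n in (10, 100, 1000):
    print(n, allan_variance(noise, n))   # roughly 0.1, 0.01, 0.001
```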

The statistical properties of the random function [FORMULA], such as the variance and the Allan variance, are thus completely determined by its power spectrum, or equivalently by the Fourier transform thereof, its autocorrelation function. The Allan variance in particular is the filtered average of the power spectrum, the filter function being [FORMULA]. This filter function has successive maxima at the roots of the algebraic equation [FORMULA], with [FORMULA]. The first peak, at [FORMULA], is the highest, with a peak value of 1.05, independent of T. The width of this peak is [FORMULA]. Further peaks at higher frequencies are damped [FORMULA]. Using Rayleigh's theorem, its integral can be seen to give [FORMULA]. The filter function, including the additional factor [FORMULA], thus gives another representation of the [FORMULA]-function in the limit of large T,

[EQUATION]
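
The quoted properties of this filter function can be verified numerically. The sketch below assumes the explicit form 2 sin^4(pi f T)/(pi f T)^2 for the Allan-variance filter (including the factor 1/2); this explicit expression is our assumption, consistent with the quoted peak value of 1.05, and is not spelled out in the text above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed explicit form of the Allan-variance filter (including the factor 1/2),
# written in terms of x = pi * f * T.
def allan_filter(x):
    return 2.0 * np.sin(x) ** 4 / x ** 2

# Locate the first maximum; for this assumed form it satisfies tan(x) = 2x.
res = minimize_scalar(lambda x: -allan_filter(x), bounds=(0.5, 2.0), method='bounded')
print(res.x / np.pi, allan_filter(res.x))   # peak near f ~ 0.37/T, height ~ 1.05
```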

In the same sense, the autocorrelation function can be regarded as the filtered average of the power spectrum with a filter function [FORMULA].

As this paper largely deals with random functions with a power law power spectrum, we now summarize the results for this special case. For a 1-dimensional random function (e.g. a time series) with a power law power spectrum, [FORMULA], over a range of frequencies [FORMULA] between a properly defined low and high frequency cutoff as defined in Appendix C, an analytic expression has been derived both for the autocorrelation function and for the Allan variance for times [FORMULA] (Barnes et al. 1971; Schieder, in prep.). We only quote their result here and refer to the original papers and our more extensive discussion, including also 2- and 3-dimensional random functions, in Appendix C for the details. Note that [FORMULA] is the well known case of purely white noise, [FORMULA] is so-called flicker noise, and [FORMULA] corresponds to the well studied random walk. For the normalized autocorrelation function one obtains

[EQUATION]

where [FORMULA] is Euler's constant. Thus, [FORMULA] decreases from its value of unity at [FORMULA] in a power law fashion, [FORMULA], for [FORMULA]. For the range [FORMULA] this power law behavior only holds for sufficiently large values of [FORMULA], with an additional logarithmic term at [FORMULA] and [FORMULA] (see also Appendix C). For larger values of [FORMULA], i.e. [FORMULA], [FORMULA] drops [FORMULA] independent of [FORMULA]. Note that the high frequency cutoff matters only for [FORMULA]. For [FORMULA] and a very large high frequency cutoff the [FORMULA]-function approaches the [FORMULA]-function and we recover the well known result for white noise, [FORMULA].
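
For numerical experiments with such power-law noise, one can shape the Fourier amplitudes of white noise. The recipe below is our own simple illustration; the spectral index alpha and the crude cutoff handling are chosen for demonstration only and are not the construction used in the paper.

```python
import numpy as np

def power_law_noise(n, alpha, seed=0):
    """Generate a zero-mean series whose power spectrum roughly follows
    P(f) ~ f**(-alpha), by multiplying the Fourier transform of white noise
    with f**(-alpha/2).  Purely illustrative; cutoffs are handled crudely."""
    rng = np.random.default_rng(seed)
    white_ft = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                        # crude low-frequency cutoff (avoid f = 0)
    s = np.fft.irfft(white_ft * f ** (-alpha / 2.0), n)
    return s - s.mean()

white = power_law_noise(100_000, alpha=0.0)   # white noise
walk = power_law_noise(100_000, alpha=2.0)    # random-walk-like drift
```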

The two point correlation function

[EQUATION]

correspondingly shows a power law behavior for small [FORMULA] only as long as [FORMULA]. For [FORMULA], one gets [FORMULA], turning over into [FORMULA] at [FORMULA].

A similar expression gives the Allan variance

[EQUATION]

The behavior of the Allan variance is thus very similar to that of the two point correlation function, varying as [FORMULA] for [FORMULA], and as [FORMULA] for [FORMULA]. However, because the integral over the filter function [FORMULA] exists and is finite, the Allan variance has a regular behavior for [FORMULA] even for large [FORMULA], whereas the autocorrelation function has a [FORMULA]-function irregularity. Similarly, the logarithmic divergence for [FORMULA] is removed because the filter function starts [FORMULA] for small f (see Appendix C).

Note that the validity of the expression above is limited to [FORMULA] for [FORMULA]. For very large integration time T the Allan variance always drops [FORMULA], i.e. as in the case of white noise, [FORMULA], independent of the spectral index of the power spectrum, and in fact independent of the shape of the power spectrum in general, as long as the power spectrum approaches a finite positive value in the low frequency limit, [FORMULA]. This behavior results from the [FORMULA]-function-like behavior of the filter function in the limit of large T, so that

[EQUATION]

(more precisely, [FORMULA] above should be replaced by its limiting value [FORMULA]). For a difference measurement with very long integration time T the noise level reached is thus always larger than the white noise value [FORMULA], as can be seen from the fact that [FORMULA] as long as [FORMULA] and [FORMULA]. These conditions are met with the power spectrum and its cutoffs as defined in Appendix C.

The definition of the Allan variance via the average difference [FORMULA] as above is, of course, ad hoc. Other definitions are possible and equally adequate. One could, for example, use a "before and after" difference, or double difference,

[EQUATION]

where we have introduced the down-up-down rectangle function

[EQUATION]

for the convolving function. From its definition it is obvious that the power spectrum of [FORMULA] is [FORMULA]. We define the [FORMULA]-variance in analogy to the Allan variance as the variance of this double difference, i.e. [FORMULA]. Expressed via the power spectrum it is then given by

[EQUATION]

For a power law power spectrum [FORMULA] one obtains (see Appendix C)

[EQUATION]

where the [FORMULA] can be determined by comparison with the highest-order terms in the corresponding expression for [FORMULA] in Appendix C. Note that the [FORMULA]-variance has no special behavior at [FORMULA], in contrast to the standard Allan variance. Instead, it shows a logarithmic divergence and turns over into a [FORMULA]-independent power law slope, [FORMULA], at [FORMULA]. As explained in Appendix C, this is due to the fact that the double difference filter function starts [FORMULA] for small f.
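
A direct numerical implementation of the 1-dimensional [FORMULA]-variance can be sketched as follows, using the decomposition of the down-up-down rectangle function into two normalized rectangles of widths L and 3L given just below; the normalization in this sketch is our own choice and does not reproduce the paper's prefactor exactly.

```python
import numpy as np

def delta_variance_1d(s, L):
    """1-D Delta-variance sketch: convolve with a zero-sum down-up-down filter,
    built here as the difference of two unit-area rectangles of widths L and 3L,
    and take the variance of the filtered series.  Normalization is illustrative."""
    kernel = -np.ones(3 * L) / (3 * L)       # wide negative rectangle, width 3L
    kernel[L:2 * L] += 1.0 / L               # narrow positive rectangle, width L
    filtered = np.convolve(s, kernel, mode='valid')
    return np.var(filtered)

rng = np.random.default_rng(3)
noise = rng.standard_normal(100_000)
for L in (5, 50, 500):
    print(L, delta_variance_1d(noise, L))    # decreases roughly as 1/L for white noise
```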

The down-up-down rectangle function can also be written as the difference of a rectangle function with width T and a three times wider negative rectangle function:

[EQUATION]

This form readily lends itself to further generalization and can easily be extended to define the [FORMULA]-variance also in higher dimensions (see Appendix B). There is no need to define the average with equal weighting, as we have done above. Of particular interest might be a Gaussian weighting function, [FORMULA], thus defining the average as

[EQUATION]

The corresponding Gaussian down-up-down function, replacing the down-up-down rectangle function, is then

[EQUATION]

and the Gaussian weighted average double difference can be written as

[EQUATION]

As the Gaussian weighting function is its own Fourier transform, the Fourier transform of [FORMULA] is [FORMULA]. The variance of the Gaussian weighted average double difference, i.e. the Gaussian [FORMULA]-variance, is then given by

[EQUATION]

According to the results of Appendix C it has, for a power law power spectrum, the same behavior with delay time T as the [FORMULA]-variance [FORMULA].
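
A corresponding sketch for the Gaussian-weighted case: here we assume, purely by analogy with the rectangle case, that the Gaussian down-up-down function can be built as the difference of two normalized Gaussians whose widths differ by a factor of three; the widths and normalization actually used in the paper may differ.

```python
import numpy as np

def gaussian_delta_variance_1d(s, T):
    """Gaussian-weighted Delta-variance sketch: filter with the difference of
    two unit-sum Gaussians of widths T and 3*T (a Mexican-hat-like, zero-sum
    kernel) and take the variance of the filtered series."""
    t = np.arange(-9 * T, 9 * T + 1, dtype=float)
    g_narrow = np.exp(-0.5 * (t / T) ** 2)
    g_wide = np.exp(-0.5 * (t / (3.0 * T)) ** 2)
    kernel = g_narrow / g_narrow.sum() - g_wide / g_wide.sum()
    filtered = np.convolve(s, kernel, mode='valid')
    return np.var(filtered)
```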

Appendix B: the definition of the [FORMULA]-variance as the analogue of the Allan variance in higher dimensions

The definition of the variance of the double difference as given in the preceding appendix can be straightforwardly extended to higher dimensions. Before we do so, we briefly repeat the definitions of the statistical properties of random functions, such as the power spectrum and the autocorrelation function, in higher dimensions.

Consider a scalar function [FORMULA] in E-dimensional space. Its average over an E-dimensional sphere with diameter D, centered at r, is defined as the convolution with the normalized E-dimensional ball function, i.e.

[EQUATION]

where [FORMULA] is the volume of the E-dimensional unit sphere. Note that this definition is fully consistent with the definition for the 1-dimensional case given in Appendix A. Its mean value

[EQUATION]

is assumed to be independent of [FORMULA] (homogeneity) and is assumed to vanish: [FORMULA]. The autocorrelation function and variance are defined in analogy to the 1-dimensional case: [FORMULA], [FORMULA].

The E-dimensional Fourier transform is

[EQUATION]

the corresponding back transform

[EQUATION]

As [FORMULA] is real valued, [FORMULA] is hermitian: [FORMULA].

To ensure convergence of the Fourier integrals for a random function, we define the truncated function [FORMULA]. In complete analogy to the 1-dimensional case we can then define the Fourier transform, the normalized autocorrelation function and the power spectrum in the limit of [FORMULA]. In particular, we get

[EQUATION]

Due to [FORMULA] being real valued, [FORMULA] is even.

If we assume, in addition, that the statistical properties of [FORMULA] are isotropic, the autocorrelation function will depend on the magnitude of [FORMULA] only, i.e. [FORMULA]. The power spectrum, being its Fourier transform, then also depends only on [FORMULA], as the angular dependence in the [FORMULA]-term can be integrated out. The relation between the two radial functions is then given by (Bracewell 1986, p. 254)

[EQUATION]

and the identical back transform is obtained by exchanging [FORMULA] with [FORMULA] and [FORMULA] with f. Here, [FORMULA] is the surface area of the unit sphere in E dimensions. The kernel in the integral, [FORMULA], reduces to [FORMULA] in the 1-dimensional case. In 2 dimensions it is [FORMULA], giving the Hankel transform; in 3 dimensions it is [FORMULA]. We can now define the analogue of [FORMULA] in the E-dimensional case by incorporating the E-dimensional solid angle into the definition of the power spectrum: [FORMULA], so that

[EQUATION]

The back transform then reads

[EQUATION]

Before we proceed, we briefly discuss some properties of the normalized E-dimensional ball function and its Fourier transform. Straightforward algebra, using the fact that [FORMULA], gives for the Fourier transform of the normalized E-dimensional ball function

[EQUATION]
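
For E = 2, this Fourier transform can be checked numerically. The sketch below assumes the explicit 2-dimensional form 2 J1(pi q D)/(pi q D) for the transform of the normalized disk of diameter D (our assumption for the E = 2 special case of the general formula above) and compares it with the FFT of a pixelized disk at low spatial frequencies.

```python
import numpy as np
from scipy.special import j1

# Compare the FFT of a pixelized, normalized disk (diameter D pixels) with the
# assumed continuum result 2*J1(pi*q*D)/(pi*q*D) at a few low spatial frequencies.
N, D = 512, 40.0
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
disk = ((X ** 2 + Y ** 2) <= (D / 2.0) ** 2).astype(float)
disk /= disk.sum()                                   # normalized ball (disk) function
ft = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(disk))).real
q = np.fft.fftshift(np.fft.fftfreq(N))               # spatial frequency in cycles/pixel

for k in (1, 2, 4, 8):                                # a few low-frequency bins
    qk = q[N // 2 + k]
    analytic = 2.0 * j1(np.pi * qk * D) / (np.pi * qk * D)
    print(qk, ft[N // 2, N // 2 + k], analytic)       # the two columns agree closely
```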

In the limit of [FORMULA], [FORMULA] is constant and equals unity everywhere. Its Fourier transform is the E-dimensional [FORMULA]-function. We thus obtain the useful result

[EQUATION]

For a power law power spectrum, [FORMULA], we derive in Appendix C the following behavior of the normalized autocorrelation function:

[EQUATION]

In analogy to the 1-dimensional case discussed in Appendix A, [FORMULA] thus drops from its value of unity at [FORMULA] in a power law fashion, i.e. [FORMULA], as long as [FORMULA]. For the range of [FORMULA] this is only true once [FORMULA] gets sufficiently large, i.e. ([FORMULA]). For [FORMULA], it is thus approximately independent of [FORMULA], except for the additional logarithmic term also present at [FORMULA]. For [FORMULA] it turns over into a decrease [FORMULA], i.e. a [FORMULA]-independent behavior. For [FORMULA], we obtain, similar to the 1-dimensional case, in the limit of large [FORMULA]

[EQUATION]

Correspondingly, the E-dimensional two point correlation function [FORMULA] increases in a power law fashion for small [FORMULA] only as long as [FORMULA]. In the range [FORMULA] we have [FORMULA], turning over into a [FORMULA]-independent behavior [FORMULA] for [FORMULA], with an additional logarithmic term at [FORMULA]. Note that the simple dimensional argument given in Voss (1988), pp. 91-92, to derive the relation between the power spectrum power law index and the drift behavior power law index gives neither the turnover to the [FORMULA]-behavior for [FORMULA] nor the logarithmic divergences at [FORMULA] and [FORMULA].

We can now define the E-dimensional down-up-down ball function as the analogue to the down-up-down rectangle function used in the 1-dimensional case (note that down-up-down describes the variation along the diameter of the spherically symmetric function):

[EQUATION]

With the abbreviation [FORMULA], the E-dimensional difference of averages at average distance D is then obtained by convolution of [FORMULA] and [FORMULA]:

[EQUATION]

which is the analogue to the 1-dimensional double difference defined in Appendix A, [FORMULA].

The corresponding analogue of the [FORMULA]-variance in E dimensions is then

[EQUATION]

where we have introduced the factor [FORMULA] as the E-dimensional generalization of the historical factor [FORMULA] in the 1-dimensional Allan variance.
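
For E = 2, this definition can be turned into a short numerical sketch. The conventions below are our own: the filter is built as the difference between the average over a disk of diameter D and the average over a disk of diameter 3D, the convolution is done with periodic FFTs, and the generalized prefactor is omitted.

```python
import numpy as np

def delta_variance_2d(image, D):
    """2-D Delta-variance sketch: variance of the map filtered with a zero-sum
    'down-up-down ball' kernel, built as (average over a disk of diameter D)
    minus (average over a disk of diameter 3*D).  Assumes a periodic map that
    is large compared to 3*D; the overall prefactor is omitted."""
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r2 = (x - nx // 2) ** 2 + (y - ny // 2) ** 2
    inner = (r2 <= (D / 2.0) ** 2).astype(float)
    outer = (r2 <= (3.0 * D / 2.0) ** 2).astype(float)
    kernel = inner / inner.sum() - outer / outer.sum()      # zero-sum filter
    filtered = np.fft.ifft2(np.fft.fft2(image) *
                            np.fft.fft2(np.fft.ifftshift(kernel))).real
    return np.var(filtered)

# White-noise test map: the Delta-variance should drop roughly as 1/D^2.
rng = np.random.default_rng(4)
field = rng.standard_normal((512, 512))
for D in (4, 8, 16, 32):
    print(D, delta_variance_2d(field, D))
```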

With the Fourier transform of the normalized E-dimensional ball function given above, the filter function in the [FORMULA]-variance can be written as

[EQUATION]

Series expansion of the Bessel functions shows that the filter function starts [FORMULA] at small f. As in the 1-dimensional case, it has a first peak at [FORMULA], which narrows in width as D grows larger. Rayleigh's theorem gives for its E-dimensional integral

[EQUATION]

For large D, the filter function thus approaches

[EQUATION]

As in the 1-dimensional case, this implies that for large D the [FORMULA]-variance goes as

[EQUATION]

similar to the white noise behavior (see below).

For a power law power spectrum, [FORMULA], one obtains (see Appendix C)

[EQUATION]

Note that the [FORMULA]-variance varies as [FORMULA] for the full range of [FORMULA]. It shows a logarithmic divergence and turns over into a [FORMULA]-independent power law slope, [FORMULA], at [FORMULA]. As explained in Appendix C, this is due to the fact that the corresponding filter function starts [FORMULA] for small f.

For white noise, [FORMULA], the [FORMULA]-variance drops in inverse proportion to the averaging volume, i.e. [FORMULA], consistent with Gaussian statistics. The factor in front corresponds to the volume-averaged squared weights of the inner ball and the outer shell, and the ad hoc factor [FORMULA]. Comparison with the general behavior for large D (see above) shows that the [FORMULA]-variance at large drift scales, although it drops [FORMULA] as in the white noise case, is always larger than the white noise value; this is due to the fact that [FORMULA] as long as [FORMULA] and [FORMULA].

Appendix C: relation between the power law index of the power spectrum and the drift behavior

In this Appendix we derive the general connection between the power law index of the power spectrum and the drift behavior, as given, e.g., by the autocorrelation function, the Allan variance for 1-dimensional time series, or in general the [FORMULA]-variance of an E-dimensional random function whose statistical properties are isotropic and homogeneous, i.e. spherically symmetric. As shown in Appendices A and B, these statistical quantities are in general defined as integrals over the power spectrum weighted with some scaled filter function [FORMULA]. We thus consider, in the general E-dimensional case, a quantity

[EQUATION]

As the filter function is the squared modulus of the Fourier transform of the corresponding convolving function in the spatial domain, defining the appropriate differences of averages characterizing the statistical quantity [FORMULA] in question, we know that [FORMULA] is even, [FORMULA]. Its series expansion for small frequencies thus contains only even power law terms. Moreover, as the convolving function in the spatial domain is square integrable, the same is true for its Fourier transform, so that the integral [FORMULA] exists.

Only in the special case of the autocorrelation function, where the expression for the E-dimensional Fourier transform gives for the filter function

[EQUATION]

is the situation slightly more subtle; it will be discussed separately below.

We now assume a power law power spectrum, i.e. [FORMULA]. This shape cannot be valid for all f. At low frequencies, we have to introduce a cutoff [FORMULA] where [FORMULA] turns over towards a constant, finite value [FORMULA]. The requirement of a finite value [FORMULA] is, due to the Fourier transform relation between the power spectrum and the autocorrelation function, mathematically equivalent to the requirement that [FORMULA] exists, i.e. is finite. This is always the case if [FORMULA] drops faster than [FORMULA] for large [FORMULA]. We also assume that [FORMULA] for all frequencies higher than the high frequency cutoff [FORMULA]. The high frequency cutoff ensures, for a power law index shallower than [FORMULA], a finite total "energy", i.e. a finite variance; in other words, it guarantees that [FORMULA] exists and is finite. It is irrelevant for a steeper power spectrum.

Similar to the definition used by Schieder (in prep.), we thus assume a power spectrum

[EQUATION]

This definition formally includes the case [FORMULA] if we additionally set [FORMULA] for [FORMULA]. With the normalization condition [FORMULA], a straightforward calculation shows that

[EQUATION]

We can then calculate the expression for [FORMULA] and obtain, after rearranging terms appropriately

[EQUATION]

where

[EQUATION]

Here we have introduced the abbreviation

[EQUATION]

Due to the properties of the filter function [FORMULA] discussed above, these integrals exist and have finite values for any finite value of z.

We now consider the limit of a large high frequency cutoff and a small low frequency cutoff, i.e. [FORMULA]. We can then write

[EQUATION]

Only certain terms survive in leading order in the expression for [FORMULA]:

[EQUATION]

Considering now values of [FORMULA] well in between the low and high frequency cutoff, i.e. [FORMULA], we have to worry about the behavior of [FORMULA] for large and small z. As discussed above, [FORMULA] approaches a finite value for large z: [FORMULA]. The same is true for [FORMULA] as the additional factor [FORMULA] in the integrand can only improve the convergence of the already integrable function [FORMULA] at large x.

Only in the special case that [FORMULA] is the autocorrelation function, i.e. [FORMULA], is the behavior at large z different. The integral can be evaluated analytically and gives

[EQUATION]

In this special case, [FORMULA] thus does not approach a definite value for large z, but keeps oscillating with decreasing amplitude. In the 1-dimensional case, this is exactly the [FORMULA]-function with its corresponding behavior, approaching the [FORMULA]-function in the limit of large high frequency cutoff.

At the low frequency cutoff, [FORMULA], we can use the series expansion of [FORMULA] to obtain

[EQUATION]

With [FORMULA] being the first nonzero coefficient in the series expansion of [FORMULA], i.e. [FORMULA], we then get, to leading order in [FORMULA],

[EQUATION]

Thus, [FORMULA] basically goes [FORMULA] for [FORMULA], and turns over into [FORMULA] behavior for larger values of [FORMULA].

In detail, one notes that for [FORMULA] only the low frequency cutoff is relevant, i.e. the expressions stay valid for [FORMULA], independent of [FORMULA]. For [FORMULA], [FORMULA] shows a logarithmic divergence for small [FORMULA] unless [FORMULA]. It always shows a logarithmic divergence for small [FORMULA] when [FORMULA], i.e. at the turnover from the [FORMULA]-behavior of [FORMULA] for [FORMULA] to the [FORMULA]-behavior at [FORMULA].

From this general expression we can derive the special cases given in Appendices A and B. Let us consider as an example the 1-dimensional autocorrelation function. With [FORMULA] and [FORMULA] we have [FORMULA], [FORMULA], i.e. [FORMULA] in the above definition. As the reader can verify, this immediately leads to the approximation for [FORMULA] given in Appendix A. The integrals [FORMULA] can in this case be calculated in closed form:

[EQUATION]

where [FORMULA] is a Confluent Hypergeometric Function (Kummer's Function), and [FORMULA] is the nth-order Exponential Integral (see Abramowitz & Stegun 1972). The expression in Appendix A then results from the asymptotic forms of [FORMULA] and [FORMULA] for large u, and their series expansions for [FORMULA], after a straightforward but tedious calculation. The calculation for the standard Allan variance or for the 1-dimensional [FORMULA]-variance proceeds very similarly. This is also the case for the corresponding 3-dimensional expressions. In the 2-dimensional case, the corresponding integrals involving integer-order Bessel functions do not allow a straightforward manipulation into a closed form.
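
Both special functions entering this closed form are available numerically, e.g. in scipy.special; the argument values in the following snippet are arbitrary and purely illustrative.

```python
from scipy.special import hyp1f1, expn

# Kummer's confluent hypergeometric function M(a, b, z) and the nth-order
# exponential integral E_n(x), as tabulated in Abramowitz & Stegun (1972).
print(hyp1f1(1.0, 2.0, 0.5))   # M(1, 2, 0.5) = (e^0.5 - 1)/0.5 ~ 1.2974
print(expn(2, 1.0))            # E_2(1.0) ~ 0.1485
```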
