Astron. Astrophys. 336, 697-720 (1998)

Appendix A: statistical properties of 1-dimensional random functions: average, variance, autocorrelation function, power spectrum, Allan variance and Δ-variance

We briefly summarize the definitions of, and the basic relations between, the quantities commonly used to describe the statistical properties of random, or noise, functions. We closely follow the notation used in standard textbooks on the topic, e.g. Davenport & Root (1987) or Bracewell (1986). We also introduce a new quantity, the Δ-variance, which turns out to be very useful in characterizing the drift behavior and the power spectrum, also in higher dimensions.

We consider a 1-dimensional, real-valued random function $s(t)$, e.g. a time series such as the output voltage of a detector device, that varies randomly but has a well defined average, variance and other statistical properties. Its average over a time interval $T$ centered at time $t$ is

$$ s_T(t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} s(t')\,dt' . $$

Its mean value is then $\langle s \rangle = \lim_{T\to\infty} s_T(t)$ and is assumed to be independent of $t$. To be more precise, the time average equals the statistical (ensemble) average provided that $s(t)$ is a stationary random function with a finite time-correlation scale. In the following we assume $\langle s \rangle = 0$. This is no essential restriction, as any random function with nonzero average can be changed into one with zero average by subtracting the constant average value.

The autocorrelation function is defined as

$$ A(\Delta t) = \langle s(t)\, s(t+\Delta t) \rangle , $$

and the variance is $\sigma_s^2 = A(0)$. The Fourier transform of $s(t)$ is defined as $\tilde s(f) = \int s(t)\, e^{-2\pi i f t}\, dt$. Due to $s(t)$ being real valued, the Fourier transform is hermitian: $\tilde s(-f) = \tilde s^{*}(f)$. For the convolution of two functions we use an explicit notation that keeps track of the integration variable. Though somewhat more cumbersome than the common shorthand notation, it avoids inconsistencies inherent in the latter, which arise when scaled or shifted arguments appear inside the convolution. The above definition of the Fourier transform is not fully sufficient, as it is not clear that the integral exists.
In order to ensure convergence of the Fourier integrals of random functions, one rather defines a truncated function, obtained by multiplying $s(t)$ with the rectangle function of width $T$. It approaches $s(t)$ in the limit of infinite $T$. By the convolution theorem, its Fourier transform is the Fourier transform of $s(t)$ convolved with the Fourier transform of the rectangle function, the sinc-function $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$. This is, of course, only valid if the Fourier transform of $s(t)$ exists. The Fourier transform of the random function can then be defined "in the limit" (Bracewell 1986) as the transform of the truncated function for large $T$. The sinc-function approaches the impulse ($\delta$-) function in the limit of infinite $T$; this relation confirms the consistency of the notation in case the Fourier transform exists.

We can now define the normalized autocorrelation function of the truncated function. In the limit of infinite $T$, its denominator approaches the variance $\sigma_s^2$ and its numerator approaches the autocorrelation function $A(\Delta t)$. Due to $s(t)$ being real valued, using the convolution theorem and Rayleigh's theorem, the Fourier transform of the normalized autocorrelation function can be expressed through the squared modulus of the Fourier transform of the truncated function, so that we can define the power spectrum as the Fourier transform of the autocorrelation function,

$$ P(f) = \int A(\Delta t)\, e^{-2\pi i f \Delta t}\, d(\Delta t) . $$

As $s(t)$ is real valued, the power spectrum is an even function, $P(-f) = P(f)$. It is common practice to define the power spectrum for positive frequencies only, and to fold the power at negative frequencies into the positive frequency domain. We can then write the autocorrelation function as the back transform of the power spectrum, and for the variance we get $\sigma_s^2 = A(0) = \int P(f)\, df$.

In order to describe the drift behavior of the random function one often uses the two point correlation function

$$ D(\Delta t) = \langle\, [\,s(t+\Delta t) - s(t)\,]^2 \,\rangle = 2\,[\,\sigma_s^2 - A(\Delta t)\,] . $$

Another useful quantity, now commonly called the Allan variance, was introduced by Allan (1966) in order to characterize the drift behavior. It is the variance of the difference between subsequent averages over a time interval T.
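As a numerical sketch of these definitions (our own illustration, not part of the paper; the sampling interval `dt`, the seed and all variable names are assumptions), the power spectrum of a truncated time series can be estimated from its FFT, and Rayleigh's theorem then guarantees that the variance equals the integral of the power spectrum:

```python
import numpy as np

# Estimate the power spectrum of a truncated random function via the FFT
# and verify Rayleigh's (Parseval's) theorem: variance = integral of P(f).
rng = np.random.default_rng(1)
N, dt = 4096, 0.01          # number of samples and sampling interval (assumed)
T = N * dt                  # truncation length
s = rng.standard_normal(N)  # zero-mean random function (here: white noise)

S = np.fft.fft(s) * dt      # Fourier transform of the truncated function
P = np.abs(S) ** 2 / T      # two-sided power spectrum estimate (periodogram)
df = 1.0 / T                # frequency resolution

variance = np.mean(s ** 2)
parseval = np.sum(P) * df   # integral of the power spectrum over all f
```

Since `s` is real valued, `P` is even under `f -> -f`, and `parseval` reproduces `variance` to floating-point accuracy, mirroring the relation $\sigma_s^2 = \int P(f)\,df$ above.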
With the average over time $T$ as defined above, which we can also write as the convolution of $s(t)$ with the appropriately scaled rectangle function, the difference between subsequent averages can be written as the convolution of $s(t)$ with a down-up rectangle function: the convolution of the scaled rectangle function with the odd impulse pair function. The Allan variance is the variance of these differences,

$$ \sigma_A^2(T) = \tfrac{1}{2}\, \langle\, [\, s_T(t+T) - s_T(t) \,]^2 \,\rangle , $$

where the factor $1/2$ is introduced to match the original definition. In the time domain, $\sigma_A^2(T)$ can be calculated from the autocorrelation function (Barnes et al. 1971; Schieder, in prep.). In the Fourier domain it can be expressed as

$$ \sigma_A^2(T) = \int_0^{\infty} P(f)\, \frac{2\sin^4(\pi f T)}{(\pi f T)^2}\, df , $$

where we have used the fact that the power spectrum of the convolved function is the product of the power spectra of the functions being convolved, the Fourier transform of the rectangle function being the sinc-function and that of the odd impulse pair being a sine.

The statistical properties of the random function, such as variance and Allan variance, are thus completely determined by its power spectrum $P(f)$, or, the Fourier transform thereof, its autocorrelation function $A(\Delta t)$. The Allan variance in particular is the filtered average of the power spectrum, the filter function being $2\sin^4(\pi f T)/(\pi f T)^2$. This filter function has successive maxima at the roots of the algebraic equation $\tan x = 2x$, with $x = \pi f T$. The first peak, at $f \approx 0.37/T$, is the highest, with a peak value of 1.05, independent of T. The width of this peak scales as $1/T$. Further peaks at higher frequencies are damped as $f^{-2}$. Using Rayleigh's theorem, its integral can be seen to give $1/T$. The filter function, including the additional factor $T$, thus gives another representation of the $\delta$-function in the limit of large T. In the same sense, the autocorrelation function can be regarded as the filtered average of the power spectrum with the filter function $\cos(2\pi f \Delta t)$.

As this paper largely deals with random functions with a power law power spectrum, we now summarize the results for this special case. For a 1-dimensional random function (e.g.
a time series) with a power law power spectrum, $P(f) \propto f^{-\alpha}$, valid over a range of frequencies between a properly defined low and high frequency cutoff as specified in Appendix C, analytic expressions have been derived both for the autocorrelation function and for the Allan variance (Barnes et al. 1971; Schieder, in prep.). We only quote their result here and refer to the original papers, and to our more extensive discussion in Appendix C, which also includes 2- and 3-dimensional random functions, for the details. Note that $\alpha = 0$ is the well known case of purely white noise, $\alpha = 1$ is so called flicker noise, and $\alpha = 2$ corresponds to the well studied random walk.

For the normalized autocorrelation function one obtains a closed expression involving Euler's constant $\gamma \approx 0.577$. The normalized autocorrelation function thus decreases from its value of unity at $\Delta t = 0$ in a power law fashion. For the shallow part of the range of $\alpha$ this power law behavior only holds for sufficiently large values of $\Delta t$, and an additional logarithmic term appears at the boundary values of $\alpha$ (see also Appendix C). For larger values of $\Delta t$ the normalized autocorrelation function drops independent of $\alpha$. Note that the high frequency cutoff matters only for shallow spectra. For $\alpha = 0$ and a very large high frequency cutoff the autocorrelation function approaches the $\delta$-function and we get the well known result for white noise.

The two point correlation function correspondingly shows a power law behavior, $D(\Delta t) \propto \Delta t^{\,\alpha-1}$, for small $\Delta t$ only as long as $\alpha < 3$. For $\alpha > 3$, one gets $D(\Delta t) \propto \Delta t^2$ at small lags, turning over towards the constant $2\sigma_s^2$ at large lags. A similar expression gives the Allan variance. The behavior of the Allan variance is thus very similar to that of the two point correlation function, varying as $\sigma_A^2(T) \propto T^{\,\alpha-1}$ for $1 < \alpha < 3$, and as $T^{-1}$ for $\alpha < 1$. However, due to the fact that the integral over the filter function exists and is finite, the Allan variance has a regular behavior at $\alpha = 0$ even for a large high frequency cutoff, whereas the autocorrelation function has a $\delta$-function irregularity there. Similarly, the logarithmic divergence at $\alpha = 1$ has been removed, due to the filter function starting $\propto f^2$ for small f (see Appendix C). Note that the validity of the expression above is limited to $\alpha < 3$. For very large integration time T the Allan variance always drops, i.e.
like in the case of white noise, $\sigma_A^2(T) \propto 1/T$, independent of the spectral index $\alpha$ of the power spectrum, and in fact independent of the shape of the power spectrum in general, as long as the power spectrum approaches a finite positive value $P(0)$ in the low frequency limit. This behavior results from the $\delta$-function like behavior of the filter function in the limit of large T, so that the Allan variance becomes proportional to $P(0)/T$ (more precisely, $P(0)$ should be replaced by its limiting value at the filter peak). For a difference measurement with very long integration time T the noise level reached is thus always larger than the white noise value, as can be seen from the fact that the power at low frequencies exceeds the power at high frequencies as long as $\alpha > 0$. These conditions are met with the power spectrum and its cutoffs as defined in Appendix C.

The definition of the Allan variance via the average difference as above is, of course, ad hoc. Other definitions are possible and equally adequate. One could, for example, use a "before and after" difference, respectively a double difference, with a down-up-down rectangle function as the convolving function. From its definition it is obvious that the power spectrum of the double difference is the power spectrum of $s(t)$ multiplied by the squared modulus of the Fourier transform of the down-up-down rectangle function. We define the Δ-variance, $\sigma_\Delta^2(T)$, in analogy to the Allan variance as the variance of this double difference. Expressed via the power spectrum it is then obviously given by the correspondingly filtered integral over $P(f)$. For a power law power spectrum one obtains (see Appendix C) again a power law in T, where the coefficients can be determined by comparison with the highest order terms in the corresponding expression in Appendix C. Note that the Δ-variance has no special behavior at $\alpha = 3$, in contrast to the standard Allan variance. It rather shows a logarithmic divergence at $\alpha = 5$ and turns over into an $\alpha$-independent power law slope, $\sigma_\Delta^2(T) \propto T^4$, beyond. As explained in Appendix C, this is due to the fact that the double difference filter function starts $\propto f^4$ for small f.
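The two constructions above can be sketched numerically (our own illustration; the sample length, seed and discrete kernel normalizations are assumptions, not taken from the paper). For white noise, both the Allan variance and the Δ-variance must drop as $1/T$, with prefactors that follow from the discrete kernel weights:

```python
import numpy as np

# Allan variance: half the mean squared difference of subsequent n-sample
# averages. Delta-variance: variance of the down-up-down double difference,
# i.e. weights -1/2, +1, -1/2 on three adjacent n-sample averages.
rng = np.random.default_rng(2)
x = rng.standard_normal(400_000)   # unit-variance white noise

def allan_variance(x, n):
    m = (len(x) // n) * n
    means = x[:m].reshape(-1, n).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def delta_variance(x, n):
    # down-up-down kernel of total width 3n (normalization is our choice)
    k = np.concatenate([-0.5 * np.ones(n), np.ones(n), -0.5 * np.ones(n)]) / n
    d = np.convolve(x, k, mode="valid")
    return np.mean(d ** 2)

# For white noise an n-sample average has variance 1/n, so
# allan_variance ~ 1/n and delta_variance ~ 1.5/n (sum of squared weights).
```

Doubling `n` halves both quantities, which is the $T^{-1}$ white-noise behavior ($\alpha = 0$, slope $\alpha - 1 = -1$) quoted above.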
The down-up-down rectangle function can also be written as the difference between a rectangle function of width T and a negative rectangle function three times as wide, both normalized to unit integral. This form readily lends itself to further generalization and can easily be extended to define the Δ-variance also in higher dimensions (see Appendix B).

There is no need to define the average with equal weighting, as we have done above. Of particular interest might be a Gaussian weighting function, thus defining a Gaussian weighted average. The corresponding Gaussian down-up-down function, replacing the down-up-down rectangle function, is then the difference between a Gaussian of width T and one three times wider, and the Gaussian weighted average double difference can be written as the convolution of $s(t)$ with this function. As the Gaussian weighting function is essentially its own Fourier transform, the Fourier transform of the Gaussian down-up-down function is again a difference of two Gaussians. The variance of the Gaussian weighted average double difference, i.e. the Gaussian Δ-variance, is then given by the correspondingly filtered integral over the power spectrum. According to the results of Appendix C it has, for a power law power spectrum, the same behavior with delay time T as the Δ-variance.

Appendix B: the definition of the Δ-variance as the analogue of the Allan variance in higher dimensions

The definition of the variance of the double difference as given in the preceding appendix can be straightforwardly extended to higher dimensions. Before we do so, we briefly repeat the definition of the statistical properties of random functions, such as power spectrum and autocorrelation function, in higher dimensions.

Consider a scalar function $s(\mathbf r)$ in E-dimensional space. Its average over an E-dimensional sphere with diameter D, centered at $\mathbf r$, is defined as the convolution with the normalized E-dimensional ball function, i.e. the function that is constant inside the sphere and zero outside, normalized through the volume of the E-dimensional unit sphere. Note that this definition is fully consistent with the definition for the 1-dimensional case given in Appendix A. Its mean value is assumed to be independent of position (homogeneity) and is assumed to vanish: $\langle s \rangle = 0$. The autocorrelation function and variance are defined in analogy to the 1-dimensional case.
The E-dimensional Fourier transform and the corresponding back transform are defined in analogy to the 1-dimensional case. As $s(\mathbf r)$ is real valued, its Fourier transform is hermitian. To ensure convergence of the Fourier integrals for a random function, we again define a truncated function. In complete analogy to the 1-dimensional case we can then define the Fourier transform, the normalized autocorrelation function and the power spectrum in the limit of an infinite truncation volume.

Due to $s(\mathbf r)$ being real valued, the power spectrum is even. If we assume in addition that the statistical properties of $s(\mathbf r)$ are isotropic, the autocorrelation function will depend on the magnitude of the lag only, i.e. $A(\Delta \mathbf r) = A(\Delta r)$. The power spectrum, as its Fourier transform, is then also dependent only on $f = |\mathbf f|$, as the angular dependence can be integrated out. The relation between the two radial functions is then given by a Hankel-type transform (Bracewell 1986, p. 254), with the identical back transform obtained by exchanging the radial lag with the radial frequency f. Here the surface of the unit sphere in E dimensions enters. The kernel in the integral reduces to a cosine in the 1-dimensional case. In 2 dimensions it involves the Bessel function $J_0$, giving the Hankel transform; in 3 dimensions it reduces to a sinc-type kernel.

We can now define the analogue of the folded, positive-frequency power spectrum in the E-dimensional case by incorporating the E-dimensional solid angle into the definition of the power spectrum, together with the corresponding back transform.

Before we proceed, we shortly discuss some properties of the normalized E-dimensional ball function and its Fourier transform. Straightforward algebra gives for the Fourier transform of the normalized E-dimensional ball function an expression involving the Bessel function of order $E/2$. In the limit of infinite D, the normalized ball function becomes constant and equals unity everywhere, and its Fourier transform is the E-dimensional $\delta$-function. We thus obtain the useful result that the appropriately scaled, squared modulus of the ball function's Fourier transform also approaches the $\delta$-function for large D.

For a power law power spectrum, $P(f) \propto f^{-\alpha}$, we derive in Appendix C the following behavior of the normalized autocorrelation function: in analogy to the 1-dimensional case discussed in Appendix A, it drops from its value of unity at $\Delta r = 0$ in a power law fashion, i.e. $1 - A(\Delta r)/\sigma_s^2 \propto \Delta r^{\,\alpha-E}$, as long as $E < \alpha < E+2$.
For the range of shallower spectra this is only true once $\Delta r$ gets sufficiently large. There, the normalized autocorrelation function is thus approximately independent of $\Delta r$, except for the additional logarithmic term also present at the boundary of the range. For $\alpha > E+2$ it turns over into a $\Delta r^2$ decrease, i.e. an $\alpha$-independent behavior. For white noise we obtain, similar to the 1-dimensional case, the $\delta$-function behavior in the limit of a large high frequency cutoff.

Correspondingly, the E-dimensional two point correlation function increases in a power law fashion, $D(\Delta r) \propto \Delta r^{\,\alpha-E}$, for small $\Delta r$ only as long as $\alpha < E+2$. In the range $\alpha > E+2$ we have $D(\Delta r) \propto \Delta r^2$, turning over into a $\Delta r$-independent behavior for large $\Delta r$, with an additional logarithmic term at $\alpha = E+2$. Note that the simple dimensional argument given in Voss (1988), pp. 91-92, to derive the relation between the power spectrum power law index and the drift behavior power law index gives neither the turnover to the $\Delta r^2$-behavior for steep spectra nor the logarithmic divergences at the boundary values of $\alpha$.

We can now define the E-dimensional down-up-down ball function as the analogue of the down-up-down rectangle function used in the 1-dimensional case (note that down-up-down describes the variation along the diameter of the spherically symmetric function): a positive ball of diameter D surrounded by a negative shell extending out to diameter 3D. The E-dimensional difference of averages at average distance D is then obtained by convolution of $s(\mathbf r)$ with this function, which is the analogue of the 1-dimensional double difference defined in Appendix A. The corresponding analogue of the Δ-variance in E dimensions is then the variance of this difference, where we have introduced a normalization factor as the E-dimensional generalization of the historical factor $1/2$ in the 1-dimensional Allan variance. With the Fourier transform of the normalized E-dimensional ball function given above, the filter function in the Δ-variance can be written in terms of Bessel functions. Series expansion of the Bessel functions shows that the filter function starts $\propto f^4$ at small f. As in the 1-dimensional case, it has a first peak at frequencies near $1/D$, which narrows down in width as D grows larger.
Rayleigh's theorem shows that its E-dimensional integral is finite. For large D, the filter function thus approaches a representation of the E-dimensional $\delta$-function. As in the 1-dimensional case, this implies that for large D the Δ-variance goes as $D^{-E}$, similar to the white noise behavior (see below).

For a power law power spectrum, $P(f) \propto f^{-\alpha}$, one obtains (see Appendix C) $\sigma_\Delta^2(D) \propto D^{\,\alpha-E}$. Note that the Δ-variance varies as $D^{\,\alpha-E}$ for the full range $0 \le \alpha < E+4$. It shows a logarithmic divergence at $\alpha = E+4$ and turns over into an $\alpha$-independent power law slope, $\sigma_\Delta^2(D) \propto D^4$, beyond. As explained in Appendix C, this is due to the fact that the corresponding filter function starts $\propto f^4$ for small f. For white noise, $\alpha = 0$, the Δ-variance drops inversely proportional to the averaging volume, i.e. $\sigma_\Delta^2(D) \propto D^{-E}$, consistent with Gaussian statistics. The factor in front corresponds to the volume-averaged squared weights of the inner ball and the outer shell, and the ad hoc normalization factor. Comparison with the general behavior for large D (see above) shows that the Δ-variance at large drift scales, although it drops as in the white noise case, is always larger than the white noise value; this is due to the fact that the power at low frequencies exceeds the power at high frequencies as long as $\alpha > 0$.

Appendix C: relation between the power law index of the power spectrum and the drift behavior

In this Appendix we derive the general connection between the power law index of the power spectrum and the drift behavior, as e.g. given by the autocorrelation function, the Allan variance for 1-dimensional time series, or, in general, the Δ-variance of an E-dimensional random function whose statistical properties are isotropic and homogeneous, i.e. spherically symmetric. As shown in Appendices A and B, these statistical quantities are in general defined as integrals over the power spectrum weighted with some scaled filter function.
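Such a filtered integral can be checked directly in the 1-dimensional case (our own sketch, not from the paper; the discretization of the filter and all names are assumptions): the time-domain Allan variance should agree with the periodogram integrated against the filter function $2\sin^4(\pi f T)/(\pi f T)^2$ quoted in Appendix A:

```python
import numpy as np

# Compare the Allan variance computed in the time domain with the
# frequency-domain filtered integral over the power spectrum.
rng = np.random.default_rng(6)
N = 2 ** 16
s = rng.standard_normal(N)               # white noise, unit sampling interval

n = 16                                   # averaging length T = n samples
means = s[: (N // n) * n].reshape(-1, n).mean(axis=1)
allan_time = 0.5 * np.mean(np.diff(means) ** 2)

f = np.fft.rfftfreq(N, d=1.0)
P = np.abs(np.fft.rfft(s)) ** 2 / N      # two-sided periodogram (folded below)
xarg = np.pi * f[1:] * n
filt = 2.0 * np.sin(xarg) ** 4 / xarg ** 2   # Allan filter function
allan_freq = 2.0 * np.sum(P[1:] * filt) / N  # integral over positive f, df = 1/N
```

Both numbers estimate the same quantity, $\sigma_A^2(T) \approx 1/n$ for unit-variance white noise; small residual differences come from the discreteness of the sampled filter.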
We thus consider in the general, E-dimensional case a quantity defined as the integral over the power spectrum weighted with a scaled filter function. As the filter function is the squared modulus of the Fourier transform of the corresponding convolving function in the spatial domain (the function defining the appropriate differences of averages characterizing the statistical quantity in question), we know that it is even. Its series expansion for small frequencies thus contains only even power law terms. Moreover, as the convolving function in the spatial domain is square integrable, the same is true for its Fourier transform, so that the integral over the filter function exists. Only in the special case of the autocorrelation function, where the expression for the E-dimensional Fourier transform gives an oscillating kernel for the filter function, is the situation slightly more subtle; this will be discussed separately below.

We now assume a power law power spectrum, $P(f) \propto f^{-\alpha}$. This shape cannot be valid for all f. At low frequencies, we have to introduce a cutoff $f_{\rm low}$ below which $P(f)$ turns over towards a constant, finite value. The requirement of a finite value is, due to the Fourier transform relation between the power spectrum and the autocorrelation function, mathematically equivalent to the requirement that the integral over the autocorrelation function exists, i.e. is finite. This is always the case if the autocorrelation function drops faster than $\Delta r^{-E}$ at large lags. We also assume that $P(f) = 0$ for all frequencies higher than the high frequency cutoff $f_{\rm high}$. The high frequency cutoff ensures, for a power law index shallower than $\alpha = E$, a finite total "energy", respectively a finite variance, i.e. it guarantees that $\sigma_s^2$ exists and is finite. It is irrelevant for a steeper power spectrum. Similar to the definition used by Schieder (in prep.), we thus assume a power spectrum that is constant below $f_{\rm low}$, follows the power law $f^{-\alpha}$ between $f_{\rm low}$ and $f_{\rm high}$, and vanishes above $f_{\rm high}$. This definition formally includes the case of pure white noise if we additionally set $\alpha = 0$.
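A random function with exactly this cut-off power-law spectrum can be synthesized by Fourier methods, which allows a numerical check of the drift behavior derived in Appendix A (our own sketch; the cutoffs, the choice $\alpha = 2$, the seed and the tolerance are assumptions):

```python
import numpy as np

# Synthesize 1-D noise with a spectrum that is flat below f_low, follows
# f**(-alpha) between f_low and f_high, and is zero above f_high; then
# check the Allan-variance scaling sigma_A^2(T) ~ T**(alpha - 1).
rng = np.random.default_rng(5)
N, alpha = 2 ** 18, 2.0                   # alpha = 2: random walk
f = np.fft.rfftfreq(N, d=1.0)
f_low, f_high = 4.0 / N, 0.25

P = np.zeros_like(f)
P[f <= f_low] = f_low ** (-alpha)         # constant, finite value below f_low
band = (f > f_low) & (f <= f_high)
P[band] = f[band] ** (-alpha)             # power law range; zero above f_high

amp = np.sqrt(P) * (rng.standard_normal(f.size)
                    + 1j * rng.standard_normal(f.size))
amp[0] = 0.0                              # enforce zero mean
s = np.fft.irfft(amp, n=N)

def allan_variance(x, n):
    means = x[: (len(x) // n) * n].reshape(-1, n).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

# Logarithmic slope between T = 8 and T = 128; expect ~ alpha - 1 = 1.
slope = np.log(allan_variance(s, 128) / allan_variance(s, 8)) / np.log(128 / 8)
```

For the random-walk case the measured slope comes out close to $\alpha - 1 = 1$, as long as both averaging times lie well between the two cutoffs.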
With the normalization condition on the power spectrum, a straightforward calculation fixes its amplitude. We can then calculate the expression for the filtered integral and obtain, after rearranging terms appropriately, contributions from the constant part below the low frequency cutoff and from the power law range, expressed through an abbreviation for the indefinite integrals of the filter function against the power law. Due to the properties of the filter function discussed above, these integrals exist and have finite values for any finite value of z.

We now consider the limit of a large high frequency cutoff and a small low frequency cutoff. Only certain terms then survive in leading order. Considering now scales well in between the low and high frequency cutoff, we have to worry about the behavior of these integrals for large and small z. As discussed above, the integral over the filter function approaches a finite value for large z. The same is true with the additional power law factor in the integrand, as it can only improve the convergence of the already integrable function at large x. Only in the special case that the quantity considered is the autocorrelation function is the behavior at large z different. The integral can then be evaluated analytically; it does not approach a definite value for large z, but keeps oscillating with decreasing amplitude. In the 1-dimensional case, this is exactly the sinc-function with its corresponding behavior, approaching the $\delta$-function in the limit of a large high frequency cutoff.

At the low frequency cutoff we can use the series expansion of the filter function. With the first non-zero coefficient of this series, we then get the leading order behavior: the quantity basically varies as the power law $T^{\,\alpha-E}$ for $\alpha$ below the critical index set by the leading term of the filter function, and turns over into an $\alpha$-independent behavior for larger values of $\alpha$. In detail, one notes that for steep spectra only the low frequency cutoff is relevant, i.e. the expressions stay valid independent of the high frequency cutoff. For shallower spectra, the quantity shows a logarithmic divergence for a small low frequency cutoff unless the corresponding expansion coefficient vanishes. It always shows a logarithmic divergence at the turnover from the power law behavior to the $\alpha$-independent behavior. From this general expression we can derive the special cases given in Appendices A and B.
Let us consider as an example the 1-dimensional autocorrelation function. With $E = 1$ and the cosine kernel, the filter function is the one given above. As the reader can verify, this immediately leads to the approximation for the autocorrelation function as given in Appendix A. The integrals can in this case be calculated in closed form in terms of the Confluent Hypergeometric Function (Kummer's function) $M(a,b,z)$ and the nth-order Exponential Integral $E_n(z)$ (see Abramowitz & Stegun 1972). The expression in Appendix A then results from the asymptotic forms of $M$ and $E_n$ for large argument, and their series expansions for small argument, after a straightforward but tedious calculation.

The calculation for the standard Allan variance, or for the 1-dimensional Δ-variance, proceeds very similarly. This is also the case for the corresponding 3-dimensional expressions. In the 2-dimensional case, the corresponding integrals involving integer order Bessel functions do not allow a straightforward manipulation into a closed form.

© European Southern Observatory (ESO) 1998

Online publication: July 20, 1998