

Astron. Astrophys. 342, 363-377 (1999)


2. The ISOCAM data processing

2.1. Reading the raw data

Raw telemetry from ISO is converted into different formats and delivered by ESA to the community. We chose to start from the SPD FITS files, which means that the reset images have already been subtracted from the end-of-integration images. The CISP (almost raw) data are read into a dataset which contains, along with the data cube (of size 32 by 32 by the total number of readouts), a header describing the context of the observations, and various trend parameter arrays (CAM wheel positions, various ISO time definitions) which may or may not be synchronised with the images. An important trend parameter set is the pointing information from the ISO satellite, which is read from the appropriate IIPH (ISO Instrument Pointing History) FITS file and appended to the raw dataset. The equatorial right ascension (RA) and declination (Dec) coordinates (in the J2000 system) pointed to by the centre of the camera suffer from some noise (not corresponding to a real ISO jitter). They are smoothed with a running kernel with a temporal FWHM of 2 seconds, which gives a relative pointing accuracy better than one arcsecond (Fig. 1). The pointing data are synchronised to the dataset cube of images by using the UTK time that is common to the CISP and IIPH files. Data are usually taken in a raster mode, where a regular grid of pointings on the sky is visited successively. Care should be taken not to use the on-target flag delivered in the CISP file: this flag is set to one when the acquired pointing is within 10 arcseconds of the required raster pointing, which is obviously too loose if one uses the 6 or, even worse, the 3 arcsecond lens. We adopt instead our own criterion of a 2 arcsecond pointing-tolerance radius.

[FIGURE] Fig. 1. ISO pointing history taken from IIPH22701702. Consecutive points are separated in time by half a second. The positions in RA and Dec (J2000) are smoothed with a 2 second temporal kernel. Readouts during any part of which ISO was moving must be masked out.
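For illustration, a minimal sketch of this temporal smoothing in Python with NumPy (the Gaussian kernel shape, the function name, and the 0.5 second sampling step are our assumptions; only the 2 second FWHM comes from the text):

import numpy as np

def smooth_pointing(ra, dec, dt=0.5, fwhm=2.0):
    """Smooth the noisy RA/Dec pointing track with a running Gaussian kernel.

    ra, dec : 1-D arrays of IIPH pointing samples (degrees), one every dt seconds.
    dt      : sampling interval in seconds (0.5 s for the data of Fig. 1).
    fwhm    : temporal FWHM of the kernel in seconds.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / dt  # kernel sigma in samples
    half = int(np.ceil(4.0 * sigma))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # mode="same" keeps one smoothed position per original readout time
    return (np.convolve(ra, kernel, mode="same"),
            np.convolve(dec, kernel, mode="same"))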

At this stage the data, in the ISOCAM internal units of ADU (Analog-to-Digital Units), are converted to ADUG units by simply dividing by the gain (equal to [FORMULA]). A library dark image is then subtracted; its accuracy is not relevant in the following. In the next three sub-sections we consider each pixel as an individual detector and analyse its temporal behaviour without regard to its neighbours. A mask cube of the same size as the data cube is used in parallel to flag the data which are affected by various damaging effects.

2.2. Removing particle hits

2.2.1. Fast glitch correction

At first glance (Fig. 2), any readout obtained with ISOCAM that does not contain strong sources (fluxes typically larger than about 10 mJy in the broad filters) clearly shows the flat-field response of the pixels to the zodiacal background. Ultimately, it is the accuracy to which this flat field is known that allows the identification of faint sources (see Sect. 2.4).

[FIGURE] Fig. 2. A sample of successive elementary images with the ISO 32 by 32 pixel LW camera taken from the CISP22701702 data set, numbers 100 to 148 (starting at 0). After each image, a mask deduced from the short deglitching algorithm shows the numerous impacts of cosmic rays. (The camera configuration is the LW3 filter, the LGe6 lens, and 36 camtu (= 5 seconds) of elementary integration time.)

Hits by cosmic-ray particles produce detectable signals in some pixels, often aligned in strings. Most of the affected pixels recover by the next or second-next readout. These hits are mainly due to primary and secondary electron energy deposition onto the array.

The affected pixels are found by an algorithm working on the temporal behaviour of each pixel. Readouts of one pixel which deviate from the running median (over 14 readouts) by more than a threshold value, and which recover towards the normal level by at least 10 percent of the maximum step after one or two readouts, are simply masked for two readouts and excluded from further signal extraction. The threshold is set to a number (typically 3) times the standard deviation of the pixel signal in a running window (14 readouts), where the most deviant values have been excluded from the rms computation. This algorithm was tested in various cases, and in particular against false glitch detections around the typical signal of a moderately strong source. Fig. 3 shows an example of the temporal behaviour of one CAM pixel as a function of readout number.

[FIGURE] Fig. 3a and b. Temporal evolution of the raw signal (in ADUG, i.e. Analog-to-Digital Units divided by the gain) from camera pixel (15, 15) (starting at (0, 0)) taken in CISP22701702 (the abscissa is the readout number, starting at 0 for the first readout of the dataset). Upper panel - the mask of readouts affected by fast glitches, as deduced by the algorithm described in the text (dotted vertical lines). Lower panel - the raw data.
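A minimal sketch of this per-pixel deglitching (Python/NumPy). The window length, threshold factor, and 10 percent recovery criterion follow the text; the way deviant values are clipped from the rms computation and all names are our assumptions, not the actual pipeline code:

import numpy as np

def fast_deglitch(d, window=14, nsigma=3.0, recovery=0.10):
    """Mask the readouts of one pixel that are hit by fast glitches.

    d : 1-D array, temporal signal of a single pixel (ADUG).
    Returns a boolean mask, True where a readout is rejected (the hit
    readout and the following one).
    """
    n = len(d)
    mask = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2)
        win = d[lo:hi]
        med = np.median(win)                  # running median
        dev = np.abs(win - med)
        keep = dev <= np.percentile(dev, 80)  # exclude the most deviant values
        sigma = np.std(win[keep])             # clipped running rms
        step = d[i] - med
        if abs(step) > nsigma * sigma:
            # flag only if the pixel recovers towards the normal level by
            # at least 10% of the maximum step after one or two readouts
            future = d[i + 1:i + 3]
            if len(future) and np.any(np.abs(future - med) < (1.0 - recovery) * abs(step)):
                mask[i:i + 2] = True
    return mask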

The number of glitches, or more precisely the number of readouts which are affected by the masking process, is typically 9 per second over the entire array of 992 live pixels: e.g. during an integration time of [FORMULA] = 36 camtu = 5 seconds, at any time 45 pixels cannot be used for measurement. This number varies by at least 15 percent, apparently depending on the orbital position of the satellite. Accumulated glitches cause a noticeable increase in the noise or an unreliable measurement. We thus decided to also mask an isolated "good" readout value if it lies between two successive glitches. The energy deposition has a continuous distribution that extends down to the intrinsic noise level of the camera. For the method presented here, any undetected faint glitch increases the statistical noise and slightly modifies the signal.
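The rule for isolated "good" readouts can be written compactly; a sketch, assuming a boolean mask array (True = masked):

import numpy as np

def mask_isolated_good(mask):
    """Additionally mask an isolated good readout lying between two glitches."""
    out = mask.copy()
    # a readout is masked if both of its immediate neighbours are masked
    out[1:-1] |= mask[:-2] & ~mask[1:-1] & mask[2:]
    return out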

2.2.2. Slow glitch correction

These are the most difficult to deal with. They are the main limitation in detecting weak point-like or extended sources and are thought to be due to ion impacts. These impacts affect the response of the hit pixels with a long memory tail. Slow glitches are of three types: 1) a positive decay with a short time constant (5-20 seconds), 2) a negative decay with a long time constant (up to 300 seconds), and 3) a combination of both. It is possible that positive-decay glitches are due to an ionisation or energy deposition which does not saturate the detector. Negative ones may have started with a saturation (which is not apparent in the readout value) and hence cause a memory effect: an upward transient of a type not very different from the usual CAM transient starting from a low flux.

Fig. 4a shows the effect of slow glitches. In Fig. 4b the corrected signal is shown along with the mask used. To detect slow glitches, we correlate a running template of a typical glitch along the temporal signal $d(t)$ of a given pixel. A maximum in the correlation indicates a potential glitch at, say, time $t_0$. This is then analysed with a least-squares method in order to find the best decay time constant: one minimises the $\chi^2$ quantity defined by

$$\chi^2 = \sum_i w_i \left[ d(t_i) - A - B\,t_i - G\,H(t_i - t_0)\,e^{-(t_i - t_0)/\tau} \right]^2,$$

where $d(t_i)$ is the raw temporal data signal of a pixel, $w_i$ the weight (0 or 1, defined by the previous masking of fast glitches), $A + B\,t_i + G\,H(t_i - t_0)\,e^{-(t_i - t_0)/\tau}$ the glitch model to fit, and $t_0$ is given. Hence, the variables $A$, $B$, $G$, and $\tau$ are found ($H$ is the Heaviside step function).

[FIGURE] Fig. 4a and b. Temporal evolution of the raw signal (in ADUG) from camera pixel (16, 15) (starting at (0, 0)) taken in CISP22701702 (the abscissa is the readout number, starting at 0 for the first readout of the dataset). a Raw signal (except for the removal of the first overall camera transient, which can be seen in the previous figure). b The signal after the removal of slow glitches, and the adapted mask (40 denotes a fast glitch, see Sect. 2.2.1, and 80 a slow glitch, see Sect. 2.2.2): a non-vanishing mask corresponds to a value which will not be used for further processing; masked values are not shown. The dip near readout 1250 is not corrected at this stage.

If the fit is satisfactory, we subtract the exponential part of the fit up to the end of the temporal signal. To assess whether the fit is satisfactory or not, we also calculate the least-squares $\chi^2_{\rm lin}$ corresponding to a linear baseline without the exponential part. We found that a potential glitch can be taken as valid if [FORMULA]. Most of the time, the glitch beginning at time $t_0$ corresponds to a previously detected fast glitch.

Type 1 glitches can be removed with the previous method but, sometimes, the same method also removes real source signals (for example, the downward transient seen when a pixel leaves a source). We thus leave Type 1 glitches to be removed later in the method. Type 3 glitches are not dealt with at the moment. The running template used to detect a glitch and find its starting time $t_0$ is a simple exponential with successive time constants of 15, 30, and 60 readouts for negative (Type 2) glitches. After a glitch is found at a significant level, and the exponential tail is corrected everywhere after $t_0$, we mask (i.e. we will not further use) the readouts between $t_0$ and the time at which the amplitude of the exponential correction falls below twice the pixel noise per readout. An example can be found in Fig. 4a and b.
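Since the model is linear in $A$, $B$, and $G$ once the time constant is fixed, the fit can be done by scanning a small grid of time constants and solving a weighted linear least-squares problem for each. A sketch under that assumption (Python/NumPy; the grid values echo the 15, 30, and 60 readout templates above, everything else is illustrative):

import numpy as np

def fit_slow_glitch(t, d, w, i0, taus=(15.0, 30.0, 60.0)):
    """Fit d(t) ~ A + B*t + G*H(t - t0)*exp(-(t - t0)/tau), with t0 = t[i0].

    t, d : readout times and raw pixel signal (ADUG);
    w    : 0/1 weights from the fast-glitch mask.
    Returns (chi2, A, B, G, tau) for the best tau, and the chi2 of a
    purely linear baseline for the validity test described in the text.
    """
    t0 = t[i0]
    sw = np.sqrt(w)
    best = None
    for tau in taus:
        decay = np.where(t >= t0, np.exp(-np.clip(t - t0, 0.0, None) / tau), 0.0)
        M = np.column_stack([np.ones_like(t), t, decay])  # design matrix
        coef, *_ = np.linalg.lstsq(M * sw[:, None], d * sw, rcond=None)
        chi2 = np.sum(w * (d - M @ coef) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, *coef, tau)
    # reference fit: linear baseline only, without the exponential part
    Mlin = np.column_stack([np.ones_like(t), t])
    clin, *_ = np.linalg.lstsq(Mlin * sw[:, None], d * sw, rcond=None)
    chi2_lin = np.sum(w * (d - Mlin @ clin) ** 2)
    return best, chi2_lin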

Typically, one new slow negative glitch appears somewhere on the camera every 1.2 seconds. Its intensity ($G$) varies from 5 to 20 ADUG. The time constant varies from 20 to 200 seconds (45 seconds is typical). Positive glitches are 10 times rarer.

2.3. Removing transient effects: a simple correction technique

ISOCAM, like many other infrared detectors, suffers from a lag in its response to illumination. Fortunately, the LW detector has a significant instantaneous response $r \simeq 0.6$, i.e. a jump in brightness is seen at once by the pixel at the level of 60% of the step relative to its stabilised asymptotic value. The remainder of the signal is obtained after a delay which is inversely proportional to the flux. To a first approximation, Abergel et al. (1996) have modelled this phenomenon with:

[EQUATION]

where the measured signal $D(t)$ at time $t$ is a function of the illumination $I(t')$ at previous times $t' \le t$, and

[EQUATION]

where an ADUG is the CAM analog-to-digital unit normalised by the gain used. The approximation for $k$ in Eq. 4 makes it possible to invert the triangular temporal matrix. The inversion algorithm is independent of the position of the satellite and makes no a priori assumption about the temporal evolution of the pixel intensity history. It also preserves the volume of the data cube. An example of a relatively strong source is shown in Fig. 5a. The inversion (getting $I$ from $D$) yields the result shown in Fig. 5b. The inversion apparently enhances the high-frequency noise of the pixel, but the signal-to-noise ratio stays constant because the overall ISOCAM calibration must be updated after the transient correction has been applied.

[FIGURE] Fig. 5. a The transient phenomenon of ISOCAM is illustrated on a relatively strong source taken from the ISOCAM guaranteed-time deep surveys. The signal $D(t)$ is shown in ADUG as a function of readout number. Eight ISO raster positions are within the plot. At readout 575, just after ISO moved, the pixel (22, 28) instantaneously responds to the illumination added by a (rather strong) source on top of the zodiacal background, but it also has a long lagged response (the mask is the dotted line). After ISO moved to the next raster position, one can still see the memory effect of the source. b The recovered illumination history $I(t)$ of the same pixel after inversion of the transient model (Eq. 4). The solid line is the result of the triple beam-switch fitting (Eq. 6; only the central part of the three-leg best fit of each raster position is shown for simplicity). The fit is done on valid readouts within each raster position, as delimited by the vertical lines. No source in the HDF can be seen as vividly as here, because they are all much fainter.
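Because the model is causal, the matrix relating $D$ to $I$ is lower triangular, so the inversion can proceed readout by readout by forward substitution. The sketch below assumes a simplified discrete model with a constant lag parameter $k$ per readout (whereas in the model of Abergel et al. 1996 the delay depends on the flux); it illustrates the triangular inversion only, not the actual Eqs. 3-4:

import numpy as np

def invert_transient(D, r=0.6, k=0.2):
    """Recover the illumination history I from the measured signal D.

    Assumed toy model (instantaneous fraction r plus an exponentially
    lagged part with constant rate k per readout):
        D[i] = r * I[i] + (1 - r) * S[i]
        S[i] = alpha * I[i] + (1 - alpha) * S[i-1],  alpha = 1 - exp(-k)
    Being lower triangular in I, it is solved by forward substitution.
    """
    alpha = 1.0 - np.exp(-k)
    I = np.empty_like(D, dtype=float)
    S = D[0]          # assume the pixel was stabilised before the start
    I[0] = D[0]
    for i in range(1, len(D)):
        # solve D[i] = r*I[i] + (1-r)*(alpha*I[i] + (1-alpha)*S) for I[i]
        I[i] = (D[i] - (1.0 - r) * (1.0 - alpha) * S) / (r + (1.0 - r) * alpha)
        S = alpha * I[i] + (1.0 - alpha) * S
    return I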

The correction helps improve the calibration accuracy because it gives the proper stabilised value that can be directly tied to a response measured on bright, stabilised stellar standards. It also removes ghosts of sources which otherwise can still be seen after the pixel has been pointed away from the source to its next raster position (see Fig. 5a). We believe this correction works best for faint sources, the main objective of the present study. The correction is not yet perfect, and further understanding of the camera lag behaviour will certainly provide improvements in the final calibration.

2.4. Removing long term drifts: the triple beam-switch method

The data cube should now contain a signal in the unmasked areas which is almost constant for a given raster position and a given pixel. At this stage, we remove most of the slow positive glitches for a given pixel:

  1. by computing the standard deviation found inside each configuration (i.e. one raster position) and by taking the median of this deviation over all configurations;

  2. by masking a readout if it deviates by more than 4 times this typical noise from the median level of its configuration.

The value of 4 was found by trial and error.
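A minimal sketch of these two steps for a single pixel (Python/NumPy; the array names and the per-readout raster bookkeeping are our assumptions):

import numpy as np

def clip_slow_positive_glitches(d, mask, raster_id, nsigma=4.0):
    """Mask readouts deviating by more than nsigma times the typical noise.

    d         : temporal signal of one pixel (ADUG).
    mask      : boolean array, True = readout already masked.
    raster_id : integer array giving the raster position of each readout.
    Assumes every raster position keeps at least a few valid readouts.
    """
    ids = np.unique(raster_id)
    # step 1: median over configurations of the per-configuration rms
    typical = np.median([np.std(d[(raster_id == rid) & ~mask]) for rid in ids])
    out = mask.copy()
    for rid in ids:
        sel = raster_id == rid
        med = np.median(d[sel & ~mask])
        # step 2: clip at nsigma times the typical noise around the median
        out |= sel & (np.abs(d - med) > nsigma * typical)
    return out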

We are then left with a data cube where all or almost all "bad" pixels have been masked and are therefore not used further. As there is still some low-frequency noise in each pixel, we do not yet feel ready to project the total-power value of each pixel onto the sky; instead, we prefer to compare the values during a raster position with the values at the two adjacent raster positions seen by the same CAM pixel. This is the classical approach for dealing with low-frequency noise. It is usually adopted when the background is much stronger than the sources: a regime which has long been the case in infrared astronomy and which is now appearing even in optical astronomy. This approach was pioneered by e.g. Papoular (1983).

This is done with the following least-squares method, which is applied independently to each pixel (where $i$ is the readout number, proportional to the time $t_i$, and runs along the three current raster positions centred at the mid time), by minimising:

$$L_s = \sum_i w_i \left( d_i - a - b\,t_i - u\,s_i \right)^2,$$

where $w_i$ is 1 for valid readouts and 0 for masked readouts or readouts which do not belong to the three current raster positions, e.g. when ISO was moving (see Sect. 2.1). $d_i$ is the pixel signal at readout $i$. $s_i$ is the template of a source, typically a square pattern (0...0, 1,...,1, 0...0), where the ones are set for the central raster position. The best $a$, $b$, and $u$ are therefore obtained from the minimisation of $L_s$. The method gives an estimate of the noise on the $a$, $b$, and $u$ parameters by assuming that roughly $L_s \simeq (N-3)\,\sigma^2$, where $N$ is the number of valid readouts (because 3 parameters are fitted), so that the noise per readout is $\sigma = \sqrt{L_s/(N-3)}$ and $\sigma_u$ is obtained from formula 6. The best-fit source amplitude (written $u$ in the following) corresponds to the best estimate of the average signal for each pixel and raster position. The associated noise values $\sigma$ and $\sigma_u$ are recorded for a given pixel at a given raster position. The uncertainty $\sigma_u$ on the pixel signal is itself quite noisy for a given raster position, so in fact we replace it by the median of all the $\sigma_u$ found during the raster for that particular pixel (the quality of the fit $L_s$ is modified accordingly). As a complementary and very effective glitch removal, we mask the signal of a given raster position if its $L_s$ deviates by more than a factor of 2 from the pixel median value across all rasters, or if the number of points used for the fit is less than 0.6 times the median value. No detection of a point source is made at this stage. Hence, the data cube is reduced to a few values per raster position and per pixel of the camera. For the first and last positions of the raster, we use a 2-beam differencing scheme similar to the 3-beam scheme presented, except that no slope $b$ can be fitted. We checked, a posteriori, that the distribution of all the values of the signal $u$ divided by their respective noise $\sigma_u$ precisely follows a reduced Gaussian (actually we slightly overestimate the noise, by up to 15 percent), except for the few pixels affected by sources. This is strong evidence that white noise dominates the output of this algorithm.
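A sketch of this three-beam fit for one pixel over one triplet of raster positions (Python/NumPy; since formula 6 for $\sigma_u$ is not reproduced here, the formal least-squares covariance is used as a stand-in, and all names are illustrative):

import numpy as np

def triple_beam_fit(t, d, w, s):
    """Minimise L_s = sum_i w_i (d_i - a - b t_i - u s_i)^2 for one pixel.

    t : readout times, centred at the mid time of the three raster positions.
    d : pixel signal (ADUG); w : 0/1 validity weights;
    s : source template, 1 on the central raster position, 0 elsewhere.
    Returns u, its uncertainty sigma_u, and the noise per readout.
    """
    M = np.column_stack([np.ones_like(t), t, s])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(M * sw[:, None], d * sw, rcond=None)
    a, b, u = coef
    Ls = np.sum(w * (d - M @ coef) ** 2)
    ndof = int(np.sum(w)) - 3              # three fitted parameters
    sigma = np.sqrt(Ls / ndof)             # noise per readout
    # formal uncertainty on u from the weighted normal equations
    cov = np.linalg.inv((M * w[:, None]).T @ M) * sigma ** 2
    sigma_u = np.sqrt(cov[2, 2])
    return u, sigma_u, sigma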

Note that the standard raster averaging method followed by an ON-(OFF1+OFF2)/2 differencing scheme would have worked in most situations, except that here the noise can be estimated independently, the least-squares statistics can be used as an effective glitch removal, and, in the case of several randomly placed masked values, the baseline removal is better defined. Note also that the noise of the triple beam-switch method is a factor $\sqrt{3/2}$ worse than for an absolute measurement (in case the flat field were perfectly known), since the variance of ON-(OFF1+OFF2)/2 is $1 + 1/4 + 1/4 = 3/2$ times that of a single measurement. But the low-frequency noise is largely suppressed here, which more than compensates for the loss of sensitivity, costing in principle an integration time longer by 50 percent on target.


© European Southern Observatory (ESO) 1999

Online publication: February 22, 1999