2. The ISOCAM data processing
2.1. Reading the raw data
Raw telemetry from ISO is converted into different formats and delivered by ESA to the community. We chose to start from SPD FITS files, which means that the pairs of reset and end-of-integration images have already been subtracted. The CISP (almost raw) data are read into a dataset which contains, along with the data cube (of size 32 by 32 by the total number of readouts), a header describing the context of the observations, and various trend parameter arrays (CAM wheel positions, various ISO time definitions) which may or may not be synchronised with the images. An important trend parameter set is the pointing information from the ISO satellite, which is read from the appropriate IIPH (ISO Instrument Pointing History) FITS file and appended to the raw dataset. The equatorial right ascension (RA) and declination (Dec) coordinates (in the J2000 system) pointed to by the centre of the camera suffer from some noise (not corresponding to a real ISO jitter). These are smoothed with a running kernel with a FWHM of 2 arcseconds, which gives a relative pointing accuracy better than one arcsecond (Fig. 1). The pointing data are synchronised to the dataset cube of images by using the UTK time that is common to the CISP and IIPH files. Data are usually taken in a raster mode, where a regular grid of pointings on the sky is observed in succession. Care should be taken not to use the on-target flag delivered in the CISP file. This flag is set to one when the acquired pointing is within 10 arcseconds of the requested raster pointing, which is obviously too loose if one uses the 6 or, even worse, the 3 arcsecond lens. We instead adopt our own criterion: a pointing tolerance radius of 2 arcseconds.
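The pointing clean-up and the stricter on-target criterion can be sketched as follows. This is an illustrative numpy version, not the actual pipeline: the function and variable names are our own, and the kernel width is expressed in samples (converting the 2-arcsecond FWHM into samples depends on the IIPH sampling rate, which we leave as an assumption).

```python
import numpy as np

def smooth_pointing(ra, dec, fwhm_samples=2.0):
    """Smooth noisy RA/Dec pointing tracks with a running Gaussian kernel.

    `fwhm_samples` stands in for the 2-arcsecond FWHM of the text; the
    conversion to samples (IIPH sampling rate) is assumed, not specified.
    """
    sigma = fwhm_samples / 2.3548          # FWHM -> Gaussian sigma
    half = int(np.ceil(4 * sigma))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                  # normalise the running kernel
    # same-size convolution with edge padding
    ra_s = np.convolve(np.pad(ra, half, mode='edge'), kernel, mode='valid')
    dec_s = np.convolve(np.pad(dec, half, mode='edge'), kernel, mode='valid')
    return ra_s, dec_s

def on_target(ra_s, dec_s, ra0, dec0, tol_arcsec=2.0):
    """Custom on-target flag: within `tol_arcsec` of the raster position
    (ra0, dec0), all coordinates in degrees."""
    dra = (ra_s - ra0) * np.cos(np.deg2rad(dec0)) * 3600.0
    ddec = (dec_s - dec0) * 3600.0
    return np.hypot(dra, ddec) < tol_arcsec
```

The 2-arcsecond tolerance replaces the 10-arcsecond CISP flag, as discussed above.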
At this stage the data, in the ISOCAM internal units of ADU (Analog-to-Digital Units), are converted to ADUG units by simply dividing by the gain. A library dark image is then subtracted; its accuracy is not relevant in what follows. In the next three sub-sections we consider each pixel as an individual detector and analyse its temporal behaviour without regard to its neighbours. A mask cube of the same size as the data cube is used in parallel to flag the data which are affected by various damaging effects.
2.2. Removing particle hits
2.2.1. Fast glitch correction
At first glance (Fig. 2), any readout obtained with ISOCAM that does not contain strong sources (fluxes typically larger than about 10 mJy in the broad filters) clearly shows the flat-field response of the pixels to the zodiacal background. Ultimately, it is the accuracy to which one knows this flat field that allows identification of faint sources (see Sect. 2.4).
Hits by cosmic-ray particles produce detectable signals in some pixels, which are often aligned in strings. Most of the affected pixels recover at the next or second-next readout. These hits are mainly due to primary and secondary electron energy deposition onto the array.
The affected pixels are found by an algorithm working on the temporal behaviour of each pixel. Readouts of one pixel which deviate from the running (14 readouts) median by more than a threshold value and tend to recover to the normal level by at least 10 percent of the maximum step after one or two readouts are simply masked for two readouts and excluded from further signal extraction. The threshold value is set by a number (typically 3) times a running window (14 readouts) standard deviation of the pixel signal, where the most deviant values have been excluded from the rms computation. This algorithm was tested in various cases, and in particular against false glitch detection around the typical signal of a moderately strong source. Fig. 3 shows an example of the temporal behaviour of one CAM pixel as a function of readout number.
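A minimal sketch of this temporal deglitching test follows, under the assumption that the robust rms is formed by dropping the most deviant values in each window (the exact trimming used by the pipeline is not specified, so the 80th-percentile cut here is our own choice):

```python
import numpy as np

def fast_glitch_mask(d, win=14, nsigma=3.0, recovery=0.1):
    """Flag fast glitches in the temporal signal `d` of one pixel.

    A readout is masked (with its successor) when it deviates from the
    running median by more than `nsigma` times a robust running rms AND
    the signal recovers by at least `recovery` of the step within one or
    two readouts.  Returns a boolean array: True = good readout.
    """
    n = len(d)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - win // 2), min(n, i + win // 2)
        window = d[lo:hi]
        med = np.median(window)
        # robust rms: exclude the most deviant window values
        dev = np.abs(window - med)
        keep = dev <= np.percentile(dev, 80)
        rms = window[keep].std()
        step = d[i] - med
        if abs(step) > nsigma * rms and rms > 0:
            # recovery test one or two readouts later
            for j in (i + 1, i + 2):
                if j < n and abs(d[j] - med) < (1 - recovery) * abs(step):
                    mask[i] = False           # mask the glitched readout
                    if i + 1 < n:
                        mask[i + 1] = False   # ...and the following one
                    break
    return mask
```

A source crossing, which steps up and stays up, fails the recovery test and is therefore not flagged, in line with the false-detection checks mentioned above.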
The number of glitches, or more precisely the number of readouts affected by the masking process, is typically 9 per second over the entire array of 992 alive pixels: e.g. during an integration time of 36 camtu = 5 seconds, at any time 45 pixels cannot be used for measurement. This number varies by at least 15 percent, apparently depending on the satellite orbital position. Accumulated glitches cause a noticeable increase in the noise or an unreliable measurement. We thus decided to mask an isolated "good" readout value if it lies between two successive glitches. The energy deposition has a continuum distribution that goes down to the intrinsic noise level of the camera. For the method presented here, any undetected faint glitch increases the statistical noise and slightly modifies the signal.
2.2.2. Slow glitch correction
These are the most difficult to deal with. They are the main limitation in detecting weak point-like or extended sources and are thought to be due to ion impacts. These impacts affect the response of the hit pixels with a long memory tail. Slow glitches are of three types: 1) a positive decay with a short time constant (5-20 seconds); 2) a negative decay with a long time constant (up to 300 seconds); 3) a combination of both. It is possible that positive-decay glitches are due to an ionisation or energy deposition which does not saturate the detector. Negative ones may have started with a saturation (which is not apparent in the readout value) and hence cause memory effects: an upward transient of a type not very different from the usual CAM transient starting from a low flux.
Fig. 4a shows the effect of slow glitches. In Fig. 4b the corrected signal is shown along with the mask used. To detect slow glitches we correlate a running template of a typical glitch along the temporal signal of a given pixel. A maximum in the correlation indicates a potential glitch at some time $t_g$. This is then analysed with a least-square method in order to find the best decay time constant: one minimises the quantity defined by

$$L_s = \sum_i w_i \left[ d(t_i) - A - B\,t_i - G\,H(t_i - t_g)\,e^{-(t_i - t_g)/\tau} \right]^2 ,$$

where $d(t_i)$ is the raw temporal data signal of a pixel, $w_i$ the weight (0 or 1, defined by the previous masking of fast glitches), the exponential term the glitch model to fit, and $t_g$ is given. Hence, the variables $A$, $B$, $G$, and $\tau$ are found ($H$ is the Heaviside step function).
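Since the model is linear in $A$, $B$ and $G$ once $\tau$ and $t_g$ are fixed, the minimisation can be sketched as a scan over trial time constants with a weighted linear solve at each step. This is an illustrative version; the grid of time constants is our own choice, not the paper's:

```python
import numpy as np

def fit_slow_glitch(t, d, w, t_g, taus=np.geomspace(5.0, 300.0, 40)):
    """For each trial time constant tau, the model
    A + B*t + G*H(t - t_g)*exp(-(t - t_g)/tau) is linear in (A, B, G):
    solve a weighted least-squares problem per tau, keep the best.
    Returns (least-square value, best tau, (A, B, G))."""
    best = None
    step = (t >= t_g).astype(float)              # Heaviside H(t - t_g)
    sw = np.sqrt(w)
    for tau in taus:
        g = step * np.exp(-np.clip(t - t_g, 0, None) / tau)
        X = np.column_stack([np.ones_like(t), t, g])
        coef, *_ = np.linalg.lstsq(X * sw[:, None], d * sw, rcond=None)
        ls = np.sum(w * (d - X @ coef) ** 2)     # weighted least-square
        if best is None or ls < best[0]:
            best = (ls, tau, coef)
    return best
```

The masked readouts simply enter with $w_i = 0$ and thus do not constrain the fit.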
If the fit is satisfactory, we subtract the exponential part of the fit up to the end of the temporal signal. To assess whether the fit is satisfactory or not, we also compute the least-square value corresponding to a linear baseline alone, without the exponential part. We found that a potential glitch can be taken as valid if the least-square value including the exponential is significantly smaller than that of the linear baseline. Most of the time, the glitch beginning at time $t_g$ corresponds to a previously detected fast glitch.
Type 1 glitches can be removed with the previous method but, sometimes, the same method also removes real source signals (for example, if there is a downward transient when a pixel leaves a source). We thus leave Type 1 glitches to be removed later in the method. Type 3 glitches are not dealt with at the moment. The running template used to detect a glitch and find its starting time is a simple exponential with successive time constants of 15, 30, and 60 readouts for negative glitches (Type 2). After a glitch is found at a significant level, and the exponential tail is corrected everywhere after $t_g$, we mask (i.e. we do not use further) the readouts between $t_g$ and the time where the amplitude of the exponential correction falls below twice the pixel noise per readout. An example can be found in Fig. 4a and b.
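The end of this masking window follows directly from the fitted amplitude and time constant: the correction $G\,e^{-(t - t_g)/\tau}$ drops below $2\sigma$ at $t_{\rm end} = t_g + \tau\,\ln(|G|/2\sigma)$. A small sketch (naming is ours):

```python
import numpy as np

def mask_after_glitch(t, t_g, G, tau, sigma):
    """Readouts to mask after a slow glitch: from t_g until the
    exponential correction G*exp(-(t - t_g)/tau) still exceeds twice
    the per-readout noise sigma.  If |G| <= 2*sigma the window is empty."""
    t_end = t_g + tau * np.log(abs(G) / (2.0 * sigma))
    return (t >= t_g) & (t <= t_end)
```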
Typically, one new slow negative glitch appears somewhere on the camera every 1.2 seconds. Its intensity $G$ varies from 5 to 20 ADUG. The time constant $\tau$ varies from 20 to 200 seconds (45 seconds is typical). Positive glitches are about 10 times rarer.
2.3. Removing transient effects: a simple correction technique
ISOCAM, like many other infrared detectors, suffers a lag in its response to illumination. Fortunately, the LW detector has a significant instantaneous response, i.e. a jump in brightness is seen at once by the pixel at the level of 60% of the step relative to its stabilised asymptotic value. The remainder of the signal is obtained after a delay which is inversely proportional to the flux. In a first approximation, Abergel et al. (1996) have modelled this phenomenon with

$$D(t) = r\,I(t) + (1 - r)\,\int_0^{\infty} I(t - \theta)\,k\,e^{-k\theta}\,{\rm d}\theta ,$$

where the measured signal $D(t)$ at time $t$ is a function of the illumination $I$ at previous times, $r \simeq 0.6$ is the instantaneous fraction of the response, and the rate $k$ is taken to be proportional to the illumination (in ADUG), where an ADUG is the CAM analog-to-digital unit normalised by the gain used. This approximation for $k$ makes it possible to invert the triangular temporal matrix. The inversion algorithm is independent of the position of the satellite and makes no a priori assumption as to the temporal evolution of the pixel intensity history. It also preserves the volume of the data cube. An example of a relatively strong source is shown in Fig. 5a. The inversion (getting $I$ from $D$) yields the result shown in Fig. 5b. The inversion apparently enhances the high-frequency noise of the pixel, but the signal-to-noise ratio stays constant because the overall ISOCAM calibration must be updated after the transient correction has been applied.
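A toy discretised version of such a lag model, with a fixed rate $k$ instead of the flux-dependent one, shows how the triangular system can be inverted readout by readout. All names, the constant $k$, and the stabilised-start assumption are ours, for illustration only:

```python
import numpy as np

R = 0.6  # instantaneous fraction of the response (about 60%, see text)

def forward(I, k=0.2):
    """Toy discrete lag model: the measured signal is r*I plus (1 - r)
    times an exponentially updated memory of past illumination.
    (k is fixed here; in the real model it depends on the flux.)"""
    D = np.empty_like(I)
    m = I[0]                       # assume the pixel starts stabilised
    for n in range(len(I)):
        m = (1 - k) * m + k * I[n]        # memory update
        D[n] = R * I[n] + (1 - R) * m     # measured signal
    return D

def invert(D, k=0.2):
    """Exact inversion of `forward`: solve the lower-triangular system
    readout by readout, recovering I while updating the memory term."""
    I = np.empty_like(D)
    m = D[0]                       # a stabilised start implies D[0] = I[0]
    for n in range(len(D)):
        # D[n] = R*I[n] + (1-R)*((1-k)*m + k*I[n])  ->  solve for I[n]
        I[n] = (D[n] - (1 - R) * (1 - k) * m) / (R + (1 - R) * k)
        m = (1 - k) * m + k * I[n]
    return I
```

Because the system is triangular, each readout can be solved knowing only the past, which is why the inversion needs no assumption about the future intensity history.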
The correction helps in improving the calibration accuracy because it gives the proper stabilised value that can be directly tied to a response measured on bright stabilised stellar standards. It also removes ghosts of sources which otherwise can still be seen after the pixel has been pointed away from the source to its next raster position (see Fig. 5a). We believe this correction works best for faint sources, the main objective of the present study. The correction is not yet perfect and further understanding of the camera lag behaviour will certainly provide improvements in the final calibration.
2.4. Removing long term drifts: the triple beam-switch method
The data cube should now contain a signal in the unmasked areas which is almost constant for a given raster position and a given pixel. At this stage, we remove most of the remaining slow positive glitches by masking, for a given pixel, the readouts that deviate upwards from the median of the raster position by more than 4 times the per-readout noise. The value of 4 was found by trial and error.
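One plausible numpy rendering of this clipping is given below; the exact noise statistic used by the pipeline is not specified, so the median-absolute-deviation estimate is our assumption:

```python
import numpy as np

def mask_positive_outliers(d, mask, nsigma=4.0):
    """Within one raster position, mask readouts lying more than `nsigma`
    robust rms above the median of the currently valid readouts.
    `d` is the pixel signal, `mask` the boolean validity array."""
    good = d[mask]
    med = np.median(good)
    # robust rms from the median absolute deviation
    rms = 1.4826 * np.median(np.abs(good - med))
    out = mask.copy()
    out[d > med + nsigma * rms] = False   # one-sided: positive glitches only
    return out
```

The clipping is one-sided because only positive decays survive the slow-glitch correction at this stage.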
We are then left with a data cube where all or almost all "bad" pixels have been masked and therefore are not used further. As there is still some low-frequency noise for each pixel, we do not feel ready yet to project the total power value of each pixel on the sky but instead we prefer comparing the values during a raster position to the values in the two adjacent raster positions seen by the same CAM pixel. This is the classical approach for dealing with low-frequency noise. It is usually adopted when the background is much stronger than the sources: a regime which has long been the case in infrared astronomy and which is now appearing even in optical astronomy. This approach has been pioneered by e.g. Papoular (1983).
This is done with the following least-square method, applied independently to each pixel (where $i$ is the readout number, proportional to the time, which runs along the three current raster positions centred on the middle one), by minimising

$$L_s = \sum_i w_i \left[ d(i) - a - b\,i - u\,s(i) \right]^2 ,$$

where $w_i$ is 1 for valid readouts and 0 for masked readouts or readouts which do not belong to the three current raster positions, e.g. when ISO was moving (see Sect. 2.1). $d(i)$ is the pixel signal at readout $i$. $s(i)$ is the template of a source, typically a square pattern (0...0, 1,...,1, 0...0), where the ones are set for the central raster position. The best $a$, $b$, and $u$ are therefore obtained from the minimisation of $L_s$. The method gives an estimate of the noise on the $a$, $b$, and $u$ parameters by assuming that roughly $L_s^{\rm min} \simeq (N - 3)\,\sigma^2$ (because 3 parameters are fitted), so that the noise per readout is $\sigma = \sqrt{L_s^{\rm min}/(N - 3)}$, and $\sigma_u$ is obtained from formula 6. The fitted value (simplified to $u$ in the following) corresponds to the best estimate of the average signal for each pixel and raster position. The associated noise values $\sigma$ and $\sigma_u$ are recorded for a given pixel at a given raster position. The uncertainty $\sigma_u$ on the pixel signal is itself quite noisy for a given raster position, so in fact we replace $\sigma$ by the median of all the $\sigma$ found during the raster for that particular pixel (the quality of the fit is modified accordingly). As a complementary and very effective glitch removal, we mask the signal of a given raster position if its $\sigma$ deviates by more than a factor 2 from the pixel median value across all rasters, or if the number of points used for the fit is less than 0.6 times the median value. No detection of a point source is made at this stage. Hence, the data cube is reduced to a few values per raster position and per pixel of the camera. For the first and last positions of the raster, we use a 2-beam differencing scheme similar to the 3-beam scheme presented here, except that no slope $b$ can be fitted. We checked, a posteriori, that the distribution of all the values of the signal $u$ divided by their respective noise $\sigma_u$ precisely follows a reduced Gaussian (actually we slightly overestimate the noise by up to 15 percent), except for the few pixels affected by sources.
This is strong evidence that white noise dominates the output of this algorithm.
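The per-pixel fit and its noise estimate can be sketched schematically as follows (our own naming; `s` is the 0/1 source template with ones at the central raster position, `w` the 0/1 validity weights):

```python
import numpy as np

def triple_beam_fit(d, w, s):
    """Per-pixel fit over three raster positions: d(i) ~ a + b*i + u*s(i),
    with 0/1 weights w.  Returns u, the per-readout noise estimate, and
    the 1-sigma uncertainty on u from the least-squares covariance."""
    i = np.arange(len(d), dtype=float)
    X = np.column_stack([np.ones_like(i), i, s])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(X * sw[:, None], d * sw, rcond=None)
    resid = w * (d - X @ coef) ** 2
    nfree = int(w.sum()) - 3                      # 3 fitted parameters
    sigma = np.sqrt(resid.sum() / nfree)          # noise per readout
    cov = sigma ** 2 * np.linalg.inv(X.T @ (w[:, None] * X))
    return coef[2], sigma, np.sqrt(cov[2, 2])     # u, sigma, sigma_u
```

The baseline terms $a + b\,i$ absorb the residual low-frequency drift across the three positions, which is the point of the beam-switch comparison.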
Note that the standard raster-averaging method followed by an ON-(OFF1+OFF2)/2 differencing scheme would have worked in most situations, except that here the noise can be estimated independently, the least-square statistics can be used as an effective glitch removal and, in the case of several randomly placed masked values, the baseline removal is better defined. Note also that the noise of the triple beam-switch method is worse than for an absolute measurement (in case the flat field were perfectly known). But the low-frequency noise is here largely suppressed, which more than compensates for the loss of sensitivity; in principle this costs an integration time longer by 50 percent on target.
© European Southern Observatory (ESO) 1999
Online publication: February 22, 1999