Astron. Astrophys. 342, 363-377 (1999)

2. The ISOCAM data processing

2.1. Reading the raw data

Raw telemetry from ISO is converted into different formats and delivered by ESA to the community. We chose to start from SPD FITS files. This means that the pairs of reset and end-of-integration images are already subtracted. The CISP (almost raw) data are read into a dataset which contains, along with the data cube (of size 32 by 32 by the total number of readouts), a header describing the context of the observations, and various trend parameter arrays (CAM wheel positions, various ISO time definitions) which may or may not be synchronised with the images. An important trend parameter set is the pointing information from the ISO satellite, which is read from the appropriate IIPH (ISO Instrument Pointing History) FITS file and appended to the raw dataset. The equatorial right ascension (RA) and declination (Dec) coordinates (in the J2000 system) pointed to by the centre of the camera suffer from some noise (not corresponding to a real ISO jitter). These are smoothed with a running kernel with a FWHM of 2 arcseconds, which gives a relative pointing accuracy better than one arcsecond (Fig. 1). The pointing data are synchronised with the dataset cube of images by using the UTK time that is common to the CISP and IIPH files. Data are usually taken in a raster mode, where a regular grid of pointings on the sky is observed in succession. Care should be taken not to use the on-target flag delivered in the CISP file. This flag is set to one when the acquired pointing is within 10 arcseconds of the required raster pointing, which is obviously too loose if one uses the 6 or, even worse, the 3 arcsecond lens. We instead adopt our own criterion: a pointing tolerance radius of 2 arcseconds.
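The pointing smoothing described above can be sketched as follows. This is a minimal illustration, not the pipeline's actual code: the function name is hypothetical, and the conversion of the quoted 2 arcsecond FWHM into a number of readout samples (which depends on the sampling rate) is an assumption left to the caller.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_pointing(ra, dec, fwhm_samples):
    """Smooth noisy RA/Dec pointing tracks with a running Gaussian kernel.

    fwhm_samples: kernel FWHM expressed in readout samples (an assumed
    parametrisation; the paper quotes the FWHM in arcseconds).
    """
    sigma = fwhm_samples / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return gaussian_filter1d(ra, sigma), gaussian_filter1d(dec, sigma)
```

The smoothed tracks can then be interpolated onto the UTK times of the image cube to synchronise pointing with readouts.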
At this stage the data, in the ISOCAM internal units of ADU (Analog-to-Digital Units), are converted to ADUG units by simply dividing by the gain.

2.2. Removing particle hits

2.2.1. Fast glitch correction

At first glance (Fig. 2), any readout obtained with ISOCAM that does not contain strong sources (fluxes typically larger than about 10 mJy in the broad filters) clearly shows the flat-field response of the pixels to the zodiacal background. Ultimately, it is the accuracy to which one knows this flat field that allows identification of faint sources (see Sect. 2.4).
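The unit conversion, and a flat-field estimate built from the zodiacal background, can be sketched as follows. The function names are hypothetical, the gain value shown is a placeholder (in practice it is read from the observation header), and the temporal-median flat field is a simplified stand-in for the pipeline's actual flat-field determination.

```python
import numpy as np

GAIN = 2.0  # placeholder; the commanded gain is taken from the CISP header

def adu_to_adug(cube_adu, gain=GAIN):
    """Convert raw ADU to gain-normalised ADUG by dividing by the gain."""
    return cube_adu / gain

def flat_field(cube_adug):
    """Estimate the flat field as the per-pixel temporal median of the
    zodiacal background, normalised to the array median.

    cube_adug has shape (readout, y, x).
    """
    med = np.median(cube_adug, axis=0)
    return med / np.median(med)
```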
Detectable signals from some pixels, often aligned in strings, result from hits by cosmic-ray particles; these are mainly due to primary and secondary electron energy deposition onto the array. Most of the affected pixels recover by the next or second-next readout. The affected pixels are found by an algorithm working on the temporal behaviour of each pixel. Readouts of one pixel which deviate from the running median (over 14 readouts) by more than a threshold value, and which tend to recover to the normal level by at least 10 percent of the maximum step after one or two readouts, are simply masked for two readouts and excluded from further signal extraction. The threshold is set to a number (typically 3) times a running-window (14 readouts) standard deviation of the pixel signal, where the most deviant values have been excluded from the rms computation. This algorithm was tested in various cases, and in particular against false glitch detections around the typical signal of a moderately strong source. Fig. 3 shows an example of the temporal behaviour of one CAM pixel as a function of readout number.
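The fast-glitch masking can be sketched for a single pixel's time line as follows. This is a simplified sketch of the algorithm described above, under stated assumptions: the clipped-rms noise estimate uses a global (not running-window) MAD-based clip, and the recovery criterion (the 10 percent test) is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import median_filter

def fast_deglitch(signal, window=14, nsigma=3.0):
    """Simplified fast-glitch mask for one pixel's temporal signal.

    Flags readouts deviating from a running median (over `window`
    readouts) by more than `nsigma` times a robust noise estimate,
    then masks the flagged readout and the following one.
    """
    med = median_filter(signal, size=window, mode='nearest')
    resid = signal - med
    # robust rms: exclude the most deviant values before taking the std
    clip = 5.0 * np.median(np.abs(resid)) / 0.6745 + 1e-12
    clipped = resid[np.abs(resid) < clip]
    sigma = np.std(clipped) if clipped.size else np.std(resid)
    mask = np.zeros(signal.size, dtype=bool)
    for i in np.flatnonzero(np.abs(resid) > nsigma * sigma):
        mask[i:i + 2] = True  # mask the hit and the next readout
    return mask
```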
The number of glitches or, more precisely, the number of readouts affected by the masking process, is typically 9 per second over the entire array of 992 alive pixels.

2.2.2. Slow glitch correction

These are the most difficult to deal with. They are the main limitation in detecting weak point-like or extended sources and are thought to be due to ion impacts, which affect the response of the hit pixels with a long memory tail. Slow glitches are of three types:

1) a positive decay with a short time constant (5-20 seconds),
2) a negative decay with a long time constant (up to 300 seconds),
3) a combination of the two.

It is possible that positive-decay glitches are due to an ionisation or energy deposition which does not saturate the detector. Negative ones may have started with a saturation (which is not apparent in the readout value) and hence cause memory effects: an upward transient of a type not very different from the usual CAM transient starting from a low flux. Fig. 4a shows the effect of slow glitches; in Fig. 4b the corrected signal is shown along with the mask used. To detect slow glitches we correlate a running template of a typical glitch, an exponential decay of amplitude G and time constant starting at the detection time, along the temporal signal. If the fit is satisfactory, we subtract the exponential part of the fit up to the end of the temporal signal. To assess whether the fit is satisfactory or not, we also calculate the least-squares residual of the fit.
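The fit-and-subtract step can be sketched as follows. This is a minimal sketch, not the paper's exact procedure: the model form (constant baseline plus exponential decay), the initial guesses, and the acceptance test comparing the fit residual to the late-time variance are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_slow_glitch(signal, t0):
    """Fit b + G * exp(-(t - t0)/tau) to the readouts following a
    detected glitch start t0 and, if the fit is acceptable, subtract
    the exponential part up to the end of the temporal signal.
    """
    t = np.arange(t0, signal.size, dtype=float)
    y = signal[t0:]

    def model(t, b, g, tau):
        return b + g * np.exp(-(t - t0) / tau)

    b0 = np.median(y[-10:])                       # late-time baseline guess
    p, _ = curve_fit(model, t, y, p0=(b0, y[0] - b0, 45.0), maxfev=5000)
    b, g, tau = p
    chi2 = np.mean((y - model(t, *p)) ** 2)       # least-squares residual
    corrected = signal.copy()
    if chi2 < 3.0 * np.var(y[-50:]):              # crude acceptance test (assumed)
        corrected[t0:] -= g * np.exp(-(t - t0) / tau)
    return corrected, (b, g, tau, chi2)
```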
Type 1 glitches can be removed with the previous method but, sometimes, the same method also removes real source signals (for example, if there is a downward transient when a pixel leaves a source). We thus leave Type 1 glitches to be removed later in the method (Sect. 2.4). Type 3 glitches are not dealt with at the moment. The running template is used both to detect a glitch and to find its starting time.

Typically, one new slow negative glitch appears somewhere on the camera every 1.2 seconds. Its intensity (G) varies from 5 to 20 ADUG. The time constant varies from 20 to 200 seconds (45 seconds is typical). Positive glitches are about 10 times rarer.

2.3. Removing transient effects: a simple correction technique
ISOCAM, like many other infrared detectors, suffers a lag in its response to illumination. Fortunately, the LW detector has a significant instantaneous response: a fixed fraction of any change in illumination appears immediately in the measured signal (expressed in ADUG, the CAM analog-to-digital unit normalised by the gain used), the remainder arriving with an exponential lag. The approximation for k in Eq. 4 makes it possible to invert the triangular temporal matrix. The inversion algorithm is independent of the position of the satellite and makes no a priori assumption as to the temporal evolution of the pixel intensity history. It also preserves the volume of the data cube. An example of a relatively strong source is shown in Fig. 5a. The inversion (obtaining I from D) yields the result shown in Fig. 5b. The inversion apparently enhances the high-frequency noise of the pixel, but the signal-to-noise ratio stays constant. Note that the overall ISOCAM calibration must be updated after the transient correction has been applied.
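The triangular-matrix inversion can be illustrated with a generic first-order lag model. This is an assumed model for illustration only, not the paper's Eq. 4: a fraction r of the incident intensity appears instantly and the rest follows an exponentially weighted history. Because each measured readout depends only on earlier incident intensities, the system is lower triangular and is solved readout by readout (forward substitution), preserving the length of the time line.

```python
import numpy as np

def invert_transient(measured, r=0.6, k=0.1):
    """Invert an assumed first-order lag model for one pixel's time line:
        D_i = r * I_i + (1 - r) * S_i,
        S_i = (1 - k) * S_{i-1} + k * I_i,
    where D is the measured and I the incident intensity. The values of
    r and k are placeholders, not calibrated ISOCAM parameters.
    """
    I = np.empty_like(measured, dtype=float)
    S = measured[0]  # assume the history starts stabilised
    for i, d in enumerate(measured):
        # solve D_i for I_i given the accumulated history S_{i-1}
        I[i] = (d - (1 - r) * (1 - k) * S) / (r + (1 - r) * k)
        S = (1 - k) * S + k * I[i]
    return I
```

For a constant input the inversion returns the signal unchanged; for a flux step it restores the full step amplitude that the lag had spread over later readouts.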
The correction helps in improving the calibration accuracy because it yields the proper stabilised value, which can be directly tied to a response measured on bright, stabilised stellar standards. It also removes ghosts of sources which otherwise can still be seen after the pixel has been pointed away from the source to its next raster position (see Fig. 5a). We believe this correction works best for faint sources, the main objective of the present study. The correction is not yet perfect, and further understanding of the camera lag behaviour will certainly provide improvements in the final calibration.

2.4. Removing long-term drifts: the triple beam-switch method

The data cube should now contain a signal in the unmasked areas which is almost constant for a given raster position and a given pixel. At this stage, we remove most of the remaining slow positive glitches by masking, for a given pixel, readouts that deviate from the stabilised level by more than 4 times the noise.
The value of 4 was found by trial and error. We are then left with a data cube in which all, or almost all, "bad" pixels have been masked and are therefore not used further. As there is still some low-frequency noise in each pixel, we do not feel ready yet to project the total power value of each pixel onto the sky; instead, we prefer to compare the values during a raster position with the values in the two adjacent raster positions seen by the same CAM pixel. This is the classical approach for dealing with low-frequency noise. It is usually adopted when the background is much stronger than the sources: a regime which has long been the case in infrared astronomy and which is now appearing even in optical astronomy. This approach was pioneered by e.g. Papoular (1983). It is implemented with a least-squares method which is applied independently to each pixel, where i is the readout number (proportional to time).

Note that the standard raster-averaging method followed by an ON-(OFF1+OFF2)/2 differencing scheme would have worked in most situations, except that here the noise can be estimated independently, the least-squares statistics can be used as an effective glitch removal, and, in the case of several randomly placed masked values, the baseline removal is better defined. Note also that the noise of the triple beam-switch method is the square root of 3/2 (about 1.22) times the noise of a single raster position.

© European Southern Observatory (ESO) 1999

Online publication: February 22, 1999
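The basic triple beam-switch differencing discussed in Sect. 2.4 can be sketched as follows. This is a simplified sketch with a hypothetical function name: it uses plain averaging per raster position rather than the paper's per-pixel least-squares fit, which additionally handles masked readouts and glitch rejection. The propagated variance (1 + 1/4 + 1/4) per averaged readout is where the square-root-of-3/2 noise factor comes from.

```python
import numpy as np

def triple_beam_switch(off1, on, off2):
    """ON - (OFF1 + OFF2)/2 differencing for one pixel over three
    consecutive raster positions, with a propagated noise estimate.
    """
    signal = np.mean(on) - 0.5 * (np.mean(off1) + np.mean(off2))
    # per-readout noise from the scatter within the three positions
    sigma = np.sqrt(np.mean([np.var(off1, ddof=1),
                             np.var(on, ddof=1),
                             np.var(off2, ddof=1)]))
    n = min(len(off1), len(on), len(off2))
    # Var = (1 + 1/4 + 1/4) sigma^2 / n = 1.5 sigma^2 / n
    noise = sigma * np.sqrt(1.5 / n)
    return signal, noise
```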