Appendix A: data reduction
The observations were processed with CIA v2.0.
Each observation consisted of a sequence of frames, each with an elementary integration time of about 2 s. In this way the temporal behaviour of each pixel was known.
First, the dark current was subtracted from each raw frame using the dark images present in the software library, and the bad pixels of the SW and LW detectors were flagged.
The impact of charged particles (glitches) on the detectors creates spikes in the pixel signal curves. To remove these spurious signals, we first applied the Multiresolution Median Transform method (Starck et al. 1996); every frame was then inspected to make sure that the number of suppressed noise signals was negligible, and finally a manual deglitching operation was performed to detect and flag the remaining glitches. Some glitches caused a change in the pixel sensitivity: in such cases we flagged the pixel in all readouts after the glitch.
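The idea behind deglitching a pixel time series can be sketched as follows. This is not the Multiresolution Median Transform of Starck et al. (1996), but a minimal single-scale stand-in: a readout that deviates strongly from the running median of its neighbours is flagged as a glitch. All numbers and names are illustrative.

```python
import numpy as np

def flag_glitches(signal, window=5, nsigma=4.0):
    """Flag readouts deviating strongly from a running median.

    A simplified stand-in for multiresolution median deglitching:
    cosmic-ray spikes are short-lived, so a readout far from the
    local median of its neighbours is flagged as a glitch.
    """
    signal = np.asarray(signal, dtype=float)
    pad = window // 2
    padded = np.pad(signal, pad, mode="edge")
    # Running median over a short window around each readout.
    med = np.array([np.median(padded[i:i + window])
                    for i in range(signal.size)])
    resid = signal - med
    # Robust noise scale from the median absolute deviation (MAD).
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > nsigma * sigma  # True where a glitch is suspected

# Synthetic pixel time series with two injected spikes.
rng = np.random.default_rng(0)
sig = 10.0 + 0.1 * rng.standard_normal(200)
sig[50] += 5.0
sig[120] += 8.0
mask = flag_glitches(sig)
```

Flagged readouts would then simply be excluded from the subsequent averaging, as described below for the frame combination.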
The library dark images were not good enough to remove all the effects of the dark current: the signals in rows and columns showed a saw-tooth structure, which was eliminated using the Fast Fourier Transform technique (Starck & Pantin 1996).
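The principle of Fourier-based pattern removal can be illustrated with a toy example. This is not the Starck & Pantin (1996) algorithm, only a sketch under the assumption that the saw-tooth is strictly periodic along a row: its fundamental frequency and harmonics are identified in the spectrum and zeroed, leaving the smooth level and the noise.

```python
import numpy as np

def remove_periodic_pattern(row):
    """Suppress a periodic saw-tooth component in a 1-D signal via FFT.

    The strongest non-DC peak of the spectrum is taken as the
    fundamental of the pattern; it and its harmonics are zeroed.
    """
    spec = np.fft.rfft(row)
    amp = np.abs(spec)
    amp[0] = 0.0                 # protect the DC (mean) term
    k = int(np.argmax(amp))      # fundamental of the periodic pattern
    spec[k::k] = 0.0             # zero the fundamental and its harmonics
    return np.fft.irfft(spec, n=row.size)

# A constant level plus a saw-tooth of period 8 pixels.
x = np.arange(256)
saw = 0.5 * ((x % 8) / 8.0 - 0.5)
row = 20.0 + saw
cleaned = remove_periodic_pattern(row)
```

Since the toy pattern's period divides the row length exactly, its power sits only at multiples of the fundamental, and the cleaned row is constant up to round-off.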
The response of the detector pixels to a change in the incident flux is not immediate, and the signal reaches stabilization only after some time. This time interval depends on the initial and final flux values and on the number of readouts (ISOCAM Observer's Manual 1994). Therefore, after a change in the incident flux, the time sequence of a pixel signal shows an upward or downward transient behaviour. At the beginning of every observation, after a certain number of frames, the signal should reach its stable value. As this ideal situation could not always be achieved, CIA provides different routines to overcome this problem and apply the transient correction. These routines fit the signal curves with different models in order to identify the stable value.
In the SW5 observations, the photons coming from PKS 2155-304 fell mainly in one or two pixels, whose signals showed an upward transient that never reached stabilization. The background, on the contrary, being very low, was stabilized. None of the transient correction routines was able to adequately fit the source signals, either underestimating or overestimating the stable flux. Inspecting the signal curves, we noticed that the first part of each curve was far from the converging trends assumed by the models of the correction routines, while the remaining part seemed to be well described by a converging exponential. So, after discarding the starting readouts, we fitted the signal with a simple exponential model S(t) = S_∞ + c·exp(−t/τ), where the optimized parameters are c, τ and S_∞, which represents the stable signal. We chose the fit which gave a reasonable result and maximized the determination coefficient R² = 1 − Σ_i (S_i − S(t_i))² / Σ_i (S_i − ⟨S⟩)², where S_i are the measured signals and ⟨S⟩ is the mean of the part we considered. In three cases the results were not acceptable and we could define only lower limits, as the upward transients had not reached stabilization.
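The fitting procedure described above can be sketched as follows, assuming a converging exponential model with parameters S_∞, c and τ, and computing the determination coefficient R² of the fit. The synthetic light curve and the discard threshold of 20 readouts are illustrative, not values from the paper; `scipy.optimize.curve_fit` is used for the least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def transient_model(t, s_inf, c, tau):
    """Converging exponential: approaches the stable signal s_inf."""
    return s_inf + c * np.exp(-t / tau)

# Synthetic upward transient: stable level 12 ADU, starting below it.
t = np.arange(200, dtype=float)
rng = np.random.default_rng(1)
signal = transient_model(t, 12.0, -4.0, 40.0) + 0.05 * rng.standard_normal(t.size)

# Discard the starting readouts (the early part departs from the
# converging trend), then fit the remainder of the curve.
t_fit, s_fit = t[20:], signal[20:]
popt, _ = curve_fit(transient_model, t_fit, s_fit, p0=(s_fit[-1], -1.0, 50.0))
s_inf = popt[0]                      # the fitted stable signal

# Determination coefficient R^2 of the fit.
model = transient_model(t_fit, *popt)
r2 = 1.0 - np.sum((s_fit - model) ** 2) / np.sum((s_fit - np.mean(s_fit)) ** 2)
```

An R² close to 1 indicates that the converging exponential describes the retained part of the curve well; a poor R², or a fit that has clearly not converged, would correspond to the cases where only a lower limit on the flux could be given.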
For the transient correction of the LW3 observations, the model developed at the Institut d'Astrophysique Spatiale (IAS Model) (Abergel et al. 1996) was used. As the corrected curves attained stable values only in the second half, we did not use the first half of the frames.
In the spectral observation of May 27, the uninterrupted sequence of filters created either upward or downward transients, and the stabilization of the source signal was reached in just a few cases. The five observations made with the SW channel were corrected using the same method as the SW5 ones, except for the SW11 filter, for which stabilization was reached for all pixels; in this case we simply discarded the first half of the 162 frames. In the SW2 filter data, at the end of the observation the source signal was so far from stabilization that we could define only a lower limit. The five observations made with the LW channel were corrected using the IAS model. As this model takes into account the whole past illumination history, we fitted a single curve built by linking together the data of all the LW filters. This method worked well for two filters only (LW8 and LW9); for the other three filters we again defined lower limits.
We averaged all the frames, neglecting the flagged signal values, and then flat-fielded the images using the library flat fields of CIA.
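The frame combination step amounts to a masked average followed by a flat-field division; a minimal sketch with NumPy masked arrays (the cube shapes and flat values are illustrative):

```python
import numpy as np

def combine_frames(frames, flags, flat):
    """Average a cube of frames, ignoring flagged values, then flat-field.

    frames : (nframe, ny, nx) signal cube
    flags  : boolean cube, True where a readout was flagged (glitch, bad pixel)
    flat   : (ny, nx) flat-field image (response normalised to 1)
    """
    cube = np.ma.masked_array(frames, mask=flags)
    mean_image = cube.mean(axis=0).filled(np.nan)  # NaN where no valid readout
    return mean_image / flat

frames = np.full((4, 2, 2), 6.0)
frames[0, 0, 0] = 100.0                    # glitched readout...
flags = np.zeros_like(frames, dtype=bool)
flags[0, 0, 0] = True                      # ...which is flagged and ignored
flat = np.array([[1.0, 2.0], [1.0, 1.0]])
image = combine_frames(frames, flags, flat)
```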
The total signal of the source was computed by integrating the signal values in a box centred on the source and subtracting the normalized background obtained in a ring of 1 pixel width around the box. The boxes had dimensions ranging from 3×3 to 7×7 pixels, depending on the filter and on the pixel field of view (pfov). The results were colour corrected and divided by the fraction of the point spread function (PSF) falling in the box. This fraction also depends on the filter and pfov. To compute it, we extracted from the library, for each combination of filter and pfov, the nine PSF images centred approximately on the same pixels as PKS 2155-304. For calibration purposes, in each PSF image the centroid of the source was placed in a slightly different position inside the same pixel. As we do not know the position of the centroid of PKS 2155-304 in the ISOCAM images with sufficient accuracy, the nine PSFs were averaged and the result was normalized. The PSF correction was calculated by summing the signal of the pixels in a box of the same dimensions as that from which we extracted the source signal. For the LW detector, a further correction factor was applied to take into account the flux of the point-like source falling outside the detector (Okumura 1997). For the SW channel we adopted the SW5 PSF for all filters because, along with SW1, it was the only one present in the calibration library; the error thus introduced can only be of a few percent.
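The box-plus-ring photometry can be sketched as follows. The image, box size and PSF fraction are illustrative; the real PSF fractions come from the averaged library PSF images described above.

```python
import numpy as np

def box_photometry(image, center, half):
    """Source signal in a (2*half+1)^2 box, minus the background
    measured in a 1-pixel-wide ring around the box and normalised
    to the box area."""
    y, x = center
    box = image[y - half:y + half + 1, x - half:x + half + 1]
    big = image[y - half - 1:y + half + 2, x - half - 1:x + half + 2]
    ring_sum = big.sum() - box.sum()
    ring_npix = big.size - box.size
    background = ring_sum / ring_npix          # per-pixel background level
    return box.sum() - background * box.size   # background-subtracted signal

# Flat background of 2 plus a source of total signal 50 at the centre.
image = np.full((11, 11), 2.0)
image[5, 5] += 50.0
net = box_photometry(image, (5, 5), 2)         # a 5x5 box, for illustration

# The net signal is then divided by the PSF fraction falling in the box
# (0.7 is an assumed, illustrative value).
psf_fraction = 0.7
corrected = net / psf_fraction
```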
Finally, the source signal was converted to flux density using the coefficients in Blommaert (1997).
To compute the photometric error we divided the uncertainty sources into two parts: the first took into account the dark current subtraction, deglitching, flat fielding and signal-to-flux conversion, while the second considered the transient correction. The first group of error sources was derived from the Automatic Analysis Results (AAR; OLP v7.0 for the light curve data, OLP v6.3.2 for the spectrum data). The source flux values given by the AAR are not reliable because the transient correction is not performed, but the AAR absolute flux errors σ_AAR are a good estimate of the first group of errors (the AAR fluxes F_AAR are given in Tables 4 and 6). We assumed that the fluxes F that we derived have the same relative error σ_AAR/F_AAR. Thus, for our fluxes this part of the error is σ_1 = F·(σ_AAR/F_AAR), which accounts for all the uncertainty sources except the transient correction. We estimated that the error due to the transient correction is σ_2 of the order of 10% of F, which is the rounded maximum error on the stable signal S_∞, obtaining a total error of σ = √(σ_1² + σ_2²). We then assumed a σ of 10% for all our measurements (20% for the SW4 and SW10 filters).
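The quadrature combination of the two error contributions can be written out explicitly; the flux and AAR values below are purely illustrative, not numbers from the paper's tables.

```python
import math

def total_flux_error(flux, aar_flux, aar_error, transient_rel_error=0.10):
    """Combine the two photometric error contributions in quadrature.

    sigma_1 scales the AAR relative error to the corrected flux
    (dark subtraction, deglitching, flat fielding, flux conversion);
    sigma_2 is the transient-correction uncertainty, here taken as 10%.
    """
    sigma_1 = flux * (aar_error / aar_flux)
    sigma_2 = transient_rel_error * flux
    return math.hypot(sigma_1, sigma_2)   # sqrt(sigma_1^2 + sigma_2^2)

# Illustrative numbers: a 5% AAR relative error on a 120 mJy flux.
err = total_flux_error(flux=120.0, aar_flux=100.0, aar_error=5.0)
```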
The observations were done in rectangular chopped mode: the observed field of view switches alternately between the source and an off-source position 180″ away. This is necessary in order to measure the background level. The chopping direction was along the satellite Z-axis, which was slowly rotating by about one degree per day; thus the background was sampled each time in a different field of the sky, and a raster map was performed to check the stability of the background all around the source. The standard deviation of the background flux measured in the central pixel of the C100 detector, in the eight off-source positions of the scan, is 37 mJy. This value is much smaller than the error of the source flux (see Table 5). Such a small background fluctuation would slightly increase the scatter of the source flux; in any case, our results are compatible with the absence of variability (see Sect. 3).
Each observation of an astronomical target was immediately followed by a Fine Calibration Source (FCS) measurement, using internal calibration sources. These measurements were made in order to determine the detector responsivity, which is necessary to compute the target flux.
Each observation consisted of a series of integration ramps, each one made up of the sequence of voltage readouts between two destructive readouts.
PIA separates the operations to be performed on the data into different levels: at each level, PIA creates a data structure on which it operates, named according to the properties of the data. The first part of the data analysis was common to all the observations; the procedures then changed according to the characteristics of the observation (whether it was chopped or not, and whether the detector was receiving photons from the astronomical target or from the FCS).
At the beginning, PIA automatically converted the digital data from telemetry into meaningful physical units and created the data structure called Edited Raw Data (ERD). At the ERD level, some starting readouts and the last readout of each ramp were discarded, because they are disturbed by the voltage resetting; we also manually discarded the part of a ramp before or after a glitch (which causes a sudden jump of the readout value) in the cases where most of the ramp was unaffected and the glitch did not modify the detector responsivity. A correction for the non-linear responsivity of the detector was applied, using special calibration files. Then each ramp was fitted by a 1st order polynomial model; a signal (in V s-1) was obtained from the slope of every ramp, the slope being proportional to the incident power.

At the Signal per Ramp Data (SRD) level, the first half of the signals per chopper plateau were discarded because of stabilization problems. As the signal value depends on the integration time, a correction factor was applied and the signal was normalized to an integration time of 1/4 s. The dark current was subtracted using the PIA calibration files, which take into account the satellite position in the orbit. An algorithm was applied to detect and discard the signals that were anomalously high because of glitches; then the signals of each chopper plateau were averaged.

At the Signal per Chopper Plateau (SCP) level, the responsivity of each detector pixel was computed by taking the median of the FCS2 signals of the calibration measurements; then the vignetting correction was performed on the target observations. In the chopped measurements, the background calculated at the off-source position was subtracted to get the source signal.
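The core of the ERD-level processing, fitting each integration ramp with a straight line after discarding the reset-disturbed readouts, can be sketched as follows. The readout spacing, ramp length and discard counts are illustrative assumptions, not PIA's actual parameters.

```python
import numpy as np

def ramp_signals(ramps, discard_first=1, discard_last=1):
    """Fit each integration ramp with a 1st order polynomial and
    return the slopes (in V/s), after discarding the readouts
    disturbed by the voltage reset at the ends of each ramp.

    ramps : (nramp, nreadout) voltage readouts, assumed equally
            spaced in time (here 1 s apart, for illustration).
    """
    slopes = []
    for ramp in ramps:
        v = ramp[discard_first:len(ramp) - discard_last]
        t = np.arange(v.size, dtype=float)
        slope, _ = np.polyfit(t, v, 1)   # slope is the signal per ramp
        slopes.append(slope)
    return np.array(slopes)

# Two ideal noiseless ramps with slopes 0.5 and 0.25 V/s, 8 readouts each.
t = np.arange(8, dtype=float)
ramps = np.vstack([0.5 * t, 0.25 * t + 1.0])
signals = ramp_signals(ramps)
```

The resulting slopes, being proportional to the incident power, play the role of the per-ramp signals carried forward to the SRD level.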
As for the camera, the response of the photometer detectors shows some delay after a change in the incident flux. This effect causes signal losses in the chopped measurements, so a correction factor was applied. The signal was finally converted into power, using the responsivity obtained from the FCS measurement.
In the observations performed with the 3×3 pixel C100 detector, only the central pixel was used to compute the source flux density: since most of the Airy disk of a point-like source centred in the pixel lies in that same pixel (69% for C1_60 and 61% for C1_90), using the outer pixels just adds more noise than signal. The source flux density is defined as F_ν = C·P/f_PSF, where P is the incident power, C is the conversion factor of each filter (as given in the PIA calibration file pfluxconv.fits) and f_PSF is the fraction of the PSF that falls on the pixel considered when the source is located at its centre (ISOPHOT Observer's Manual 1994, Tables 2 and 4).
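As a worked example of the flux density definition above (the conversion factor is an assumed, illustrative value, not one from pfluxconv.fits; f_PSF = 0.69 is the C1_60 fraction quoted above):

```python
def flux_density(power, conv, psf_fraction):
    """Flux density of a point source measured in the central pixel:
    the incident power P is converted with the filter's conversion
    factor C and corrected for the PSF fraction f_PSF that falls on
    the pixel when the source sits at its centre (F_nu = C*P/f_PSF)."""
    return power * conv / psf_fraction

# Illustrative values: P = 1e-16 W, C = 1e17 Jy/W, f_PSF = 0.69.
f_nu = flux_density(power=1.0e-16, conv=1.0e17, psf_fraction=0.69)
```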
The absolute photometric error was computed by PIA during the data reduction process; it took into account the uncertainty in the determination of the slopes of the ramps and the errors associated with the other correction operations performed.
© European Southern Observatory (ESO) 2000
Online publication: March 28, 2000