3. Image segmentation, identification, and tracking of objects
For further analysis we selected a ( pixel) field covering the large central umbral core of the leading sunspot (see box in Fig. 1). This umbral core contained 5 dark nuclei and a varying number of UDs. The boundary of the umbral core was defined individually in each frame at the iso-intensity level of after boxcar smoothing (applied to eliminate the ragged borders of the umbral core). The intensity signal in the region outside the umbral core was set to zero.
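The core-isolation step can be sketched as follows. The boxcar width and iso-intensity level are placeholders (the original values did not survive extraction), and the use of `scipy.ndimage` is our choice, not the paper's:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def isolate_umbral_core(image, iso_level, smooth_width=5):
    """Keep only the umbral core: threshold the boxcar-smoothed image
    at an iso-intensity level and zero the signal outside the core.
    `iso_level` and `smooth_width` are illustrative values only."""
    smoothed = uniform_filter(image, size=smooth_width)  # boxcar smoothing
    core_mask = smoothed < iso_level                     # umbra is dark
    result = image.copy()
    result[~core_mask] = 0.0                             # zero outside the core
    return result
```

Smoothing before thresholding is what removes the ragged borders: isolated bright or dark pixels near the boundary no longer flip the mask.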
UDs were isolated individually in each frame of the umbral core using a simple image segmentation method based on an edge-enhancement algorithm: for each frame, a differential image was computed by subtracting a smoothed image (boxcar ) from the original one. From this differential image we computed a binary mask, setting pixel values higher than an empirically estimated threshold (0.015) to 1 and the rest to 0. The original image was then multiplied by this mask, producing a segmented image in which the bright peaks (UDs) retained their original intensity while the background was set to zero. An example of this image segmentation process is shown in Fig. 2.
To identify and track UDs in time, we developed a procedure based on the following criteria: The objects under study (UDs) are formed from the non-zero intensity pixels. Only side-by-side neighbouring pixels belong to the same object. Single-pixel objects are considered noise and rejected. The pixels forming an object are labelled with an object identification number which, together with the pixel coordinates, is stored in memory. In the next step, the spatial coincidences of objects in each pair of subsequent frames are investigated. Two objects are identified as predecessor and successor if they coincide in the coordinates of at least one pixel in both frames. Formation, death, splitting, and merging of objects are taken into account. In the case of splitting, the brightest object is adopted as the successor, while if merging occurs, the predecessor of the merged object is identified as the object with the longest history. In moments of poor seeing some objects may "disappear" (they are not resolved) in one or more frames and then reappear when the seeing improves. If an object is missing in only one or two successive frames, and the distance between the locations of disappearance and reappearance is less than 0.″3, the history of the object (the record of the maximum intensity together with its coordinates and the total number of pixels in the object) is regarded as uninterrupted.
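The labelling and frame-to-frame matching criteria above can be sketched as follows. The 4-connected structuring element implements the side-by-side-neighbours rule; the splitting/merging tie-breaking (brightest successor, longest-history predecessor) and the seeing-gap bridging are omitted for brevity:

```python
import numpy as np
from scipy.ndimage import label

# 4-connectivity: only side-by-side neighbouring pixels join the same object
FOUR_CONN = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]])

def label_objects(segmented):
    """Label non-zero pixels into objects and reject single-pixel
    objects as noise, as described in the text."""
    labels, n = label(segmented > 0, structure=FOUR_CONN)
    for i in range(1, n + 1):
        if np.sum(labels == i) < 2:       # single-pixel object -> noise
            labels[labels == i] = 0
    return labels

def match_objects(prev_labels, next_labels):
    """Predecessor/successor pairs: two objects in subsequent frames are
    linked if they coincide in the coordinates of at least one pixel."""
    pairs = set()
    overlap = (prev_labels > 0) & (next_labels > 0)
    for p, s in zip(prev_labels[overlap], next_labels[overlap]):
        pairs.add((int(p), int(s)))
    return pairs
```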
To improve the reliability of the results, we excluded all objects with lifetimes shorter than 3 frames (89 s). Variable image quality may cause spurious merging and splitting, which can make independent objects appear related. This would lead to large displacements of the maximum-intensity positions and to unrealistically high proper-motion velocities. The problem can be partially remedied by eliminating objects with velocities larger than 1 km/s, since we do not expect such velocities in the umbra (cf. Kitai 1986; Molowny-Horas 1994; Sobotka et al. 1995).
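The two selection criteria amount to a simple filter over the tracked objects. The per-object record layout below (`n_frames`, `v_kms`) is a hypothetical one chosen for illustration:

```python
def keep_reliable(tracks, min_frames=3, vmax_kms=1.0):
    """Apply the selection from the text: drop objects living fewer
    than `min_frames` frames and objects with proper-motion velocities
    above `vmax_kms`. `tracks` maps object id -> record dict with
    'n_frames' and 'v_kms' keys (hypothetical field names)."""
    return {oid: t for oid, t in tracks.items()
            if t['n_frames'] >= min_frames and t['v_kms'] <= vmax_kms}
```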
After applying these restrictions, we obtained a sample of 662 UDs used for further statistical analysis. The procedure described above yields the lifetime of each object (given by the number of frames in which it is present) and, for each frame, the magnitude and position of its maximum intensity and its effective diameter, derived from the number of pixels (area) of the object. The average proper-motion velocities of the features were calculated from their positions by means of least-squares linear fits (cf. Molowny-Horas 1994). The results concerning the intensity variations and proper motions will be described in a separate paper.
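The effective-diameter and velocity computations can be sketched as below; the pixel scale and frame spacing passed in by the caller are illustrative parameters, not values from the text:

```python
import numpy as np

def effective_diameter(n_pixels, pixel_km):
    """Diameter of the circle whose area equals the object's pixel area
    (n_pixels * pixel_km**2)."""
    return 2.0 * pixel_km * np.sqrt(n_pixels / np.pi)

def proper_motion_velocity(xs, ys, dt_s, pixel_km):
    """Average proper-motion velocity (km/s) from least-squares linear
    fits to the maximum-intensity positions over time
    (cf. Molowny-Horas 1994). xs, ys are pixel coordinates per frame."""
    t = np.arange(len(xs)) * dt_s
    vx = np.polyfit(t, np.asarray(xs, dtype=float) * pixel_km, 1)[0]
    vy = np.polyfit(t, np.asarray(ys, dtype=float) * pixel_km, 1)[0]
    return np.hypot(vx, vy)
```

Fitting a straight line to all positions, rather than differencing consecutive frames, averages out the frame-to-frame jitter introduced by seeing.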
© European Southern Observatory (ESO) 1997
Online publication: March 26, 1998