

Astron. Astrophys. 319, 995-1006 (1997)


3. Discretisation and numerical resolution

The spherical nucleus with the radius [FORMULA] is subdivided into concentric shells. The radial depth of the shells increases from the surface to the centre in exponentially growing steps. Each meridian is cut into [FORMULA] pieces of unequal size. The corresponding latitudinal surface belts are chosen such that each belt receives an identical amount of solar flux for zero obliquity of the nucleus rotation axis. To each belt corresponds the energetically averaged mean latitude

[EQUATION]

The sphere is divided into segments of equal size. The subvolumes are discretised using a centred spatial grid.
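
As an illustration of this discretisation, the sketch below constructs an exponentially growing radial grid and latitudinal belts of equal absorbed flux, and evaluates an energy-weighted mean latitude for each belt. The nucleus radius, the numbers of shells and belts, the growth factor of the radial step, and the assumption that the rotation-averaged insolation at zero obliquity scales with the cosine of the latitude are illustrative choices only, not the values or the exact weighting used in the paper.

    /* Sketch of the spatial grid construction described above.
     * Illustrative assumptions (not taken from the paper): nucleus radius
     * R = 10 km, NR = 30 radial shells whose thickness grows by a factor
     * Q = 1.2 from the surface inward, NB = 10 latitudinal belts, and a
     * rotation-averaged insolation proportional to cos(latitude).        */
    #include <stdio.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define R_NUCLEUS 10.0e3   /* nucleus radius [m]            */
    #define NR 30              /* number of radial shells       */
    #define Q  1.2             /* growth factor of radial steps */
    #define NB 10              /* number of latitudinal belts   */

    /* absorbed flux integrated from the south pole up to latitude phi,
     * proportional to the integral of cos^2(phi)                       */
    static double cumflux(double phi)
    {
        return 0.5 * (phi + sin(phi) * cos(phi)) + M_PI / 4.0;
    }

    int main(void)
    {
        /* radial shells: dr_k = dr0 * Q^k, their sum equals R_NUCLEUS */
        double dr0 = R_NUCLEUS * (Q - 1.0) / (pow(Q, NR) - 1.0);
        double r = R_NUCLEUS;
        for (int k = 0; k < NR; k++) {                /* surface to centre */
            double dr = dr0 * pow(Q, k);
            printf("shell %2d: %9.2f m .. %9.2f m\n", k, r - dr, r);
            r -= dr;
        }

        /* latitudinal belts: each belt receives the same absorbed flux */
        double total = cumflux(M_PI / 2.0);           /* = pi/2 */
        double lo = -M_PI / 2.0;
        for (int j = 1; j <= NB; j++) {
            double a = lo, b = M_PI / 2.0, target = j * total / NB;
            for (int it = 0; it < 60; it++) {         /* bisection for the upper edge */
                double m = 0.5 * (a + b);
                if (cumflux(m) < target) a = m; else b = m;
            }
            double hi = 0.5 * (a + b);
            /* energy-weighted mean latitude of the belt (midpoint rule) */
            double num = 0.0, den = 0.0;
            for (int i = 0; i < 1000; i++) {
                double phi = lo + (i + 0.5) * (hi - lo) / 1000.0;
                double w = cos(phi) * cos(phi);
                num += phi * w;
                den += w;
            }
            printf("belt %2d: %7.2f .. %7.2f deg, mean latitude %7.2f deg\n",
                   j, lo * 180.0 / M_PI, hi * 180.0 / M_PI,
                   (num / den) * 180.0 / M_PI);
            lo = hi;
        }
        return 0;
    }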

The set of diffusion equations is solved by applying a finite difference method. An efficient scheme for parallel computing is the operator splitting method (e.g. Hockney & Eastwood 1988; Press et al. 1992). The basic idea of this method is to split the time integration step [FORMULA] into partial steps, each of which advances the solution along one spatial direction only. The diffusion equation, written in operator form,

[EQUATION]

(with [FORMULA] and the differential operator [FORMULA]) is differenced implicitly in two time steps weighted with the constant value [FORMULA]

[EQUATION]

Rearranging and writing in matrix vector notation, we have

[EQUATION]

with [FORMULA]. [FORMULA] is the unit matrix. The right-hand side can be evaluated readily since it contains only "old" values. The operators on the left-hand side of Eq. (42) are tridiagonal matrices, so the corresponding linear systems can be solved with the Thomas algorithm.
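
Each one-dimensional sweep of such a splitting scheme therefore reduces to a single tridiagonal solve. The sketch below shows the Thomas algorithm applied to one implicitly weighted step of a one-dimensional diffusion equation on a uniform grid; the weighting factor is written here as theta, and the grid size, diffusion number, weighting value and fixed-temperature boundaries are illustrative choices, not those of the actual model.

    /* One theta-weighted implicit step of a 1-D diffusion equation,
     * solved with the Thomas algorithm.  Grid size, diffusion number,
     * weighting and boundary values are illustrative only.             */
    #include <stdio.h>

    #define N 50

    /* Thomas algorithm for a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] */
    static void thomas(int n, const double *a, const double *b,
                       const double *c, const double *d, double *x)
    {
        double cp[N], dp[N];
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {               /* forward elimination */
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = c[i] / m;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--)            /* back substitution */
            x[i] = dp[i] - cp[i] * x[i + 1];
    }

    int main(void)
    {
        double theta = 0.5;      /* weighting factor (illustrative)  */
        double alpha = 0.4;      /* diffusion number kappa*dt/dx^2   */
        double T[N], Tnew[N], a[N], b[N], c[N], d[N];

        for (int i = 0; i < N; i++) T[i] = 0.0;
        T[0] = 1.0;              /* fixed hot boundary (illustrative) */

        /* assemble (I - theta*alpha*L) T^new = (I + (1-theta)*alpha*L) T^old */
        for (int i = 0; i < N; i++) {
            if (i == 0 || i == N - 1) {              /* Dirichlet boundaries */
                a[i] = 0.0; b[i] = 1.0; c[i] = 0.0; d[i] = T[i];
            } else {
                a[i] = -theta * alpha;
                b[i] = 1.0 + 2.0 * theta * alpha;
                c[i] = -theta * alpha;
                d[i] = (1.0 - theta) * alpha * T[i - 1]
                     + (1.0 - 2.0 * (1.0 - theta) * alpha) * T[i]
                     + (1.0 - theta) * alpha * T[i + 1];
            }
        }
        thomas(N, a, b, c, d, Tnew);

        for (int i = 0; i < N; i += 10)
            printf("T[%2d] = %f\n", i, Tnew[i]);
        return 0;
    }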

In order to represent the derivatives at the boundaries accurately by a central difference formula, we use the standard device of a fictitious grid point beyond the boundary.
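
As a minimal sketch of this device, assume for illustration that the gradient of the temperature is prescribed at the boundary (for instance a zero-flux condition at the centre). The central difference of the boundary condition then eliminates the fictitious value from the difference equation of the boundary node:

    /* Boundary node i = 0 with a fictitious node i = -1 (illustrative
     * Neumann condition dT/dr = g; g = 0 for a zero-flux boundary).
     * Central difference of the condition: (T[1] - T[-1])/(2*dr) = g,
     * hence T[-1] = T[1] - 2*dr*g, and the fictitious value drops out
     * of the second derivative (T[-1] - 2*T[0] + T[1])/dr^2.           */
    double second_derivative_at_boundary(const double *T, double dr, double g)
    {
        return (2.0 * T[1] - 2.0 * T[0] - 2.0 * dr * g) / (dr * dr);
    }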

Stability is given for [FORMULA]. We have chosen [FORMULA] (consistent scheme) for the heat diffusion equation. For large diffusion coefficients, very slowly decaying finite oscillations can occur between [FORMULA] in the neighbourhood of discontinuities (Smith 1985). To avoid this, we have chosen [FORMULA] for the gas diffusion equations.
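
This behaviour can be illustrated with the amplification factor of the weighted scheme applied to a single decaying mode du/dt = -lambda*u; the size of the diffusion number chosen below is arbitrary.

    /* Amplification factor g = (1 - (1-theta)*s) / (1 + theta*s), s = lambda*dt.
     * |g| <= 1 for every s when theta >= 1/2; g becomes negative (slowly
     * decaying oscillations) when (1-theta)*s > 1; theta = 1 keeps g > 0.     */
    #include <stdio.h>

    static double amplification(double theta, double s)
    {
        return (1.0 - (1.0 - theta) * s) / (1.0 + theta * s);
    }

    int main(void)
    {
        double s = 50.0;   /* large diffusion number (illustrative) */
        printf("theta = 0.5: g = %f\n", amplification(0.5, s));  /* about -0.92 */
        printf("theta = 1.0: g = %f\n", amplification(1.0, s));  /* about +0.02 */
        return 0;
    }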

The computation is performed with the Message-Passing Interface MPI (Message-Passing Interface Forum 1995) on a CRAY T3D parallel supercomputer. The partitioning has to account for the stepwise integration in the radial and meridional dimensions. We have chosen a regular domain decomposition for each integration step. After each integration step the entire data matrix is updated by means of a collective communication. The amount of transferred data could be reduced by using point-to-point communications with a structured data type, but the communication would then be more expensive in time because of the non-contiguous data access. Various mathematical functions are vectorised by using the corresponding subroutines of the Benchlib library. For the given problem size, the most efficient use of the machine was found with 8 processors. In this configuration the average computation time for one time step is about 50 ms on a CRAY T3D.
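
The communication pattern can be sketched with MPI as follows; the grid dimensions, the assumption that the number of belts is divisible by the number of processes, and all variable names are illustrative and do not reproduce the actual code. Gathering contiguous blocks with a collective call keeps the data access contiguous, in line with the trade-off discussed above.

    /* Regular domain decomposition over latitudinal belts with a collective
     * update of the complete data matrix after each integration step.       */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    #define NLAT 32            /* number of latitudinal belts (illustrative)  */
    #define NRAD 64            /* number of radial grid points (illustrative) */

    static double field[NLAT][NRAD];   /* full data matrix on every process */

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int nloc  = NLAT / size;        /* contiguous block of belts per process */
        int first = rank * nloc;        /* NLAT is assumed divisible by size     */
        double *local = malloc((size_t)nloc * NRAD * sizeof *local);

        /* one integration step: each process sweeps only its own belts */
        for (int j = 0; j < nloc; j++)
            for (int i = 0; i < NRAD; i++)
                local[j * NRAD + i] = field[first + j][i] + 1.0;  /* stands in for the sweep */

        /* collective update: every process receives the complete matrix again */
        MPI_Allgather(local, nloc * NRAD, MPI_DOUBLE,
                      field, nloc * NRAD, MPI_DOUBLE, MPI_COMM_WORLD);

        free(local);
        if (rank == 0)
            printf("integration step completed on %d processes\n", size);
        MPI_Finalize();
        return 0;
    }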


