Astron. Astrophys. 322, 943-961 (1997)


3. Principles of the simulation process

In this section we give a general description of the implemented simulation methods. A more detailed description is presented in Appendix 7. We shall concentrate on the calculations performed using three-dimensional cloud models. The same principles also apply to clouds of one or two dimensions.

3.1. The basic simulation methods

We have used two basic methods in the calculations. The first of these (method A) is similar to the traditional Monte Carlo simulation as described by Bernes (1979). The radiation field is simulated with the aid of photon packages which are generated at random locations in the cloud. The cloud itself is divided into a number of cells in which the physical parameters are assumed to be constant. Initially each photon package contains a number of photons calculated from the local gas properties and the number of model photons used in the simulation. To improve efficiency, the package contains photons from all simulated transitions. The package is moved in a random direction until it exits the cloud. Every time the package goes through a cell the number of absorbed photons is subtracted from the photon package and added to counters in the cell; the cell contains a separate counter for each transition. Between simulation steps the equilibrium equations are solved in the cells with the aid of the updated counters, and the simulation continues with the new population numbers.
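As a minimal sketch of this flow, the following Python fragment runs one iteration of method A under strong simplifications: a one-dimensional grid, a single transition, and a fixed absorbed fraction per cell. All identifiers (iterate_method_A, emissivity, absorption) are ours, introduced only for illustration and not taken from the original program.

    import numpy as np

    rng = np.random.default_rng(1)

    def iterate_method_A(n_packages, emissivity, absorption):
        """One iteration of method A on a toy 1D grid, one transition.

        emissivity[i] -- photons emitted by cell i during the iteration
        absorption[i] -- fraction of passing photons absorbed in cell i
        Returns the per-cell counters of absorbed photons."""
        n_cells = len(emissivity)
        counters = np.zeros(n_cells)
        for _ in range(n_packages):
            cell = rng.integers(n_cells)          # random emission location
            photons = emissivity[cell] * n_cells / n_packages
            step = rng.choice((-1, 1))            # random direction
            while 0 <= cell < n_cells:            # move until the package exits
                absorbed = photons * absorption[cell]
                counters[cell] += absorbed        # update the cell's counter
                photons -= absorbed
                cell += step
        return counters

    # between iterations the equilibrium equations are solved per cell
    # using `counters`, and the next iteration uses the new populations
    counters = iterate_method_A(1000, np.ones(30), np.full(30, 0.1))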

The second method (method B) differs from the standard MC method in some important details. Instead of generating emission events at random locations, the model photons are always started at the outer boundary of the cloud. Initially the package contains only background photons, and as it goes through a cell the number of absorption events is added to counters in the cells as usual. However, as the package passes through a cell, a part of the photons emitted by the cell during one iteration is also added to the photon package. The package goes through a cell in just one step, and since this addition is done only at the borders of the cells, the photons absorbed within the emitting cell must be dealt with explicitly. Furthermore, since only some of the photons emitted from the cell are added to each passing package, the total number of passing photon packages must be known. For this purpose we keep track of the total length that photon packages travel within each cell during one simulation step. On later iterations this knowledge is used to divide the total number of emitted photons between the photon packages that go through the cell. In principle the number of passing model photons should be the same for all cells. However, if we always repeat the simulation with the same random numbers, we can eliminate the errors caused by the fact that not all cells are hit by quite the same number of model photons.
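The sketch below, a companion to the one above, outlines method B under the same toy assumptions (1D grid, one transition). For simplicity it divides each cell's emission by the number of crossings recorded on the previous iteration, whereas the text above tracks the total path length; again, all names are illustrative.

    import numpy as np

    def iterate_method_B(background, emissivity, absorption, crossings_prev):
        """One iteration of method B on a toy 1D grid, one transition.

        Packages start at the cloud boundary carrying only background
        photons.  On crossing cell i a package deposits the absorbed
        photons into the counter and, at the cell border, picks up its
        share of the emissivity[i] photons the cell emits during the
        iteration.  Self-absorption inside the emitting cell must be
        handled separately (explicitly)."""
        n_cells = len(emissivity)
        counters = np.zeros(n_cells)
        crossings = np.zeros(n_cells)     # path statistics for the next iteration
        for step, start in ((1, 0), (-1, n_cells - 1)):  # one package per boundary
            photons, cell = background, start
            while 0 <= cell < n_cells:
                crossings[cell] += 1
                absorbed = photons * absorption[cell]
                counters[cell] += absorbed
                # absorb, then pick up this package's share of the emission
                photons = photons - absorbed \
                    + emissivity[cell] / max(crossings_prev[cell], 1)
                cell += step
        return counters, crossings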

Method B has some advantages over method A. First of all, the method guarantees that the photon package never becomes empty. However, since photons are added to the package only at the boundaries between the cells, the number of photons emitted and absorbed within the same cell must be calculated explicitly. In normal Monte Carlo simulation there is no need for this (nor is it possible), since the emission events are generated at random locations and the absorption events in the emitting cell are treated in the same way as in any other cell. This means, however, that in the normal MC method the model photons must be created in many different locations within the cell in order to get the right number of photons out of the cell. If the cell is optically thick, only photons emitted close to the surface of the cell are likely to escape. Method B does not present such problems. The number of escaping photons is calculated explicitly, and to achieve the same accuracy using the normal MC method one should generate a large number of emission events along the path of the model photon and literally calculate the integral of escaping photons using Monte Carlo integration. Clearly, doing this integration explicitly is a great advantage if some cells are optically thick. In clumpy clouds the density differences between the cells may be large, and method B should then be more efficient.
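The paper defers the exact expressions to its Appendix; as a hedged illustration only, a standard closed form for a homogeneous cell gives the fraction of photons, emitted uniformly along a path of optical depth tau, that escape the cell as (1 - exp(-tau))/tau. The sketch evaluates it safely:

    import numpy as np

    def escape_fraction(tau):
        """Fraction (1 - exp(-tau)) / tau of photons emitted uniformly
        along a path of optical depth tau through a homogeneous cell
        that escape without being absorbed (standard result, not
        necessarily the paper's exact treatment)."""
        tau = np.asarray(tau, dtype=float)
        safe = np.where(tau > 0.0, tau, 1.0)    # avoid 0/0 below
        return np.where(tau < 1e-4,
                        1.0 - 0.5 * tau,        # series limit for small tau
                        -np.expm1(-tau) / safe)

    # an optically thick cell lets few of its own photons out:
    print(escape_fraction([0.01, 1.0, 10.0]))   # ~[0.995, 0.632, 0.100]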

3.2. The use of random numbers

The random numbers used in the simulation process introduce random errors into the calculated quantities. This random noise acts as an important indicator of the quality of the results. However, the random component makes it difficult to track the convergence of the level populations accurately.

We have used both a normal pseudorandom number generator, mzran (Marsaglia & Zaman 1994), and a quasirandom number generator, sobseq (Press & Teukolsky 1989). Pseudorandom numbers mimic true random numbers, while quasirandom numbers are distributed more evenly and are in fact generated to avoid each other. In a real cloud the photons are emitted from truly random locations and towards random directions. During the simulation this large number of real photons must be approximated with a much smaller number of simulated photons, and therefore the random fluctuations in the simulated radiation field tend to be much larger. The use of quasirandom numbers should be helpful in this respect, since it ensures a more uniform distribution of the generated model photons.
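Purely for illustration, the sketch below contrasts a pseudorandom stream with a Halton sequence, a different quasirandom generator than sobseq, substituted here only because it fits in a few lines. The histograms show how much more evenly the quasirandom points cover the interval.

    import numpy as np

    def halton(n, base):
        """First n points of the Halton low-discrepancy sequence."""
        seq = np.zeros(n)
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            seq[i] = x
        return seq

    rng = np.random.default_rng(0)
    n = 256
    pseudo = rng.random(n)        # pseudorandom positions
    quasi = halton(n, 2)          # quasirandom positions

    # occupancy of ten equal bins: the quasirandom counts are nearly equal
    print(np.histogram(pseudo, bins=10, range=(0, 1))[0])
    print(np.histogram(quasi, bins=10, range=(0, 1))[0])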

During normal Monte Carlo simulation the number of simulated photon packages going through a cell depends on the location of the cell. In method B, on the other hand, as the packages are always started at the edge of the cloud, the density of the paths is approximately constant. It is essential that the number of passing photon packages is the same for all cells, since in method B this also affects the number of photons emitted from the cell. For that reason we favour the use of quasirandom numbers.

The random number generator can be reset after each iteration if the real radiation field is accurately sampled with the model photons generated during one iteration step. In that case the photon packages will be sent from the same locations and in the same directions as on the previous iterations, and we know the exact number of photon packages passing each cell and the exact number of photons to be added to each package passing through a cell. The random fluctuations can thus be eliminated (see Sec. 3.1), the changes in the level populations during the iterations will be smooth, and the convergence can be followed with great precision.
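A minimal sketch of this resetting, assuming a single fixed seed for the whole calculation (the seed value and function names are ours):

    import numpy as np

    SEED = 12345                # one fixed seed for the whole calculation

    def iterate(populations, n_packages):
        """One iteration with the generator reset at the start
        (illustrative).  Every iteration then repeats exactly the same
        package locations and directions; only the photon numbers,
        which depend on `populations`, change between iterations, so
        the population changes are smooth rather than noisy."""
        rng = np.random.default_rng(SEED)      # reset the generator
        starts = rng.random((n_packages, 3))   # identical on every call
        directions = rng.normal(size=(n_packages, 3))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        # ... trace the packages and update the cell counters ...
        return starts, directions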

3.3. Some efficiency issues

The physical conditions within the cells are assumed to be constant and therefore a photon package can always be moved through a cell with just one step. Since the step length is the same irrespective of the photon frequency there is no reason why all transitions should not be treated simultaneously. For this reason a photon package usually contains photons from different transitions.

This approach can be taken one step further. Instead of calculating a single Doppler shift for the emitted model photon, we treat the whole line profile at the same time, i.e. each photon package contains, for each transition, an array that gives the number of photons as a function of frequency. This is also needed in order to be able to add photons from different cells to a single photon package when simulation method B is used. This approach is very efficient since fewer separate photon packages need to be generated and some common intermediate results can be used to update all channels.
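One possible shape for such a package, sketched under the assumption of a fixed channel grid shared by all transitions (class and attribute names are ours):

    import numpy as np

    N_CHANNELS = 64                  # frequency channels across the line

    class PhotonPackage:
        """A package carrying, for every simulated transition, the
        number of photons in each frequency channel (illustrative
        structure, not the paper's own code)."""

        def __init__(self, transitions, background):
            # one spectrum per transition, e.g. "CO(1-0)", "CO(2-1)"
            self.photons = {t: np.full(N_CHANNELS, float(background[t]))
                            for t in transitions}

        def absorb(self, transition, profile):
            """Remove the photons absorbed in a cell, channel by
            channel, and return them so that the cell counters can be
            updated with a single array operation."""
            absorbed = self.photons[transition] * profile
            self.photons[transition] -= absorbed
            return absorbed

    pkg = PhotonPackage(("CO(1-0)", "CO(2-1)"),
                        {"CO(1-0)": 1.0, "CO(2-1)": 0.5})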

The number of channels must be sufficiently large so that the interactions between cells with different velocities can be calculated with the required accuracy. The cost of calculating N channels is, of course, only N times the cost of calculating one channel, or even less if one considers that some calculations are common to all channels. In order to get a comparable accuracy by using model photons with random Doppler shifts, as in the traditional MC method, the required number of model photons far exceeds N, and the decrease in random errors is proportional only to 1/√N.

We have found that with method B almost all of the CPU time is used in updating the photon packages, i.e. calculating the number of absorption and emission events in the cells. The time used to calculate the photon path, or even the time spent solving the equilibrium equations, is insignificant in comparison. In order to avoid calculating the emission and absorption profiles each time a photon package passes a cell, we create at the beginning a two-dimensional array containing Gaussians with different widths. During the simulation the emission and absorption profiles are read from this array. A pointer is used to point to the Gaussian with the right width, and Doppler shifts are taken into account by moving the pointer by the correct number of channels.
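A minimal sketch of such a lookup table, with all grid sizes and the sign convention of the shift chosen purely for illustration: the padded channel axis lets a Doppler shift be applied by indexing alone, with no exp() evaluated at run time.

    import numpy as np

    N_CHANNELS = 64
    N_WIDTHS = 128                 # grid of tabulated line widths
    PAD = N_CHANNELS               # padding leaves room for Doppler shifts

    # precompute normalized Gaussians of different widths, once
    x = np.arange(-PAD, N_CHANNELS + PAD)
    widths = np.linspace(0.5, 8.0, N_WIDTHS)            # in channels
    GAUSS = np.exp(-0.5 * ((x[None, :] - (N_CHANNELS - 1) / 2.0)
                           / widths[:, None]) ** 2)
    GAUSS /= GAUSS.sum(axis=1, keepdims=True)

    def profile(width_index, shift_channels):
        """Line profile for a cell: pick the row with the right width
        and slide the window by the Doppler shift (|shift| <= PAD)."""
        start = PAD + shift_channels
        return GAUSS[width_index, start:start + N_CHANNELS]

    prof = profile(10, 3)          # width index 10, shifted by 3 channels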

In one- and two-dimensional clouds the cells are of different sizes, and e.g. the probability of a package going through the smallest inner spheres in a 1D cloud is very small. Therefore it may be necessary to implement a weighting scheme so that the generated packages are preferentially directed towards the inner parts of the cloud. The weighting is easy to implement in a normal MC simulation, but it is even simpler when method B is used. All calculations on 1D models in this paper have been made using such weighting.
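One illustrative importance-sampling scheme, not necessarily the paper's own: for boundary-started rays through a sphere of radius R, the natural distribution of impact parameters is p(b) = 2b/R^2 (uniform over the cross section), so drawing b uniformly and attaching the weight p(b)/q(b) = 2b/R makes the inner spheres hit often without biasing the result.

    import numpy as np

    rng = np.random.default_rng(3)
    R = 1.0                                      # cloud radius

    def sample_impact_parameter(weighted):
        """Draw the impact parameter b of a boundary-started ray in a
        1D (spherical) cloud, returning (b, weight)."""
        if weighted:
            b = rng.uniform(0.0, R)              # uniform in b ...
            return b, 2.0 * b / R                # ... carries weight p/q
        return R * np.sqrt(rng.uniform()), 1.0   # inverse-CDF sampling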

3.4. Convergence acceleration

The calculations are usually started with LTE conditions. The number of iterations required to reach an equilibrium state with a given accuracy depends on various properties of the cloud and the molecule, e.g. the number of populated excitation levels. The convergence can be very slow especially if the optical depth of the cloud is very high. This problem is well known and several acceleration methods have been developed for methods other than Monte Carlo simulation (see e.g. Rybicki & Hummer 1991; Dickel & Auer 1994).

If the random fluctuations of the level populations are eliminated by resetting the random number generators after each iteration, the change of the level populations will be smooth. This provides a means to speed up the convergence of a Monte Carlo simulation.

We have used the following simple scheme. On each iteration the changes in the level populations are computed in the usual manner. In each cell and for each energy level the computed change is first multiplied by a weight w before it is added to the values from the previous iteration. Initially the weights are set to w = 1.0, which corresponds to normal updating. However, the weights themselves are also changed during the calculations. Each time the change in a level population has the same sign as on the previous iteration, the corresponding weight is multiplied by a factor 1 + k, where k > 0.0. On the other hand, if the sign changes, the weight is reset to w = 1.0. This method prevents the program from taking numerous small steps while the current solution is still far from the correct one: the corrections are amplified as long as the computed change is in the same direction as on the previous iterations. As the solution converges it is also necessary to decrease the value of k in order to prevent the level populations from oscillating around the correct value. This can be done by multiplying k on each iteration by some constant c, 0 < c < 1. With suitable parameter values this method brings the solution quickly close to the correct one, while during the final iterations it has no effect on the calculations. Note that direct extrapolation of the level populations would give very little savings in execution time.
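The sketch below encodes one update step of this scheme; the factor 1 + k and the damping constant c follow our reconstruction of formulas garbled in the source, and the function name is ours.

    import numpy as np

    def accelerated_update(pop, prev_change, change, w, k, c):
        """One accelerated update of the level populations (a sketch of
        the scheme described above).  pop, prev_change, change and w
        are arrays over cells and levels; k and c are scalars with
        k > 0 and 0 < c < 1."""
        same_sign = np.sign(change) == np.sign(prev_change)
        w = np.where(same_sign, w * (1.0 + k), 1.0)   # grow or reset weight
        pop = pop + w * change                        # weighted update
        return pop, w, k * c                          # damp k each iteration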

As an example we take the cloud model of Bernes (1979). For the calculations the one-dimensional model cloud is divided into 30 spheres of equal volume. In Fig. 1a we show the change in the level populations with and without this acceleration method. The iterations were stopped after the relative change of all level populations in all of the 30 spheres was less than 0.005 on one iteration step. With suitable weighting factors the number of required iterations could be halved. At least in this case the solution obtained using the acceleration method also seems to be closer to the correct solution; e.g. the population of the level J=2 reaches after 20 iterations the value that would be reached only after about 50 iterations without acceleration.

[FIGURE] Fig. 1a and b. a The change in the level populations of the ¹²CO molecule in the innermost sphere of a spherically symmetric, one-dimensional cloud. The cloud is identical to the model cloud of Bernes (1979) but divided into 30 spheres of equal volume. The thick lines show the convergence during normal iterations and the thin lines the convergence using our simple acceleration method. The dashed lines indicate the correct values of the level populations. b The excitation temperatures of individual cells as a function of distance from the cloud centre in a three-dimensional model cloud similar to the cloud in Fig. 1 of Park & Hong (1995). The cloud consists of 31 × 31 × 31 cells and has a constant density of n = 2.0 × 10³ cm⁻³ and a kinetic temperature of 15 K. The excitation temperatures of the transitions ¹²CO(1-0) and ¹²CO(2-1) are shown. The points for the transition J=1-0 have been moved up by 2.0 K. The number of iterations was 15 and the number of photon packages 20 000 per iteration.

In some other clouds we have been able to bring the number of required iterations down to a third. The best values of the parameters change somewhat from cloud to cloud, but acceleration can normally be achieved within a wide parameter range. Therefore, we think that the method is also of practical use when studying optically thick clouds. On the other hand, clouds like those in the article by Liszt & Leung (1977) usually require only about ten iterations or fewer if the same convergence criterion is used. In such cases the use of this method may even increase the number of iterations by one or two.

From Fig. 1a one can also see that testing only the relative changes in the level populations is not always a sufficient or reliable convergence criterion. Even though the change per iteration is small, the solution might still be far from the correct one. In this case the random number generators were reset after each iteration, which means that there is no random noise in the level populations. If this were not so, it would be very difficult to devise a meaningful convergence criterion based only on the level populations.

