Hardware requirements

Theoretically, these pieces of software have modest hardware requirements; however, as with all simulation problems of this nature, ``bigger is better''. Ideally, a machine with a minimum of one gigabyte of RAM should be used, to maximise the size of the problem that can be solved.

Each discrete cell within a micromagnetic problem to be solved with
OOMMF consumes approximately one kilobyte of RAM; therefore, to solve a system with $ 1\times 10^6$ cells, one gigabyte of RAM is required just to hold the simulation data. This figure does not take into account the size of the simulation package itself, which must also be loaded into RAM and creates a fixed overhead.
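As a rough sizing sketch (the helper function and the $ 100\times100\times100$ mesh below are illustrative assumptions, not part of OOMMF), the one-kilobyte-per-cell figure translates into a memory estimate as follows:

\begin{verbatim}
# Illustrative sketch only: CELL_BYTES reflects the approximate
# one-kilobyte-per-cell cost of OOMMF quoted above; the mesh size
# is a hypothetical example.
CELL_BYTES = 1024

def simulation_ram_bytes(nx, ny, nz, cell_bytes=CELL_BYTES):
    """Estimate the RAM consumed by the cells of an nx x ny x nz mesh."""
    return nx * ny * nz * cell_bytes

# A 100 x 100 x 100 mesh contains 10^6 cells:
ram = simulation_ram_bytes(100, 100, 100)
print(ram / 2**30)  # ~0.95 GiB, consistent with the one-gigabyte figure
\end{verbatim}

Any such estimate must then be increased by the fixed overhead of the simulation package itself and by the operating system overheads discussed below.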

Once operating system overheads are considered, it is clear that the amount of physical RAM in the machine should be greater than the amount of RAM required by the simulation. This is primarily to avoid ``thrashing'', a situation in which the operating system is forced to temporarily write (``swap'') areas of RAM to the hard disk and read other areas back into RAM from the disk. The precise amount of RAM required for operating overheads varies from system to system; a machine dedicated to, and optimised for, performing only simulations may need only a few megabytes reserved for the operating environment, whereas a workstation running other applications concurrently (e.g. visualisation software, e-mail clients, document editors and web browsers) may require several hundred megabytes.

Bearing in mind that the access times of modern hard disks are several orders of magnitude greater than those of RAM (milliseconds for hard disk drives versus nanoseconds for RAM), a simulation that is forced to swap in this way is slowed by a comparable factor, making its successful completion impossible from a practical standpoint. Even in an optimised scenario where seek latency is eliminated, the hard disk can be expected to deliver data approximately 100 times more slowly than RAM (Barclay et al., 2003).
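As a rough illustration (the figures below are representative values assumed for this example, not measurements), comparing a hard disk seek time of the order of $ 5$ ms with a DRAM access time of the order of $ 50$ ns gives a ratio of

\begin{displaymath}
\frac{5\times 10^{-3}\,\mathrm{s}}{5\times 10^{-8}\,\mathrm{s}} = 10^{5},
\end{displaymath}

so each memory access that must be satisfied from disk costs the equivalent of roughly $ 10^5$ RAM accesses; a simulation whose working set is constantly being swapped in this way cannot realistically be expected to finish.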

The speed at which the processor can perform floating-point calculations is the primary factor determining the time a simulation will take to complete. The ideal combination is a processor with a fast floating-point unit and a compiler able to take full advantage of that unit when optimising the simulation source code; in our own studies we note that carefully chosen compiler options can increase the execution speed of the simulation threefold.

Additional methods such as high-throughput batch processing (Litzkow, 1987; Litzkow et al., 1988) and clustering (Ridge et al., 1997) allow either sets of simulations to be performed (e.g. many small computations, such as those needed for phase diagrams) or single larger computations which would otherwise be impossible without the memory capacity and processing power of a supercomputer. OOMMF, unlike magpar, is unable to take advantage of the message-passing interface (Snir et al., 1995; Walker, 1992) common to computational clusters.

