The University of Southampton

The Lyceum Linux Teaching Cluster

The Linux teaching cluster, Lyceum 2, provides a powerful Linux High Performance Computing facility.

Lyceum is targeted at undergraduate and MSc student projects that require substantially greater computational power or memory than is available on the Windows workstations. It is particularly designed for projects where the number or length of runs required is impractical on individual PCs. Research postgraduates and staff should use the much larger Iridis research cluster for their research computing needs; Iridis is not available for teaching use.

Project/course tutors who would like their students to have access to Lyceum should send a request to ServiceLine with a list of usernames. Individual students should ask their tutor to submit a request to ServiceLine on their behalf.


The cluster is accessed via a powerful login node with 16 processor cores and 128 GB of RAM. This node can be used interactively and to submit jobs to a further 32 compute nodes, each with 16 processor cores and at least 32 GB of memory (a cluster node is an individual computer). Eight of the compute nodes have 64 GB of memory, providing for jobs with very high memory requirements. Finally, a separate management node provides a dedicated filestore of around 8 TB for the cluster. The login node and compute nodes use 2.2 GHz Intel "Sandy Bridge" processors, and individual nodes in the system are connected by gigabit Ethernet. The theoretical peak performance of the system is around 9 TFlops (9 million million floating-point operations per second).

Operating System and Job Control

The operating system is Red Hat Enterprise Linux version 6.3. Users who are not already familiar with UNIX/Linux operating systems will need to be prepared to acquire a basic working knowledge. In practice this is not usually a big hurdle: with software packages such as Fluent, Ansys or Matlab, the GUI looks the same as it does under Windows.
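The basic knowledge needed amounts to a handful of shell commands for moving files around and inspecting output. As an illustration only (the file and directory names below are made up for the example), a typical session at the command line might look like:

```shell
# Everyday shell commands, assuming a standard bash shell.
mkdir -p demo                    # create a working directory
echo "hello" > demo/input.dat    # write a small test file
cp demo/input.dat demo/copy.dat  # copy a file
ls -l demo                       # list the directory contents
cat demo/copy.dat                # display a file's contents
```

Commands like these, plus a text editor, are usually all that is needed for day-to-day work on the cluster.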

Small, short jobs can be run on the login node in a multi-user environment. When a user needs to scale up to larger or longer jobs, or to run several jobs at once, jobs can be submitted to the compute nodes via a batch queue system, which gives the user exclusive access to a node for the duration of their job. Job scheduling policies are designed to prevent a small group of users from dominating the system. Example job templates are available for common software packages to assist with submitting batch jobs to the compute nodes. So, often, all that is required to get started is a GUI-based text editor and a single command to submit a job.
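As a sketch of what such a job submission might look like, here is a PBS-style job script. Note that the scheduler, resource directives, submission command and program name below are assumptions for illustration, not confirmed details of Lyceum; the example job templates provided on the cluster are the authoritative starting point.

```shell
#!/bin/bash
# Hypothetical batch job script (the actual directives and queue
# configuration on Lyceum may differ; see the provided templates).
#PBS -l nodes=1:ppn=16      # request one 16-core compute node
#PBS -l walltime=02:00:00   # maximum run time of two hours
cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
./my_program input.dat > output.log   # my_program is a placeholder
```

With a scheduler of this kind, the script would be submitted with a command such as `qsub myjob.sh`, after which the queue system allocates a compute node and runs the script there.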
