Cluster Details

The Callisto cluster consists of a head node, 8 compute nodes, and a GPU server node.  Students and faculty who have accounts on the Callisto cluster log into the head node and work through it: compiling code, submitting jobs, and accessing the cluster's disk storage.

The head node and each compute node contain 2 quad-core 2.4 GHz Opteron processors, each processor having a 512 KB cache.  The peak theoretical performance of each node is 76.8 Gigaflops (billion floating-point operations per second), so the combined peak theoretical performance of the head node and the 8 compute nodes is 691.2 Gigaflops.  Each of these nodes contains 16 GBytes of 800 MHz memory, for a total of 144 GBytes of RAM.  In addition, the head node has a redundant power supply and 4 TBytes of data storage provided by a RAID-6 storage array.  RAID-6 distributes data along with two independent sets of parity information across several hard drives, so no data is lost even if two drives fail at the same time.
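The peak figures above can be checked with a little arithmetic, assuming 4 floating-point operations per core per clock cycle (this rate is not stated in the specifications, but it is the usual figure for Opteron processors of this generation):

\[ 2 \text{ processors} \times 4 \text{ cores} \times 2.4\ \text{GHz} \times 4\ \text{FLOPs/cycle} = 76.8\ \text{Gigaflops per node} \]
\[ 9 \text{ nodes} \times 76.8\ \text{Gigaflops} = 691.2\ \text{Gigaflops}, \qquad 9 \text{ nodes} \times 16\ \text{GBytes} = 144\ \text{GBytes of RAM} \]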

The GPU server node is a Helios Xn-1113G 480-core GPU server, consisting of 1 quad-core Intel Xeon processor, 12 GBytes of memory, and 2 Nvidia Tesla C1060 GPUs.  The Tesla units are the workhorses of this server: each contains 240 processing cores running at 1.3 GHz, with a peak single-precision performance of 933 Gigaflops per unit.
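The 933 Gigaflop figure is Nvidia's quoted single-precision peak for the C1060; it assumes (our reading of the vendor figure, not stated above) that each core can issue 3 floating-point operations per cycle at the exact 1.296 GHz shader clock:

\[ 240 \text{ cores} \times 1.296\ \text{GHz} \times 3\ \text{FLOPs/cycle} \approx 933\ \text{Gigaflops (single precision) per Tesla unit} \]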

The cluster components are connected by an Enterasys D2 Gigabit Ethernet switch, which provides rapid communication between the head node and the compute nodes or the GPU node, as well as among the compute nodes themselves.

The power and benefit of the Callisto cluster lie in the ability to run several computationally intensive applications at the same time on different processors, to run much larger applications using all 8 cores of a compute node, or to use the cores of two or more compute nodes for an extremely large-scale simulation.  Because the cluster has significantly more computational resources than a typical desktop computer, a researcher can complete several times as many simulations or computationally intensive jobs, and the cluster's larger total memory allows much larger problems to be studied.

The GPU server provides an order of magnitude increase in computational power, due to its specialized design.

The Callisto cluster runs the Rocks cluster distribution, which is based on CentOS Linux, as its operating system.  This software includes the Portable Batch System (PBS) for job submission and Ganglia, a web-based viewer of cluster usage that can be opened in any browser.

To use several processors at the same time, a code must be redesigned and rewritten using parallel programming techniques, supported by specialized communication software such as the Message Passing Interface (MPI) library.  To use the Nvidia GPUs, a code must be redesigned using the libraries Nvidia has developed (such as CUDA) to get the most performance from the hardware.  Courses at UCA are offered regularly to equip students to understand these programming subtleties.
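As a minimal sketch of what MPI-style parallelization looks like (the file name, problem size, and process count below are illustrative, not part of Callisto's configuration), the following C program divides a simple sum across the processes started by mpirun and combines the partial results on one process:

    /* A minimal MPI sketch in C (hypothetical file name: sum_mpi.c).
     * Each process sums a slice of the range 1..N, and the partial sums
     * are combined on rank 0 with MPI_Reduce.  Build and run, for example:
     *   mpicc sum_mpi.c -o sum_mpi
     *   mpirun -np 8 ./sum_mpi
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const long N = 100000000L;        /* illustrative problem size */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

        /* Divide the range 1..N among the processes. */
        long chunk = N / size;
        long lo = rank * chunk + 1;
        long hi = (rank == size - 1) ? N : lo + chunk - 1;

        double local = 0.0;
        for (long i = lo; i <= hi; i++)
            local += 1.0 / (double)i;     /* partial harmonic sum */

        /* Combine the partial sums on rank 0. */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("Sum of 1/i for i = 1..%ld is %.10f (using %d processes)\n",
                   N, total, size);

        MPI_Finalize();
        return 0;
    }

Launched with 8 processes, this sketch could use all 8 cores of a single compute node, or the batch system could spread the processes across several nodes; the program itself does not change.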

Several large-scale computational software packages have already been developed for parallel architectures such as the Callisto cluster.  Some of these packages are used by educational institutions, industry, and government laboratories for research in entertainment, science, and engineering, while others are being developed at UCA to take advantage of cluster and GPU computing environments.

We welcome members of the UCA community to use the Callisto cluster.  Please contact one of the Research Associates for details.

 

Timeline:

The Callisto cluster was launched on Tuesday, May 25, 2010.

Initial testing will be performed during the summer.

The cluster will be available to the general UCA community by the fall semester of 2010.

 

Thanks to the following individuals and organizations:

Amy Apon and the Arkansas SuperComputing Center for providing encouragement and expertise during the early planning stages of this project.

Albert Everett at the University of Arkansas at Little Rock for providing guidance and insight during the planning, procurement and installation phases of this project.

Brent Herring of the Information Technology division at the University of Central Arkansas for providing physical space for the cluster, guidance during each stage of the procurement process, and the insight and support needed to successfully integrate the cluster into the IT infrastructure at UCA.

University of Central Arkansas's University Research Council and College of Natural Sciences and Mathematics for providing the funding for the Callisto cluster.