Clusters

The Oden Institute has a number of small clusters that are owned by centers and are used only by those affiliated with the respective center.

CSEM Peano

Peano is a 40-node compute cluster built from Stampede1 nodes acquired from TACC. Each node contains 2 x 8-core E5-2680 (Sandy Bridge) processors, 32GB RAM, a single 250GB drive, and an FDR InfiniBand interconnect (Mellanox ConnectX-3).

The login node is a Dell PE R515 with 2 x hex-core Opteron processors, 32GB RAM, dual GigE NICs, FDR InfiniBand (Mellanox ConnectX-3 Pro), and an attached MD1000 storage array. Storage for home directories is 3.6TB in a RAID 5 configuration. An additional scratch area, ‘/scratch/’, is a separate RAID 5 group with 16TB of available storage.

The cluster was configured using OpenHPC (http://openhpc.community/). The queuing engine is Slurm (https://slurm.schedmd.com/) and provisioning is handled by Warewulf (http://warewulf.lbl.gov). A brief how-to is provided below and is also shown in the MOTD at login:

To run an interactive shell, try:
         srun -p normal -t 0:30:00 -n 32 --pty /bin/bash -l

For an example slurm job script for MPI, see:
         /opt/ohpc/pub/examples/slurm/job.mpi
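
For reference, a minimal Slurm batch script for an MPI job along the lines of the OpenHPC example might look like the following. The job name, node and task counts, and executable name are placeholders; adjust them for your job, and consult the example file above for the exact launcher used on this system:

        #!/bin/bash
        #SBATCH -J mpi_test          # job name
        #SBATCH -p normal            # partition (queue)
        #SBATCH -N 2                 # number of nodes
        #SBATCH -n 32                # total MPI tasks (16 cores per node)
        #SBATCH -t 0:30:00           # wall-clock time limit

        # Launch the MPI executable (./a.out is a placeholder)
        prun ./a.out

Submit the script with ‘sbatch job.mpi’ and check its status with ‘squeue -u $USER’.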

Further information:

  • This cluster is only available to students in the CSEM program.

  • SSH access is only from campus or UT’s VPN.

  • Home directories are not mounted.

Also note that OpenHPC provides pre-packaged builds for a variety of MPI families such as OpenMPI, MPICH, and MVAPICH. Only the GNU-compiled builds have been installed. The Intel-built MPI families that appear in the module system have not been tested; use them at your own risk.
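
As a sketch, assuming the standard OpenHPC module names (verify with ‘module avail’), compiling and running an MPI program with the GNU toolchain might look like:

        # Load the GNU compiler family and an MPI implementation
        # (module names are assumptions; check `module avail` first)
        module load gnu openmpi

        # Compile an MPI program (hello.c is a placeholder source file)
        mpicc -O2 hello.c -o hello

        # Run it interactively on 16 cores via Slurm
        srun -p normal -t 0:10:00 -n 16 ./hello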

To request a user account to access Peano or for any help, please submit a help request to RT.

Note

Home directories and scratch areas on Peano are not backed up.