About ICES Facilities
ICES and the POB Building
The Institute for Computational Engineering and Sciences (ICES) is located in the Peter O’Donnell Jr. Building for Applied Computational Engineering and Sciences (POB) on The University of Texas at Austin main campus. This facility has offices and work areas equipped with desktop computers, printers and copiers, mini-clusters, computational visualization facilities, and extensive network access for faculty, staff, students, and visitors. A large machine room houses supercomputers, servers, and large-scale storage devices. The building has a 196-seat auditorium with Ethernet ports at each seat; the auditorium also provides wireless networking, video conferencing, and remote learning capabilities. There are 15 networked seminar rooms with high-resolution audiovisual systems, some with video conferencing and videotaping facilities.
The POB building networks are designed both to support bandwidth-intensive computational research and to accommodate new technology as it becomes available. The networks are built around high-performance, multilayer Cisco 6509, 2960, and 4003 network switches, with Lucent Gigaspeed copper Ethernet and multimode fiber-optic cabling to each desktop and work area. Wireless networking is available throughout the building and courtyard area.
The ICES workstation environment encompasses all offices, cubicles, work areas, and laboratories. Over 300 general-purpose workstations are available, including Linux-based PCs, Macs, and Windows PCs. Several color printers and scanners are available. File and email services are provided by a number of Linux servers with over 40 terabytes of disk storage. Other Mac and Linux-based computers function as web servers, LDAP authentication servers, domain name servers, directory servers, application servers, and compute servers.
On-site Linux-Based Clusters
The ICES systems and networking team currently supports ten Linux-based clusters, with others in the planning and design stages:
- Bevo3, a 180-core cluster,
- Prism2, a 64-core rendering cluster,
- Deanston, a 16-node compute cluster,
- Halifax, a 960-core compute cluster,
- Junior, a 184-core compute cluster,
- Stampede_1, a 512-core compute cluster,
- Ronaldo, a 120-core compute cluster,
- Moilgpu, a 10-node GPGPU compute cluster,
- Reynolds, a 256-core compute cluster,
- Euclid, a 184-core compute cluster.
Off-site Supercomputing Facilities
At the Texas Advanced Computing Center (TACC), the two primary HPC production systems are:
- the Lonestar cluster, with 1,888 Dell PowerEdge M610 blade servers and a peak performance of 302 Tflop/s;
- the Dell-Intel supercomputer, Stampede, which has 102,400 processing cores, 205 TB of total memory, 14 PB of online disk storage, and a peak performance of approximately 9.6 Pflop/s. This system was placed into production in the first quarter of 2013.
As part of the Lonestar system described above, ICES researchers have priority access to approximately 27 million CPU hours in a separate queue at TACC. Compute cycles in this queue are managed by the Institute with allocations awarded weekly.
The long-term storage solution at TACC is an Oracle mass storage facility called Ranch. Ranch uses Oracle's Storage Archive Manager file system to migrate files to and from a tape archival system with a current storage capacity of 30 PB. A 122-TB disk cache enables users to stage files between compute resources and tape. Two Oracle SL8500 Automated Tape Library units house all of the off-line archival storage; each SL8500 library contains 10,000 tape slots and 64 tape drive slots. Two types of tape media are available, holding 1 TB and 5 TB of compressed data per tape, respectively.
Facilities at TACC also include Corral, a storage system designed to support data-centric science. Corral consists of 6 PB of online disk and a number of servers providing high-performance storage for all types of digital data. The system supports MySQL and Postgres databases, a high-performance parallel file system, web-based access, and other network protocols for storage and retrieval of data to and from sophisticated instruments, HPC simulations, and visualization laboratories.
In 2012-13, Corral was expanded to 10 PB of raw storage capacity, split between the TACC facility and the Arlington Data Center, which provides geographical replication and high-availability access to research data. This repository provides research data storage and access services to researchers at all 15 University of Texas System academic and health campuses.
POB Visualization Laboratory
The POB Visualization Laboratory, managed by TACC, provides an end-to-end infrastructure for data-intensive and display-intensive computing and is available to all UT Austin investigators as well as UT System users. The lab includes a Dell visualization cluster, Stallion, with a 16 x 5, 328-megapixel tiled display; Bronco, a Sony 9-megapixel flat projection system driven by a high-end Dell workstation; Lasso, a 12.4-megapixel touch-sensitive display screen; and Mustang, a 55-inch Sony flat-panel display with active 3D stereo capabilities. These systems provide a unique environment for interactive and immersive visual exploration.
Brief descriptions of these systems are given below.
Dell Visualization Cluster and 328 Megapixel Tiled Display (Stallion)
The Stallion cluster lets users perform visualizations on a large 16 x 5 tiled display of Dell 30-inch flat-panel monitors, for a total resolution of 328 megapixels. This configuration allows exploration of visualizations at an extremely high level of detail and quality. The cluster gives users access to over 82 GB of graphics memory, 1.2 TB of system memory, and 240 processing cores, enabling the processing of massive datasets and the interactive visualization of substantial geometries.
Sony SRX-S105 (9-Megapixel) Projection System (Bronco)
The Sony projection system, Bronco, features a 20 ft. x 11 ft., 4096 x 2160 flat-screen display driven by a Sony SRX-S105 overhead projector and a high-end Dell workstation. Because only one workstation is required to drive the display, users have added flexibility to run a wide variety of applications. The projector delivers exceptional brightness across a high-resolution, 9-megapixel viewing area. In addition, Bronco may be configured to accept input from up to four simultaneous video sources, allowing a hybrid display of multiple systems.
12-Megapixel Touch Display System (Lasso)
Lasso is a touch display system consisting of six 46-inch HD thin-bezel displays driven by a single compute node. The compute node features AMD Eyefinity technology for a seamless display surface, allowing a tiled-display environment without the need to write parallel graphics applications. The display surface is supplemented by an infrared touch-sensitive perimeter with 5-mm touch precision and the ability to detect 32 simultaneous touch points. Lasso is also augmented with a Microsoft Kinect for touchless interaction.
Collaboration Room (Saddle)
The collaboration room offers small groups the opportunity to work together on developing and exploring visualizations. The display is provided by a high-resolution projector with many possible input combinations. The room also includes a 5.1 theater stereo system with Blu-ray capability. Users may develop their visualizations in the room, and then easily transition them to one of the two larger display systems in the main lab area at a later time.
Stereoscopic 3D Visualization System (Mustang)
The Vislab also includes Mustang, a stereoscopic 3D system that renders depth through the parallax generated by active and passive stereoscopic technologies. Mustang is equipped with the latest active stereoscopic technology, using Samsung’s 240 Hz stereo output modes in conjunction with a 55-inch LED display panel.