The Institute for Computational Engineering and Sciences (ICES) is located in the O’Donnell Building for Applied Computational Engineering and Sciences (POB) on the main campus of The University of Texas at Austin. The facility has offices and work areas equipped with desktop computers, printers and copiers, mini-clusters, computational visualization facilities, and extensive network access for faculty, staff, students, and visitors. Several machine rooms distributed throughout the POB house supercomputers, servers, and large-scale storage devices. The building has a 196-seat auditorium with wireless networking, video conferencing, and remote-learning capabilities, as well as fourteen networked seminar rooms with high-resolution audiovisual systems, some with video conferencing and videotaping facilities.
Networking Infrastructure. The POB building networks are designed both to support bandwidth-intensive computational research and to accommodate new technology as it becomes available. The networks are built around a high-performance, multilayer Cisco router and 2960-series network switches, with Lucent GigaSpeed copper Ethernet and single-mode and multimode fiber-optic cabling to each desktop and work area. Wireless networking is available throughout the building and courtyard area.
Workstation Environment. The ICES workstation environment encompasses all offices, cubicles, work areas, and laboratories. Over 300 general-purpose workstations are available, including Linux-based PCs, Macs, and Windows PCs. Several color printers and scanners are available. File and other support services are provided by a number of Linux servers with over 40 TB of disk storage. In particular, Mac and Linux-based computers function as web servers, LDAP authentication servers, domain name servers, directory servers, application servers, and compute servers.
On-site Linux-Based Clusters. The ICES systems and networking team currently supports ten Linux-based clusters:
· Bevo3, a 180-core cluster,
· Buildbot, a 640-core cluster,
· Halifax, a 1344-core compute cluster,
· Junior, a 184-core compute cluster,
· Peano, a 640-core cluster,
· Prism2, a 64-core rendering cluster,
· Ronaldo, a 120-core compute cluster,
· Senior, a 640-core cluster,
· Sverdrup, a 1008-core compute cluster, and
· Yacc, a 640-core cluster.
Off-site Supercomputing Facilities. ICES has access, via high-speed networking, to supercomputing facilities at the Texas Advanced Computing Center (TACC) on the J. J. Pickle Research Campus, eight miles north of the main campus. TACC's primary HPC production systems include the Lonestar 5 cluster.
ICES researchers have priority access to over 25 million CPU hours on the Lonestar 5 system. These compute cycles are jointly managed by the Institute and TACC, with allocations awarded weekly.
The long-term storage solution at TACC is an Oracle mass storage facility called Ranch. Ranch uses Oracle's Storage Archive Manager file system to migrate files to and from a tape archival system with a current capacity of more than 100 PB. A 960-TB disk cache enables users to stage files between compute resources and tape. Two Oracle SL8500 Automated Tape Library devices house all of the offline archival storage; each SL8500 library contains 10,000 tape slots and 64 tape-drive slots. Three types of tape media are available, holding 1, 5, and 8 terabytes of uncompressed data per tape.
Facilities at TACC also include Corral, a storage system designed to support data-centric science. Corral consists of 12 PB of replicated online disk space, 1 PB of unreplicated disk space, and eight core file-system servers providing high-performance storage for all types of digital data. The system supports MySQL and PostgreSQL databases, a high-performance parallel file system, web-based access, and other network protocols for storing and retrieving data from sophisticated instruments, HPC simulations, and visualization laboratories.
This research data repository, funded by The University of Texas System, provides research data storage and access services to principal investigators at all 14 UT System academic and health institutions, with highly available, highly secure facilities for storing and managing research data.
The POB Visualization Laboratory, managed by TACC, provides an end-to-end infrastructure for data-intensive and display-intensive computing and is available to all UT Austin investigators as well as UT System users. The lab includes Stallion, a Dell visualization cluster driving a 16 x 5, 328-megapixel tiled display; Bronco, a Sony 9-megapixel flat projection system driven by a high-end Dell workstation; Lasso, a 12.4-megapixel touch-sensitive display screen; and Mustang, a 55-inch Sony flat-panel display with active 3D stereo capabilities. These systems provide a unique environment for interactive and immersive visual exploration.
Brief descriptions of these systems are given below.
The Stallion cluster lets users perform visualizations on a large 16 x 5 tiled display of Dell 30-inch flat-panel monitors, for a total resolution of 328 megapixels. This configuration allows exploration of visualizations at an extremely high level of detail and quality. The cluster gives users access to over 82 GB of graphics memory, 1.2 TB of system memory, and 240 processing cores, enabling the processing of massive datasets and the interactive visualization of substantial geometries.
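The quoted 328-megapixel figure is consistent with the panel count, assuming each 30-inch Dell panel runs at the standard WQXGA resolution of 2560 x 1600 (the per-panel resolution is not stated above, so this is an assumption):

```latex
16 \times 5 = 80 \ \text{panels}, \qquad
80 \times (2560 \times 1600) = 327{,}680{,}000 \ \text{pixels} \approx 328\ \text{megapixels}
```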
The Sony projection system, Bronco, features a 20 ft. x 11 ft. flat-screen display with 4096 x 2160 resolution, driven by a Sony SRX-S105 overhead projector and a high-end Dell workstation. Because only one workstation is required to drive the display, users have the flexibility to run a wide variety of applications. The projector delivers exceptional brightness across a high-resolution, 9-megapixel viewing area. In addition, Bronco can accept input from up to four simultaneous video sources, allowing a hybrid display of multiple systems.
Lasso is a touch-display system consisting of six 46-inch HD thin-bezel displays driven by a single compute node. The node uses AMD Eyefinity technology to present a seamless display surface, providing a tiled-display environment without the need to write parallel graphics applications. The display surface is supplemented by an infrared touch-sensitive perimeter with 5 mm touch precision and the ability to detect 32 simultaneous touch points. Lasso is also augmented with a Microsoft Kinect for touchless interaction.
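Lasso's quoted 12.4-megapixel resolution likewise follows from the panel count, assuming each 46-inch HD panel is 1920 x 1080 (full HD; an assumption consistent with, but not stated by, the description above):

```latex
6 \times (1920 \times 1080) = 12{,}441{,}600 \ \text{pixels} \approx 12.4\ \text{megapixels}
```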
The collaboration room offers small groups the opportunity to work together on developing and exploring visualizations. The display is provided by a high-resolution projector with many possible input combinations, and the room also includes a 5.1 theater stereo system with Blu-ray capability. Users may develop their visualizations in the room and later transition them easily to one of the two larger display systems in the main lab area.
The Vislab also includes Mustang, a stereoscopic 3D system that renders depth through the parallax generated by active and passive stereoscopic technologies. Mustang is equipped with the latest active stereoscopic technology, using Samsung's 240 Hz stereo output modes in conjunction with a 55-inch LED display panel.