Research Cyberinfrastructure (RCi)
The Research Cyberinfrastructure (RCi) team at South Dakota State University (SDSU) offers support for high-performance computing (HPC) and high-velocity research data transfer services within the South Dakota Board of Regents (SDBOR) system. RCi manages the largest HPC platform within the SDBOR system for use in research and education throughout the state.
RCi's scope includes research computing, research data flows, and the state higher-education and regional research network platforms.
High-Performance Computing
The SDSU Division of Technology and Security (DTS) manages a 240 TFLOP (approx.) cluster and parallel file system. Funding for this resource came from an NSF MRI program award: "MRI: Acquisition of a High-Performance Cluster to Enable Advanced Bioscience and Engineering Research," NSF 15-504, MRI Award 1726946. The cluster includes a substantial general compute pool, high-memory (3 TB) nodes, and NVIDIA GPU nodes to support CPU-intensive, high-memory, and GPU-intensive computing. The parallel file system provides 1.2 PB of high-speed storage space for data-intensive compute jobs within the cluster. The cluster leverages three networks: 100 Gbps InfiniBand (cluster application data processing), 10 Gbps Ethernet (science data transfers), and 1 Gbps Ethernet (cluster management).
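A minimal sketch of how a job can size itself to the CPUs it is granted on the general compute pool. It assumes the cluster uses a Slurm-style scheduler (the scheduler is not named above); the environment variable and the fallback logic are illustrative only.

```python
# Sketch: size a parallel job to the CPU cores the scheduler allocates.
# Assumes a Slurm-style scheduler exports SLURM_CPUS_PER_TASK (an assumption,
# not a documented detail of this cluster); falls back to the local CPU count.
import os
from multiprocessing import Pool

def simulate(seed: int) -> float:
    """Placeholder for a CPU-bound task, e.g., one simulation replicate."""
    total = 0.0
    for i in range(1, 100_000):
        total += (seed % 7 + 1) / i
    return total

if __name__ == "__main__":
    n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", os.cpu_count() or 1))
    with Pool(processes=n_cpus) as pool:
        results = pool.map(simulate, range(n_cpus * 4))
    print(f"Ran {len(results)} tasks on {n_cpus} CPU cores")
```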
Network
In 2018, the state of South Dakota implemented a high-speed fiber-optic network, bringing 100 Gbps of broadband connectivity to support research activities across state, regional (Great Plains Network Research Platform), and national research networks (Internet2). In the fourth quarter of 2017, the SDSU Division of Technology and Security (DTS) upgraded its local area network infrastructure to support 100 Gbps of data transmission capacity within its core network router system. This work also included upgrading the sub-core switching to 40 Gbps and all building network switching to a minimum of 10 Gbps of connectivity.
Science DMZ
In 2014, the South Dakota State University (SDSU) Division of Technology and Security (DTS) was awarded an NSF CC* Cyberinfrastructure Program grant: "CC*IIE Networking Infrastructure: Building a Science DMZ and Enhancing Science Data Movement to Support Data-Intensive Computational Research at South Dakota State University," NSF 14-521, CC*IIE Project Reference: 1440622. The project's goal is to enhance cross-institutional research collaboration by deploying a high-speed, frictionless (Science DMZ) network. Upgrade work in 2018 included the implementation of a robust data management and sharing service (Globus) and a new 100 Gbps Flash I/O Network Appliance (FIONA) data transfer node, an integral element for the inclusion of SDSU in the Great Plains Network (GPN) Research Platform.
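For researchers who move data through the Globus service mentioned above, a transfer can be scripted with the open-source globus-sdk Python package. This is a hedged sketch only: the client ID, endpoint UUIDs, and paths are placeholders, not values published by SDSU; consult RCi for the actual endpoint details.

```python
# Hedged sketch of a Globus transfer using the public globus-sdk package.
# CLIENT_ID, endpoint UUIDs, and paths are placeholders (assumptions), not
# SDSU-specific values; ask RCi for the campus data transfer node details.
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"      # placeholder
SRC_ENDPOINT = "SOURCE-ENDPOINT-UUID"        # e.g., a lab workstation collection
DST_ENDPOINT = "DESTINATION-ENDPOINT-UUID"   # e.g., the campus DTN/FIONA

# One-time interactive login (native app flow).
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow()
print("Log in at:", auth_client.oauth2_get_authorize_url())
auth_code = input("Paste the authorization code here: ").strip()
tokens = auth_client.oauth2_exchange_code_for_tokens(auth_code)
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

# Submit a recursive directory transfer between the two endpoints.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)
tdata = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                label="RCi example transfer")
tdata.add_item("/project/raw_data/", "/scratch/raw_data/", recursive=True)
task = tc.submit_transfer(tdata)
print("Submitted transfer task:", task["task_id"])
```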
Storage/Archival Services
SDSU DTS manages a General Parallel File System (GPFS) and various high-capacity block storage area network systems that provide enterprise-class storage for single-instance servers and cluster systems. Raw data, metadata, and research products are archived and made publicly available within SDSU's Open PRAIRIE, a service contracted between SDSU and the bepress corporation (Digital Commons).
Data Center Support Services
SDSU DTS houses all servers, storage, and central networking equipment for the Brookings, SD campus within the Morrill Hall building, rooms 112 and 114. The space is configured in a hot-and-cold-aisle layout with all cabling in overhead trays. The raised flooring panels are rated for a 1000 PSI concentrated load. The GDP panel supplies all power within the data center, is rated at 1000 amps, and is equipped with a TVSS rated at 160 kA. UPS-1 and UPS-2 are each rated at 160 kVA (144 kW), and Generator 1 and Generator 2 are each rated at 150 kW. Two DX CRAC units provide 22 tons and 16 tons of cooling capacity (38 tons total), and three chilled-water CRAC units provide 8 tons of cooling capacity each (24 tons total).
DTS Research Cyberinfrastructure Expertise
SDSU DTS permanent staff support for HPC includes an Assistant Vice President, a Director, a Research High-Performance Computer Specialist, an HPC Software Development Engineer, a Cyberinfrastructure Engineering Specialist, and a Systems Administrator. Graduate student workers are also employed to help support research computing applications and programming.
- New users of the SDBOR research community and existing users with new projects may obtain access to the research computing resources by following the link to .
- Existing users who have already completed the project onboarding process and need help with their project are asked to complete a
"Innovator" HPC cluster
The next-generation Innovator cluster has become the primary computational research tool available to our scientific and engineering community. It offers advanced CPU architectures for parallel computing and for accelerating machine-learning applications, and AI research and simulation workloads gain a significant boost from the latest generation of GPU hardware. The Innovator cluster also addresses the pressing demand for fast storage of, and access to, large volumes of scientific data.
Hardware acceleration resources
In addition to conventional parallel computing with processes and threads, the HPC cyberinfrastructure offers both GPU-based and CPU-centric performance acceleration. The latest GPU technology is available within the cluster environment and on stand-alone servers. CPU acceleration, by contrast, takes advantage of the latest-generation processors by pairing optimizing compilers with the specific CPU platform, both within the cluster and on stand-alone parallel processing servers. Please reach out to our team for more information on accelerating your computational pipeline.
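As a small illustration of what CPU-side acceleration can look like in practice, the snippet below computes the same reduction with a plain Python loop and with NumPy, whose compiled kernels use the processor's vector instructions. The timings are machine-dependent and are not a benchmark of RCi hardware.

```python
# Illustration only: the same sum of squares computed with an interpreted
# Python loop and with a compiled, vectorized NumPy kernel.
import time
import numpy as np

x = np.random.default_rng(0).random(10_000_000)

t0 = time.perf_counter()
total_loop = 0.0
for value in x:                  # interpreted, one element at a time
    total_loop += value * value
t1 = time.perf_counter()

total_vec = float(np.dot(x, x))  # compiled, vectorized kernel
t2 = time.perf_counter()

print(f"loop:  {t1 - t0:.2f} s, result {total_loop:.2f}")
print(f"numpy: {t2 - t1:.4f} s, result {total_vec:.2f}")
```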
Data Storage Services
Storage and data-flow planning services are available to help researchers store and share data securely in a collaborative environment.
GPU Computing Overview
GPU resources are available both on the cluster and stand-alone servers.
GPU Resources for Machine Learning Applications
One of the most common applications of GPU computing is neural network training. Both the stand-alone servers and the cluster can be employed effectively to accelerate neural network training.
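A minimal sketch of GPU-accelerated training using the open-source PyTorch framework, one common choice (the text does not name a specific framework). The model and data are toy placeholders, and the script falls back to the CPU when no GPU is visible.

```python
# Minimal sketch of GPU-accelerated neural-network training with PyTorch
# (one common open-source option, used here only as an example).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Training on:", device)

# Toy data and a small fully connected network, for illustration only.
x = torch.randn(1024, 32, device=device)
y = torch.randn(1024, 1, device=device)
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("Final loss:", loss.item())
```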
GPU Acceleration for Open-Source Software
A wide range of open-source software packages are designed to accelerate scientific computation by offloading computational work to GPU hardware.
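One example of such a package is CuPy, which provides a NumPy-like interface backed by GPU kernels. This sketch assumes CuPy and a compatible NVIDIA GPU/CUDA toolkit are available; confirm availability on RCi systems with the team.

```python
# Offloading a NumPy-style computation to the GPU with CuPy, one open-source
# package of this kind (availability on RCi systems is an assumption here).
import numpy as np
import cupy as cp  # requires an NVIDIA GPU and a matching CUDA toolkit

a_cpu = np.random.rand(4096, 4096)

a_gpu = cp.asarray(a_cpu)    # copy the array to GPU memory
b_gpu = a_gpu @ a_gpu.T      # matrix multiply runs on the GPU
result = cp.asnumpy(b_gpu)   # copy the result back to host memory

print("Result shape:", result.shape)
```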
If you need more information on the topics presented, please feel free to reach out to RCi.
Graphical User Interface for Linux Servers
A graphical user interface facilitates access to visual applications running on Linux servers. This technology makes navigating Linux tasks more intuitive and does not require command-line work for daily use. Please see the demo for an introduction to the Linux GUI.
Introduction to Cluster Access and Cluster Computing
This training session is intended for researchers who aim to harness the parallel computational approaches available through multiple nodes, multiple cores, and, potentially, parallel GPU computation.
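A hedged sketch of multi-node, multi-core parallelism using mpi4py, shown here only as one way such computation can look. It assumes an MPI library, mpi4py, and an MPI launcher are available on the cluster, which should be confirmed with RCi.

```python
# Sketch of multi-node/multi-core parallelism with mpi4py (assumed available).
# Launch with an MPI launcher, e.g.: mpirun -n 4 python pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank integrates its own slice of f(x) = 4 / (1 + x^2) over [0, 1].
n = 10_000_000
local_sum = 0.0
for i in range(rank, n, size):
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x)

pi_estimate = comm.reduce(local_sum / n, op=MPI.SUM, root=0)
if rank == 0:
    print("Estimated pi:", pi_estimate)
```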
Machine Learning with GPU
This set of training sessions is intended for researchers who plan to harness the parallel processing power of GPU hardware to boost their research pipelines. We address usability and performance issues in neural network training using both open-source and proprietary platforms.
Master's and Ph.D. Research Project Management
A Ph.D. research project is full of challenges, and a Master's project can be overwhelming for recent graduates: deadlines are tight and the technical work is daunting. RCi's know-how and extensive industrial and academic experience with computational research project management can serve your project from its early stages through your final defense. Reach out to our group to learn more about smart project management strategies for computationally intensive research.