Industry: Computer Aided Engineering
THE CHALLENGE: An existing storage system hindered the performance of a research organization's computing cluster by frequently hanging and limiting productivity. A scalable solution was needed that could maximize cluster utilization.
THE SOLUTION: The organization deployed a fully integrated software/hardware storage solution from Panasas that eliminated the I/O bottleneck within hours. The object-based architecture enabled the organization to build larger, faster clusters.
THE RESULT: The new storage solution maximized cluster productivity by consistently achieving 100% utilization. It also provided easy installation and management, accelerating research results.
Stanford Eliminates Storage Bottleneck with Panasas
Customer Success Story
Stanford University
“By leveraging the object-based architecture, we were able to completely eliminate our storage I/O bottleneck.”
Steve Jones, Technology Operations Manager, Stanford Institute for Computational and Mathematical Engineering

Stanford University Institute for Computational and Mathematical Engineering

The Stanford University Institute for Computational and Mathematical Engineering (ICME) is a multidisciplinary organization designed to develop advanced numerical simulation methodologies that will facilitate the design of complex, large-scale systems in which turbulence plays a controlling role. Researchers at the Institute use high performance computing as part of a comprehensive research program to better predict flutter and limit cycle oscillations for modern aircraft, to better understand the impact of turbulent flow on jet engines, and to develop computational methodologies that can be expected to facilitate the design of naval systems with lower signature levels. Their research can lead to improvements in the performance and safety of aircraft, reduction of engine noise, and new underwater navigation systems.
SUMMARY

Industry: Computer Aided Engineering

THE CHALLENGE
An existing storage system hindered the compute performance of this research organization’s work in designing systems free of performance and safety issues related to turbulence. The storage system often hung and limited the productivity of the cluster. Critical requirements for a new system were ease of installation and a short integration time.

THE SOLUTION
The fully integrated software/hardware solution included the Panasas® Operating Environment and the PanFS™ parallel file system with the Panasas DirectFLOW® protocol.

THE RESULT
• Elimination of the storage bottleneck
• Maximized cluster utilization
• Ease of integration and management
• Exceptional price/performance

Researchers at the Institute are using the 164-node Nivation Linux cluster to tackle large-scale simulations. The Institute has recently benefited from deploying Rocks software on several large-scale clusters in support of its work, including the Nivation cluster. Rocks provides a collection of integrated software components that can be used to build, maintain, and operate a cluster. Its core functions include installing the Linux operating system, configuring the compute nodes for seamless integration, and managing the cluster.

The Challenge

While the deployment of Rocks software helped the Institute maximize the compute power of the Nivation cluster, an existing storage solution hindered overall system performance. Behind the Nivation cluster, an NFS server was initially deployed to store and retrieve data. At first, the solution appeared capable of supporting the number of cluster nodes and their associated data; however, it quickly became apparent that it could not meet the load requirements of the system. As a result, the storage system often hung and limited the productivity of the cluster. “It’s imperative that our clusters be fully operational at all times,” said Steve Jones, Technology Operations Manager at the Institute. “The productivity of our organization is dependent upon each cluster running at peak optimization.”

Adding a second NFS server was an initial option, but it was quickly dismissed because of its poor scalability and increased management burden. The Institute needed a solution that could scale in both capacity and the number of cluster nodes supported while providing exceptional random I/O performance. “High performance is critical to our success at the Institute,” said Jones. “We needed a storage solution that would allow our cluster to maximize its CPU capabilities.” Finally, since Jones supports several clusters by himself, ease of installation and management was essential. A critical goal was to install a storage system quickly and begin moving data immediately.
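To see why a single NFS server becomes the bottleneck as nodes are added, and why striping I/O across many object storage devices removes it, consider the small Python sketch below. It is purely illustrative and not Panasas software; the function run_cluster, the node counts, and the service times are all invented for the example.

# Illustrative sketch only; not Panasas code. It mimics the I/O pattern
# described above: many cluster nodes contending for one serialized
# server versus striping requests across several devices that serve
# clients in parallel. All names and numbers here are hypothetical.
import threading
import time

NUM_NODES = 32         # simulated cluster nodes (scaled down from 164)
REQUESTS_PER_NODE = 4  # I/O requests each node issues
SERVICE_TIME = 0.002   # assumed seconds a device spends per request

def run_cluster(num_devices: int) -> float:
    """Wall time for all nodes' requests, striped round-robin across
    num_devices storage devices, each of which serializes its work."""
    devices = [threading.Lock() for _ in range(num_devices)]

    def node(node_id: int) -> None:
        for i in range(REQUESTS_PER_NODE):
            # With one device, every request in the cluster queues
            # behind every other; with N devices, queues shrink N-fold.
            with devices[(node_id + i) % num_devices]:
                time.sleep(SERVICE_TIME)  # stand-in for disk service

    threads = [threading.Thread(target=node, args=(n,))
               for n in range(NUM_NODES)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

print(f"single NFS-style server: {run_cluster(1):.3f} s")
print(f"8-way object striping:   {run_cluster(8):.3f} s")

With a single device, every request in the cluster queues behind every other, the hang-prone behavior the Institute observed; striping the same load across eight devices cuts the queueing roughly eightfold, and adding devices scales it further. That scaling property, rather than a second NFS server that could at best halve the queue, is what the object-based architecture offered.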
1-888-panasas www.panasas.com