Customer Success Story
University of Leicester and the ALICE Supercomputer
May 2011
Background

Dr. Chris Rudge had big challenges to overcome when he was tasked with creating a central supercomputing facility at the University of Leicester, one of the first universities in the UK to centralize its high performance computing (HPC) systems inside a private cloud infrastructure. Most academics were used to owning their own HPC capabilities, or to sharing departmental facilities among small groups of colleagues. To succeed, Dr. Rudge had to deliver a system that was better than the custom HPC clusters scattered across the campus if departmental users were to accept the new centralized HPC facility.

"HPC clusters have developed to a level where they are now professional products which we should expect to 'just work.' The complete solution chosen by the University of Leicester does exactly that, with best-of-breed storage from Panasas. Ease of management allows researchers to optimize their use of the system without having to spend valuable time simply keeping the system running. This is very important on an HPC system which supports a broad range of research across a diverse set of academic disciplines."
– Dr. Chris Rudge, University of Leicester

In challenging economic times, the new system would have to deliver significant cost savings over many individual HPC storage systems. Another major consideration was that an increasing number of HPC users did not work in traditional "computation heavy" disciplines like physics, astronomy, chemistry and engineering. The new system therefore had to scale performance out on a single system, while also being reliable and easy to use, particularly for academics who did not have a background in using HPC clusters.

Previously, Dr. Rudge had been responsible for the largest HPC system at the university, which was dedicated to the Physics & Astronomy Department. He was selected to manage the project because he understood the challenges of running a small HPC system within a single department.

"Academic departments typically struggle to secure research funding to employ skilled IT support staff simply to administer their HPC systems," commented Rudge. "Without dedicated support staff, HPC systems do not usually function well and researchers find it difficult to make best use of these expensive resources."

By centralizing HPC support, Dr. Rudge assembled a team of six to manage the new system, the Advanced Leicester Information and Computational Environment, dubbed ALICE. The larger team dramatically hastened the process of designing, procuring and installing the HPC infrastructure, and the final system was delivered on time and on budget, something that individual departments often fail to achieve with smaller systems.

The Solution

By consolidating its high performance storage and compute budget into a centralized private cloud, the university avoided the cost of over-provisioning for peak workloads that is typical within individual departments. The facility is far bigger than would be possible for any individual departmental system, allowing researchers to test new ideas with smaller problem sizes and then to do a small number of large runs that leverage all of ALICE's processing cores. The team worked with a leading HPC server provider to create a system with 256 compute nodes, two login nodes and two management nodes, all connected by a fast InfiniBand network fabric. Each compute node offers a pair of quad-core 2.67GHz Intel Xeon X5550 CPUs, providing 2048 CPU cores for running jobs. The popular 64-bit Scientific Linux operating system, a version of Red Hat Enterprise Linux, was chosen as the operating system.
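The headline core count follows directly from the node configuration described above:

\[
256\ \text{nodes} \times 2\ \text{sockets per node} \times 4\ \text{cores per socket} = 2048\ \text{cores}.
\]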
Perhaps one of the most important considerations was selecting the accompanying data storage system. The optimal solution would need to deliver data quickly to the powerful computer system while providing the flexibility to meet the diverse needs of many researchers. A high performance storage system was critical: Dr. Rudge knew that while the compute nodes would perform very similarly regardless of supplier, storage performance could differ materially. The need to store data for HPC projects from all university departments in a global namespace required a solution that delivered all of this, but at a very competitive cost.
PanasasÂź ActiveStorℱ Parallel Storage
The team developed a suite of benchmarks during the tender process, including one designed to stress the storage and the network infrastructure linking the compute nodes to the storage. This critical benchmark was designed to run with varying degrees of parallelism and on different numbers of nodes, ranging from one job on a single node up to eight jobs on each of 32 nodes. It would find the limits of bandwidth per node, as well as identify any degradation in performance as the number of jobs per node and the number of nodes running the benchmark increased. Two storage solutions built on parallel file systems were shortlisted for benchmarking: Panasas ActiveStor, a high performance parallel storage system running over 10GbE and InfiniBand, and an open source Lustre-based solution employing commodity hardware. At first glance the Lustre system seemed to offer the greatest flexibility and lowest cost, but further testing showed it would not hold its own on either performance or simplicity.
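The case study does not reproduce the benchmark itself, but a minimal sketch of this kind of sweep might look like the following Python script, which runs an increasing number of concurrent write jobs against a target directory and reports per-job and aggregate bandwidth, so any fall-off with rising concurrency becomes visible. The block size, job counts and BENCH_DIR environment variable are illustrative assumptions, not details from the tender; on a cluster like ALICE the same pattern would be driven across 1 to 32 nodes through the batch system rather than with local processes.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a storage benchmark sweep: run 1, 2, 4 and 8
concurrent streaming-write jobs and report per-job and aggregate bandwidth,
so any drop in per-job bandwidth as concurrency rises is easy to spot."""

import os
import tempfile
import time
from multiprocessing import Pool

BLOCK = 1024 * 1024                    # 1 MiB per write call
BLOCKS_PER_JOB = 256                   # 256 MiB written by each job (demo-sized)
TARGET_DIR = os.environ.get("BENCH_DIR", tempfile.gettempdir())  # assumed env var


def one_job(job_id: int) -> float:
    """Stream BLOCKS_PER_JOB blocks to a private file and return MiB/s."""
    path = os.path.join(TARGET_DIR, f"bench_{os.getpid()}_{job_id}.dat")
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(BLOCKS_PER_JOB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())           # make sure the data actually reaches storage
    elapsed = time.perf_counter() - start
    os.remove(path)
    return BLOCKS_PER_JOB / elapsed    # MiB/s, since each block is 1 MiB


def sweep() -> None:
    # ALICE's benchmark varied jobs per node (1..8) and node count (1..32);
    # this local sketch only varies the number of concurrent jobs.
    for jobs in (1, 2, 4, 8):
        with Pool(jobs) as pool:
            rates = pool.map(one_job, range(jobs))
        aggregate = sum(rates)
        print(f"{jobs} concurrent jobs: {aggregate:8.1f} MiB/s aggregate, "
              f"{aggregate / jobs:8.1f} MiB/s per job")


if __name__ == "__main__":
    sweep()
```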

As soon as the benchmark was scaled across several nodes, Panasas ActiveStor demonstrated dramatically higher parallel file system performance than the Lustre-based system. Dr. Rudge knew that, although he could not predict how many jobs would run on the cluster at any one time, he had seen an increasing number of data mining research tasks, jobs that would likely replicate the demands of the benchmark running over many nodes.

Simple-to-use storage management was another consideration. Dr. Rudge was keen to ensure that the storage solution selected would allow ALICE to maximize time for research projects, commenting that "improved management will produce greater research output per pound sterling, providing better value, even if the initial cost is the same."

"Panasas storage performance and manageability were the key reasons that the University of Leicester chose Panasas for ALICE. HPC clusters have largely standardized on the use of x64 CPUs with dual-socket nodes and InfiniBand interconnect. The type of storage and its integration into the complete system was seen as one of the few areas where there would be differences between suppliers. Selecting the wrong product could have potentially crippled the system: however fast the CPUs are, they are rendered useless if there is no data to process. ALICE is the largest HPC system ever purchased by the University, and benchmarking during the tender process demonstrated clearly that Panasas offered the best performance, both on a per-node basis and without degradation of aggregate performance with access from multiple nodes."
– Dr. Chris Rudge, University of Leicester




Panasas ActiveStor is a high performance storage system that brings plug-and-play simplicity to large scale storage deployments while pushing aggregate performance to 150GB/s, with industry-leading system throughput per gigabyte of storage. Dr. Rudge knew this scale-out storage system would provide the cost-effective performance, reliability, flexibility and critical ease of use needed to support expansion within his limited project budget. He built a plan that allowed for scalable, pay-as-you-grow capacity. In addition, with features like user quotas and per-user chargeback, the Panasas HPC storage system was optimized for a shared cloud environment.

Eventually it became clear that performance requirements and ease-of-use considerations meant there was no real alternative to Panasas ActiveStor. With research projects expected to place increasing demands on storage system performance, it was critical that the storage infrastructure did not introduce bottlenecks that would reduce ALICE's ability to deliver results quickly and efficiently.

Conclusion

The consolidation of its HPC systems into a centralized cloud infrastructure allowed the University of Leicester to achieve significant cost savings while dramatically increasing the peak performance available to individual departments. Typically, ALICE serves around 300 jobs at any one time, with perhaps 10 jobs occupying half the cluster and the remaining jobs running on an average of two to three CPUs. Users love the system's ability to run demanding algorithms quickly and efficiently; one economist ran 300,000 one-hour jobs in the first six months of operation, the equivalent of more than 30 years of processing on a single CPU. The facility supports a very broad range of research projects under a simple-to-manage global namespace: economists model the world's stock markets, geneticists analyze genome sequences and astrophysicists model star formation, for example. By any measure, ALICE has been a fantastic success.
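As a quick check on that figure, 300,000 one-hour jobs amount to 300,000 CPU-hours, and a year of continuous processing on a single CPU is roughly 8,760 hours:

\[
\frac{300{,}000\ \text{CPU-hours}}{24 \times 365\ \text{hours per year}} \approx 34\ \text{years},
\]

consistent with the "more than 30 years" quoted above.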








Phone: 1.888.PANASAS  |  www.panasas.com
               © 2011 Panasas Incorporated. All rights reserved. Panasas is a trademark of Panasas, Inc. in the United States and other countries.
