IBM Systems and Technology Education
Case Study
Cardinal Research Cluster takes flight at University of Louisville
With a high-performance IBM System x iDataPlex solution
Overview

The need
The University of Louisville needed to expand its secure, centralized, high-performance computing cluster to help faculty and students conduct a wide variety of computationally demanding research.

The solution
The school implemented 204 IBM System x® iDataPlex® dx360 class servers with Intel® Xeon® processors, IBM System Networking switches, Mellanox switches and four IBM System Storage® DS3500 Express devices.

The benefit
The IBM solution doubles the capacity of the school’s supercomputer cluster, delivers peak speeds of 40 teraflops, and provides a flexible platform to drive advanced research and future innovation.

The University of Louisville in Kentucky is a leading center for advanced research, using the power of its computing cluster to help develop new cancer drugs, find effective treatments for spinal cord injuries and develop new solar cell materials.

Researchers engaged in scientific fields of study such as bioinformatics, engineering and computational chemistry, however, did not always have a centralized high-performance computing cluster to facilitate their work, says Priscilla Hancock, Ph.D., vice president and chief information officer at the University of Louisville.

“When I arrived there wasn’t any central computing research capability,” she explains. “What I sometimes say is that before our Cardinal Research Cluster, all we offered was email.”

Not having a supercomputing resource meant academic departments had to build or find their own computing nodes, explains Mike Dyre, director of research computing at the University of Louisville.

“A researcher who didn’t have funding basically had nothing but a desktop PC, or perhaps something in the department to use for doing research,” says Dyre. “We had a very uneven implementation to do research, and that’s why we established a centralized computer that’s fairly large and accessible to everyone.”
A supercomputer built on IBM solutions, in two phases
The university initially tapped federal grants to implement the school’s
first centralized and secure high-performance computing center in 2009,
says Hancock. Incorporating 312 IBM System x iDataPlex dx340 nodes,
the first phase of what became known as the Cardinal Research Cluster delivered a much-needed dose of high-performance computing. And it didn’t take long to reach 100 percent utilization. “I wanted a high-performance computing cluster that would be used to the max, and that’s exactly what the university got with IBM,” says Hancock.

When additional funding came through for an expansion of the solution, a primary goal was to achieve greater efficiency and cost effectiveness through a simplified networking and storage strategy. In both phases, researchers examined solutions from IBM and others. “When they voted on the proposals, IBM was the unanimous choice not just once, but twice,” says Hancock. “Not only were all of the researchers unanimous in their decision, all of them were happy. That was pretty amazing.”

The second phase expansion of the Cardinal Research Cluster included 204 IBM System x iDataPlex dx360 class servers featuring Intel Xeon processors paired with IBM System Networking switches, Mellanox switches and four IBM System Storage DS3500 Express devices.
IBM Business Partner Sumavi, Inc. assisted with systems integration
along with IBM STG Lab Services and Training.
“The thing that drew us and the faculty to IBM was the variety of computing solutions they had,” says Dyre. “Other vendors wanted to sell us a
cluster, but IBM was able to offer a complete package that also integrated
the network and the storage as one entity.”
The resulting solution delivers 5,052 processing cores with between two
and four gigabytes of memory per core. Additional hardware components
include IBM System x3650 class servers, which function as the head
nodes for the System Storage DS3500 devices with 500 terabytes of
shared storage capacity, as well as fourteen general-purpose computation
on graphics processing unit (GPGPU) nodes to provide the cluster with
additional parallel processing capabilities.
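As a back-of-envelope sketch (the totals are derived here, not stated in the case study), the quoted per-core memory range implies roughly 10 to 20 terabytes of aggregate RAM across the cluster:

```python
# Aggregate memory implied by the figures quoted above.
CORES = 5_052                # total processing cores in the cluster
MEM_PER_CORE_GB = (2, 4)     # "between two and four gigabytes of memory per core"

low_tb, high_tb = (gb * CORES / 1_000 for gb in MEM_PER_CORE_GB)
print(f"Aggregate RAM: roughly {low_tb:.1f}-{high_tb:.1f} TB")
# alongside the 500 TB of shared DS3500 storage capacity
```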
On the software side, the Cardinal Research Cluster uses Red Hat Linux
and a range of specialized applications such as the MATLAB numerical
computing environment and the BLAST algorithm for comparing primary biological sequence information. Storage across the data center is orchestrated with the IBM General Parallel File System (GPFS™).
Powerful performance for diverse computing needs

Solution components

Hardware
● IBM System x® iDataPlex® dx340 and dx360 class servers
● IBM System x3650 class servers
● IBM System Storage® DS3500 Express
● IBM System Networking switches
● Intel® Xeon® processors

Software
● IBM General Parallel File System (GPFS™)
● Red Hat Linux

Services
● IBM STG Lab Services and Training

One of the most significant benefits of the solution is its ability to handle both massive parallel processing jobs, which involve rapid calculation, as well as high throughput jobs, where a large amount of addressable memory is most critical, says Hancock.

In benchmark tests, the cluster has achieved peak speeds of 40 teraflops—40 trillion floating point operations per second, says Dyre. The new switches from IBM and Mellanox, meanwhile, have virtually eliminated any previous networking bottlenecks in the system.

This has in turn freed researchers to submit increasingly complex jobs. “We’ve more than doubled our capacity as far as the number of cores and we have more than tripled the storage capacity,” says Dyre. “And what’s great is that it’s being fully utilized. When we talk about people submitting thousands of jobs, they literally submit 10,000 jobs at once and so the cluster runs all the time. We’re not having any trouble finding any people to use it.”
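Combining the benchmark figure with the core count gives a rough average per-core throughput (an illustrative calculation, not one stated in the case study; the GPGPU nodes also contribute to the peak, so the true CPU-only average is lower):

```python
# Average throughput per core implied by the quoted benchmark numbers.
PEAK_FLOPS = 40e12   # 40 teraflops = 40 trillion floating point ops per second
CORES = 5_052        # processing cores in the expanded cluster

gflops_per_core = PEAK_FLOPS / CORES / 1e9
print(f"~{gflops_per_core:.1f} GFLOPS per core on average")
```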
IBM gave the University of Louisville a Shared University Research (SUR)
Award to help further its research efforts. This award includes the
donation of extra computing systems and gives the university access
to IBM engineers who work closely with the university’s IT staff to get
maximum performance from the supercomputer.
A partnership built on shared goals
Hancock says working with IBM has allowed her to provide university
researchers with everything they need to conduct important work such
as discovering new types of cancer drugs and treatments. And even after
the implementation was complete, IBM continues to be available.
“IBM gave us a very fair offer, they stayed with it from start to finish, and
they’re still here,” says Hancock. “IBM wants to make sure it works for
us. And even if something goes wrong, they have stayed at the table until
everyone is satisfied.”
Dyre couldn’t agree more: “I think we’re in a pretty good spot right now
with IBM. We’re happy with the product we have, we’re happy with the
service we’re getting, and we’re happy with the reps we have. What more
can you ask for?”