3. Contents
! About CSC
– High Performance Computing
! Drivers
– Why a new datacenter?
– Why Kajaani?
! The project
! What we built
! Construction and commissioning
! Lessons learned
5. Who am I?
! Computer Science background
! 6 years at Sun Microsystems
! HPC consultant in Financial Services
! 3 years in Finland working for CSC
! I’m not a datacenter specialist!
– It helps!
6. My role at CSC
IT
• Rapid change
• New hardware challenging datacenters
Facilities
• Traditionally very stable
• Disruptive innovations in the last 5 years
• Traditionally separate departments must now work together
• I try to bridge the gap
7. About CSC
• 100% owned by the Ministry of Education
– Public sector, non-profit
• Major service areas:
– High Performance Computing (HPC)
– Storage and archive
– FUNET – academic network in Finland
– Managed services and IaaS cloud
– Server hosting and co-lo
14. [Chart: CSC's datacenter energy use 2005–2011 in GWh, split into DC 1 and DC 2 infrastructure and IT loads]
16. New site drivers
• Capacity limiting growth
– Supercomputers
– Managed hosting
• Costs increasing
– Tax
– Energy
– Carbon neutrality
• New site focused on efficiency and capacity
[Chart: CSC's datacenter energy use 2005–2011 in GWh, split into DC 1 and DC 2 infrastructure and IT loads]
17. 2011 energy cost per MWh
[Chart: Espoo vs. Kajaani, broken down into VAT, tax, transfer, and electricity]
31. Power capabilities
! Within perimeter fence:
– National grid connection access to 340 MW
– 110 kV / 10 kV main transformer capacity
! Current capacity 240 MW
– Biopower on site
! Green power options
– 3 hydro power plants within 3 km, feeding directly to the site
! Diverse power supply = reliable power
33. Related datacenter sites
• Facebook: Luleå – 120 MW, free air cooling
• CSC: Kajaani – 1.4 MW free air cooling + 0.9 MW water cooling, PUE design
• Google: Hamina – ??? MW, sea water cooling
• CSC: Espoo – 1.6 MW conventional, PUE 1.4 & 1.8
34. Government support
! Google Hamina = wakeup call
– Unused assets ideal for fast growing industry
– Jobs, skills, international competitiveness
! Government acted
– Regional development money
– Extra money to CSC to build a new site
! Site selection: long story short
– Initial concept study 2010
– Several former paper mills considered
– Kajaani was successful in bidding
36. Approach
! Design goal: multi-MW facility PUE <1.2
! Leverage features of site
– Matched to business requirements
– Avoid redundancy and backup
! Only 100kW UPS from day one
– < 5% of load
– core network, management, automation
– Emphasis on monitoring and rapid recovery
! No generators day one
! Option to add 100% UPS and generators
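The UPS sizing above is easy to sanity-check. A minimal sketch, assuming the 2.4 MW combined capacity stated later in the specification slide:

```python
# Rough check (figures from the slides): a 100 kW UPS against the
# 2.4 MW combined hybrid capacity stays under the stated 5% of load.
ups_kw = 100
site_kw = 2400  # 2.4 MW combined hybrid capacity
fraction = ups_kw / site_kw
print(f"UPS covers {fraction:.1%} of total capacity")  # about 4.2%
```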
37. Approach, continued
! Free cooling year round
! Use modular to right-size and scale quickly
! Green
– CSC buys certificates of carbon neutral energy
– 100% Finnish hydro power
38. Timeline
[Timeline diagram, 2010–2013: paper mill, site selection, planning, warehouse, datacenter, supercomputer (October); 1st ITT conventional build and 2nd ITT modular build, each with analysis and delivery phases]
44. Specification
! 2.4 MW combined hybrid capacity
! 1.4 MW modular free air cooled datacenter
– Upgradable in 700 kW factory-built modules
– Order to acceptance in 5 months
– 35 kW per extra-tall rack (12 kW is common in the industry)
– PUE forecast < 1.08 (pPUEL2,YC)
! 1 MW HPC datacenter
– Optimised for the Cray supercomputer & T-Platforms prototype
– 90% water cooling
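A pPUE forecast translates directly into an infrastructure power budget. A short sketch of that arithmetic, using the 1.4 MW module and the forecast pPUE of 1.08 from this slide:

```python
# pPUE = (IT load + facility overhead) / IT load,
# so facility overhead = IT load * (pPUE - 1).
def infra_overhead_kw(it_kw: float, ppue: float) -> float:
    """Cooling/infrastructure power implied by a partial PUE."""
    return it_kw * (ppue - 1.0)

# 1.4 MW modular hall at the forecast pPUE of 1.08:
print(infra_overhead_kw(1400, 1.08))  # roughly 112 kW of overhead
```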
49. Hub DC facts
! Due November
! 900kW water cooling
– + 100kW air from hub
! Purpose built for water
cooled HPC
50. IT summary
• Cray "Cascade" supercomputer
– 10 M€, five-year contract
– Fastest computer in Finland, due mid-November
– Phase one: 385 kW of Intel processors in 2012
– Very high density, large racks
• T-Platforms prototype
– Very high density, hot-water cooled racks
– Intel processors, Intel and NVIDIA accelerators
– Theoretical 400 TFlops performance
54. SGI Ice Cube R80
! One head unit and two expansion modules
! More modules can be added
! Fully automated free cooling system
– Dozens of cooling fans, louvers and sensors
! Extremely energy efficient – pPUE 1.08
! Set point allowed to vary (10–27 °C for us)
! Adiabatic cooling on warm days
! Exhaust heat used to warm incoming air
59. ! “Super Cluster”
– 4.5M€ five year contract
– 1,152 Intel CPUs
– 190 TFlop/s
– 30kW 47U racks
! HPC storage
– 3PB of fast parallel storage
– Supports Cray and HP systems
IT summary
63. Heat load test
• 700 kW of load banks
– 20 racks
– 120 x 6 kW load banks
– 2 days to rack-mount
– Half of the MDC capacity
• pPUE 1.05
• Very useful
• Silenced the critics
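The test result above is just the pPUE ratio measured on a known load. A sketch of the arithmetic, where the 700 kW load and pPUE of 1.05 come from the slide but the total-feed figure is back-derived for illustration, not a measured number:

```python
# Deriving pPUE from a load-bank test: total facility draw over IT draw.
it_load_kw = 700      # load banks standing in for IT equipment
total_feed_kw = 735   # assumed total facility draw during the test
ppue = total_feed_kw / it_load_kw
print(f"pPUE = {ppue:.2f}")  # 1.05, i.e. only ~35 kW of overhead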
64. Commissioning
! For MDC it took longer than
construction!
! MDC internal workings and
drawings are sensitive
– Test plan process challenging
– Training required for new free
cooled system type
! Where possible plan in
advance
! Allow plenty of time
66. Lessons learned
! Simplify your procurement!
! Share every detail of your site with bidders
! A modular datacenter is still a datacenter
– Even with a fixed design, expect many decisions
! Pay special attention to integration
– fire, security, power, electricity, etc.
– Any small issue can cause major delay
! Get expert advice
– don't assume!
67. Reference or define every ’standard’
Don't assume
! Racks
– Define depth, height, RU, posts
– Strongly consider alternatives like Open Compute
! Door heights
! Make a list of required standards
– Fire suppression – insurance, building permit
! Get a datacenter contract
– Modular building contracts != IT contracts
– If possible multiple phases of acceptance
68. Things we got right
! Make the selection based on 10 year TCO
– IT TCO was 5 years per phase
– highlights energy efficiency without defining 'green’
– could also use an artificially high energy price
! Require that IT vendors list
– PDU efficiency – use 80 PLUS ratings
– Temperature and humidity ranges
– ASHRAE TC 9.9 2011
! All our suppliers support the A2 range (10 to 35 °C)
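The 10-year TCO selection can be sketched with a toy comparison. All numbers below are invented for illustration; the deliberately high energy price mirrors the slide's suggestion for rewarding efficiency without having to define 'green':

```python
# Hypothetical 10-year TCO comparison between two bids.
def tco_eur(capex_eur: float, avg_power_kw: float,
            eur_per_kwh: float = 0.15, years: int = 10) -> float:
    """Capital cost plus energy cost over the evaluation period."""
    return capex_eur + avg_power_kw * years * 8760 * eur_per_kwh

bid_a = tco_eur(capex_eur=2_000_000, avg_power_kw=500)  # cheaper, less efficient
bid_b = tco_eur(capex_eur=2_400_000, avg_power_kw=420)  # pricier, more efficient
print(bid_a > bid_b)  # the more efficient bid wins over 10 years
```

With the artificially high 0.15 €/kWh, the 80 kW power saving outweighs the 400 k€ capex premium well before year ten.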