Lustre 2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE
Hitoshi Sato*1, Shuichi Ihara*2, Satoshi Matsuoka*1
*1 Tokyo Institute of Technology
*2 Data Direct Networks Japan
Tokyo Tech's TSUBAME2.0 (Nov. 1, 2010)
"The Greenest Production Supercomputer in the World"
[Figure: TSUBAME 2.0 new development (32 nm / 40 nm); bandwidth and power figures shown: >400 GB/s Mem BW, 80 Gbps NW BW, ~1 kW max; >1.6 TB/s Mem BW, >12 TB/s Mem BW, 35 kW max; >600 TB/s Mem BW, 220 Tbps NW bisection BW, 1.4 MW max]
TSUBAME2 System Overview
Storage: 11 PB total (7 PB HDD, 4 PB Tape, 200 TB SSD), attached over InfiniBand QDR networks
• Parallel file system volumes (Lustre): "Global Work Space" #1 on SFA10k #5, "Global Work Space" #2 and #3 on SFA10k #1–#4; volumes /data0, /data1, /work0, /work1, /gscr
• Home volumes: GPFS #1–#4 on SFA10k #6, serving HOME, system application, and iSCSI exports via "cNFS/Clustered Samba w/ GPFS" and "NFS/CIFS/iSCSI by BlueARC"
• Capacities labeled in the diagram: 1.2 PB and 3.6 PB; storage connectivity labeled: QDR IB (×4) × 20, QDR IB (×4) × 8, 10GbE × 2
Compute nodes: 17.1 PFlops (SFP), 5.76 PFlops (DFP), 224.69 TFlops (CPU), ~100 TB MEM, ~200 TB SSD
• Thin nodes: HP ProLiant SL390s G7, 1408 nodes (32 nodes × 44 racks); CPU: Intel Westmere-EP 2.93 GHz, 6 cores × 2 = 12 cores/node; GPU: NVIDIA Tesla K20X, 3 GPUs/node; Mem: 54 GB (96 GB); SSD: 60 GB × 2 = 120 GB (120 GB × 2 = 240 GB)
• Medium nodes: HP ProLiant DL580 G7, 24 nodes; CPU: Intel Nehalem-EX 2.0 GHz, 8 cores × 2 = 32 cores/node; GPU: NVIDIA Tesla S1070, NextIO vCORE Express 2070; Mem: 128 GB; SSD: 120 GB × 4 = 480 GB
• Fat nodes: HP ProLiant DL580 G7, 10 nodes; CPU: Intel Nehalem-EX 2.0 GHz, 8 cores × 2 = 32 cores/node; GPU: NVIDIA Tesla S1070; Mem: 256 GB (512 GB); SSD: 120 GB × 4 = 480 GB
Interconnect: full-bisection optical QDR InfiniBand network
• Core switches: Voltaire Grid Director 4700 × 12 (324 IB QDR ports)
• Edge switches: Voltaire Grid Director 4036 × 179 (36 IB QDR ports)
• Edge switches with 10GbE ports: Voltaire Grid Director 4036E × 6 (34 IB QDR ports, 2 × 10GbE ports)
Storage tiers shown in the diagram: 2.4 PB HDD + GPFS + Tape, Lustre, Home, Local SSDs
TSUBAME2 System Overview: I/O Workloads
(The same system diagram as above, annotated with the I/O characteristics of the storage tiers.)
• Mostly read I/O: data-intensive apps, parallel workflows, parameter surveys
• Home storage for compute nodes, cloud-based campus storage services, backup
• Fine-grained R/W I/O: checkpoints, temporary files
Towards TSUBAME 3.0
Interim upgrade: TSUBAME2.0 to 2.5 (Sept. 10th, 2013)
• TSUBAME2.0 compute nodes: Fermi GPUs, 3 × 1408 = 4224 GPUs
• Upgraded the TSUBAME2.0 GPUs from NVIDIA Fermi M2050 to Kepler K20X
• SFP/DFP peak: from 4.8 PF / 2.4 PF to 17 PF / 5.7 PF
TSUBAME-KFC
• A TSUBAME3.0 prototype system with advanced cooling for next-generation supercomputers
• 40 oil-submerged compute nodes: 160 NVIDIA Kepler K20X, 80 Ivy Bridge Xeons, FDR InfiniBand
• Total 210.61 TFlops; system 5.21 TFlops
Storage design also matters!
Existing Storage Problems
Small I/O
• PFS
– Throughput-oriented design, master-worker configuration
– Performance bottlenecks on metadata references
– Low IOPS (1k–10k IOPS)
I/O contention
• The PFS is shared across compute nodes
– I/O contention from multiple concurrent applications
– I/O performance degradation
[Figure: Lustre I/O monitoring of IOPS over time — small I/O ops from user applications bring PFS operation down until it recovers]
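The I/O monitoring view above can be reproduced with Lustre's standard server-side statistics; the following is a minimal sketch, assuming a Lustre 2.x OSS/MDS where the obdfilter and mdt counters are exposed via lctl (the sampling loop and the selection of counters are illustrative, not taken from the slides).

```bash
#!/bin/bash
# Minimal monitoring sketch: sample cumulative Lustre server-side counters
# once per second; successive samples must be diffed to obtain ops/sec (IOPS).
while true; do
    date +%s
    # Per-OST read/write counters on an OSS
    lctl get_param obdfilter.*.stats 2>/dev/null | egrep 'read_bytes|write_bytes'
    # Metadata operation counters on an MDS
    lctl get_param mdt.*.md_stats 2>/dev/null | egrep 'open|close|getattr|unlink|mkdir'
    sleep 1
done
```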
Requirements for Next-Generation I/O Sub-System (1)
• Basic requirements
– TB/sec sequential bandwidth for traditional HPC applications
– Extremely high IOPS to cope with applications that generate massive small I/O (example: graph processing, etc.)
– Some application workflows have different I/O requirements for each workflow component
• Reduction/consolidation of PFS I/O resources is needed while achieving TB/sec performance
– We cannot simply use a larger number of I/O servers and drives to achieve ~TB/s throughput!
– Many constraints: space, power, budget, etc.
– Performance per Watt and performance per RU are crucial
Requirements for Next-Generation I/O Sub-System (2)
• New approaches
– TSUBAME 2.0 has pioneered the use of local flash storage as a high-IOPS alternative to an external PFS
– Tiered and hybrid storage environments, combining (node-)local flash with an external PFS
• Industry status
– High-performance, high-capacity flash (and other new semiconductor devices) is becoming available at reasonable cost
– New approaches/interfaces for using high-performance devices (e.g. NVM Express)
Key Requirements of the I/O System
• Electrical power and space are still challenging
– Reduce the number of OSSs/OSTs while keeping high I/O performance and large storage capacity
– How to maximize Lustre performance
• Understand the new types of flash storage devices
– Any benefits with Lustre? How to use them?
– Does putting the MDT on SSD help?
Lustre 2.x Performance Evaluation
• Maximize OSS/OST performance for large sequential I/O
– Single OSS and OST performance
– 1 MB vs. 4 MB RPC
– Not only peak performance, but also sustained performance
• Small I/O to a shared file
– 4K random read access to Lustre
• Metadata performance
– CPU impact?
– Do SSDs help?
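For context on the 1 MB vs. 4 MB RPC comparison in the following slides: the bulk RPC size is a client-side tunable, and a minimal sketch of inspecting and changing it on a Lustre 2.x client is shown below. The filesystem name testfs and the chosen values are illustrative assumptions; large RPCs also require server-side support from the large-I/O patches in Lustre 2.4/2.5.

```bash
# Minimal sketch (on a Lustre client, filesystem "testfs").
# The bulk RPC size is max_pages_per_rpc * PAGE_SIZE: with 4 KB pages,
# 256 pages = 1 MB RPCs and 1024 pages = 4 MB RPCs.
lctl get_param osc.testfs-*.max_pages_per_rpc

# Switch to 4 MB RPCs and allow more outstanding RPCs / dirty data per OSC
lctl set_param osc.testfs-*.max_pages_per_rpc=1024
lctl set_param osc.testfs-*.max_rpcs_in_flight=16
lctl set_param osc.testfs-*.max_dirty_mb=128
```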
Throughput per OSS Server
[Charts: single OSS throughput (MB/s) for write and read vs. number of clients (1, 2, 4, 8, 16), comparing no bonding, AA-bonding, and AA-bonding with CPU binding]
• Single FDR LNET bandwidth: 6.0 GB/s
• Dual IB FDR LNET bandwidth: 12.0 GB/s
Performance Comparison (1 MB vs. 4 MB RPC, IOR FPP, asyncIO)
• 4 × OSS, 280 × NL-SAS
• 14 clients and up to 224 processes
[Charts: IOR write and read throughput (FPP, xfersize = 4 MB) in MB/s vs. number of processes (28, 56, 112, 224), comparing 1 MB RPC and 4 MB RPC]
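An IOR run of the shape summarized above can be expressed roughly as follows; this is a sketch under assumed values (mount point /mnt/lustre, block size, MPI launcher), not the exact invocation used for the slide.

```bash
# Minimal sketch: IOR file-per-process (FPP) sequential test, 4 MB transfer size.
#   -F  file per process        -t 4m  transfer size
#   -b  data per process        -w/-r  write and read phases
#   -e  fsync after write       -i 3   repetitions
mpirun -np 224 ior -a POSIX -F -w -r -e -i 3 \
       -t 4m -b 16g -o /mnt/lustre/ior_fpp/testfile
```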
Lustre Performance Degradation with a Large Number of Threads: OSS Standpoint
• 1 × OSS, 80 × NL-SAS
• 20 clients and up to 960 processes
[Charts: IOR (FPP, 1 MB) write and read throughput in MB/s vs. number of processes, comparing RPC = 4 MB and RPC = 1 MB; chart annotations mark 4.5×, 30%, and 55%]
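One way to see how the RPC size holds up at the disk level as the thread count grows is the per-OST brw_stats histogram on the OSS; the sketch below (assuming an ldiskfs-backed OSS) resets and reads it around a run, which is an illustrative workflow rather than the procedure used for these measurements.

```bash
# Minimal sketch (run on the OSS): inspect the backend I/O size distribution.
# brw_stats shows how client RPCs translate into disk I/Os; with 4 MB RPCs a
# larger share of I/Os should stay in the large-size buckets under high load.
lctl set_param obdfilter.*.brw_stats=0    # writing to brw_stats resets the histogram
# ... run the IOR workload from the clients ...
lctl get_param obdfilter.*.brw_stats
```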
Lustre Performance Degradation with a Large Number of Threads: SSDs vs. Nearline Disks
• IOR (FPP, 1 MB) to a single OST
• 20 clients and up to 640 processes
[Charts: write and read performance per OST as a percentage of peak (maximum for each dataset = 100%) vs. number of processes, comparing 7.2K SAS (1 MB RPC), 7.2K SAS (4 MB RPC), and SSD (1 MB RPC)]
Lustre 2.4 4K Random Read Performance (with 10 SSDs)
• 2 × OSS, 10 × SSD (2 × RAID5), 16 clients
• Create large random files (FPP and SSF), then run random read access with 4 KB
[Chart: Lustre 4K random read IOPS for 1n32p, 2n64p, 4n128p, 8n256p, 12n384p, and 16n512p (nodes/processes), comparing FPP(POSIX), FPP(MPIIO), SSF(POSIX), and SSF(MPIIO)]
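A 4 KB random-read test of this shape can be driven with IOR roughly as below; the path, file size, and the write-then-read structure are assumptions for illustration (the slides do not give the exact command), and swapping -a POSIX for -a MPIIO gives the MPI-IO variants.

```bash
# Minimal sketch: 4 KB random reads against a single shared file (SSF).
# First lay out the file sequentially and keep it (-k), then read it back
# at random offsets (-z) with a 4 KB transfer size. Add -F for the FPP case.
mpirun -np 512 ior -a POSIX -w -k -t 1m -b 4g -o /mnt/lustre/ssd/rand_testfile
mpirun -np 512 ior -a POSIX -r -k -z -t 4k -b 4g -o /mnt/lustre/ssd/rand_testfile
```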
Conclusions: Lustre with SSDs
• SSD pools in Lustre
– SSDs change the behavior of pools, as "seek" latency is no longer an issue
• Application scenarios
– Very consistent performance, independent of the I/O size or the number of concurrent I/Os
– Very high random access to a single shared file (millions of IOPS in a large file system with sufficient clients)
• With SSDs, the bottleneck is no longer the device, but the RAID array (RAID stack) and the file system
– Very high random read IOPS in Lustre is possible, but only if the metadata workload is limited (i.e. random I/O to a single shared file)
• We will investigate more benchmarks of Lustre on SSDs
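Lustre OST pools are the usual mechanism for the SSD-pool scenario above: group the SSD-backed OSTs into a named pool and point a directory at it. The sketch below assumes a filesystem named testfs whose SSD OSTs have indices 4–5 and a client mount at /mnt/testfs; all names and indices are illustrative.

```bash
# Minimal sketch: build an SSD OST pool and direct a directory onto it.
# On the MGS:
lctl pool_new testfs.ssd
lctl pool_add testfs.ssd testfs-OST[0004-0005]
lctl pool_list testfs.ssd

# On a client: new files under this directory stripe only across the SSD pool.
lfs setstripe --pool ssd --stripe-count -1 /mnt/testfs/ssd_scratch
lfs getstripe /mnt/testfs/ssd_scratch
```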
Metadata Performance Improvements
• Very significant improvements (since Lustre 2.3)
– Increased performance for both unique- and shared-directory metadata
– Almost linear scaling for most metadata operations with DNE
[Chart: performance comparison, Lustre 1.8.8 vs. Lustre 2.4.2, ops/sec for directory creation, directory stat, directory removal, file creation, file stat, and file removal in a shared directory; 1 × MDS, 16 clients, 768 Lustre exports (multiple mount points)]
[Chart: ops/sec for the same operations in unique directories, single MDS vs. dual MDS (DNE); same storage resource, just adding MDS resources]
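Shared- vs. unique-directory numbers like those above are what mdtest reports, and the DNE case additionally needs directories created on the second MDT; the sketch below shows both steps under assumed names (mount point /mnt/testfs, MDT index 1, per-process item counts), not the exact benchmark setup behind these charts.

```bash
# Minimal sketch: place a directory on the second MDT (DNE, Lustre 2.4+),
# then run mdtest in shared and unique working directories.
lfs mkdir -i 1 /mnt/testfs/mdt1            # remote directory on MDT0001
lfs getdirstripe /mnt/testfs/mdt1          # verify MDT placement

# Shared directory: all ranks create/stat/remove in one directory
mpirun -np 768 mdtest -F -n 1000 -i 3 -d /mnt/testfs/mdt1/shared

# Unique directories: each rank works in its own directory (-u)
mpirun -np 768 mdtest -F -n 1000 -u -i 3 -d /mnt/testfs/mdt1/unique
```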
Metadata Performance: CPU Impact
• Metadata performance is highly dependent on CPU performance
• Limited variation in CPU frequency or memory speed can have a significant impact on metadata performance
[Charts: file creation ops/sec vs. number of processes for unique and shared directories, comparing 3.6 GHz and 2.4 GHz CPUs]
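Since the charts above show metadata throughput tracking CPU frequency, it is worth pinning the MDS cores to their highest frequency before benchmarking; the sketch below uses the standard cpupower tool, as an illustrative assumption about how the MDS might be configured rather than anything stated on the slide.

```bash
# Minimal sketch (on the MDS): report and fix the CPU frequency policy.
cpupower frequency-info                      # current driver, limits, governor
cpupower frequency-set -g performance        # keep cores at the highest frequency
```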
Conclusions: Metadata Performance
• Significant improvements for shared- and unique-directory metadata performance since Lustre 2.3
– Improvements across all metadata workloads, especially removals
– SSDs add to performance, but the impact is limited (and probably mostly due to latency, not the IOPS capability of the device)
• Important considerations for metadata benchmarks
– Metadata benchmarks are very sensitive to the underlying hardware
– Client limitations are important when considering metadata performance scaling
– Only directory/file stats scale with the number of processes per node; other metadata workloads do not appear to scale with the number of processes on a single node
Summary
• Lustre today
– Significant performance increases in object storage and metadata performance
– Much better spindle efficiency (which is now reaching the physical drive limit)
– Client performance, not the server side, is quickly becoming the limitation for small-file and random performance!
• Consider alternative scenarios
– PFS combined with fast local devices (or even local instances of the PFS)
– PFS combined with a global acceleration/caching layer