Red Hat Storage (RHS) Performance
Ben England
Principal S/W Engr., Red Hat
What this session covers
What's new with Red Hat Storage performance since Summit 2013?
How to monitor RHS performance
Future directions for RHS performance
Related presentations
•Other RHS presentations at Summit 2014
•http://www.redhat.com/summit/sessions/topics/red-hat-storage-server.html
•Previous RHS perf presentations from 2012 and 2013
• http://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf
• http://rhsummit.files.wordpress.com/2012/03/england-rhs-performance.pdf
Improvement since last year
Libgfapi – removes FUSE bottleneck for high-IOPS applications
FUSE – Linux caching can now be utilized with the fopen-keep-cache feature
Brick configuration – alternatives for small-file, random-I/O-centric configurations
SSD – where it helps, configuration options
Swift – RHS converging with OpenStack Icehouse, is scalable and stable
NFS – large sequential writes more stable, 16x larger RPC size limit
Anatomy of Red Hat Storage (RHS) – libgfapi
[Architecture diagram: applications (App, qemu/KVM, Hadoop) and access services (Swift, smbd/CIFS, NFS v3 clients, Windows and HTTP clients) reach glusterfsd brick server processes on the RHS server, either through the glusterfs client process via VFS/FUSE in the Linux kernel or directly via libgfapi; the server also communicates with other RHS servers.]
Libgfapi development and significance
•Python and Java bindings
•Integration with libvirt, SMB
•Gluster performs better than Ceph for Cinder volumes (Principled Tech.)
•Lowered context switching overhead
•A distributed libgfapi benchmark is available at:
•https://github.com/bengland2/parallel-libgfapi
•Fio benchmark has a libgfapi “engine” (example invocation below)
•https://github.com/rootfs/fio/blob/master/engines/glusterfs.c
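As a quick illustration, a random-write run with fio's gfapi engine might look roughly like the sketch below; the volume name and server hostname are hypothetical, and option spellings may vary between fio versions.

  # hypothetical volume "testvol" served by Gluster node "rhs-server1"
  fio --name=gfapi-randwrite --ioengine=gfapi \
      --volume=testvol --brick=rhs-server1 \
      --rw=randwrite --bs=16k --size=1g --numjobs=4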
Libgfapi tips
•For each Gluster volume:
•gluster volume set your-volume allow-insecure on
•gluster volume set your-volume stat-prefetch on (if it is currently set to “off”)
•For each Gluster server:
•Add “option rpc-auth-allow-insecure on” to /etc/glusterfs/glusterd.vol
•Restart glusterd
•Watch TCP port consumption
•On client: TCP ports = bricks/volume x libgfapi instances
•On server: TCP ports = bricks/server x clients' libgfapi instances
•Control number of bricks/server
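Putting the tips above together, the setup amounts to roughly the following sketch; “your-volume” is a placeholder, and on current Gluster releases the first option is usually spelled server.allow-insecure:

  # per volume, from any Gluster node
  gluster volume set your-volume allow-insecure on
  gluster volume set your-volume stat-prefetch on      # only if it is currently "off"

  # per server: add the line below to /etc/glusterfs/glusterd.vol, then restart glusterd
  #   option rpc-auth-allow-insecure on
  service glusterd restart        # or: systemctl restart glusterd on RHEL 7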
libgfapi – performance
•For a single client & server with a 40-Gbps IB connection, Nytro SSD caching
Libgfapi gets rid of client-side bottleneck
[Chart: libgfapi vs FUSE – random write IOPS; 64 threads on 1 client, 16-KB transfer size, 4 servers, 6 bricks/server, 1 GB/file, 4096 requests/file; y-axis: client IOPS (requests/sec), 0–60,000.]
Libgfapi and small files
•4 servers, 10-GbE jumbo frames, 2-socket Westmere, 48 GB RAM, 12 7200-RPM drives, 6 RAID1 LUNs, RHS 2.1U2
•1024 libgfapi processes spread across 4 servers
•Each process writes/reads 5000 1-MB files (1000 files/dir)
•Aggregate write rate: 155 MB/s/server
•Aggregate read rate: 1500 MB/s/server, 3000 disk IOPS/server
•On reads, disks are the bottleneck!
Libgfapi vs Ceph for OpenStack Cinder
•Principled Technologies compared the two for Havana RDO Cinder
•Results show Gluster outperformed Ceph significantly in some cases
•Sequential large-file I/O
•Small-file I/O (also sequential)
•Ceph outperformed Gluster for the pure-random-I/O case – why?
•RAID6 appears to be the root cause; we saw high block-device average wait times
Gluster libgfapi vs Ceph for OpenStack Cinder – large files
Example: 4 servers, 4-16 guests, 4 threads/guest, 64-KB file size
[Charts: “4 Node - Read” and “4 Node - Write” – total throughput (MB/s) vs. number of VMs (4, 8, 12, 16) for Ceph and Gluster; read y-axis 0–3000 MB/s, write y-axis 0–1800 MB/s.]
Libgfapi with OpenStack Cinder – small files
[Chart: Gluster % improvement over Ceph for Cinder volume small-file throughput (append, create, read); file size average 64 KB, 32K files/thread, 4 threads/guest; x-axis: compute nodes, total guests, total threads (1,1,4 / 2,2,8 / 4,4,16 / 1,4,16 / 2,8,32 / 4,16,64); y-axis: % improvement relative to Ceph, -20% to 60%.]
Random I/O on RHEL OSP with RHS Cinder volumes
•Rate-limited guests for sizing, measured response time
[Chart 1: OSP on JBOD RHS, random write IOPS vs. latency – OSP 4.0, 3 RHS (repl 3) servers, 4 computes, 1 thread/instance, 16G filesz, 64K recsz, DIO; series: 25/50/100 IOPS limit and response time; x-axis: guests (1–128); left y-axis: total IOPS (0–4000); right y-axis: response time in msec (1–1000).]
[Chart 2: OSP on RAID6 RHS, random write IOPS vs. latency – OSP 4.0, 1 thread/instance, 4 RHS (repl 2) servers, 4 computes, 16G filesz, 64K recsz, DIO; same series and axes.]
RHS libgfapi: Sequential I/O to Cinder volumes
[Chart: OSP on RHS, scaling large-file sequential I/O, RAID6 (2-replica) vs. JBOD (3-replica) – OSP 4.0, 4 computes, 1 thread/instance, 4G filesz, 64K recsz; series: RAID6 writes, JBOD writes, RAID6 reads, JBOD reads; x-axis: guests (1–128); y-axis: total throughput in MB/sec/server (0–900).]
Cinder volumes on libgfapi: small files
•An increase in concurrency on RAID6 decreases system throughput
Brick configuration alternatives – metrics
How do you evaluate the optimality of storage with both Gluster replication and RAID?
•Disks/LUN –
• more bricks = more CPU power
•Physical:usable TB – what fraction of physical space is available?
•Disk loss tolerance – how many disks can you lose without data loss?
•Physical:logical write IOPS –
• how many physical disk IOPS per user I/O? Both Gluster replication and RAID affect this
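One way to read these metrics (a rough sketch, assuming 12-disk RAID6 LUNs and 2-way Gluster replication as in the first row of the table on the next slide): each RAID6 LUN offers 10 data disks' worth of space out of 12, so physical:usable TB = 2 replicas x 12/10 = 2.4; and a small random write can cost up to 6 physical disk I/Os for a RAID6 read-modify-write (read and rewrite the data strip plus both parity strips), so with 2 replicas the physical:logical write ratio is roughly 2 x 6 = 12.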
Brick configuration – how do alternatives look?
Small-file workload, 8 clients, 4 threads/client, 100K files/thread, 64 KB/file
2 servers in all cases except in JBOD 3way case
RAID type  | disks/LUN | physical:usable TB ratio | loss tolerance (disks) | physical:logical random I/O ratio | create files/sec | read files/sec
RAID6      | 12        | 2.4                      | 4                      | 12                                | 234              | 676
JBOD       | 1         | 2.0                      | 1                      | 2                                 | 975              | 874
RAID1      | 2         | 4.0                      | 2                      | 4                                 | 670              | 773
RAID10     | 12        | 4.0                      | 2                      | 4                                 | 308              | 631
JBOD 3way  | 1         | 3.0                      | 2                      | 3                                 | 500              | 393?
Brick config. – relevance of SSDs
•Where SSDs don't matter:
•Sequential large-file read/write – RAID6 already drives near 1 GB/sec throughput
•Reads cacheable by Linux – RAM is faster than SSD
•Where SSDs do matter:
•Random read working set > RAM, < SSD
•Random writes
•Concurrent sequential file writes are slightly random at the block device
•Small-file workloads are random at the block device
•Metadata access
Brick configuration – 3 ways to utilize SSDs
•SSDs as bricks
•Configuration guideline – multiple bricks/SSD
•RAID controller supplies SSD caching layer (example: LSI Nytro)
•Transparent to RHEL, Gluster, but no control over policy
•Linux-based SSD caching layer – dm-cache (RHEL7)
•Greater control over caching policy, etc.
•Not supported on RHEL 6, so not supported in Denali (RHS 3.0)
Example of SSD performance gain
[Chart: pure-SSD vs write-back cache for small files – smallfile workload: 1,000,000 files total, fsync-on-close, hash-into-dirs, 100 files/dir, 10 sub-dir./dir, exponential file sizes; x-axis: operation type (create, read), threads (1 or 16), file size (4 or 64 KB); y-axis: files/sec (0–70,000); series: SSD, write-back caching.]
SSDs: Multiple bricks per SSD
distribute CPU load across glusterfsd brick server processes
workload here: random I/O
SSD caching by LSI Nytro RAID controller
dm-cache in RHEL7 can help with small-file reads
SSD conclusions
•Can add SSD to Gluster volumes today in one of above ways
•Use more SSD than server RAM to see benefits
•Not really needed for sequential large-file I/O workloads
•Each of these 3 alternatives has weaknesses
•SSD brick – very expensive, too many bricks required in Gluster volume
•Firmware caching – very conservative in using SSD blocks
•dm-cache – RHEL7-only, not really write-back caching
•Could Gluster become SSD-aware?
•Could reduce latency of metadata access such as change logs, .glusterfs, journal, etc.
Swift – example test configuration - TBS
[Diagram: 4 clients (Client-1 … Client-4) and 8 servers (server-1 … server-8) connected through a Cisco Nexus 7010 10-GbE network switch; magenta = private replication traffic subnet, green = HTTP public traffic subnet.]
Swift-on-RHS: large objects
•No tuning except 10-GbE, 2 NICs, MTU=9000 (jumbo frames), rhs-high-throughput tuned profile
•150-MB average object size, 4 clients, 8 servers
•writes: HTTP 1.6 GB/sec = 400 MB/sec/client, 30% net. util
•reads: HTTP 2.8 GB/s = 700 MB/s/client, 60% net. util.
Swift-on-RHS: small objects
•Catalyst workload: 28.68 million objects, PUT and GET
•8 physical clients, 64 threads/client, 1 container/client, file size from 5 KB up to 30 MB, average 30 KB
[Chart: PUT throughput for three runs (PUT-1, PUT-2, PUT-3); x-axis: fileset (10K files, 0–500); y-axis: IOPS (files/s, 0–160).]
RHS performance monitoring
•Start with network
•Are there hot spots on the servers or clients?
•gluster volume profile your-volume [ start | info ]
•Performance Co-Pilot (PCP) plugin
•gluster volume top your-volume [ read | write | opendir | readdir ]
•Performance Co-Pilot demo
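For reference, the profile and top commands above expand to roughly the following (“your-volume” is a placeholder):

  # per-brick file-operation latency and call counts
  gluster volume profile your-volume start
  gluster volume profile your-volume info

  # hottest files and directories by operation
  gluster volume top your-volume read
  gluster volume top your-volume write
  gluster volume top your-volume opendir
  gluster volume top your-volume readdir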
Recorded demo of RHS perf.
Future (upstream) directions
•Thin Provisioning (LVM) – allows Gluster volumes with different users and policies to easily co-exist on the same physical storage
•Snapshot (LVM) – For online backup/geo-replication of live files
•Small-file – optimizations in system calls
•Erasure coding – lowering cost/TB and write traffic on network
•Scalability – larger server and brick counts in a volume
•RDMA+libgfapi – bypassing kernel for fast path
•NFS-Ganesha and pNFS – how it complements glusterfs
LVM thin-p + snapshot testing
[Diagram: test stack, top to bottom – XFS file system on an LVM thin-provisioned volume (with Snapshot[1] … Snapshot[n]) on an LVM pool built from an LVM PV, on a RAID6 12-drive volume behind an LSI MegaRAID controller.]
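A minimal sketch of building this stack with LVM thin provisioning; the device, volume group, and LV names here are hypothetical, and sizes and options will differ per deployment:

  pvcreate /dev/sdb                                      # RAID6 LUN exposed by the MegaRAID controller
  vgcreate vg_bricks /dev/sdb
  lvcreate -L 10T --thinpool brickpool vg_bricks         # LVM thin pool built from the PV
  lvcreate -V 10T --thin -n brick1 vg_bricks/brickpool   # thin-provisioned volume
  mkfs.xfs /dev/vg_bricks/brick1                         # XFS file system on top
  lvcreate -s -n brick1_snap1 vg_bricks/brick1           # Snapshot[1] of the thin volume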
Avoiding LVM snapshot fragmentation
LVM enhancement: bio-sorting on writes
Testing just XFS over an LVM thin-provisioned volume snapshot
Scalability using virtual RHS servers
[Diagram: each KVM host (KVM host 1 … KVM host N) runs RHSS VM[1] … RHSS VM[8], each with 3 GB RAM and 2 cores; hosts and clients (Client[1] … Client[2n]) are connected through a 10-GbE switch.]
Scaling of throughput with virtual Gluster servers for sequential multi-stream I/O
[Chart: GlusterFS, up to 8 bare-metal clients, up to 4 bare-metal KVM hosts, up to 8 VMs/host, 2-replica volume, read-hash-mode 2, 1 brick/VM, RA 4096 KB on /dev/vdb, deadline scheduler; 2 iozone threads/server, 4 GB/thread, threads spread evenly across 8 clients; x-axis: server count (0–35); y-axis: throughput (MB/s, 0–3500); series: read, write.]
Optimizing small-file creates
[Chart: small-file create performance – 100,000 files/thread, 64 KB/file, 16 threads, 30 files/dir, 5 subdirs/dir; x-axis: fsync before close? (Y/N), hash into directories? (Y/N); y-axis: throughput (files/sec, 0–10,000). Test done with dm-cache on RHEL7 with XFS filesystem, no Gluster, 180 GB total data.]
Avoiding FSYNC fop in AFR
http://review.gluster.org/#/c/5501/, in Gluster 3.5
•Reverses the loss in small-file performance from RHS 2.0 -> 2.1
[Chart: effect of the no-fsync-after-append enhancement on RHS throughput (patch at http://review.gluster.org/5501); x-axis: threads/server, files/thread, file size (KB); y-axis: throughput (files/sec, 0–700); series: without change, with change.]
Check Out Other Red Hat Storage Activities at The Summit
•Enter the raffle to win tickets for a $500 gift card or trip to LegoLand!
• Entry cards available in all storage sessions - the more you attend, the more chances you have to win!
•Talk to Storage Experts:
• Red Hat Booth (# 211)
• Infrastructure
• Infrastructure-as-a-Service
•Storage Partner Solutions Booth (# 605)
•Upstream Gluster projects
• Developer Lounge
Follow us on Twitter, Facebook: @RedHatStorage

