All-Flash Ceph on NUMA-Balanced Server
Veda Shankar, QCT
Tushar Gohad, Intel
2017 Ceph Day San Jose
Agenda
• All-flash Ceph and use cases
• QCT QxStor all-flash Ceph for IOPS
• QCT lab environment overview & detailed architecture
• Importance of NUMA and proof points

All-flash Ceph Use Cases
Ceph – a robust, distributed, petabyte-scale software-defined storage platform
QCT QxStor Red Hat Ceph Storage Edition

Throughput Optimized
QxStor RCT-200 (D51PH-1ULH)
• Densest 1U Ceph building block, smaller failure domain
• Dual-socket Xeon E5 v4
• 3x SSD S3710 Ceph journal, 12x HDD 7.2k rpm
• 1x 10 GbE NIC
QxStor RCT-400 (T21P-4U)
• Best throughput and density at once (2x 280TB)
• 2x dual-socket Xeon E5 v4
• 2x 2x NVMe P3700 journal, 2x 35x HDD 7.2k rpm
• 2x 40 GbE NIC
Use case: block or object storage, video, audio, image, streaming media, big data; 3x replication

Cost/Capacity Optimized
QxStor RCC-400 (T21P-4U)
• Maximize storage capacity; highest density, 560TB* raw capacity per chassis
• 2x dual-socket Xeon E5 v4
• 2x 35x HDD 7.2k rpm
• 2x 10 GbE NIC
Use case: object storage, archive, backup, enterprise Dropbox; erasure coding

IOPS Optimized
QxStor RCI-300 (D51BP-1U)
• All-flash design, low latency; performance and capacity SKUs
• Dual-socket Xeon E5 v4 with higher core counts
• 4x P3520 2TB or 4x P3700 1.6TB
• 1x 10 GbE NIC
Use case: database, HPC, mission-critical applications; 3x replication

* Optional model, one MB per chassis, can support 620TB raw capacity
QCT QxStor RCI-300
All-Flash Ceph Design for I/O-Intensive Workloads
NUMA-balanced Ceph hardware, highest IOPS and lowest latency; optimized Ceph and hardware integration for IOPS-intensive workloads.

SKU 1: All-flash Ceph – best IOPS SKU
• Ceph storage server: D51BP-1U
• CPU: 2x E5-2995 v4 or higher
• RAM: 128GB
• NVMe SSD: 4x P3700 1.6TB
• NIC: 10GbE dual port or 40GbE dual port

SKU 2: All-flash Ceph – IOPS/capacity balanced SKU (best TCO as of today)
• Ceph storage server: D51BP-1U
• CPU: 2x E5-2980 v4 or higher core count
• RAM: 128GB
• NVMe SSD: 4x P3520 2TB
• NIC: 10GbE dual port or 40GbE dual port
Intel NVMe SSD Options

                      P3700          P3520
Lithography           MLC NAND       3D NAND G1 MLC
Endurance Rating      17 DWPD        0.7 DWPD
Capacity Available    2.0 TB         2.0 TB
Sequential Read       2,800 MB/s     1,700 MB/s
Sequential Write      2,000 MB/s     1,350 MB/s
Random 4K Read        450,000 IOPS   375,000 IOPS
Random 4K Write       175,000 IOPS   26,000 IOPS
Comparative Price     $ 2.6x         $ x
Why	All-flash	Storage?
• Falling	flash	prices:	Flash	prices	fell	
as	much	as	75%	over	the	18	months	
leading	up	to	mid-2016	and	the	trend	
continues.	
“TechRepublic:	10	storage	trends	to	watch	in	
2016”
• Flash	is	10x	cheaper	than	DRAM:	
with	persistence	and	high	capacity
“NetApp”
• Flash	is	100x	cheaper	than	disk:
pennies	per	IOPS	vs.	dollars	per	IOPS
“NetApp”
• Flash	is	1000x	faster	than	disk:
latency	drops	from	milliseconds	to	
microseconds
“NetApp”
• Flash	performance	advantage:	
HDDs	have	an	advantage	in	$/GB,	
while	flash	has	an	advantage	in	
$/IOPS.
“TechTarget:	Hybrid	storage	arrays	vs.	all-
flash	arrays:	A	little	flash	or	a	lot?”
• NVMe-based	storage	trend:	60%	
of	enterprise	storage	appliances	
will	have	NVMe bays	by	2020
“G2M	Research”
• Need performance-optimized storage for mission-critical apps
• Require sub-millisecond latency
• Flash capacity gains while the price drops
Storage	Evolution
Technology claims are based on comparisons of latency, density and write cycling metrics amongst memory technologies recorded on published
specifications of in-market memory products against internal Intel specifications.
NVM	Express	(NVMe)
Standardized	interface	for	non-volatile	memory,	http://nvmexpress.org
NVMe: Best-in-Class IOPS, Lower/Consistent Latency
Lowest Latency of Standard Storage Interfaces

[Chart: IOPS, 4K random workloads (100% / 70% / 0% read), PCIe/NVMe vs. SAS 12Gb/s]

• 3x better IOPS vs. SAS 12Gb/s
• For the same number of CPU cycles, NVMe delivers over 2x the IOPS of SAS
• Gen1 NVMe has 2 to 3x better latency consistency vs. SAS

Test and system configurations: PCI Express* (PCIe*)/NVM Express* (NVMe) measurements made on an Intel® Core™ i7-3770S system @ 3.1GHz with 4GB memory running Windows* Server 2012 Standard, with Intel PCIe/NVMe SSDs; data collected by the IOmeter* tool. SAS measurements from HGST Ultrastar* SSD800M/1000M (SAS), SATA S3700 Series. For more complete information about performance and benchmark results, visit http://www.intel.com/performance. Source: Intel internal testing.
QCT Lab Environment Overview

[Diagram: 10 client systems (Client 1-10) connect over a 10GbE public network to the RBD block interface (librbd/librados on top of RADOS); the storage cluster of five Ceph nodes (Ceph 1-5) plus monitors replicates over a separate 10GbE cluster network.]
Benchmark Methodology

Stage                            Test Subject            Benchmark Tools   Major Task
I/O baseline                     Raw disk                FIO               Determine maximum server IO backplane bandwidth
Network baseline                 NIC                     iPerf             Ensure consistent network bandwidth between all nodes
Bare-metal RBD baseline          librbd                  FIO (CBT)         Use the FIO RBD engine to test performance using librbd
Docker container OLTP baseline   Percona DB + Sysbench   Sysbench/OLTP     Establish the number of workload-driver VMs desired per client

Benchmark criteria:
1. Default: ceph.conf
2. Software-level tuning: ceph.conf tuned
3. Software + NUMA CPU pinning: ceph.conf tuned + NUMA CPU pinning

(An example FIO librbd invocation is sketched below.)
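For illustration only, the librbd baseline stage could be driven by an FIO command of roughly this shape; the pool, image and client names here are assumptions rather than the exact CBT-generated invocation (runtime and ramp mirror the CBT YAML in the appendix):

# Hypothetical 4K random-read run against an existing RBD image via librbd
fio --name=rbd-baseline --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=testimg \
    --rw=randread --bs=4k --iodepth=32 --numjobs=1 \
    --direct=1 --time_based --runtime=300 --ramp_time=100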
5-Node All-NVMe Ceph Cluster – QuantaGrid D51BP-1U

Ceph storage nodes (5x):
• Dual Xeon E5-2699 v4 @ 2.3GHz, 88 HT, 128GB DDR4
• RHEL 7.3 (kernel 3.10), Red Hat Ceph Storage 2.1
• 4x 2TB P3520 NVMe SSDs per node (20 in total), 16 OSDs per node (Ceph OSD1-OSD16), 80 OSDs in the cluster
• 2x replication, 19TB effective capacity, tests run at ~82% cluster fill level
• Public network 10 GbE, cluster network 10 GbE

Client systems (10x):
• Dual Xeon E5-2699 v4 @ 2.3GHz, 88 HT, 128GB DDR4
• FIO 2.8, Sysbench 0.5, Ceph RBD client
• DB containers (Docker1/Docker2, krbd): Percona DB server, 16 vCPUs, 32GB RAM, 200GB RBD volume, 100GB MySQL dataset, InnoDB buffer cache 25GB (25%)
• Sysbench containers (Docker3/Docker4): Sysbench client, 16 vCPUs, 32GB RAM

(A sketch of the krbd-backed Percona container setup follows.)
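As a sketch only (pool name, image name, mount path and container image tag are assumptions, not taken from the deck), the krbd-backed Percona containers on each client were of this general shape:

# Create and map a 200GB RBD volume with the kernel RBD client (krbd)
rbd create rbd/mysql-vol1 --size 204800        # size in MB
rbd map rbd/mysql-vol1                         # shows up as /dev/rbd0
mkfs.xfs /dev/rbd0
mkdir -p /mnt/mysql-vol1 && mount /dev/rbd0 /mnt/mysql-vol1

# Run the Percona DB container on the mapped volume, limited to 16 vCPUs / 32GB RAM
docker run -d --name percona-db1 --cpuset-cpus=0-15 --memory=32g \
    -v /mnt/mysql-vol1:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret percona:5.7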
Detailed	System	Architecture	in	QCT	Lab
Configuring All-flash Ceph
System Tuning for Low-latency Workloads

• Use faster media for journals and metadata
• Use recent Linux kernels
  – blk-mq support brings big performance gains with NVMe media
  – optimizations for non-rotational media
• Use tuned where available
  – adaptive latency-performance tuning [2]
• Virtual memory, network and storage tweaks
  – use commonly recommended VM and network settings [1-4]
  – enable rq_affinity and read-ahead for NVMe devices
• BIOS and CPU performance governor settings
  – disable C-states and enable Turbo Boost
  – use the "performance" CPU governor
(A sample host-tuning sketch follows the references below.)
[1] https://wiki.mikejung.biz/Ubuntu_Performance_Tuning
[2] https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Power_Management_Guide/tuned-adm.html
[3] http://www.brendangregg.com/blog/2015-03-03/performance-tuning-linux-instances-on-ec2.html
[4] https://www.suse.com/documentation/ses-4/singlehtml/book_storage_admin/book_storage_admin.html
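To make the list above concrete, here is a minimal host-tuning sketch under stated assumptions: the device names (nvme*n1) and the sysctl values are illustrative, not a prescription from the deck, and C-states/Turbo Boost themselves are set in BIOS or via kernel boot parameters.

#!/usr/bin/env bash
# Illustrative host tuning for an all-NVMe Ceph OSD node (example values only)

# Adaptive low-latency profile and "performance" CPU governor
tuned-adm profile latency-performance
cpupower frequency-set -g performance

# Per-NVMe block-queue settings: complete IO on the submitting CPU (rq_affinity=2)
# and enable a modest read-ahead
for dev in /sys/block/nvme*n1; do
    echo 2   > "$dev/queue/rq_affinity"
    echo 128 > "$dev/queue/read_ahead_kb"
done

# Commonly recommended VM/network tweaks (values are examples)
sysctl -w vm.swappiness=10
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216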
Configuring All-flash Ceph
Ceph Tunables

Parameter                     Default      Tuned
objecter_inflight_ops         1024         102400
objecter_inflight_op_bytes    104857600    1048576000
  The objecter is responsible for sending requests to OSDs; these two options tell it to throttle outgoing ops according to a budget (values based on experiments in the Dumpling timeframe).
ms_dispatch_throttle_bytes    104857600    1048576000
  Throttles the dispatch message size for the simple messenger (values based on experiments in the Dumpling timeframe).
filestore_queue_max_ops       50           5000
filestore_queue_max_bytes     104857600    1048576000
  Throttle in-flight ops for the filestore. These throttles are checked before sending ops to the journal, so if the filestore does not get enough budget for the current op, the OSD op thread will be blocked.
Configuring All-flash Ceph
Ceph Tunables

Parameter                     Default   Tuned
filestore_max_sync_interval   5         10
  Controls the interval (in seconds) at which the sync thread flushes data from memory to disk. By default the filestore writes data to the page cache and the sync thread flushes it to disk, after which journal entries can be trimmed. Note that a large filestore_max_sync_interval can cause performance spikes.
filestore_op_threads          2         6
  Controls the number of filesystem operation threads that execute in parallel. If the storage backend is fast enough and has enough queues to support parallel operations, it is recommended to increase this parameter, given there is enough CPU available.
osd_op_threads                2         32
  Controls the number of threads servicing Ceph OSD daemon operations (setting this to 0 disables multi-threading). Increasing this number may increase the request processing rate, again assuming a fast backend and enough CPU.
Configuring All-flash Ceph
Ceph Tunables

Parameter                   Default     Tuned
journal_queue_max_ops       300         3000
journal_queue_max_bytes     33554432    1048576000
  Throttle in-flight ops for the journal; if the journal does not get enough budget for the current op, it blocks the OSD op thread.
journal_max_write_entries   100         1000
journal_max_write_bytes     10485760    1048576000
  Throttle the ops or bytes for every journal write; tweaking these two parameters may help for small writes.

(A runtime-injection sketch of these values follows.)
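As a sketch (not the deck's own procedure), the tuned values above could be pushed to a running Jewel-era cluster with injectargs; for persistence they belong in the [osd] section of ceph.conf, as shown in the appendix.

#!/usr/bin/env bash
# Example only: inject the tuned throttle values into all running OSDs
ARGS='--objecter_inflight_ops=102400 --objecter_inflight_op_bytes=1048576000 '
ARGS+='--ms_dispatch_throttle_bytes=1048576000 '
ARGS+='--filestore_queue_max_ops=5000 --filestore_queue_max_bytes=1048576000 '
ARGS+='--filestore_max_sync_interval=10 --filestore_op_threads=6 --osd_op_threads=32 '
ARGS+='--journal_queue_max_ops=3000 --journal_queue_max_bytes=1048576000 '
ARGS+='--journal_max_write_entries=1000 --journal_max_write_bytes=1048576000'
ceph tell 'osd.*' injectargs "$ARGS"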
Multi-partitioned NVMe SSDs
• Leverage the latest Intel NVMe technology to reach high performance and bigger capacity at a lower $/GB
  – Intel DC P3520 2TB raw performance: 375K read IOPS, 26K write IOPS
• By using multiple OSD partitions per device, Ceph performance scales linearly
  – reduces lock contention within a single OSD process
  – lower latency at all queue depths, with the biggest impact on random reads
• Introduces the concept of multiple OSDs on the same physical device
  – conceptually similar CRUSH map data placement rules as managing disks in an enclosure

[Diagram: one NVMe SSD partitioned into OSD 1-4. A partitioning sketch follows.]
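A minimal partitioning sketch, assuming a 2TB device at /dev/nvme0n1 split into four equal data partitions; journal layout and OSD creation (e.g. via ceph-disk or ceph-ansible) are left out.

#!/usr/bin/env bash
# Illustrative only: carve one NVMe SSD into four OSD data partitions
DEV=/dev/nvme0n1          # assumed device name

sgdisk --zap-all "$DEV"
for i in 1 2 3 4; do
    sgdisk --new=${i}:0:+465GiB \
           --change-name=${i}:"ceph-osd-data-${i}" "$DEV"
done
partprobe "$DEV"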
[Chart: Multiple OSDs per device comparison – 4K random read average latency (ms) vs. IOPS, 5 nodes, 20/40/80 OSDs, comparing 1, 2 and 4 OSDs per NVMe]

[Chart: Single-node CPU utilization comparison – 4K random read @ QD32, 4/8/16 OSDs, comparing 1, 2 and 4 OSDs per NVMe]

These measurements were done on a Ceph node based on Intel P3700 NVMe SSDs but are equally applicable to other NVMe devices.
Performance Testing Results
4K 100% Random Read

[Chart: 4K random read average latency (ms) vs. IOPS, IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph Storage 2.1 – Default vs. Tuned]

• ~1.57M IOPS @ ~4ms: 200% improvement in IOPS and latency
• ~1.34M IOPS @ ~1ms at QD=16: 200% improvement in IOPS and latency
Performance Testing Results
4K 100% Random Write, 70/30 OLTP Mix

[Chart: average latency (ms) vs. IOPS for 100% write and a 70/30 rd/wr OLTP mix, IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph Storage 2.1]

• ~450K 70/30 OLTP IOPS @ ~1ms, QD=4
• ~165K write IOPS @ ~2ms, QD=4
NUMA Considerations
• NUMA-balance network and storage devices across CPU sockets
• Bind IO devices to the local CPU socket (IRQ pinning)
• Align OSD data and journals to the same NUMA node
• Pin OSD processes to the local CPU socket (NUMA node pinning)

[Diagram: two CPU sockets (NUMA node 0 and node 1) linked by QPI, each with its own memory, storage, NICs and set of Ceph OSD processes; device access is "local" within a node and "remote" across the QPI link. A discovery and IRQ-pinning sketch follows.]
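A discovery and IRQ-pinning sketch, assuming a NIC named ens1f0 and an example CPU mask; the deck itself does not prescribe these exact commands.

#!/usr/bin/env bash
# Which NUMA node does each NVMe SSD and the NIC live on?
for dev in /sys/class/nvme/nvme*; do
    echo "$(basename "$dev"): NUMA node $(cat "$dev/device/numa_node")"
done
cat /sys/class/net/ens1f0/device/numa_node      # assumed NIC name

# Keep the NIC's interrupts on the CPUs of its local node
# (mask 000fffff = CPUs 0-19, purely as an example)
systemctl stop irqbalance
for irq in $(grep ens1f0 /proc/interrupts | awk -F: '{print $1}'); do
    echo 000fffff > "/proc/irq/${irq}/smp_affinity"
done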
NUMA-Balanced Config on QCT QuantaGrid D51BP-1U

[Diagram: CPU 0 and CPU 1, each with local RAM, connected by QPI. Each socket owns 4 NVMe drive slots (PCIe Gen3 x4 per drive) and 1 NIC slot (PCIe Gen3 x8); Ceph OSDs 1-8 run on CPU 0 and OSDs 9-16 on CPU 1. An OSD-pinning sketch follows.]
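A minimal OSD-pinning sketch under stated assumptions: this node hosts OSD IDs 0-15 as systemd-managed ceph-osd@ units, and the CPU numbering matches a typical 2x E5-2699 v4 host (confirm with lscpu); none of this is verbatim from the deck.

#!/usr/bin/env bash
# Example only: pin each OSD daemon to the CPU socket local to its NVMe drive
NODE0_CPUS="0-21,44-65"     # assumed layout: socket 0 cores + their HT siblings
NODE1_CPUS="22-43,66-87"    # assumed layout: socket 1 cores + their HT siblings

for id in $(seq 0 15); do
    if [ "$id" -lt 8 ]; then cpus=$NODE0_CPUS; else cpus=$NODE1_CPUS; fi
    pid=$(systemctl show -p MainPID ceph-osd@${id}.service | cut -d= -f2)
    [ "$pid" -gt 0 ] && taskset -a -c -p "$cpus" "$pid"
done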
Performance Testing Results
Latency improvements after NUMA optimizations

[Chart: 70/30 4K OLTP performance before vs. after NUMA balancing – average latency (ms) vs. IOPS, IO depth scaling 4-128, 5 nodes, 10 clients x 10 RBD volumes, Red Hat Ceph Storage 2.1; SW Tuned vs. SW + NUMA CPU pinning]

With NUMA balancing, at QD=8:
• 40% better IOPS
• 100% better average latency
• 15-20% better 90th-percentile latency
• 10-15% better 99th-percentile latency
NUMA Partitioning and Hyper-converged
More than one way to slice the system

[Diagram: both sockets (NUMA node 0 and node 1, QPI in between) with local memory, storage and NICs; on each socket some cores run Ceph OSDs while the remaining cores run compute workloads.]
NUMA Partitioning and Hyper-converged (NFV)
Cluster-on-Die (Xeon processors with > 10 cores)

[Diagram: with Cluster-on-Die each socket is split into two NUMA nodes (nodes 0-3 across the two sockets, QPI between sockets); each node has local memory and its own NIC, with Ceph OSD cores and compute cores partitioned per node, and storage attached to its local node.]
§ All-NVMe Ceph enables high-performance workloads
§ NUMA-balanced architecture
§ Small footprint (1U), lower overall TCO
§ Million IOPS with very low latency

Open Discussion
For Other Information…
Visit www.QCT.io for QxStor Red Hat Ceph Storage Edition:
• Reference Architecture: Red Hat Ceph Storage on QCT Servers
• Datasheet: QxStor Red Hat Ceph Storage
• Solution Brief: QCT and Intel Hadoop Over Ceph Architecture
• Solution Brief: Deploying Red Hat Ceph Storage on QCT Servers
• Solution Brief: Containerized Ceph for On-Demand, Hyperscale Storage
Appendix
#	Please	do	not	change	this	file	directly	since	it	is	managed	by	Ansible and	will	be	overwritten
[global]
fsid =	7e191449-3592-4ec3-b42b-e2c4d01c0104
max	open	files	=	131072
crushtool =	/usr/bin/crushtool
debug_lockdep =	0/1
debug_context =	0/1
debug_crush =	1/1
debug_buffer =	0/1
debug_timer =	0/0
debug_filer =	0/1
debug_objecter =	0/1
debug_rados =	0/5
debug_rbd =	0/5
debug_ms =	0/5
debug_monc =	0/5
debug_tp =	0/5
debug_auth =	1/5
debug_finisher =	1/5
debug_heartbeatmap =	1/5
debug_perfcounter =	1/5
debug_rgw =	1/5
debug_asok =	1/5
debug_throttle =	1/1
debug_journaler =	0/0
debug_objectcatcher =	0/0
debug_client =	0/0
debug_osd =	0/0
debug_optracker =	0/0
debug_objclass =	0/0
debug_filestore =	0/0
debug_journal =	0/0
debug_mon =	0/0
debug_paxos =	0/0
osd_crush_chooseleaf_type =	0
filestore_xattr_use_omap =	true
osd_pool_default_size =	1
osd_pool_default_min_size =	1
Configuration	Detail	– ceph.conf (1/2)
rbd_cache =	true
mon_compact_on_trim =	false
log_to_syslog =	false
log_file =	/var/log/ceph/$name.log
mutex_perf_counter =	true
throttler_perf_counter =	false
ms_nocrc =	true
[client]
admin	socket	=	/var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok #	must	be	writable	by	
QEMU	and	allowed	by	SELinux or	AppArmor
log	file	=	/var/log/ceph/qemu-guest-$pid.log	#	must	be	writable	by	QEMU	and	allowed	by	
SELinux or	AppArmor
rbd_cache =	true
rbd_cache_writethrough_until_flush =	false
[mon]
[mon.qct50]
host	=	qct50
#	we	need	to	check	if	monitor_interface is	defined	in	the	inventory	per	host	or	if	it's	set	in	a	
group_vars file
mon addr =	10.5.15.50
mon_max_pool_pg_num =	166496
mon_osd_max_split_count =	10000
[osd]
osd mkfs type	=	xfs
osd mkfs options	xfs =	-f	-i size=2048
osd_mount_options_xfs =	"rw,noatime,inode64,logbsize=256k,delaylog"
osd journal	size	=	10240
cluster_network =	10.5.16.0/24
public_network =	10.5.15.0/24
filestore_queue_max_ops =	5000
osd_client_message_size_cap =	0
objecter_infilght_op_bytes =	1048576000
ms_dispatch_throttle_bytes =	1048576000
filestore_wbthrottle_enable =	True
filestore_fd_cache_shards =	64
objecter_inflight_ops =	1024000
filestore_max_sync_interval =	10
filestore_op_threads =	16
osd_pg_object_context_cache_count =	10240
journal_queue_max_ops =	3000
filestore_odsync_write =	True
journal_queue_max_bytes =	10485760000
journal_max_write_entries =	1000
filestore_queue_committing_max_ops =	5000
journal_max_write_bytes =	1048576000
filestore_fd_cache_size =	10240
osd_client_message_cap =	0
journal_dynamic_throttle =	True
osd_enable_op_tracker =	False
Configuration	Detail	– ceph.conf (2/2)
cluster:
head:	"root@qct50"
clients:	["root@qct50",	"root@qct51",	"root@qct52",	"root@qct53",	"root@qct54",	
"root@qct55",	"root@qct56",	"root@qct57",	"root@qct58",	"root@qct59"]
osds:	["root@qct62",	"root@qct63",	"root@qct64",	"root@qct65",	"root@qct66"]
mons:	["root@qct50"]
osds_per_node:	16
fs:	xfs
mkfs_opts:	-f	-i size=2048	-n	size=64k
mount_opts:	-o	inode64,noatime,logbsize=256k
conf_file:	/etc/ceph/ceph.conf
ceph.conf:	/etc/ceph/ceph.conf
iterations:	1
rebuild_every_test:	False
tmp_dir:	"/tmp/cbt"
clusterid:	7e191449-3592-4ec3-b42b-e2c4d01c0104
use_existing:	True
pool_profiles:
replicated:
pg_size:	8192
pgp_size:	8192
replication:	2
benchmarks:
librbdfio:
rbdadd_mons:	"root@qct50:6789"
rbdadd_options:	"noshare"
time:	300
ramp:	100
vol_size:	8192
mode:	['randread']
numjobs:	1
use_existing_volumes:	False
procs_per_volume:	[1]
volumes_per_client:	[10]
op_size:	[4096]
concurrent_procs:	[1]
iodepth:	[4,	8,	16,	32,	64,	128]
osd_ra:	[128]
norandommap:	True
cmd_path:	'/root/cbt_packages/fio/fio'
log_avg_msec:	250
pool_profile:	'replicated'
Configuration	Detail	- CBT	YAML	File
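As a usage hint (not part of the original deck), a CBT run driven by the YAML above is typically launched along these lines; the archive directory and YAML file name are assumptions.

# Assumed invocation of the Ceph Benchmarking Tool (CBT)
./cbt.py --archive=/tmp/cbt-results all_nvme_randread.yaml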
www.QCT.io
Looking for an innovative cloud solution? Come to QCT, who else?