A comparison of storage performance
EMC VNX 5100 vs. IBM DS5300
2011
1. Configuration of the tested storage systems:
EMC VNX 5100: 4GB RAM per controller, 8x 15K RPM 3.5" 600GB HDD
IBM DS5300: 8GB RAM per controller, 8x 15K RPM 3.5" 300GB HDD
I should note that the EMC VNX 5100 is the entry-level model of EMC's midrange line, while the IBM DS5300 is the top model of IBM's midrange line (although it is already aging and is being superseded in the product line by the IBM Storwize V7000).
One advantage of the IBM DS5300 in the tested configuration is considerably more memory: 8GB vs. 4GB per controller in the EMC VNX 5100. Moreover, in the IBM DS5300 almost all of this memory is used as cache, while in the EMC VNX 5100 more than ¾ of the memory is consumed by the needs of the FLARE operating environment, and only the remaining small area is available for the cache.
Another important difference is the maximum queue depth supported for a logical drive on a host running AIX (a rough illustration is sketched below):
IBM DS5300: maximum queue_depth = 256
EMC VNX 5100: maximum queue_depth = 64
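To illustrate why the queue depth limit matters, here is a minimal sketch based on Little's Law, assuming a single LUN and a fixed average response time (the latency values are illustrative, not measured):

    # Little's Law: IOPS = outstanding I/Os / average response time.
    # A LUN's queue_depth caps how many I/Os the host keeps outstanding,
    # so it also caps the IOPS a single LUN can reach at a given latency.
    def max_iops(queue_depth: int, latency_ms: float) -> float:
        return queue_depth / (latency_ms / 1000.0)

    for qd in (64, 256):            # VNX 5100 vs DS5300 per-LUN limits
        for lat in (5.0, 10.0):     # illustrative average latencies, ms
            print(f"queue_depth={qd:3d}, latency={lat:4.1f}ms -> "
                  f"up to ~{max_iops(qd, lat):7.0f} IOPS per LUN")

With a small number of HDDs behind the LUN this limit is rarely reached, but it becomes relevant once caching (or SSDs) drives the response time down.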
On the other hand, since an identical test area (~1.2TB) is used on both systems, and the drives installed in the tested EMC VNX 5100 have twice the capacity of those in the IBM DS5300, the EMC VNX 5100 results will benefit from the so-called short-stroking* effect (a rough estimate is sketched below).
* short stroking - reducing HDD access time by shortening the travel of the magnetic heads, achieved by using only part of the available space on the HDD for storage
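To put the short-stroking effect in numbers, here is a rough estimate of how much of each drive the ~1.2TB test area occupies; it deliberately ignores RAID overhead and filesystem metadata, so treat it as an approximation:

    # Fraction of each drive's capacity touched by the ~1.2TB test area.
    test_area_tb = 1.2
    drives = 8
    for name, drive_gb in (("EMC VNX 5100", 600), ("IBM DS5300", 300)):
        used_fraction = (test_area_tb * 1000) / (drives * drive_gb)
        print(f"{name}: ~{used_fraction:.0%} of each drive used")
    # The VNX drives are only about half as full as the DS5300 drives,
    # so average head travel (and therefore seek time) is shorter on the VNX.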
Performance will be compared on the following RAID group types: R10, R5 and R6. Two load profiles will be used: 100% random with a ratio of R/W = 80/20 (typical of database workloads) and sequential read/write (typical of backup/restore).
2. For the R10 performance comparison, the following RAID group configuration was used:
EMC VNX 5100: one R10 (4+4) raid group
IBM DS5300: one R10 (4+4) raid group
2.1. Results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 2.1a. The corresponding response times of the disk subsystem during testing are shown in diagram 2.1b.
[Diagram 2.1a: IOPS vs. block size; Diagram 2.1b: latency, ms vs. block size. R10, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 (8x 15K 3.5" HDD) cache 128MB read / 673MB write; EMC VNX 5100 (8x 15K 3.5" HDD) cache 476MB read / 325MB write (default); IBM DS5300 (8x 15K 3.5" HDD).]
Two cache configurations were tested on the EMC VNX 5100:
- default: 476MB read cache and 325MB write cache (per SP)
- optimized: 128MB read cache and 673MB write cache (per SP)
The diagrams above show that the optimized configuration increases performance significantly compared to the default settings. Therefore, all subsequent EMC VNX 5100 tests use this cache configuration.
The IBM DS5300 does not allow such manipulation of its cache settings, so a similar optimization could not be carried out there.
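For reference, a small sketch of the per-SP memory budget implied by the text; the FLARE share is only an approximation of the "more than ¾ of 4GB" statement, and the read/write split is taken from the diagram legend:

    # Per-SP memory budget on the tested VNX 5100 (illustrative figures).
    total_mb = 4 * 1024
    for label, read_mb, write_mb in (("default", 476, 325),
                                     ("optimized", 128, 673)):
        user_cache = read_mb + write_mb
        system = total_mb - user_cache
        print(f"{label:9s}: user cache {user_cache}MB "
              f"({read_mb}MB read / {write_mb}MB write), "
              f"~{system}MB left to FLARE and system needs")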
2.2. The results of testing with a load of 512 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 2.2a. The corresponding response times of the disk subsystem during testing are shown in diagram 2.2b.
[Diagram 2.2a: IOPS vs. block size; Diagram 2.2b: latency, ms vs. block size. R10, 512 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
As the diagrams show, increasing the number of I/O processes from 64 to 512 does not improve the performance of either storage system; the limiting factor is the very small number of HDDs used in the tests (a rough estimate is sketched below). Therefore, subsequent tests are limited to a maximum of 64 I/O processes.
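As a back-of-the-envelope check, assuming a rule-of-thumb figure of roughly 180 random IOPS per 15K RPM drive (an assumption, not a measurement):

    # With only 8 spindles, the random-I/O ceiling is reached long before
    # 512 outstanding requests; extra concurrency only lengthens the queues.
    disks = 8
    iops_per_15k_disk = 180          # rule-of-thumb assumption
    read_share, write_share = 0.8, 0.2
    r10_write_penalty = 2            # each host write costs 2 disk writes in R10

    backend_iops = disks * iops_per_15k_disk
    # Host IOPS h satisfy: h*read_share + h*write_share*penalty <= backend_iops
    host_iops = backend_iops / (read_share + write_share * r10_write_penalty)
    print(f"~{host_iops:.0f} host IOPS ceiling for R10 at R/W = 80/20")

Controller caches can lift the measured results above this figure, but the order of magnitude does not change.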
2.3. The results of testing under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 2.3a. The results under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 2.3b.
In my opinion, diagram 2.3b quite clearly shows the difference in the bandwidth of the back-end interfaces used in the two storage systems (both systems use two such back-end interfaces, and for R10 the usable write bandwidth is halved by the write penalty of 2; a rough calculation is sketched below):
- 4Gbps FC on the IBM DS5300
- 6Gbps SAS on the EMC VNX 5100
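A rough sanity check of those back-end limits, assuming about 80% of the line rate is usable payload and ignoring cache effects (both are assumptions, not measurements):

    # Sequential-write ceiling imposed by the back-end links for R10:
    # every host megabyte written is mirrored, so it crosses the back-end twice.
    def seq_write_ceiling_mbps(link_gbps, links=2, efficiency=0.8,
                               mirror_factor=2):
        usable_mbps = link_gbps * 1000 / 8 * efficiency
        return links * usable_mbps / mirror_factor

    print(f"IBM DS5300, 2x 4Gbps FC   : ~{seq_write_ceiling_mbps(4):.0f} MB/s")
    print(f"EMC VNX 5100, 2x 6Gbps SAS: ~{seq_write_ceiling_mbps(6):.0f} MB/s")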
[Diagram 2.3a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 2.3b: sequential write, MBps vs. I/O processes (1/2/4/8). R10. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
3. For the R5 performance comparison, the following RAID group configuration was used:
EMC VNX 5100: one R5 (7+1) raid group
IBM DS5300: one R5 (7+1) raid group
3.1. Results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 3.1a. The corresponding response times of the disk subsystem during testing are shown in diagram 3.1b.
[Diagram 3.1a: IOPS vs. block size; Diagram 3.1b: latency, ms vs. block size. R5, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
3.2. The results of testing under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 3.2a. The results under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 3.2b.
As expected, both storage systems demonstrate better sequential read/write results with R5 than with R10 (a rough explanation is sketched below).
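One way to see why, continuing the back-end sketch above: with large, full-stripe sequential writes the number of drives carrying useful payload differs per RAID level (the per-drive streaming rate below is an assumed figure):

    # Data drives carrying useful payload during full-stripe sequential writes.
    layouts = {"R10 (4+4)": 4, "R5 (7+1)": 7, "R6 (6+2)": 6}
    per_drive_mbps = 100        # assumed streaming rate of one 15K HDD
    for name, data_drives in layouts.items():
        print(f"{name}: up to ~{data_drives * per_drive_mbps} MB/s of payload "
              f"before the back-end links become the limit")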
[Diagram 3.2a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 3.2b: sequential write, MBps vs. I/O processes (1/2/4/8). R5. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
4. For the R6 performance comparison, the following RAID group configuration was used:
EMC VNX 5100: one R6 (6+2) raid group
IBM DS5300: one R6 (6+2) raid group
4.1. Results of testing with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 4.1a. The corresponding response times of the disk subsystem during testing are shown in diagram 4.1b.
[Diagram 4.1a: IOPS vs. block size; Diagram 4.1b: latency, ms vs. block size. R6, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
4.2. The results of testing under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 4.2a. The results under a load of 1/2/4/8 I/O processes with sequential writes are presented in diagram 4.2b.
[Diagram 4.2a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 4.2b: sequential write, MBps vs. I/O processes (1/2/4/8). R6. Series: EMC VNX 5100 8x 15K 3.5" HDD; IBM DS5300 8x 15K 3.5" HDD.]
5. At this point the performance testing of the two storage systems would normally have come to its logical conclusion. But... an EMC distributor lent us a pair of SSDs (~100GB each) for the EMC VNX 5100, and the array already had a license for EMC FAST Cache. Hence the next chapter of this review.
I had previously worked with EMC EFD storage, but those were STEC ZeusIOPS SSDs, based on SLC memory and known not only for their high performance but also for their equally high cost. EMC used them earlier both in the Symmetrix line and in the CLARiiON line.
The EMC VNX uses enterprise SSDs from a different manufacturer (Samsung), also based on SLC memory but offered at a more affordable price (compared to STEC ZeusIOPS). These SSDs use a SATA interface (3Gbps) and come in a 3.5" form factor.
First we will examine the speed capabilities of the SSDs themselves, and then look at how much the performance of a disk RAID group grows when FAST Cache is used.
5.1. To evaluate SSD performance, an R1 raid group was created from the two SSDs. Test results for the SSDs with a load of 16 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and load ratios of R/W = 80/20, 0/100 and 100/0 are shown in diagram 5.1a. The corresponding response times of the disk subsystem during testing are shown in diagram 5.1b.
[Diagram 5.1a: IOPS vs. block size; Diagram 5.1b: latency, ms vs. block size. R1 (2x SSD), 16 I/O threads, 100% random. Series: EMC VNX 5100 2x 3.5" SSD at R/W=80/20; R/W=0/100; R/W=100/0.]
5.2. The results of testing under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 5.2a. The results under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 5.2b.
[Diagram 5.2a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 5.2b: sequential write, MBps vs. I/O processes (1/2/4/8). R1 (1+1) SSD. Series: EMC VNX 5100 2x 3.5" SSD.]
6. We now turn to testing FAST Cache, comparing the results obtained earlier on the EMC VNX 5100 with 8x HDD and no FAST Cache against the results obtained with FAST Cache enabled. The effect of FAST Cache does not appear immediately: it takes time for the "hot" data to be cached. This interval is usually referred to as cache warm-up, and it applies to large caches in general, not just EMC FAST Cache (a simple warm-up model is sketched below). Note also that enabling FAST Cache consumes part of the already small RAM read/write cache.
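A minimal sketch of the warm-up behaviour, assuming the hot working set is promoted at a constant rate and using illustrative latencies (all figures are assumptions, not measurements):

    # Effective latency as the FAST Cache hit rate grows during warm-up.
    hdd_latency_ms, ssd_latency_ms = 8.0, 1.0    # illustrative service times
    for minutes in (0, 5, 10, 20, 30):
        hit_rate = min(1.0, minutes / 30.0)      # assume full warm-up in ~30 min
        eff = hit_rate * ssd_latency_ms + (1 - hit_rate) * hdd_latency_ms
        print(f"after {minutes:2d} min: hit rate {hit_rate:.0%}, "
              f"effective latency ~{eff:.1f} ms")

This is why the FAST Cache results below were taken only after about 30 minutes of warm-up.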
The specifics of enabling and disabling FAST Cache are shown in the screenshots below.
6.1. Test results for R10 with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 6.1a. The corresponding response times of the disk subsystem during testing are shown in diagram 6.1b.
[Diagram 6.1a: IOPS vs. block size; Diagram 6.1b: latency, ms vs. block size. R10, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
6.2. The results of testing R10 under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 6.2a. The results for R10 under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 6.2b.
[Diagram 6.2a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 6.2b: sequential write, MBps vs. I/O processes (1/2/4/8). R10. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
6.3. Results of testing R5 with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 6.3a. The corresponding response times of the disk subsystem during testing are shown in diagram 6.3b.
[Diagram 6.3a: IOPS vs. block size; Diagram 6.3b: latency, ms vs. block size. R5, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
6.4. The results of testing R5 under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 6.4a. The results for R5 under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 6.4b.
[Diagram 6.4a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 6.4b: sequential write, MBps vs. I/O processes (1/2/4/8). R5. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
6.5. Results of testing R6 with a load of 64 I/O processes for block sizes of 4KB, 8KB, 32KB, 64KB and 256KB at 100% random and a load ratio of R/W = 80/20 are shown in diagram 6.5a. The corresponding response times of the disk subsystem during testing are shown in diagram 6.5b.
[Diagram 6.5a: IOPS vs. block size; Diagram 6.5b: latency, ms vs. block size. R6, 64 I/O threads, 100% random, R/W=80/20. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
6.6. The results of testing R6 under a load of 1/2/4/8 I/O processes with sequential reads are presented in diagram 6.6a. The results for R6 under a load of 1/2/4/8 I/O processes with sequential writes are shown in diagram 6.6b.
[Diagram 6.6a: sequential read, MBps vs. I/O processes (1/2/4/8); Diagram 6.6b: sequential write, MBps vs. I/O processes (1/2/4/8). R6. Series: EMC VNX 5100 8x 15K 3.5" HDD; EMC VNX 5100 8x 15K 3.5" HDD with FAST Cache (2x SSD R1) after 30 min cache warm-up.]
Appendix
- All tested storage systems were connected to the same LPAR running AIX 6.1 TL6 SP4
- The host was connected via 8Gbps FC
- A JFS2 file system with an inline log was used
- A working set of files with a total size of ~1.2TB was created on the file system
- The nstress package, specifically the ndisk64 utility, was used to generate the test load (the run matrix is sketched after this list)
- The NMON and NMON Analyser utilities were used, respectively, to collect and process information about storage response times
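For reference, a small sketch that enumerates the test matrix described above; the run structure is taken from the text, while the actual ndisk64 invocation is omitted because its exact options depend on the installed nstress version:

    from itertools import product

    raid_groups = ["R10 (4+4)", "R5 (7+1)", "R6 (6+2)"]
    random_blocks = ["4K", "8K", "32K", "64K", "256K"]  # 100% random, R/W=80/20
    seq_processes = [1, 2, 4, 8]                        # sequential read / write

    runs = []
    for rg, blk in product(raid_groups, random_blocks):
        runs.append((rg, "100% random R/W=80/20, 64 processes", f"block {blk}"))
    for rg, procs, op in product(raid_groups, seq_processes, ("read", "write")):
        runs.append((rg, f"sequential {op}", f"{procs} processes"))

    print(f"{len(runs)} runs in the base comparison")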
Oleg Korol
it-expert@ukr.net