Tags: RAID technology, various RAID architectures, RAID 0, RAID 1, RAID 5, types of RAID managers, hardware solutions
RAID/Redundant Array of Independent Disks Technology Overview
An overview of RAID technology
RAID (Redundant Array of Independent Disks) is a technology that provides higher storage reliability and performance from disk-drive components by arranging them into arrays.
A RAID array is a configuration of multiple physical disks set up to use a RAID architecture such as RAID 0, RAID 1, or RAID 5. While the RAID array distributes data across multiple disks, it appears as a single disk to the server operating system.
The various RAID architectures are designed to meet at least one of these
two goals:
o increase data reliability
o increase Input/Output (I/O) performance
A RAID array is composed of two or more physical hard disks combined into a single logical storage unit. To give a RAID array additional capabilities compared to JBOD (Just a Bunch of Disks), three main concepts are used:
o Mirroring
o Striping
o Error correction
Mirroring is the writing of identical data to more than one disk. The basic
example of mirroring is a RAID 1 array formed by two disks. Both disks have
the same content at any time. If the first drive fails, read and write operations can be performed directly on the second disk. Read operations on mirrored arrays are faster than on a single disk since the system can fetch data from multiple disks at the same time. However, write operations are slower since
the data must be written to all disks instead of only one. The reconstruction
of a failed mirror array is quite simple: data must be copied from the healthy
disk to the new one. During reconstruction, the read performance boost of
the mirror array is reduced since only the healthy disk is fully usable.
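To make the trade-off concrete, here is a minimal, hypothetical Python sketch of a two-disk mirror (an illustration, not code from the article): every write goes to both disks, a read can be served by either member, and rebuilding a replaced disk is a straight copy from the healthy one.

```python
# Minimal, illustrative RAID 1 (mirror) model; each disk is a plain bytearray.
class Mirror:
    def __init__(self, size):
        self.disks = [bytearray(size), bytearray(size)]

    def write(self, offset, data):
        # Write amplification: the same data goes to every member disk.
        for disk in self.disks:
            disk[offset:offset + len(data)] = data

    def read(self, offset, length, disk_index=0):
        # Any healthy member can serve the read.
        return bytes(self.disks[disk_index][offset:offset + length])

    def rebuild(self, failed_index):
        # Reconstruction is a full copy from the surviving disk.
        self.disks[failed_index] = bytearray(self.disks[1 - failed_index])

m = Mirror(16)
m.write(0, b"hello")
assert m.read(0, 5, disk_index=1) == b"hello"  # identical content on both disks
m.rebuild(0)                                   # copy the healthy disk to the new one
```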
Striping is the splitting of data across multiple disks. For example, a RAID 0 array formed by two disks stripes data across both disks. Striping does not provide fault tolerance, only a performance boost. Read and write operations on a striped array are faster than on a single disk as both operations are split between the available disks.
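As an illustration, the following hypothetical Python sketch (not taken from the article) distributes fixed-size chunks round-robin across the member disks, which is the essence of striping: chunk i lands on disk i mod N.

```python
# Illustrative RAID 0 layout: chunk i goes to disk (i % n_disks).
def stripe(data, n_disks, chunk_size=4):
    disks = [bytearray() for _ in range(n_disks)]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for i, chunk in enumerate(chunks):
        disks[i % n_disks].extend(chunk)
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", n_disks=2)
print(disks[0])  # bytearray(b'ABCDIJKL') - chunks 0 and 2
print(disks[1])  # bytearray(b'EFGHMNOP') - chunks 1 and 3
```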
Error correction stores parity data on disk to allow the detection and possibly
the correction of problems. RAID 5 is a good example of the error correction
mechanism. For example, a RAID 5 array composed of three drives stripes data on the first two disks and stores parity on the third disk to provide fault tolerance. The error correction mechanism slows down performance, especially for write operations, since both data and parity information need to
be written instead of data only. Moreover, the reconstruction of a failed array
using parity information incurs severe performance degradation as data needs
to be fetched from all drives in the array to rebuild the information for the
new disk.
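The parity idea can be sketched in a few lines of Python, assuming simple XOR parity as used by RAID 5 (an illustration, not code from the article): the parity block is the XOR of the data blocks, and any single lost block can be rebuilt by XORing everything that survives.

```python
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of equally sized blocks.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x05\x05", b"\x03\x03", b"\x0f\x0f"]  # three data blocks
parity = xor_blocks(data)                        # stored on the parity disk

# Simulate losing the second block and rebuilding it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```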
The design of any RAID scheme is a compromise between data protection
and performance. Understanding your server's storage requirements is crucial to selecting the appropriate RAID configuration.
Hardware vs. Software RAID
There are two types of RAID managers:
o hardware
o software
Hardware solutions are specialized hardware components connected to the
server motherboard. Most of the time, these components provide a post-BIOS configuration interface that can be run before booting your server operating system. Each configured RAID array presents itself to the operating system as a single storage drive. The RAID array can be partitioned
into various RAID volumes at the operating system level.
On the other hand, software solutions are implemented at the operating
system level and directly create RAID volumes from entire physical disks or
partitions. Each RAID volume is seen as a standard storage space for the
applications running within the operating system. Both approaches have advantages and disadvantages.
Depending on the manufacturer, a hardware RAID card supporting up to 8 drives usually sells for between $400 and $1,200, while a software RAID solution is usually included free of charge with your server's operating system.
Under Linux, the md RAID subsystem supports most RAID configurations. Under Microsoft Windows, software RAID is provided through the use of dynamic disks in the Disk Management console.
The required processing power for RAID 0, RAID 1 and RAID 10 is relatively
low. Parity-based arrays like RAID 5, RAID 6, RAID 50 and RAID 60 require
more complex data processing during write or integrity check operations.
However, this processing time is minimal on modern CPUs, as the speed of commodity CPUs has consistently increased faster than hard disk drive throughput. Thus, the percentage of server CPU time required to saturate the throughput of a hard disk RAID array has been dropping and will probably continue to do so in the future.
A more serious issue with software RAID arrays is how the operating system
deals with the boot process. Since the RAID information is kept at the
operating system level, booting a faulty RAID array is problematic. At boot
time, the operating system is not available to coordinate the failover to
another drive if the usual boot drive fails. Such systems may require manual
intervention to make them bootable again after a failure. A hardware RAID
controller is initialized before the boot process starts looking for information
on the disk drives. Therefore, a hardware RAID controller increases the robustness of your server compared to software RAID.
A hardware RAID controller may also support hot swappable hard drives. With
such a feature, hard disks can be changed in a server without having to turn off the power and open up the server case. Removing a failed hard drive and
replacing it with a new one is a simple task with a hardware RAID controller
supporting hot swappable disks. Without this feature, the server needs to be
powered off before replacing the failed drive. This will lead to downtime
unless your web solution is properly clustered.
Finally, only hardware RAID controllers can carry a Battery Backup Unit (BBU)
to preserve the cache memory of the controller if the server is shut down
abruptly. Without such protection, the write-back cache should be disabled on the RAID array to prevent data corruption. Turning off the write-back cache
comes with a performance penalty for write operations on the RAID array.
The use of a BBU on your RAID controller is a solution to safely enable write-
back caching and improve write performance.
A RAID array is not a backup solution
Most RAID arrays provide protection in case of a disk failure. While such protection is important to guard against data loss due to hardware failure, it does not provide historical data. A RAID array does not allow you to recover a file that was deleted or corrupted by a bug in your application. A backup solution will allow you to go back in time to recover deleted or corrupted files.
Implementation
Note: images were adapted from those available on Wikipedia.
RAID 0
RAID 0 is a pure implementation of striping. A minimum of two (2) disks is required for RAID 0. No parity information is stored for redundancy; it is important to note that RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance and is useful for setups where redundancy is irrelevant.
A RAID 0 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
For example, if a 450GB disk is striped together with a 300GB disk, the usable
size of the array will be 2 x min(450GB, 300GB) = 600GB.
For read and write operations dealing with small data blocks, such as database access, the data will be fetched independently from each disk of the RAID 0 array. If the data sectors accessed are spread evenly between the two disks, the apparent seek time of the array will be half that of a single disk. The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller. For read and write operations dealing with large data blocks, such as copying files or video playback, the data will most likely be fetched from a single disk, reducing the performance gain of the RAID 0 array.
RAID 1
RAID 1 is a pure implementation of mirroring. A minimum of two (2) disks is
required for RAID 1. This is useful when read performance or reliability are
more important than data storage capacity. A classic RAID 1 mirrored pair
contains two disks (see diagram), which increases reliability over a single disk. Since each member contains a complete copy of the data and can be addressed independently, reliability against ordinary wear and tear is increased.
A RAID 1 array can be created with disks of differing sizes, but the total
available storage space in the array is equal to the size of the smallest disk.
For example, if a 450GB disk is mirrored with a 300GB disk, the usable size of
the array will be min(450GB, 300GB) = 300GB.
The read performance of a RAID 1 array can go up roughly as a linear
multiple of the number of copies. That is, a RAID 1 array of two disks can query two different places at the same time, so the read performance should be roughly twice that of a single disk. RAID 1 is a good
starting point for applications such as email and web servers as well as for
any other use requiring above average read I/O performance and hardware
failure protection.
RAID 5
A RAID 5 array uses block-level striping with parity blocks distributed across all member disks. The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity. A minimum of three (3) disks is required for RAID 5. This RAID configuration is mainly used to maximize disk space while providing protection for your data in case of a disk failure.
Given the diagram of the RAID 5 array, where each column is a disk, let us assume A1 = 00000101 and A2 = 00000011. The parity block Ap is generated by applying the XOR operator to A1 and A2: Ap = A1 XOR A2 = 00000110. If the first disk fails, A1 will no longer be accessible, but it can be reconstructed: A1 = A2 XOR Ap = 00000101.
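The arithmetic in this example can be checked with a couple of lines of Python (an illustrative snippet, not part of the original article):

```python
A1, A2 = 0b00000101, 0b00000011
Ap = A1 ^ A2                 # parity block
print(format(Ap, "08b"))     # 00000110
assert A2 ^ Ap == A1         # reconstruct A1 after the first disk fails
```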
A RAID 5 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The parity data consumes the equivalent of one complete disk, leaving N-1 disks for usable
storage space in an array composed of N disks. For example, on an array
formed of three 450GB disks and one 300GB disk, the usable size of the array
will be (4-1) x min(450GB, 300GB) = 900GB.
RAID 5 writes are expensive in terms of disk operations and traffic between
the disks and the RAID controller since both data and parity information need
to be written to disk. The parity blocks are not read on data reads, since this
would add unnecessary overhead and would diminish performance. However,
the parity blocks are read when a defective disk sector is present in the
required data blocks. Likewise, should a disk fail in the array, the parity blocks
and the data blocks from the surviving disks are combined mathematically to
reconstruct data from the failed drive in real-time. This situation leads to
severe performance degradation on the array for read and write operations.
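One common way implementations bound this cost for small writes, not spelled out in the article but widely used in practice, is the read-modify-write parity update: the controller reads the old data and old parity, XORs the old data out and the new data in, then writes both back, for a total of four disk operations per small write. A hedged Python sketch of that update:

```python
def updated_parity(old_parity, old_data, new_data):
    # Read-modify-write parity update for a small write to one data block:
    # new_parity = old_parity XOR old_data XOR new_data
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Values reuse the earlier example: A1 = 0x05, A2 = 0x03, Ap = 0x06.
old_data, new_data, old_parity = b"\x05", b"\x07", b"\x06"
print(updated_parity(old_parity, old_data, new_data))  # b'\x04' == 0x07 XOR 0x03
```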
RAID 6
RAID 6 extends RAID 5 by adding an additional parity block. Block-level
striping is combined with two parity blocks distributed across all member disks.
A minimum of four (4) disks is required for RAID 6. This RAID configuration is
mainly used to maximize disk space while providing a protection for up to two
disk failures.
Both parity blocks Ap and Aq are generated from the data blocks A1, A2 and
A3. Ap is generated by applying the XOR operator to A1, A2 and A3. Aq is generated using a more complex variant of the Ap formula. If the first disk
fails, A1 will no longer be accessible, but can be reconstructed using A2 and
A3 plus the Ap parity block. If both the first and the second disk fail, A1 and
A2 will no longer be accessible, but can be reconstructed using A3 plus both
Ap and Aq parity blocks. The computation of Aq is CPU intensive, in contrast
to the simplicity of Ap. Thus, a software RAID 6 implementation may have a
significant effect on system performance especially during the reconstruction
of a failed disk.
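The article does not specify how Aq is computed. One widely used construction (for example, in the Linux md driver) treats each byte as an element of the Galois field GF(2^8) and computes Aq as a weighted sum of the data blocks; the sketch below is an assumption based on that scheme, not code from the article, and it only shows how Ap and Aq would be generated. Recovering from two simultaneous failures additionally requires solving the resulting pair of field equations, which is omitted here for brevity.

```python
def gf_mul(a, b):
    # Multiplication in GF(2^8) with the reducing polynomial x^8+x^4+x^3+x^2+1 (0x11d).
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
        b >>= 1
    return p

def p_and_q(data_bytes):
    # P is plain XOR parity; Q weights the i-th data byte by the generator 2 raised to i.
    p, q, g = 0, 0, 1
    for d in data_bytes:
        p ^= d
        q ^= gf_mul(g, d)
        g = gf_mul(g, 2)
    return p, q

print(p_and_q([0x05, 0x03, 0x0A]))  # (Ap, Aq) for three example data bytes
```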
A RAID 6 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The parity data consumes the equivalent of two complete disks, leaving N-2 disks for usable
storage space in an array composed of N disks. For example, on an array
formed of four 450GB disks and one 300GB disk, the usable size of the array
will be (5-2) x min(450GB, 300GB) = 900GB.
RAID 6 writes are even more expensive than RAID 5 writes in terms of disk
operations and traffic between the disks and the RAID controller since both
data and parity information need to be written to disk. The parity blocks are
not read on data reads, since this would add unnecessary overhead and
would diminish performance. However, the parity blocks are read when a
defective disk sector is present in the required data blocks. Likewise, should a
disk fail in the array, the parity blocks and the data blocks from the surviving
disks are combined mathematically to reconstruct data from the failed drive in
real-time. This situation leads to severe performance degradation on the array
for read and write operations.
RAID 10
RAID 10 is a combination of RAID 1 (mirroring) and RAID 0 (striping) where mirrored pairs of disks are striped together. A minimum of four (4) disks is required for RAID 10. One disk in each RAID 1 mirror can fail without damaging the data contained in the entire array.
A RAID 10 array can be created with disks of differing sizes, but the total
available storage space in the array is limited by the size of the smallest disk.
The mirroring consumes half of the disk space, leaving N disks of usable storage space in an array composed of 2N disks. For example, on an array formed of seven 450GB disks and one 300GB disk, the usable size of the array will be (7+1)/2 x min(450GB, 300GB) = 1200GB.
RAID 10 provides better performance than all other redundant RAID levels. It is the preferred RAID level for I/O-intensive applications such as database servers as well as for any other use requiring high disk performance.
RAID 50
RAID 50 is a combination of RAID 5 (striping and error correction)
and RAID 0 (striping) where RAID 5 sub-arrays are striped together.
A minimum of six (6) disks is required for RAID 50. One disk in each RAID 5 sub-array can fail without damaging the data contained in the entire array.
A RAID 50 array can be created with disks of differing sizes, but the
total available storage space in the array is limited by the size of the
smallest disk. The parity data consumes the equivalent of one complete disk in each RAID 5 sub-array; with two sub-arrays, this leaves N-2 disks of usable storage space in an array composed of N disks. For example, on an array formed of seven 450GB disks and one 300GB disk, the usable size of the array will be (8-2) x min(450GB, 300GB) = 1800GB.
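The capacity formulas used in the examples above can be collected into a single helper. This is an illustrative Python sketch of the min-disk-size rules described in the preceding sections (the two-sub-array assumption for RAID 50 is carried over); it is not code from the article.

```python
def usable_capacity(level, disk_sizes_gb):
    # Every disk counts as the smallest one, as described in the sections above.
    n, smallest = len(disk_sizes_gb), min(disk_sizes_gb)
    data_disks = {
        "RAID0": n,        # pure striping, no redundancy
        "RAID1": 1,        # mirror: one disk of usable space
        "RAID5": n - 1,    # one disk's worth of parity
        "RAID6": n - 2,    # two disks' worth of parity
        "RAID10": n // 2,  # half the disks hold mirror copies
        "RAID50": n - 2,   # assumes two RAID 5 sub-arrays
    }[level]
    return data_disks * smallest

print(usable_capacity("RAID5", [450, 450, 450, 300]))  # 900 (GB)
print(usable_capacity("RAID10", [450] * 7 + [300]))    # 1200 (GB)
print(usable_capacity("RAID50", [450] * 7 + [300]))    # 1800 (GB)
```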
RAID 50 provides better performance than RAID 5 but requires more disks.
The performance gain is most noticeable for write operations. This level
is recommended for applications that require high fault tolerance along with
high capacity.
Hot spare disks
Both hardware and software redundant RAID arrays may support the use of
hot spare disks. Such disks are physically installed in the array and are
inactive until an active disk fails. The RAID controller automatically replaces
the failed drive with the spare and starts the rebuilding process for the
affected array. This reduces the vulnerability window of the array by providing
a healthy disk to the array as soon as a problematic disk is identified.
For example, a RAID 5 array with a single hot spare disk uses the same
number of disks as a RAID 6 array while providing a similar level of protection.
The use of hot spare disks is particularly important for RAID arrays formed by many disks. For example, a RAID 10 array formed of 12 disks will most likely have a higher disk failure rate than a RAID 10 array of 4 disks. Setting aside one or two disks as hot spares for your large RAID array will provide additional protection in case of disk failure.
RAID arrays allow a higher level of reliability and performance for your server
storage. While RAID 1 is a good starting point for applications such as email
and web servers, RAID 10 is recommended for database applications. RAID 5
or RAID 50 can be used for backup appliances where high fault tolerance
along with high capacity are needed.
Info from http://blog.iweb.com/en/2010/05/an-overview-of-raid-technology/4283.html
More info
o Wikipedia article, RAID
o Art S. Kagel, RAID 5 versus RAID 10
This article was written by Patrice Guay. It was originally published on his blog at http://www.patriceguay.com/webhosting/raid and reprinted with permission. Patrice is a sales engineer at iWeb Technologies.