H/W vs. S/W RAID

  • I am looking into the pros and cons of hardware versus software RAID. Can somebody point me to a white paper or other link that explains the concerns with using software RAID instead of H/W RAID?

  • I don't have a link, but I think you might find something on Wikipedia. The biggest difference off the top of my head is performance: hardware RAID presents a volume to the OS and hides the fact that it is anything but another disk. Software RAID has to use the resources of the host machine to do its work; hardware RAID does this on the controller.

    CEWII

  • Thanks. Actually, I need a concrete link that proves the performance advantage of H/W RAID over S/W RAID, but I did not find much on the internet that would really stop my team from opting for S/W RAID and persuade them to take my advice to use H/W RAID. Any other detail is appreciated.

  • Directly from Wikipedia:

    http://en.wikipedia.org/wiki/RAID

    Operating system based ("software RAID")

    Software implementations are now provided by many operating systems. A software layer sits above the (generally block-based) disk device drivers and provides an abstraction layer between the logical drives (RAIDs) and the physical drives. The most common levels are RAID 0 (striping across multiple drives for increased space and performance) and RAID 1 (mirroring two drives); RAID 1+0, RAID 0+1, and RAID 5 (data striping with parity) are also supported.

    Apple's Mac OS X Server[10] and Mac OS X[11] support RAID 0, RAID 1 and RAID 1+0.

    FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all layerings of the above, via GEOM modules[12][13] and ccd,[14] as well as RAID 0, RAID 1, RAID-Z, and RAID-Z2 (similar to RAID 5 and RAID 6 respectively), plus nested combinations of those, via ZFS.

    Linux supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6 and all layerings of the above.[15][16]

    Microsoft's server operating systems support three RAID levels: RAID 0, RAID 1, and RAID 5. Some of Microsoft's desktop operating systems also support RAID; for example, Windows XP Professional supports RAID level 0, in addition to spanning multiple disks, but only when using dynamic disks and volumes. Windows XP can support RAID 0, 1, and 5 with a simple file patch.[17] RAID functionality in Windows is slower than hardware RAID, but it allows a RAID array to be moved to another machine with no compatibility issues.

    NetBSD supports RAID 0, RAID 1, RAID 4 and RAID 5 (and any nested combination of those like 1+0) via its software implementation, named RAIDframe.

    OpenBSD aims to support RAID 0, RAID 1, RAID 4 and RAID 5 via its software implementation softraid.

    OpenSolaris and Solaris 10 support RAID 0, RAID 1, RAID 5 (or the similar "RAID-Z" found only on ZFS), and RAID 6 (and any nested combination of those, like 1+0) via ZFS, and can now boot from a ZFS volume on both x86 and UltraSPARC. Through SVM, Solaris 10 and earlier versions support RAID 1 for the boot filesystem, and add RAID 0 and RAID 5 support (and various nested combinations) for data drives.

    Software RAID has advantages and disadvantages compared to hardware RAID. The software must run on a host server attached to the storage, and the server's processor must dedicate processing time to run the RAID software. The additional processing capacity required for RAID 0 and RAID 1 is low, but parity-based arrays require more complex data processing during write or integrity-checking operations. As the rate of data processing increases with the number of disks in the array, so does the processing requirement. Furthermore, all the buses between the processor and the disk controller must carry the extra data required by RAID, which may cause congestion.
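    To make the parity cost concrete, here is a minimal Python sketch of the byte-wise XOR parity that a RAID 5 write must compute for every stripe (the function names, chunk sizes, and data are illustrative only; real implementations such as Linux md do this in optimized C/SIMD code):

    ```python
    def xor_parity(chunks: list[bytes]) -> bytes:
        """Parity chunk for one stripe: the byte-wise XOR of all data chunks.
        All chunks must be the same length."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    def rebuild_chunk(surviving: list[bytes], parity: bytes) -> bytes:
        """Reconstruct a lost chunk: XOR the parity with the surviving
        data chunks (XOR is its own inverse)."""
        return xor_parity(surviving + [parity])

    # A stripe across two data disks plus one parity disk:
    a, b = b"hello wo", b"rld 1234"
    p = xor_parity([a, b])
    assert rebuild_chunk([b], p) == a  # disk holding `a` failed; recover it
    ```

    Every byte written to a parity array passes through a loop like this, which is exactly the host-CPU work described above.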

    Over the history of hard disk drives, the speed of commodity CPUs has consistently increased faster than hard disk drive throughput.[18] Thus, for a given number of hard disk drives, the percentage of host CPU time required to saturate them has been dropping. For example, the Linux software md RAID subsystem is capable of calculating parity information at 6 GB/s (100% usage of a single core on a 2.1 GHz Intel Core 2 CPU, as of Linux v2.6.26). A three-drive RAID 5 array using hard disks capable of sustaining a write of 100 MB/s will require parity to be calculated at a rate of 200 MB/s, which requires the resources of just over 3% of a single CPU core during write operations (parity does not need to be calculated for read operations on a RAID 5 array unless a drive has failed).
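    The "just over 3%" figure is simple arithmetic on the rates quoted above; here is a quick sanity check (the rates are the article's, the script is only illustrative):

    ```python
    # Sanity-check the quoted figures: 6 GB/s parity throughput per core,
    # 100 MB/s sustained write per disk, three-drive RAID 5.
    md_parity_rate_mb = 6_000  # MB/s one 2.1 GHz core can XOR (quoted)
    disk_write_mb = 100        # MB/s sustained write per disk (quoted)

    # A full-stripe write on a 3-drive RAID 5 puts data on 2 disks,
    # so parity must be computed over 2 * 100 MB/s of data.
    parity_needed_mb = 2 * disk_write_mb          # 200 MB/s
    cpu_fraction = parity_needed_mb / md_parity_rate_mb
    print(f"{cpu_fraction:.1%} of one core")      # -> 3.3% of one core
    ```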

    Software RAID implementations may employ more sophisticated algorithms than hardware RAID implementations (for instance with respect to disk scheduling and command queueing), and thus may be capable of increased performance.

    Another concern with operating system-based RAID is the boot process. It can be difficult or impossible to set up the boot process so that it fails over to another drive if the usual boot drive fails; such systems can require manual intervention to make the machine bootable again after a failure. There are exceptions: the LILO bootloader for Linux, the FreeBSD loader,[19] and some configurations of the GRUB bootloader natively understand RAID 1 and can load a kernel from either disk. If the BIOS recognizes a broken first disk and hands bootstrapping to the next disk, such a system will come up without intervention, but the BIOS might or might not do that as intended. A hardware RAID controller typically has explicit programming to decide that a disk is broken and fall through to the next disk.

    Hardware RAID controllers can also carry battery-backed cache memory. For data safety in modern systems, a user of software RAID might need to turn off the write-back cache on the disks (though some drives have their own battery or capacitors on the write-back cache, a UPS, or implement atomicity in various ways). Turning off the write cache carries a performance penalty that can be significant, depending on the workload and on how well command queuing is supported in the disk system. The battery-backed cache on a RAID controller is one way to get a safe write-back cache.
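    On Linux, for instance, the on-disk write-back cache can be toggled with the hdparm utility's -W flag; a hedged sketch follows (the device path is a placeholder, root privileges are required, and only ATA/SATA drives honour the command):

    ```python
    # Sketch: toggle a disk's write-back cache on Linux by shelling out to
    # hdparm (-W 0 = cache off, -W 1 = cache on). /dev/sdX is a placeholder.
    import subprocess

    def set_write_cache(device: str, enabled: bool) -> None:
        subprocess.run(
            ["hdparm", "-W", "1" if enabled else "0", device],
            check=True,
        )

    # set_write_cache("/dev/sdX", False)  # trade write speed for data safety
    ```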

    Finally, operating system-based RAID usually uses formats specific to the operating system in question, so it cannot generally be used for partitions that are shared between operating systems as part of a multi-boot setup. However, this allows RAID disks to be moved from one computer to another with an operating system or file system of the same type, which can be more difficult with hardware RAID (e.g., when one computer uses a hardware RAID controller from one manufacturer and another computer uses a controller from a different manufacturer, the drives typically cannot be interchanged; and if the hardware controller dies before the disks do, data may become unrecoverable unless a hardware controller of the same type is obtained, unlike with firmware-based or software-based RAID).

    Most operating system-based implementations allow RAIDs to be created from partitions rather than entire physical drives. For instance, an administrator could divide an odd number of disks into two partitions per disk, mirror partitions across disks, and stripe a volume across the mirrored partitions to emulate IBM's RAID 1E configuration. Using partitions in this way also allows mixing reliability levels on the same set of disks: for example, one could have a very robust RAID 1 partition for important files and a less robust RAID 5 or RAID 0 partition for less important data (see the sketch below; some BIOS-based controllers offer similar features, e.g. Intel Matrix RAID). Using two partitions on the same drive in the same RAID is, however, dangerous (e.g., having all partitions of a RAID 1 on the same drive obviously makes all the data inaccessible if that single drive fails; and in a RAID 5 array composed of four drives of 250, 250, 250, and 500 GB, with the 500 GB drive split into two 250 GB partitions, a failure of that drive removes two partitions from the array at once, causing all of the data held on the array to be lost).
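    As a concrete illustration of the partition-based setup described above, here is a sketch using Linux mdadm (device names are placeholders; the mdadm options shown are the standard ones, but verify them against your distribution before use):

    ```python
    # Sketch: build a RAID 1 mirror from one partition on each of two disks,
    # leaving the disks' other partitions free for a separate RAID 0 / RAID 5.
    # Device names are placeholders; must run as root on a Linux system.
    import subprocess

    def create_mirror(md_dev: str, part_a: str, part_b: str) -> None:
        """Create a two-way mirror across one partition per disk."""
        subprocess.run(
            ["mdadm", "--create", md_dev,
             "--level=1", "--raid-devices=2", part_a, part_b],
            check=True,
        )

    # create_mirror("/dev/md0", "/dev/sda2", "/dev/sdb2")  # robust partition
    ```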

    Hardware-based

    Hardware RAID controllers use different, proprietary disk layouts, so it is not usually possible to span controllers from different manufacturers. They do not require processor resources, the BIOS can boot from them, and tighter integration with the device driver may offer better error handling.

    A hardware implementation of RAID requires at least a special-purpose RAID controller. On a desktop system this may be a PCI or PCI-e expansion card, or may be built into the motherboard. Controllers supporting most drive types may be used: IDE/ATA, SATA, SCSI, SSA, Fibre Channel, and sometimes even a combination. The controller and disks may be in a stand-alone disk enclosure rather than inside a computer, and the enclosure may be directly attached to a computer or connected via a SAN. The controller hardware handles the management of the drives and performs any parity calculations required by the chosen RAID level.

    Most hardware implementations provide a read/write cache, which, depending on the I/O workload, will improve performance. In most systems the write cache is non-volatile (i.e. battery-protected), so pending writes are not lost on a power failure.

    Hardware implementations provide guaranteed performance, add no overhead to the local CPU complex and can support many operating systems, as the controller simply presents a logical disk to the operating system.

    Hardware implementations also typically support hot swapping, allowing failed drives to be replaced while the system is running.

    However, hardware RAID controllers can be slower than software RAID, because the dedicated processor on the controller card is often not as fast as the host CPU in a computer or server. More expensive RAID controllers have faster processors. If you buy a hardware RAID controller, check out the specs and look at the throughput figures.

    Firmware/driver-based RAID ("FakeRAID")

    Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows (as described above), while hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early-stage bootup the RAID is implemented by the firmware; once a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.

    These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit, not the RAID controller itself, thus introducing the aforementioned CPU overhead from which hardware controllers don't suffer. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID), as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards. Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known by some as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "HostRAID".

  • sqlquery-101401 (5/12/2010)


    Thanks. Actually, I need a concrete link that proves the performance advantage of H/W RAID over S/W RAID, but I did not find much on the internet that would really stop my team from opting for S/W RAID and persuade them to take my advice to use H/W RAID. Any other detail is appreciated.

    This link is to an Adaptec white paper that details all you need to know.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • sqlquery-101401 (5/12/2010)


    I am looking into the pros and cons of hardware versus software RAID. Can somebody point me to a white paper or other link that explains the concerns with using software RAID instead of H/W RAID?

    http://blog.taragana.com/index.php/archive/which-one-is-better-software-raid-or-hardware-raid/
