Index: chapter.sgml
===================================================================
RCS file: /home/dcvs/doc/en_US.ISO8859-1/books/handbook/geom/chapter.sgml,v
retrieving revision 1.51
diff -u -r1.51 chapter.sgml
--- chapter.sgml 21 Nov 2011 18:11:25 -0000 1.51
+++ chapter.sgml 4 Feb 2012 21:42:44 -0000
@@ -436,6 +436,169 @@
+
+
+
+
+ Mark
+ Gladman
+ Written by
+
+
+ Daniel
+ Gerzo
+
+
+
+
+ Tom
+ Rhodes
+ Based on documentation by
+
+
+ Murray
+ Stokely
+
+
+
+
+
+ GEOM
+
+
+ RAID3
+
+
+ RAID3 - Byte-level Striping with Dedicated
+ Parity
+
+	RAID3 is a method used to combine several
+	disk drives into a single volume with a dedicated parity
+	disk.  In a RAID3 system, data is split at
+	the byte level and written across all the drives in the
+	array except for one disk, which acts as a dedicated parity
+	disk.  This means that reading 1024 kB from a
+	RAID3 implementation will access all disks in
+	the array.  Performance can be improved by using multiple
+	disk controllers.  The RAID3 array provides a
+	fault tolerance of one drive, while providing a capacity of
+	1 - 1/n times the total capacity of all drives in the array,
+	where n is the number of hard drives in the array.  Such a
+	configuration is mostly suitable for storing data of larger
+	sizes, such as multimedia files.
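+	As a worked example of the capacity formula (the five
+	1000 GB drives here are an illustrative assumption, not
+	part of the text above), an array of n drives keeps
+	(n - 1)/n of its raw space:

```shell
# Capacity of a hypothetical 5-drive RAID3 array: one drive's
# worth of space goes to parity, the rest remains usable.
n=5            # number of drives in the array (assumed)
drive_gb=1000  # capacity of each drive in GB (assumed)
raw=$((n * drive_gb))
usable=$(( (n - 1) * drive_gb ))
echo "raw: ${raw} GB, usable: ${usable} GB"
```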
+
+	At least three physical hard drives are required to build a
+	RAID3 array.  Each disk must be of the same
+	size, since I/O requests are interleaved to read or write to
+	multiple disks in parallel.  Also, due to the nature of
+	RAID3, the number of drives must be
+	2^n + 1 for a positive integer n: 3, 5, 9, 17, and so on.
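+	The valid member counts can be enumerated directly from the
+	2^n + 1 rule; this is a quick sketch of the arithmetic, not
+	a graid3 requirement check:

```shell
# List the first few valid RAID3 member counts (2^n + 1):
# 2^n data disks plus one dedicated parity disk.
p=2
sizes=""
for n in 1 2 3 4 5; do
  sizes="$sizes $((p + 1))"
  p=$((p * 2))
done
echo "valid member counts:$sizes"
```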
+
+
+ Creating a Dedicated RAID3 Array
+
+ In &os;, support for RAID3 is
+ implemented by the &man.graid3.8; GEOM
+ class. Creating a dedicated
+ RAID3 array on &os; requires the following
+ steps.
+
+
+ While it is theoretically possible to boot from a
+ RAID3 array on &os;, that configuration
+ is uncommon and is not advised.
+
+
+
+
+ First, load the geom_raid3.ko
+ kernel module by issuing the following command:
+
+ &prompt.root; graid3 load
+
+ Alternatively, it is possible to manually load the
+ geom_raid3.ko module:
+
+ &prompt.root; kldload geom_raid3.ko
+
+
+
+ Create or ensure that a suitable mount point
+ exists:
+
+ &prompt.root; mkdir /multimedia/
+
+
+
+ Determine the device names for the disks which will be
+ added to the array, and create the new
+ RAID3 device. The final device listed
+ will act as the dedicated parity disk. This
+ example uses three unpartitioned
+ ATA drives:
+ ada1
+ and ada2
+ for data, and
+ ada3
+ for parity.
+
+ &prompt.root; graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3
+Metadata value stored on /dev/ada1.
+Metadata value stored on /dev/ada2.
+Metadata value stored on /dev/ada3.
+Done.
+
+
+
+ Partition the newly created
+ gr0 device and put a UFS file
+ system on it:
+
+ &prompt.root; gpart create -s GPT /dev/raid3/gr0
+&prompt.root; gpart add -t freebsd-ufs /dev/raid3/gr0
+&prompt.root; newfs -j /dev/raid3/gr0p1
+
+	    newfs will print progress information as
+	    it runs, and once it finishes, the volume has been
+	    created and is ready to be mounted.
+
+
+
+ The last step is to mount the file system:
+
+ &prompt.root; mount /dev/raid3/gr0p1 /multimedia/
+
+ The RAID3 array is now ready to
+ use.
+
+
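+	Once the steps above are complete, the health of the array
+	and the new mount can be verified.  The commands below are a
+	sketch; the exact output depends on the system, and the
+	device and mount point names match the example above:

```shell
# Show the state of all graid3 arrays; a healthy array reports
# its state as COMPLETE with all three components listed.
graid3 status
# Confirm that the file system from the example is mounted.
df -h /multimedia
```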
+
+	Additional configuration is needed to retain this setup
+	across system reboots.
+
+
+
+	    The geom_raid3.ko module must be
+	    loaded before the array can be mounted.  To load the
+	    kernel module automatically during system
+	    initialization, add the following line to the
+	    /boot/loader.conf file:
+
+ geom_raid3_load="YES"
+
+
+
+	    To mount the array's file system automatically during
+	    the boot process, add the following volume information
+	    to the /etc/fstab file:
+
+ /dev/raid3/gr0p1 /multimedia ufs rw 2 2
+
+
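+	Both persistence settings above can also be applied from a
+	root shell.  This is a sketch rather than the only way to do
+	it; the sysrc -f invocation and the fstab line mirror the
+	examples above:

```shell
# Persist the module load across reboots; sysrc -f edits an
# arbitrary rc-style file, here /boot/loader.conf.
sysrc -f /boot/loader.conf geom_raid3_load="YES"
# Append the fstab entry so the array is mounted at boot.
printf '%s\n' '/dev/raid3/gr0p1 /multimedia ufs rw 2 2' >> /etc/fstab
```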
+
+
+
GEOM Gate Network Devices