Tuesday, 6 September 2011

Mdadm Cheat Sheet


Mdadm is the tool most modern Linux distributions use to manage software RAID arrays; in the past, raidtools filled this role. This cheat sheet shows the most common uses of mdadm to manage software RAID arrays. It assumes you have a good understanding of software RAID and Linux in general, and it explains only the command-line usage of mdadm. The examples below use RAID1, but they can be adapted to any RAID level the Linux kernel driver supports.

1. Create a new RAID array


Create (mdadm --create) is used to create a new array:
mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

or using the compact notation:
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[ab]1

2. /etc/mdadm.conf


/etc/mdadm.conf or /etc/mdadm/mdadm.conf (on Debian) is the main configuration file for mdadm. After we create our RAID arrays, we add them to this file using:

mdadm --detail --scan >> /etc/mdadm.conf

or, on Debian:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
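Running the scan-and-append more than once leaves duplicate ARRAY lines in the file. A minimal sketch to catch this (check_dupes is a made-up helper name, not an mdadm command; the file is passed as an argument so the check can be run on a copy):

```shell
# Print any ARRAY definition that appears more than once in the given
# mdadm config file; no output means the file is clean.
# (check_dupes is a hypothetical helper, not part of mdadm.)
check_dupes() {
    grep '^ARRAY' "$1" | sort | uniq -d
}
```

For example, check_dupes /etc/mdadm.conf should print nothing on a healthy config.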

3. Remove a disk from an array
We can’t remove a disk directly from the array unless it is marked as failed, so we first have to fail it (if the drive has actually failed, it will normally already be in the failed state and this step is not needed):

mdadm --fail /dev/md0 /dev/sda1

and now we can remove it:

mdadm --remove /dev/md0 /dev/sda1

This can be done in a single step using:

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

4. Add a disk to an existing array


We can add a new disk to an array (for example, to replace a failed one):

mdadm --add /dev/md0 /dev/sdb1

5. Verifying the status of the RAID arrays


We can check the status of the arrays on the system with:

cat /proc/mdstat

or
mdadm --detail /dev/md0

For example, the output of cat /proc/mdstat will look like:

cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
104320 blocks [2/2] [UU]

md1 : active raid1 sdb3[1] sda3[0]
19542976 blocks [2/2] [UU]

md2 : active raid1 sdb4[1] sda4[0]
223504192 blocks [2/2] [UU]

Here we can see that both drives are active and working fine (U). A failed drive shows as (F), while a degraded array shows a missing member, e.g. [2/1] [U_].
Note: while a RAID rebuild is in progress, monitoring its status with watch can be useful:
watch cat /proc/mdstat
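The [2/2] [UU] fields above lend themselves to a quick scripted health check. A minimal sketch (check_md_health is a made-up helper name; the file is passed as an argument so the function can also be run against a saved copy of /proc/mdstat):

```shell
# Report DEGRADED if any array in a /proc/mdstat-style file shows a
# missing member, i.e. an "_" inside the [UU] status field.
# (check_md_health is a hypothetical helper, not an mdadm command.)
check_md_health() {
    if grep -q '\[U*_' "$1"; then
        echo "DEGRADED"
    else
        echo "OK"
    fi
}
```

Typical use: check_md_health /proc/mdstat, perhaps from cron.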

6. Stop and delete a RAID array


If we want to completely remove a RAID array, we have to stop it first and then remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0

and finally we can even delete the superblock from the individual drives:

mdadm --zero-superblock /dev/sda

Finally, when using RAID1 arrays, where we create identical partitions on both drives, the following can be useful to copy the partitions from sda to sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

(this dumps the partition table of sda and completely overwrites the existing partitions on sdb, so be sure you want this before running the command, as it will not warn you at all).
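Because sfdisk gives no warning, it is worth keeping a restorable dump of the destination's partition table before overwriting it. A sketch (clone_partitions is a made-up helper name, and the /root backup path is just an example):

```shell
# clone_partitions SRC DST: back up DST's current partition table to
# /root, then copy SRC's layout onto DST. Run as root, e.g.:
#   clone_partitions /dev/sda /dev/sdb
# Roll back later with: sfdisk "$DST" < /root/<dst>.parttable.bak
# (clone_partitions is a hypothetical wrapper, not a standard tool.)
clone_partitions() {
    sfdisk -d "$2" > "/root/$(basename "$2").parttable.bak" &&
    sfdisk -d "$1" | sfdisk "$2"
}
```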

Create a 6-drive RAID5 array called /dev/md0 with a chunk size of 16384 KiB (typically, bigger chunk sizes work better for bigger files; the default is 512 KiB):

mdadm --create --level=5 --chunk=16384 --raid-devices=6 /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

Assemble raid array that is not in the config file: 


mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

Create config file: 


mdadm --detail --scan > /etc/mdadm.conf

Add new disk and grow raid array: 


mdadm --add /dev/md0 /dev/sdh 

mdadm --grow /dev/md0 --raid-devices=7
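Note that growing the array does not grow the filesystem sitting on it; once the reshape finishes, the filesystem must be resized too. A sketch for ext2/3/4 (grow_fs_if_idle is a made-up helper; XFS users would run xfs_growfs on the mount point instead of resize2fs):

```shell
# grow_fs_if_idle ARRAY [MDSTAT]: grow the ext filesystem on ARRAY once
# /proc/mdstat no longer shows a reshape or recovery in progress.
# MDSTAT defaults to /proc/mdstat and is overridable only so the check
# can be exercised against a saved copy.
# (grow_fs_if_idle is a hypothetical wrapper, not part of mdadm.)
grow_fs_if_idle() {
    mdstat="${2:-/proc/mdstat}"
    if grep -q 'reshape\|recovery' "$mdstat" 2>/dev/null; then
        echo "array still rebuilding; try again later" >&2
        return 1
    fi
    resize2fs "$1"
}
```

Typical use: grow_fs_if_idle /dev/md0, after the commands above.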

Mark a drive as failed and remove it from the array (useful if you want to replace a drive even though it has not failed):


mdadm --fail /dev/md0 /dev/sdg 

mdadm --remove /dev/md0 /dev/sdg

Stop array (make sure the filesystem is unmounted first):
mdadm --stop /dev/md0

Start array:
mdadm --run /dev/md0

Get details on an array: 
mdadm --detail /dev/md0

Monitor all arrays (to get email notifications of failures and other events):
mdadm --monitor --scan --mail=[email address] --delay=1800 &
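Rather than backgrounding the monitor by hand, most distributions run mdadm --monitor as a daemon and take the mail settings from the config file. A minimal fragment (the addresses are placeholders):

```
# in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf on Debian):
# where failure notifications are sent (placeholder address)
MAILADDR root@localhost
# optional From: address for those mails
MAILFROM mdadm@localhost
```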

View rebuild speed limit: 
sysctl dev.raid.speed_limit_max

Modify rebuild speed limit: sysctl -w dev.raid.speed_limit_max=value

Speed up future rebuilds by adding a write-intent bitmap (at a small cost in write performance while enabled): mdadm --grow --bitmap=internal /dev/md0

Once the rebuild is done, remove the bitmap: mdadm --grow --bitmap=none /dev/md0


There are many other uses of mdadm, particular to each RAID level, and I would recommend the manual page (man mdadm) or the built-in help (mdadm --help) if you need more details on its usage. Hopefully these quick examples will put you on the fast track to how mdadm works.
