Configure RAID 1
2020/01/08
Configure RAID 1 by adding two new disks to a computer.
[1] This example is based on the following environment. Two new disks, [sdb] and [sdc], are installed on this computer and configured as a RAID 1 array.
[root@dlp ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
devtmpfs             1.9G     0  1.9G   0% /dev
tmpfs                1.9G     0  1.9G   0% /dev/shm
tmpfs                1.9G  8.5M  1.9G   1% /run
tmpfs                1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/cl-root   75G  1.9G   74G   3% /
/dev/sda1            976M  177M  732M  20% /boot
tmpfs                379M     0  379M   0% /run/user/0
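Optionally, before partitioning, confirm that the new disks are visible to the kernel. A minimal check with [lsblk], assuming the device names [sdb] and [sdc] from this example:

# optional: the new disks should be listed with no partitions yet
[root@dlp ~]# lsblk /dev/sdb /dev/sdc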
[2] Create a partition on each new disk and set the RAID flag.
[root@dlp ~]# parted --script /dev/sdb "mklabel gpt"
[root@dlp ~]# parted --script /dev/sdc "mklabel gpt"
[root@dlp ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@dlp ~]# parted --script /dev/sdc "mkpart primary 0% 100%"
[root@dlp ~]# parted --script /dev/sdb "set 1 raid on"
[root@dlp ~]# parted --script /dev/sdc "set 1 raid on"
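As an optional check, [parted] can print the partition table; if the flag was set, [raid] appears in the Flags column of the output:

# optional: confirm the new partition and its raid flag on each disk
[root@dlp ~]# parted --script /dev/sdb "print"
[root@dlp ~]# parted --script /dev/sdc "print"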
[3] Configure RAID 1.
[root@dlp ~]# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# show the status
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
83817472 blocks super 1.2 [2/2] [UU]
[==>..................] resync = 12.3% (10378496/83817472) finish=5.7min speed=211710K/sec
unused devices: <none>
# when syncing has finished, the status looks like follows
# the RAID 1 configuration is then complete
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
83817472 blocks super 1.2 [2/2] [UU]
unused devices: <none>
[root@dlp ~]# vi /etc/sysconfig/raid-check
# line 57: add the RAID device to the list checked by Cron
# (the Cron job itself is defined in [/etc/cron.d/raid-check])
CHECK_DEVS="md0"
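As an optional extra step not shown above, you can record the array in [/etc/mdadm.conf] so it is assembled under a stable name at boot; [mdadm --detail --scan] prints a matching ARRAY line:

# optional: persist the array definition for assembly at boot
[root@dlp ~]# mdadm --detail --scan >> /etc/mdadm.conf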
[4] Create a filesystem on the RAID device and mount it on your system.
[root@dlp ~]# mkfs.xfs -i size=1024 -s size=4096 /dev/md0
meta-data=/dev/md0 isize=1024 agcount=4, agsize=5238592 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=20954368, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=10231, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@dlp ~]# mount /dev/md0 /mnt
[root@dlp ~]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
devtmpfs            devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs               tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs               tmpfs     1.9G  8.5M  1.9G   1% /run
tmpfs               tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/cl-root xfs        75G  1.9G   74G   3% /
/dev/sda1           ext4      976M  177M  732M  20% /boot
tmpfs               tmpfs     379M     0  379M   0% /run/user/0
/dev/md0            xfs        80G  563M   80G   1% /mnt
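To keep the filesystem mounted across reboots, add an entry to [/etc/fstab]; a minimal sketch using the [/mnt] mount point from this example:

[root@dlp ~]# vi /etc/fstab
# add to the end ([/mnt] is the mount point used in this example)
/dev/md0  /mnt  xfs  defaults  0 0

Referencing the filesystem by UUID (shown by [blkid /dev/md0]) instead of [/dev/md0] is more robust if device names change between boots.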
[5] If a member disk in the RAID array fails, swap in a new disk and re-configure RAID 1 as follows.
# on a member failure, the status looks like follows
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active (auto-read-only) raid1 sdb1[0]
83817472 blocks super 1.2 [2/1] [U_]
unused devices: <none>
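If the failed member is still listed in the array, mark it as failed and remove it before the physical swap; [sdc1] below assumes the failed disk of this example:

# optional: detach the dead member before swapping the disk
[root@dlp ~]# mdadm --manage /dev/md0 --fail /dev/sdc1
[root@dlp ~]# mdadm --manage /dev/md0 --remove /dev/sdc1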
# after swapping in the new disk, add it back to the array
[root@dlp ~]# mdadm --manage /dev/md0 --add /dev/sdc1
mdadm: added /dev/sdc1
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
83817472 blocks super 1.2 [2/2] [UU]
[======>..............] resync = 31.0% (26044416/83818496) finish=4.7min speed=201190K/sec
unused devices: <none>
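Once the resync has finished, [mdadm --detail] reports the array state; both members should be shown as [active sync]:

# optional: verify the rebuilt array
[root@dlp ~]# mdadm --detail /dev/md0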