A Detailed Tutorial on Building Software RAID 5 in a Linux Environment


A detailed introduction to RAID classification and purpose, and to setting up RAID 5 on Linux.


1: RAID Definition

RAID stands for Redundant Array of Inexpensive Disks. RAID comes in two forms: software RAID, which implements redundancy across multiple disks purely in software, and hardware RAID, which is usually implemented with a dedicated RAID card. Software RAID is simple to configure and flexible to manage, which makes it a good choice for small and medium-sized businesses; hardware RAID tends to cost more, but it does have a performance advantage.
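Before starting, it is worth checking that the mdadm tool and the kernel md driver are available (a quick sanity check, not part of the original procedure):

mdadm --version # prints the installed mdadm version
cat /proc/mdstat # this file exists when the md driver is loaded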

2: RAID Classification
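Briefly, the most common levels are RAID 0 (striping: better performance, no redundancy), RAID 1 (mirroring: data is kept in two identical copies), and RAID 5 (striping with distributed parity: it needs at least three disks and can survive the failure of any one of them). RAID 5 is the level used in the experiment below, with a fourth disk serving as a hot spare.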

3: A Detailed Linux RAID 5 Experiment

Suppose I have four disks (readers without spare hardware can create four disks in a virtual machine):

/dev/sda /dev/sdb /dev/sdc /dev/sdd. The first step is to partition them.

[root@localhost /]# fdisk /dev/sda

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,

until you decide to write them. After that, of course, the previous

content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n # press n to create a new partition

Command action

e extended

p primary partition (1-4)

p # enter p to create a primary partition

Partition number (1-4): 1 # enter 1 to create the first primary partition

First cylinder (1-130, default 1): # press Enter to start the partition at cylinder 1

Using default value 1

Last cylinder or +size or +sizeM or +sizeK (1-130, default 130): # press Enter again to use the whole disk

Using default value 130

Command (m for help): w # enter w to write the partition table to disk

The partition table has been altered!

Calling ioctl() to re-read partition table.

Syncing disks.
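To avoid repeating the interactive dialogue three more times for /dev/sdb, /dev/sdc and /dev/sdd, the same keystrokes (n, p, 1, Enter, Enter, w) can be fed to fdisk in a loop; this is only a sketch and assumes the disks are blank:

for d in /dev/sdb /dev/sdc /dev/sdd; do
    printf 'n\np\n1\n\n\nw\n' | fdisk "$d" # new primary partition 1, default start and end, then write
done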


Partition the other disks the same way, one partition each (the loop above does this). Here is the resulting partition information:

[root@localhost /]# fdisk -l

Disk /dev/sda: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 1 130 1044193+ 83 Linux

Disk /dev/sdb: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 1 130 1044193+ 83 Linux

Disk /dev/sdc: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdc1 1 130 1044193+ 83 Linux

Disk /dev/sdd: 1073 MB, 1073741824 bytes

255 heads, 63 sectors/track, 130 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdd1 1 130 1044193+ 83 Linux

The next step is to create the RAID array.

[root@localhost ~]# mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[a-d]1 # create a RAID 5 device named md0 from three active devices, keeping one as a spare

mdadm: array /dev/md0 started.
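Right after creation the array performs its initial synchronization in the background. Its progress can be watched in /proc/mdstat if you are curious (optional):

watch -n 1 cat /proc/mdstat # refresh the RAID status once per second; press Ctrl+C to quit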

OK, the array has been created. Let's look at the details.

[root@localhost ~]# mdadm --detail /dev/md0

/dev/md0:

Version : 00.90.01

Creation Time : Fri Aug 3 13:53:34 2007

Raid Level : raid5

Array Size : 2088192 (2039.25 MiB 2138.31 MB)

Device Size : 1044096 (1019.63 MiB 1069.15 MB)

Raid Devices : 3

Total Devices : 4

Preferred Minor : 0

Persistence : Superblock is persistent

Update Time : Fri Aug 3 13:54:02 2007

State : clean

Active Devices : 3

Working Devices : 4


Failed Devices : 0

Spare Devices : 1

Layout : left-symmetric

Chunk Size : 64K

Number Major Minor RaidDevice State

0 8 1 0 active sync /dev/sda1

1 8 17 1 active sync /dev/sdb1

2 8 33 2 active sync /dev/sdc1

3 8 49 -1 spare /dev/sdd1

UUID : e62a8ca6:2033f8a1:f333e527:78b0278a

Events : 0.2

Now let's make the RAID start at boot by writing the mdadm configuration file. Its default name is mdadm.conf, and the file does not exist by default, so it has to be created by hand. Its main purpose is to let the system assemble the software RAID automatically at startup; it also makes later management easier.

A short explanation: mdadm.conf consists mainly of two parts. The DEVICE lines list all the devices that make up the RAID, and the ARRAY lines specify the array's device name, RAID level, number of active devices, and UUID.

[root@localhost ~]# mdadm --detail --scan > /etc/mdadm.conf

[root@localhost ~]# cat /etc/mdadm.conf

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1

# The default format is not correct and needs to be modified as follows:

[root@localhost ~]# vi /etc/mdadm.conf

[root@localhost ~]# cat /etc/mdadm.conf

DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=e62a8ca6:2033f8a1:f333e527:78b0278a
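As an optional check (not part of the original procedure) that mdadm.conf is usable, the array can be stopped and reassembled from the configuration file; only do this while /dev/md0 is not mounted:

mdadm --stop /dev/md0 # stop the running array
mdadm --assemble --scan # reassemble every array described in /etc/mdadm.conf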

Create a filesystem on /dev/md0:

[root@localhost ~]# mkfs.ext3 /dev/md0

mke2fs 1.35 (28-Feb-2004)

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

261120 inodes, 522048 blocks

26102 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=536870912

16 block groups

32768 blocks per group, 32768 fragments per group

16320 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912

Writing inode tables: done

Creating journal (8192 blocks): done

Writing superblocks and filesystem accounting information: done


This filesystem will be automatically checked every 21 mounts or

180 days, whichever comes first. Use tune2fs -c or -i to override.

Mount /dev/md0 into the system and test whether it is usable:

[root@localhost ~]# cd /

[root@localhost /]# mkdir mdadm

[root@localhost /]# mount /dev/md0 /mdadm/
[root@localhost /]# cd /mdadm/

[root@localhost mdadm]# ls

lost+found

[root@localhost mdadm]# cp /etc/services .

[root@localhost mdadm]# ls

lost+found services
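To have the filesystem on /dev/md0 mounted automatically at boot as well, a line can be added to /etc/fstab (a sketch, using the /mdadm mount point created above):

/dev/md0 /mdadm ext3 defaults 0 0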

Good. Now, what happens if one of the disks fails? The system automatically stops using the failed disk and lets the spare disk take over. We can test this:

[root@localhost mdadm]# mdadm /dev/md0 --fail /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md0

[root@localhost mdadm]# cat /proc/mdstat

Personalities : [raid5]

md0 : active raid5 sdc1[3](F) sdd1[2] sdb1[1] sda1[0] # the (F) flag marks this disk as failed

2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

What if I want to remove the failed disk or add a new one?

# remove a failed disk

[root@localhost mdadm]# mdadm /dev/md0 --remove /dev/sdc1

mdadm: hot removed /dev/sdc1

[root@localhost mdadm]# cat /proc/mdstat

Personalities : [raid5]

md0 : active raid5 sdd1[2] sdb1[1] sda1[0]

2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

# add a disk

[root@localhost mdadm]# mdadm /dev/md0 --add /dev/sdc1

mdadm: hot added /dev/sdc1

[root@localhost mdadm]# cat /proc/mdstat

Personalities : [raid5]

md0 : active raid5 sdc1[3] sdd1[2] sdb1[1] sda1[0]

2088192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
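For day-to-day management, mdadm can also run in monitor mode and send mail when a disk fails (a sketch; root is used here only as a placeholder mail address):

mdadm --monitor --scan --daemonise --mail=root # watch all arrays listed in mdadm.conf in the background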

That's it. You can set up a virtual machine and try this out for yourself.
