MCSG 11.18 + HP-UX 11i v3 Implementation Experience


HP MC/ServiceGuard Implementation for the DM Project

HP-UX 11.31 + MC/ServiceGuard A.11.18

I. Implementation Planning

The cluster was planned using the following table:


II. Preparation

1. Network preparation

An HP MC/ServiceGuard implementation generally needs four network ports: one primary port, connected to a switch, carrying production traffic; two dedicated heartbeat ports, connected back-to-back with network cables; and one standby port, also connected to a switch. HP generally recommends configuring the primary port to carry heartbeat traffic as well.


Which port gets which role is mainly a high-availability decision. For example, the two heartbeat ports must never sit on the same NIC, and it is even better if they sit on different buses. The same applies to the primary and standby production ports.

To determine which port sits in which physical slot, gather the information with the following commands:

# lanscan

Hardware Station Crd Hdw Net-Interface NM MAC HP-DLPI DLPI

Path Address In# State NamePPA ID Type Support Mjr#

0/0/0/1/0 0x00237DF95E19 0 UP lan0 snap0 1 ETHER Yes 119

1/0/6/1/0 0x001F290DF4F9 3 UP lan3 snap3 2 ETHER Yes 119

1/0/14/1/0 0x001F290DF5E9 4 UP lan4 snap4 3 ETHER Yes 119

0/0/6/1/0 0x001F290DF48D 1 UP lan1 snap1 4 ETHER Yes 119

0/0/14/1/0 0x001F290DF57C 2 UP lan2 snap2 5 ETHER Yes 119
LinkAgg0 0x000000000000 900 DOWN lan900 snap900 7 ETHER Yes 119
LinkAgg1 0x000000000000 901 DOWN lan901 snap901 8 ETHER Yes 119
LinkAgg2 0x000000000000 902 DOWN lan902 snap902 9 ETHER Yes 119
LinkAgg3 0x000000000000 903 DOWN lan903 snap903 10 ETHER Yes 119
LinkAgg4 0x000000000000 904 DOWN lan904 snap904 11 ETHER Yes 119

# olrad -q

                                               Driver(s)
                                               Capable
Slot      Path       Bus  Max  Spd  Pwr  Occu  Susp  OLAR  OLD  Max  Mode
          Num        Spd                                        Spd
                                                                Mode

0-0-0-1 0/0/8/1 140 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-0-2 0/0/10/1 169 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-0-3 0/0/12/1 198 266 266 Off No N/A N/A N/A PCI-X PCI-X

0-0-0-4 0/0/14/1 227 266 133 On Yes No Yes Yes PCI-X PCI-X

0-0-0-5 0/0/6/1 112 266 133 On Yes No Yes Yes PCI-X PCI-X

0-0-0-6 0/0/4/1 84 266 266 On Yes No Yes Yes PCI-X PCI-X

0-0-0-7 0/0/2/1 56 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-0-8 0/0/1/1 28 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-1 1/0/8/1 396 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-2 1/0/10/1 425 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-3 1/0/12/1 454 266 266 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-4 1/0/14/1 483 266 133 On Yes No Yes Yes PCI-X PCI-X

0-0-1-5 1/0/6/1 368 266 133 On Yes No Yes Yes PCI-X PCI-X

0-0-1-6 1/0/4/1 340 266 266 On Yes No Yes Yes PCI-X PCI-X

0-0-1-7 1/0/2/1 312 133 133 Off No N/A N/A N/A PCI-X PCI-X

0-0-1-8 1/0/1/1 284 133 133 Off No N/A N/A N/A PCI-X PCI-X

Comparing the device paths in the lanscan output with the slot information in the olrad output tells you which port sits on which NIC in which slot.


In this project lan1, lan2, lan3 and lan4 each sit in a separate slot, so those four NICs were chosen. In addition, lan1 and lan2 are on one bus and lan3 and lan4 on another, so lan1/lan4 were chosen as the primary/standby production ports and lan2/lan3 as the heartbeat ports.

Once the NICs are chosen, configure the port addresses in the system configuration file /etc/rc.config.d/netconf; the default route and the hostname are also configured there, as sketched below.
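A minimal sketch of the relevant /etc/rc.config.d/netconf entries on node ZBDMDB1; the subnet masks and the default gateway shown here are assumptions for illustration, not values taken from the project:

HOSTNAME="ZBDMDB1"
INTERFACE_NAME[0]="lan1"
IP_ADDRESS[0]="10.199.76.208"
SUBNET_MASK[0]="255.255.255.0"
INTERFACE_NAME[1]="lan2"
IP_ADDRESS[1]="192.168.1.1"
SUBNET_MASK[1]="255.255.255.0"
INTERFACE_NAME[2]="lan3"
IP_ADDRESS[2]="192.168.2.1"
SUBNET_MASK[2]="255.255.255.0"
ROUTE_DESTINATION[0]="default"
ROUTE_GATEWAY[0]="10.199.76.1"
ROUTE_COUNT[0]="1"

lan4 is the standby production port and deliberately gets no IP address here; it only takes over when the primary production port fails.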

At the same time, configure /etc/hosts on both hosts:

# cat /etc/hosts

127.0.0.1 localhost loopback

10.199.76.208 ZBDMDB1

10.199.76.209 ZBDMDB2

192.168.1.1 DB01hb1

192.168.1.2 DB02hb1

192.168.2.1 DB01hb2

192.168.2.2 DB02hb2

The Oracle database running on HP-UX 11.31 turned out not to support hostnames longer than 8 characters, so we kept the hostnames within 8 characters.

2. Storage preparation

An HP MC/ServiceGuard implementation must have a lock disk configured on the shared array; around 100 MB is generally enough for the lock disk. The shared data space also has to be configured.

The shared array for the DM project was therefore laid out as follows:


HP-UX 11.31 already supports dynamic I/O multipathing, so there is no need to configure PV links the way earlier releases required. The operating system directly creates device files of the form /dev/disk/diskxx and /dev/rdisk/diskxx that provide native multipathing; when configuring VGs, just use the device names that begin with disk.
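To sanity-check the multipathing, the physical lunpaths behind a given persistent DSF can be listed; a sketch using disk36 as the example:

# ioscan -m lun /dev/disk/disk36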

The system generally creates the device files in the order of the LUNs carved on the array (called Vdisks on an EVA array). To confirm a disk's size, use the diskinfo command:

# diskinfo /dev/rdisk/disk36

SCSI describe of /dev/rdisk/disk36:

vendor: HP

product id: HSV300

type: direct access

size: 1048576 Kbytes

bytes per sector: 512

Configure the disk space with the following steps:

A. Use the ioscan command to make sure the system sees all the disks on the array:

# ioscan -fnCdisk

Class I H/W Path Driver S/W State H/W Type Description

=======================================================================

disk 0 0/0/0/2/0.6.0 sdisk CLAIMED DEVICE HP 146 GMBA3147NC

/dev/dsk/c0t6d0 /dev/dsk/c0t6d0s2 /dev/rdsk/c0t6d0 /dev/rdsk/c0t6d0s2

/dev/dsk/c0t6d0s1 /dev/dsk/c0t6d0s3 /dev/rdsk/c0t6d0s1 /dev/rdsk/c0t6d0s3

disk 1 0/0/0/2/1.2.0 sdisk CLAIMED DEVICE Optiarc DVD RW AD-5200A

/dev/dsk/c1t2d0 /dev/rdsk/c1t2d0

disk 2 0/0/0/3/0.6.0 sdisk CLAIMED DEVICE HP 146 GMBA3147NC

/dev/dsk/c2t6d0 /dev/dsk/c2t6d0s2 /dev/rdsk/c2t6d0 /dev/rdsk/c2t6d0s2

/dev/dsk/c2t6d0s1 /dev/dsk/c2t6d0s3 /dev/rdsk/c2t6d0s1 /dev/rdsk/c2t6d0s3

disk 6 0/0/4/1/0.1.4.0.0.0.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d1 /dev/rdsk/c5t0d1

disk 7 0/0/4/1/0.1.4.0.0.0.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d2 /dev/rdsk/c5t0d2

disk 8 0/0/4/1/0.1.4.0.0.0.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d3 /dev/rdsk/c5t0d3

disk 9 0/0/4/1/0.1.4.0.0.0.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d4 /dev/rdsk/c5t0d4

disk 10 0/0/4/1/0.1.4.0.0.0.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d5 /dev/rdsk/c5t0d5


disk 11 0/0/4/1/0.1.4.0.0.0.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d6 /dev/rdsk/c5t0d6

disk 12 0/0/4/1/0.1.4.0.0.0.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t0d7 /dev/rdsk/c5t0d7

disk 13 0/0/4/1/0.1.4.0.0.1.0 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d0 /dev/rdsk/c5t1d0

disk 14 0/0/4/1/0.1.4.0.0.1.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d1 /dev/rdsk/c5t1d1

disk 15 0/0/4/1/0.1.4.0.0.1.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d2 /dev/rdsk/c5t1d2

disk 16 0/0/4/1/0.1.4.0.0.1.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d3 /dev/rdsk/c5t1d3

disk 17 0/0/4/1/0.1.4.0.0.1.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d4 /dev/rdsk/c5t1d4

disk 18 0/0/4/1/0.1.4.0.0.1.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d5 /dev/rdsk/c5t1d5

disk 19 0/0/4/1/0.1.4.0.0.1.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d6 /dev/rdsk/c5t1d6

disk 20 0/0/4/1/0.1.4.0.0.1.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c5t1d7 /dev/rdsk/c5t1d7

disk 21 0/0/4/1/0.1.5.0.0.0.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d1 /dev/rdsk/c9t0d1

disk 22 0/0/4/1/0.1.5.0.0.0.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d2 /dev/rdsk/c9t0d2

disk 23 0/0/4/1/0.1.5.0.0.0.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d3 /dev/rdsk/c9t0d3

disk 24 0/0/4/1/0.1.5.0.0.0.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d4 /dev/rdsk/c9t0d4

disk 25 0/0/4/1/0.1.5.0.0.0.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d5 /dev/rdsk/c9t0d5

disk 26 0/0/4/1/0.1.5.0.0.0.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d6 /dev/rdsk/c9t0d6

disk 27 0/0/4/1/0.1.5.0.0.0.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t0d7 /dev/rdsk/c9t0d7

disk 28 0/0/4/1/0.1.5.0.0.1.0 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d0 /dev/rdsk/c9t1d0

disk 29 0/0/4/1/0.1.5.0.0.1.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d1 /dev/rdsk/c9t1d1

disk 30 0/0/4/1/0.1.5.0.0.1.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d2 /dev/rdsk/c9t1d2

disk 31 0/0/4/1/0.1.5.0.0.1.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d3 /dev/rdsk/c9t1d3

disk 32 0/0/4/1/0.1.5.0.0.1.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d4 /dev/rdsk/c9t1d4


disk 33 0/0/4/1/0.1.5.0.0.1.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d5 /dev/rdsk/c9t1d5

disk 34 0/0/4/1/0.1.5.0.0.1.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d6 /dev/rdsk/c9t1d6

disk 35 0/0/4/1/0.1.5.0.0.1.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c9t1d7 /dev/rdsk/c9t1d7

disk 81 1/0/4/1/0.1.4.0.0.0.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d1 /dev/rdsk/c21t0d1

disk 82 1/0/4/1/0.1.4.0.0.0.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d2 /dev/rdsk/c21t0d2

disk 83 1/0/4/1/0.1.4.0.0.0.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d3 /dev/rdsk/c21t0d3

disk 84 1/0/4/1/0.1.4.0.0.0.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d4 /dev/rdsk/c21t0d4

disk 85 1/0/4/1/0.1.4.0.0.0.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d5 /dev/rdsk/c21t0d5

disk 86 1/0/4/1/0.1.4.0.0.0.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d6 /dev/rdsk/c21t0d6

disk 87 1/0/4/1/0.1.4.0.0.0.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t0d7 /dev/rdsk/c21t0d7

disk 88 1/0/4/1/0.1.4.0.0.1.0 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d0 /dev/rdsk/c21t1d0

disk 89 1/0/4/1/0.1.4.0.0.1.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d1 /dev/rdsk/c21t1d1

disk 90 1/0/4/1/0.1.4.0.0.1.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d2 /dev/rdsk/c21t1d2

disk 91 1/0/4/1/0.1.4.0.0.1.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d3 /dev/rdsk/c21t1d3

disk 92 1/0/4/1/0.1.4.0.0.1.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d4 /dev/rdsk/c21t1d4

disk 93 1/0/4/1/0.1.4.0.0.1.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d5 /dev/rdsk/c21t1d5

disk 94 1/0/4/1/0.1.4.0.0.1.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d6 /dev/rdsk/c21t1d6

disk 95 1/0/4/1/0.1.4.0.0.1.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c21t1d7 /dev/rdsk/c21t1d7

disk 66 1/0/4/1/0.1.5.0.0.0.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d1 /dev/rdsk/c19t0d1

disk 67 1/0/4/1/0.1.5.0.0.0.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d2 /dev/rdsk/c19t0d2

disk 68 1/0/4/1/0.1.5.0.0.0.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d3 /dev/rdsk/c19t0d3

disk 69 1/0/4/1/0.1.5.0.0.0.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d4 /dev/rdsk/c19t0d4


disk 70 1/0/4/1/0.1.5.0.0.0.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d5 /dev/rdsk/c19t0d5

disk 71 1/0/4/1/0.1.5.0.0.0.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d6 /dev/rdsk/c19t0d6

disk 72 1/0/4/1/0.1.5.0.0.0.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t0d7 /dev/rdsk/c19t0d7

disk 73 1/0/4/1/0.1.5.0.0.1.0 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d0 /dev/rdsk/c19t1d0

disk 74 1/0/4/1/0.1.5.0.0.1.1 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d1 /dev/rdsk/c19t1d1

disk 75 1/0/4/1/0.1.5.0.0.1.2 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d2 /dev/rdsk/c19t1d2

disk 76 1/0/4/1/0.1.5.0.0.1.3 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d3 /dev/rdsk/c19t1d3

disk 77 1/0/4/1/0.1.5.0.0.1.4 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d4 /dev/rdsk/c19t1d4

disk 78 1/0/4/1/0.1.5.0.0.1.5 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d5 /dev/rdsk/c19t1d5

disk 79 1/0/4/1/0.1.5.0.0.1.6 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d6 /dev/rdsk/c19t1d6

disk 80 1/0/4/1/0.1.5.0.0.1.7 sdisk CLAIMED DEVICE HP HSV300

/dev/dsk/c19t1d7 /dev/rdsk/c19t1d7

and confirm that the persistent disk device files have been created:

# ioscan -m dsf

Persistent DSF Legacy DSF(s)

========================================

/dev/pt/pt2 /dev/rscsi/c20t0d0

/dev/rscsi/c18t0d0

/dev/rscsi/c8t0d0

/dev/rscsi/c4t0d0

/dev/rdisk/disk3 /dev/rdsk/c0t6d0

/dev/rdisk/disk3_p1 /dev/rdsk/c0t6d0s1

/dev/rdisk/disk3_p2 /dev/rdsk/c0t6d0s2

/dev/rdisk/disk3_p3 /dev/rdsk/c0t6d0s3

/dev/rdisk/disk4 /dev/rdsk/c1t2d0

/dev/rdisk/disk5 /dev/rdsk/c2t6d0

/dev/rdisk/disk5_p1 /dev/rdsk/c2t6d0s1

/dev/rdisk/disk5_p3 /dev/rdsk/c2t6d0s3

/dev/rdisk/disk5_p2 /dev/rdsk/c2t6d0s2

/dev/pt/pt11 /dev/rscsi/c16t0d0

/dev/rscsi/c24t0d0

/dev/rscsi/c12t0d0

/dev/rscsi/c22t0d0


/dev/rdisk/disk36 /dev/rdsk/c21t0d1

/dev/rdsk/c19t0d1

/dev/rdsk/c9t0d1

/dev/rdsk/c5t0d1

/dev/rdisk/disk37 /dev/rdsk/c21t0d2

/dev/rdsk/c19t0d2

/dev/rdsk/c9t0d2

/dev/rdsk/c5t0d2

/dev/rdisk/disk38 /dev/rdsk/c21t0d3

/dev/rdsk/c19t0d3

/dev/rdsk/c9t0d3

/dev/rdsk/c5t0d3

/dev/rdisk/disk39 /dev/rdsk/c21t0d4

/dev/rdsk/c19t0d4

/dev/rdsk/c9t0d4

/dev/rdsk/c5t0d4

/dev/rdisk/disk40 /dev/rdsk/c21t0d5

/dev/rdsk/c19t0d5

/dev/rdsk/c9t0d5

/dev/rdsk/c5t0d5

/dev/rdisk/disk41 /dev/rdsk/c21t0d6

/dev/rdsk/c19t0d6

/dev/rdsk/c9t0d6

/dev/rdsk/c5t0d6

/dev/rdisk/disk42 /dev/rdsk/c21t0d7

/dev/rdsk/c19t0d7

/dev/rdsk/c9t0d7

/dev/rdsk/c5t0d7

/dev/rdisk/disk43 /dev/rdsk/c21t1d0

/dev/rdsk/c19t1d0

/dev/rdsk/c9t1d0

/dev/rdsk/c5t1d0

/dev/rdisk/disk44 /dev/rdsk/c21t1d1

/dev/rdsk/c19t1d1

/dev/rdsk/c9t1d1

/dev/rdsk/c5t1d1

/dev/rdisk/disk45 /dev/rdsk/c21t1d2

/dev/rdsk/c19t1d2

/dev/rdsk/c9t1d2

/dev/rdsk/c5t1d2

/dev/rdisk/disk46 /dev/rdsk/c21t1d3

/dev/rdsk/c19t1d3

/dev/rdsk/c9t1d3

/dev/rdsk/c5t1d3


/dev/rdisk/disk47 /dev/rdsk/c21t1d4

/dev/rdsk/c19t1d4

/dev/rdsk/c9t1d4

/dev/rdsk/c5t1d4

/dev/rdisk/disk48 /dev/rdsk/c21t1d5

/dev/rdsk/c19t1d5

/dev/rdsk/c9t1d5

/dev/rdsk/c5t1d5

/dev/rdisk/disk49 /dev/rdsk/c21t1d6

/dev/rdsk/c19t1d6

/dev/rdsk/c9t1d6

/dev/rdsk/c5t1d6

/dev/rdisk/disk50 /dev/rdsk/c21t1d7

/dev/rdsk/c19t1d7

/dev/rdsk/c9t1d7

/dev/rdsk/c5t1d7

B. Create the PVs; only disk36 and disk37 are shown as examples:

# pvcreate /dev/rdisk/disk36

# pvcreate /dev/rdisk/disk37

C. Create the VGs:

On HP-UX 11.31 a volume group can be created directly with the vgcreate command. For example, create lockvg as the MC/ServiceGuard lock disk:

# vgcreate /dev/lockvg /dev/disk/disk36

On earlier operating system releases, lockvg had to be created with the following commands instead:

# mkdir /dev/lockvg

# mknod /dev/lockvg/group c 64 0x010000

# vgcreate lockvg /dev/disk/disk36

The minor number 0x010000 in the mknod command must not collide with that of any other VG on the system. Before running it, check the configuration of the existing VGs, for example the root volume group vg00:

# ls -l /dev/vg00/group

crw-r----- 1 root sys 64 0x000000 Feb 26 10:00 /dev/vg00/group
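A quick way to see every minor number already in use is to list all of the VG group files (a minimal sketch; it assumes all group files live directly under /dev/<vgname>/):

# ll /dev/*/group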

Create the data VGs. HP-UX 11.31 supports version 2.0 volume groups, which scale better, so the data volume groups were created as version 2.0:

# vgcreate -V 2.0 -s 32 -S 2t VG_ODM_ORA01 /dev/disk/disk37

-s specifies the PE size in MB; -S specifies the maximum size of the volume group.
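To double-check the version, PE size and maximum size after creation (a sketch; the vgdisplay field labels are assumed from typical 11i v3 output and may differ slightly by release):

# vgdisplay VG_ODM_ORA01 | egrep 'VG Version|PE Size|VG Max Size'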

D. Carve the logical volumes and create the file systems:

# lvcreate -l 31999 /dev/VG_ODM_ORA01

# newfs /dev/VG_ODM_ORA01/rlvol1
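The lvcreate call above creates an unnamed volume (lvol1). A slightly fuller sketch with an explicit LV name, a VxFS file system and a test mount; the LV name, size and mount point are illustrative assumptions, not project values:

# lvcreate -L 10240 -n lv_oradata /dev/VG_ODM_ORA01
# newfs -F vxfs -o largefiles /dev/VG_ODM_ORA01/rlv_oradata
# mkdir -p /oradata
# mount -F vxfs /dev/VG_ODM_ORA01/lv_oradata /oradata
# umount /oradata

The file system is unmounted again because, in a cluster, shared file systems are mounted by the package rather than through /etc/fstab.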

If the application is known to need raw devices, do not create file systems at all. The logical volumes can be carved according to the application's requirements, or that can be left to the application implementers.

E. Synchronize all the information about the newly created volume groups to the other server of the pair:

# vgchange -a n /dev/lockvg

# vgexport -p -s -m /tmp/lockvg.map /dev/lockvg

Copy /tmp/lockvg.map to the other server with ftp or rcp, then run on that server:

# vgimport -N -s -m lockvg.map /dev/lockvg

# vgchange -a y /dev/lockvg

# vgchange -a n /dev/VG_ODM_ORA01

# vgexport -p -s -m /tmp/VG_ODM_ORA01.map /dev/VG_ODM_ORA01

Copy /tmp/VG_ODM_ORA01.map to the other server with ftp or rcp, then run on that server:

# vgimport -N -s -m VG_ODM_ORA01.map /dev/VG_ODM_ORA01

# vgchange -a y /dev/VG_ODM_ORA01

The -N option of vgimport tells it to use the device names under /dev/disk.
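A quick check of the import on the second node (a sketch; note that version 2.x volume groups are recorded in /etc/lvmtab_p rather than /etc/lvmtab):

# strings /etc/lvmtab
# strings /etc/lvmtab_p
# vgdisplay -v /dev/VG_ODM_ORA01 | more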

3. Configure root disk mirroring

Setting up mirroring requires purchasing the HP MirrorDisk/UX product.

Root disk mirroring on Itanium servers

The boot disk of an Itanium server is partitioned. You must set up the partitions, copy the utilities to the EFI partition, and use the partition device files in the LVM commands.

a. Create a partition description file:

# vi /tmp/idf

3

EFI 500MB

HPUX 100%

HPSP 400MB

b. Partition the disk using the partition description file (the mirror disk /dev/rdisk/disk5 is used as the example):

# idisk -f /tmp/idf -w /dev/rdisk/disk5

The partition layout can be verified with:

# idisk /dev/rdisk/disk5

idisk version: 1.44

EFI Primary Header:

Signature = EFI PART

Revision = 0x10000

HeaderSize = 0x5c

HeaderCRC32 = 0x3f9c85dc

MyLbaLo = 0x1

MyLbaHi = 0x0


AlternateLbaLo = 0x1117732f

AlternateLbaHi = 0x0

FirstUsableLbaLo = 0x40

FirstUsableLbaHi = 0x0

LastUsableLbaLo = 0x111772ff

LastUsableLbaHi = 0x0

Disk GUID = e653e2b2-14fa-11de-8000-d6217b60e588

PartitionEntryLbaLo = 0x2

PartitionEntryLbaHi = 0x0

NumberOfPartitionEntries = 0xc

SizeOfPartitionEntry = 0x80

PartitionEntryArrayCRC32 = 0xd498dc91

Primary Partition Table (in 512 byte blocks):

Partition 1 (EFI):

Partition Type GUID = c12a7328-f81f-11d2-ba4b-00a0c93ec93b

Unique Partition GUID = e653e596-14fa-11de-8000-d6217b60e588

Starting Lba Lo = 0x40

Starting Lba Hi = 0x0

Ending Lba Lo = 0xf9fff

Ending Lba Hi = 0x0

Partition 2 (HP-UX):

Partition Type GUID = 75894c1e-3aeb-11d3-b7c1-7b03a0000000

Unique Partition GUID = e653e5b4-14fa-11de-8000-d6217b60e588

Starting Lba Lo = 0xfa000

Starting Lba Hi = 0x0

Ending Lba Lo = 0x110af7ff

Ending Lba Hi = 0x0

Partition 3 (HPSP):

Partition Type GUID = e2a1e728-32e3-11d6-a682-7b03a0000000

Unique Partition GUID = e653e5c8-14fa-11de-8000-d6217b60e588

Starting Lba Lo = 0x110af800

Starting Lba Hi = 0x0

Ending Lba Lo = 0x111772ff

Ending Lba Hi = 0x0

... (remaining output omitted)

c. Create device files for all the partitions:

# insf -e -H 0/0/0/3/0.6.0

0/0/0/3/0.6.0 is the hardware path of disk5.

The following device files should now exist:

/dev/disk/disk5 /dev/rdisk/disk5

/dev/disk/disk5_p1 /dev/rdisk/disk5_p1


/dev/disk/disk5_p2 /dev/rdisk/disk5_p2

/dev/disk/disk5_p3 /dev/rdisk/disk5_p3

d. Create a bootable physical volume using the device file of the HP-UX partition:

# pvcreate -B /dev/rdisk/disk5_p2

e. Add the physical volume to the root volume group:

# vgextend vg00 /dev/disk/disk5_p2

f. Place the boot utilities in the boot area:

# mkboot -e -l /dev/rdisk/disk5

g. Add an AUTO boot file to the disk's boot area:

# mkboot -a "hpux -lq" /dev/rdisk/disk5

The -lq option disables the quorum check.

h. Determine the list and order of the logical volumes in the root volume group (the original boot disk is disk3):

# pvdisplay -v /dev/disk/disk3_p2 | grep 'current.*0000 $'

00000 current /dev/vg00/lvol1 00000

00056 current /dev/vg00/lvol2 00000

00312 current /dev/vg00/lvol3 00000

00344 current /dev/vg00/lvol4 00000

00360 current /dev/vg00/lvol5 00000

00364 current /dev/vg00/lvol6 00000

00624 current /dev/vg00/lvol7 00000

00781 current /dev/vg00/lvol8 00000

01053 current /dev/vg00/lv_swap 00000

i. Mirror each logical volume in vg00, in the order shown above:

# lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol3 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol4 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol5 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol6 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol7 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lvol8 /dev/disk/disk5_p2
# lvextend -m 1 /dev/vg00/lv_swap /dev/disk/disk5_p2

If you are using an HP-UX 11i v3 release from September 2007 or later, the time needed for mirror synchronization can be shortened with the following commands instead:

# lvextend -s -m 1 /dev/vg00/lvol1 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol2 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol3 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol4 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol5 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol6 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol7 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lvol8 /dev/disk/disk5_p2
# lvextend -s -m 1 /dev/vg00/lv_swap /dev/disk/disk5_p2
# lvsync -T /dev/vg00/lv*

j. Update the root volume group information:

# lvlnboot -R /dev/vg00

k. Verify that the mirror disk appears as a boot disk and that the boot, root and swap logical volumes exist on both disks:

# lvlnboot -v

Boot Definitions for Volume Group /dev/vg00:

Physical Volumes belonging in Root Volume Group:

/dev/disk/disk3_p2 -- Boot Disk

/dev/disk/disk5_p2 -- Boot Disk

Boot: lvol1 on: /dev/disk/disk3_p2

/dev/disk/disk5_p2

Root: lvol3 on: /dev/disk/disk3_p2

/dev/disk/disk5_p2

Swap: lvol2 on: /dev/disk/disk3_p2

/dev/disk/disk5_p2

Dump: lvol2 on: /dev/disk/disk3_p2, 0

l. Set the mirror disk as the alternate boot path in non-volatile memory:

# setboot -a 0/0/0/3/0.6.0

m. Add a line for the new boot disk to /stand/bootconf:

# vi /stand/bootconf

l /dev/disk/disk5_p2

Here the letter "l" stands for LVM.

Summary of root disk mirroring commands for HP 9000 servers

Using /dev/disk/disk4, hardware path 0/1/1/0.1.0, as the example:

# insf -e -H 0/1/1/0.1.0

# pvcreate -B /dev/rdisk/disk4

# vgextend /dev/vg00 /dev/disk/disk4

# mkboot /dev/rdisk/disk4

# mkboot -a "hpux -lq" /dev/rdisk/disk4

# pvdisplay -v /dev/disk/disk3 | grep 'current.*0000 $'

# lvextend -s -m 1 /dev/vg00/lvol1 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol2 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol3 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol4 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol5 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol6 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol7 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol8 /dev/disk/disk4

# lvsync -T /dev/vg00/lvol*

# lvlnboot -R /dev/vg00

# lvlnboot -v

# setboot -a 0/1/1/0.0x1.0x0

# vi /stand/bootconf

l /dev/disk/disk4

Here the letter "l" (a lowercase L) stands for LVM.

4. Establish the trust relationship

On both servers, edit the /etc/cmcluster/cmclnodelist file and add the following entries:

ZBDMDB1 root

ZBDMDB2 root

ZBDMDB1 and ZBDMDB2 are the hostnames of the two servers.

5. identd and discard service setup

These two services are used during MC/ServiceGuard configuration, so it is recommended to enable them in /etc/inetd.conf:

auth stream tcp6 wait bin /usr/lbin/identd identd

discard dgram udp6 nowait root internal
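After editing the file, have the running inetd reread its configuration:

# inetd -c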

6. Disable automatic LVM volume group activation at boot

# vi /etc/lvmrc

AUTO_VG_ACTIVATE=0

If any volume groups other than vg00 do need to be activated automatically at boot, add them to the custom_vg_activation() function (see the sketch after the function skeleton below):

custom_vg_activation()

{

}
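For illustration only: if a hypothetical local volume group vgdata had to come up at boot, the body might look like the following; in this project the shared VGs are activated by the packages, so the function is left empty:

custom_vg_activation()
{
        # activate additional local volume groups needed at boot (illustrative)
        /sbin/vgchange -a y /dev/vgdata
        return 0
}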

7. Enable automatic cluster startup at boot

# vi /etc/rc.config.d/cmcluster

AUTOSTART_CMCLD=1

III. MC/ServiceGuard Implementation

1. Specify the cluster nodes and generate the configuration template


# cmquerycl -v -C /etc/cmcluster/cmclconfig.ascii -n ZBDMDB1 -n ZBDMDB2

2. Edit the configuration template file

# vi /etc/cmcluster/cmclconfig.ascii

# Enter a name for this cluster. This name will be used to identify the

# cluster when viewing or manipulating it.

CLUSTER_NAME DMDB_Cluster

# Cluster Lock Parameters

# The cluster lock is used as a tie-breaker for situations

FIRST_CLUSTER_LOCK_VG /dev/lockvg

# Definition of nodes in the cluster.

NODE_NAME ZBDMDB1

NETWORK_INTERFACE lan1

HEARTBEAT_IP 10.199.76.208

NETWORK_INTERFACE lan2

HEARTBEAT_IP 192.168.1.1

NETWORK_INTERFACE lan3

HEARTBEAT_IP 192.168.2.1

NETWORK_INTERFACE lan4

# CLUSTER_LOCK_LUN

FIRST_CLUSTER_LOCK_PV /dev/disk/disk36

NODE_NAME ZBDMDB2

NETWORK_INTERFACE lan1

HEARTBEAT_IP 10.199.76.209

NETWORK_INTERFACE lan2

HEARTBEAT_IP 192.168.1.2

NETWORK_INTERFACE lan3

HEARTBEAT_IP 192.168.2.2

NETWORK_INTERFACE lan4

# CLUSTER_LOCK_LUN

FIRST_CLUSTER_LOCK_PV /dev/disk/disk36

# Cluster Timing Parameters (microseconds).

HEARTBEAT_INTERVAL 1000000

NODE_TIMEOUT 2000000

# Configuration/Reconfiguration Timing Parameters (microseconds).


AUTO_START_TIMEOUT 600000000

NETWORK_POLLING_INTERVAL 2000000

# Network Monitor Configuration Parameters.

NETWORK_FAILURE_DETECTION INOUT

# Package Configuration Parameters.

MAX_CONFIGURED_PACKAGES 10

# List of cluster aware LVM Volume Groups. These volume groups will
# be used by package applications via the vgchange -a e command.
# Neither CVM or VxVM Disk Groups should be used here.

VOLUME_GROUP /dev/lockvg
VOLUME_GROUP /dev/VG_MDM_MBI
VOLUME_GROUP /dev/VG_MDM_CDR
VOLUME_GROUP /dev/VG_MDM_ORA01
VOLUME_GROUP /dev/VG_MDM_ORA02
VOLUME_GROUP /dev/VG_ODM_BAKTEMP
VOLUME_GROUP /dev/VG_ODM_CDR
VOLUME_GROUP /dev/VG_ODM_ORA01
VOLUME_GROUP /dev/VG_ODM_ORA02
VOLUME_GROUP /dev/VG_ODM_ORA03
VOLUME_GROUP /dev/VG_QDB_BAKTEMP
VOLUME_GROUP /dev/VG_QDB_ORA01
VOLUME_GROUP /dev/VG_QDB_ORA02
VOLUME_GROUP /dev/VG_QDB_ORA03
VOLUME_GROUP /dev/VG_ODM2_ORA01
VOLUME_GROUP /dev/VG_ODM2_ORA02
VOLUME_GROUP /dev/VG_DM_DATACOL

Just follow the comments in the file when making the changes.

3. Verify the cluster configuration

# cmcheckconf -k -v -C /etc/cmcluster/cmclconfig.ascii

4. Distribute the binary configuration file to the cluster hosts

# vgchange -a y /dev/lockvg

# cmapplyconf -k -v -C /etc/cmcluster/cmclconfig.ascii

# vgchange -a n /dev/lockvg

Note that lockvg must be activated before the configuration is distributed, and deactivated again afterwards.

5. Start the cluster

# cmruncl -f -v


6. Check the cluster status

# cmviewcl -v

7. Generate the package configuration template (the oracle_m1 package is used as the example)

MC/ServiceGuard A.11.18 uses a new workflow for package configuration; packages created by ServiceGuard A.11.17 and earlier are called legacy packages. The new packages are configured in a modular way that does away with the separate package control script, so control scripts no longer have to be kept in sync by hand. We generally generate the default package, i.e. a failover package containing all modules.

# mkdir /etc/cmcluster/oracle_m1

# cmmakepkg /etc/cmcluster/oracle_m1/pkg_oracle_m1.conf
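If you prefer a template containing only the modules you actually need rather than everything, cmmakepkg accepts individual modules; a sketch, where the module list is an assumption about what this package would use:

# cmmakepkg -m sg/failover -m sg/package_ip -m sg/volume_group -m sg/filesystem -m sg/service /etc/cmcluster/oracle_m1/pkg_oracle_m1.conf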

8. Modify the package configuration template

# vi /etc/cmcluster/oracle_m1/pkg_oracle_m1.conf

#

# "package_name" is the name that is used to identify the package.

#

# Package names must be unique within a cluster.

# Configure the package name here

package_name oracle_m1

# "package_description" specifies the application that the package runs.

# A description can be added for the package

package_description "Management Domain Database Service"

# "module_name" specifies the package module from which

# this package was created. Do not change the module_name.

module_name sg/basic

module_version 1

module_name sg/all

module_version 1

module_name sg/failover

module_version 1

module_name sg/priority

module_version 1

module_name sg/dependency

module_version 1

module_name sg/monitor_subnet

module_version 1

module_name sg/package_ip


module_version 1

module_name sg/service

module_version 1

module_name sg/resource

module_version 1

module_name sg/volume_group

module_version 1

module_name sg/filesystem

module_version 1

module_name sg/pev

module_version 1

module_name sg/external_pre

module_version 1

module_name sg/external

module_version 1

module_name sg/acp

module_version 1

# "package_type" is the type of package.

#

# The package_type attribute specifies the behavior for this

# package. Legal values and their meanings are:

#

# failover package runs on one node at a time and if a failure

# occurs it can switch to an alternate node.

#

# multi_node package runs on multiple nodes at the same time and

# can be independently started and halted on

# individual nodes. Failures of package components such
# as services, EMS resources or subnets, will cause
# the package to be halted only on the node on which the
# failure occurred. Relocatable IP addresses cannot be

# assigned to "multi_node" packages.

#

# system_multi_node

# package runs on all cluster nodes at the same time.

# It cannot be started and halted on individual nodes.

# Both "node_fail_fast_enabled" and "auto_run"

# must be set to "yes" for this type of package. All

# "services" must have "service_fail_fast_enabled" set # to "yes". system_multi_node packages are only

# supported for use by applications provided by

# Hewlett-Packard.


package_type failover

# "node_name" specified which nodes this package can run on.

#

# NOTE: The order in which the nodes are specified here determines the

# order of priority when Serviceguard is deciding where to run the

# package.

# Add the node names here

node_name ZBDMDB2

node_name ZBDMDB1

# "auto_run" defines whether the package is to be started when the

# cluster is started, and if it will fail over automatically.

auto_run yes

# "node_fail_fast_enabled" will cause the node to fail if the package fails.

node_fail_fast_enabled no

# "run_script_timeout" is the number of seconds allowed for the package to start. # "halt_script_timeout" is the number of seconds allowed for the package to halt. #

# If the start or halt function has not completed in the specified

# number of seconds, the function will be terminated. The default is

# "no_timeout". Adjust the timeouts as necessary to permit full

# execution of each function.

run_script_timeout no_timeout

# Legal values for halt_script_timeout: no_timeout, (value > 0).

halt_script_timeout no_timeout

# "successor_halt_timeout" limits the amount of time

# Serviceguard waits for packages that depend on this package

# ("successor packages") to halt, before running the halt script of this

# package.

#

successor_halt_timeout no_timeout

# "script_log_file" is the full path name for the package control script log


# file. The maximum length of the path name is MAXPATHLEN characters long.

#

script_log_file $SGRUN/log/$SG_PACKAGE.log

# "operation_sequence" defines the order in which the individual script

# programs will be executed in the package start action. The package halt action

# will be executed in the reverse order.

#

operation_sequence $SGCONF/scripts/sg/external_pre.sh
operation_sequence $SGCONF/scripts/sg/volume_group.sh
operation_sequence $SGCONF/scripts/sg/filesystem.sh
operation_sequence $SGCONF/scripts/sg/package_ip.sh
operation_sequence $SGCONF/scripts/sg/external.sh
operation_sequence $SGCONF/scripts/sg/service.sh
operation_sequence $SGCONF/scripts/sg/resource.sh

# "log_level" controls the amount of information printed

# during validation and package startup or shutdown time.

#

# Legal values for log_level: ( (value >= 0) && (value <= 5) ).

#log_level

# "failover_policy" is the policy to be applied when package fails.

#

# This policy will be used to select a node whenever the package needs

# to be started or restarted. The default policy is "configured_node".

# This policy means Serviceguard will select nodes in priority order

# from the list of "node_name" entries.

#

# The alternative policy is "min_package_node". This policy means

# Serviceguard will select from the list of "node_name" entries the

# node, which is running fewest packages when this package needs to

# start.

failover_policy configured_node

# "failback_policy" is the action to take when a package is not running

# on its primary node.

#

failback_policy manual

