CentOS 6.5 + udev + Oracle 11g RAC Deployment Guide (凌奕株)

Updated: 2024-06-09 22:35:01


I. Install the Linux System

1. Cluster environment

- OS: CentOS 6.5 x86_64
- Oracle: Oracle 11g (11.2.0)
- Grid: Oracle 11g

2. System requirements

- A two-node cluster requires two servers, with hostnames rac1 and rac2; size memory as needed.
- Two NICs per node: one for the public network, one for the private network.
- Plan the IP addresses in advance (public IPs, private IPs, VIPs, and the SCAN IP).
- Plan the system disk, /u01, and the ASM data disks as needed.
- With no fibre-channel storage available, shared storage is implemented in software with iSCSI, using a local disk on one of the nodes as the shared disk.

II. System Configuration

Note: the following steps must be performed on both nodes; this document demonstrates only rac1!

1. Edit /etc/hosts and add the required addresses

[root@rac1 ~]# vi /etc/hosts
127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.77.13  rac1
192.168.77.14  rac2
10.0.0.13      rac1-priv
10.0.0.14      rac2-priv
192.168.77.17  rac1-vip
192.168.77.18  rac2-vip
192.168.77.19  rac-scan    -- note: do not use underscores in the name
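The note about underscores can be checked mechanically. A minimal sketch, operating only on the host entries copied from above (nothing else is assumed):

```shell
# Flag any /etc/hosts-style entries whose host name contains an underscore;
# underscores are not valid in hostnames and will break the SCAN name.
entries='192.168.77.13 rac1
192.168.77.14 rac2
10.0.0.13 rac1-priv
10.0.0.14 rac2-priv
192.168.77.17 rac1-vip
192.168.77.18 rac2-vip
192.168.77.19 rac-scan'

bad=$(printf '%s\n' "$entries" | awk '$2 ~ /_/ {print $2}')
if [ -z "$bad" ]; then
    echo "host names OK"
else
    echo "invalid host names: $bad"
fi
```

Run against a real /etc/hosts, the same awk filter would catch a name like rac_scan before it causes hard-to-diagnose SCAN resolution failures.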

2. Edit /etc/sysctl.conf and set the kernel parameters

(1) Set the kernel parameters. For parameters already present in the file (kernel.shmmax and kernel.shmall), only change the existing values; append the rest at the end of the file.

[root@rac1 ~]# vi /etc/sysctl.conf
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4398046511104
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 142
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

(2) Apply the kernel parameters

[root@rac1 ~]# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 4398046511104
kernel.shmall = 4294967296
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmni = 4096
kernel.sem = 250 32000 100 142
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
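The four numbers in kernel.sem above are four distinct semaphore limits: SEMMSL (max semaphores per set), SEMMNS (max semaphores system-wide), SEMOPM (max operations per semop call), and SEMMNI (max semaphore sets). A small sketch unpacking the value used here:

```shell
# Unpack the kernel.sem value from the configuration above into its
# four named fields (standard Linux semaphore limits).
sem="250 32000 100 142"
set -- $sem
echo "SEMMSL=$1 SEMMNS=$2 SEMOPM=$3 SEMMNI=$4"
```

On a running system the same four fields can be read back from /proc/sys/kernel/sem to confirm sysctl -p took effect.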

3. Create the required users and groups

[root@rac1 ~]# vi mkuser.sh
groupadd -g 200 oinstall
groupadd -g 201 dba
groupadd -g 202 oper
groupadd -g 203 asmadmin
groupadd -g 204 asmoper
groupadd -g 205 asmdba
useradd -u 200 -g oinstall -G dba,asmdba,oper oracle
useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

[root@rac1 ~]# sh mkuser.sh
[root@rac1 ~]# id oracle
uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),205(asmdba)
[root@rac1 ~]# id grid
uid=201(grid) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),203(asmadmin),204(asmoper),205(asmdba)
[root@rac1 ~]# passwd oracle
[root@rac1 ~]# passwd grid
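RAC requires the oracle and grid UIDs/GIDs to be identical on every node; a mismatch breaks file ownership on shared storage. A sketch of that comparison, using `id` output strings as captured above (the strings are pasted in here rather than fetched remotely):

```shell
# Compare the oracle user's identity as reported by `id` on each node.
# In practice these strings would be captured with e.g.: ssh rac2 id oracle
id_oracle_rac1='uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),205(asmdba)'
id_oracle_rac2='uid=200(oracle) gid=200(oinstall) groups=200(oinstall),201(dba),202(oper),205(asmdba)'

if [ "$id_oracle_rac1" = "$id_oracle_rac2" ]; then
    echo "oracle uid/gid consistent across nodes"
else
    echo "MISMATCH: oracle identity differs between nodes"
fi
```

The same check applies to the grid user; running mkuser.sh unmodified on both nodes keeps the IDs aligned.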

4. Modify the users' shell limits

[root@rac1 ~]# vi /etc/security/limits.conf
-- Append the following:
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
grid   soft stack  10240

[root@rac1 ~]# vi /etc/pam.d/login
-- Append the following:
session required /lib/security/pam_limits.so

[root@rac1 ~]# vi /etc/profile
-- Append the following:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

[root@rac1 ~]# source /etc/profile

5. Configure user environment variables

(1) Configure the oracle user's environment variables

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ vi .bash_profile
-- Append the following:
export EDITOR=vi
export ORACLE_SID=prod1        # use prod2 on rac2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 0022
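The installer later asks for the Oracle home; it must match what the profile derives. A trivial sketch of how the variables above expand:

```shell
# Show how ORACLE_HOME is derived from ORACLE_BASE in the profile above.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
echo "$ORACLE_HOME"
```

If the installer screen shows a different path, fix .bash_profile before continuing rather than accepting the installer's value.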

(2) Configure the grid user's environment variables

[oracle@rac1 ~]$ su - grid
[grid@rac1 ~]$ vi .bash_profile
-- Append the following:
export EDITOR=vi
export ORACLE_SID=+ASM1        # use +ASM2 on rac2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export GRID_HOME=/u01/grid
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export THREADS_FLAG=native
export PATH=$GRID_HOME/bin:$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 0022

6. Partition the database disk

(1) View the disks

[root@rac1 ~]# fdisk -l

Disk /dev/vda: 107.4 GB, 107374182400 bytes    -- system disk
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00036309

Disk /dev/sda: 32.2 GB, 32212254720 bytes      -- database disk
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 107.4 GB, 107374182400 bytes    -- shared disk
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4bdbd7d

(2) Partition the database disk /dev/sda and create the /u01 filesystem

[root@rac1 ~]# fdisk /dev/sda
[root@rac1 ~]# mkfs -t ext4 /dev/sda1
[root@rac1 ~]# mkdir /u01
[root@rac1 ~]# mount /dev/sda1 /u01
[root@rac1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_lyz1-LogVol00   20G  9.9G  8.5G  54% /
tmpfs                         2.0G   72K  2.0G   1% /dev/shm
/dev/vda1                     194M   35M  149M  19% /boot
/dev/sda1                      30G  172M   28G   1% /u01

[root@rac1 ~]# vi /etc/fstab
-- Append the following:
/dev/sda1  /u01  ext4  defaults  0 2

[root@rac1 ~]# umount /u01
[root@rac1 ~]# mount /u01

7. Check memory requirements

(1) At least 1300 MB of physical memory is required; swap is generally 1.5-2x physical memory.

[root@rac1 ~]# free -m
             total       used       free     shared    buffers     cached
Mem:          3959        372       3587          0         49        127
-/+ buffers/cache:        195       3763
Swap:         8191          0       8191
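The 1.5-2x rule of thumb above can be computed directly from the `free -m` figures (3959 MB RAM, 8191 MB swap here); a sketch:

```shell
# Check the configured swap against the document's 1.5x-2x guideline,
# using the free -m numbers shown above.
mem_mb=3959
swap_mb=8191
low=$(( mem_mb * 3 / 2 ))   # 1.5x physical memory
high=$(( mem_mb * 2 ))      # 2x physical memory
echo "guideline: ${low}-${high} MB, configured: ${swap_mb} MB"
```

Here the configured 8191 MB sits just above the 2x mark, which is comfortably sufficient.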

8. Check disk space requirements

(1) Check the temporary filesystem; at least 1 GB is required.

[root@rac1 ~]# df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_lyz1-LogVol00   20G  9.9G  8.5G  54% /
tmpfs                         2.0G   72K  2.0G   1% /dev/shm
/dev/vda1                     194M   35M  149M  19% /boot
/dev/sda1                      30G  172M   28G   1% /u01

9. Create the required directories and set ownership and permissions

[root@rac1 ~]# vi mkdir.sh
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory/
chmod -R 775 /u01/app/oraInventory/
mkdir -p /u01/grid
chown -R grid:oinstall /u01/grid/
chmod -R 775 /u01/grid/
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle

[root@rac1 ~]# sh mkdir.sh
[root@rac1 ~]# ll /u01
total 24
drwxr-xr-x 3 root root     4096 Mar 17 13:42 app
drwxrwxr-x 2 grid oinstall 4096 Mar 17 13:42 grid
drwx------ 2 root root    16384 Mar 17 13:26 lost+found

10. Disable unnecessary services

(1) Disable the system ntpd service; Oracle's own cluster time synchronization service will be used instead.

[root@rac1 ~]# chkconfig ntpd off
[root@rac1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

(2) Disable the sendmail service to speed up system startup.

[root@rac1 ~]# chkconfig sendmail off

After completing the above on rac1, perform the same steps on rac2.

III. Configure Shared Storage with iSCSI

1. Configure the iSCSI target

(1) Prerequisite: stop the firewall, or open port 3260.

[root@rac1 ~]# service iptables stop

(2) Install tgt

[root@rac1 ~]# yum -y install scsi-target-utils

(3) Configure tgt

tgt's main configuration file is /etc/tgt/targets.conf. Append the following to the end of that file:

[root@rac1 ~]# vi /etc/tgt/targets.conf
<target iqn.2016-03.dev.iscsi-target:racdisk>
    backing-store /dev/sdb
</target>

Notes:

iqn = iSCSI Qualified Name. An iSCSI target name follows this pattern (e.g. iqn.2014-07.dev.iscsi-target:iscsidisk):
iqn.<year>-<month>.<reversed domain>:<device identifier>

Each backing-store under the same target becomes a Logical Unit Number (LUN). For advanced settings such as initiator-address and incominguser, consult the tgt documentation.
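The naming rule above can be expressed as a pattern check; a minimal sketch validating the target name used in this document:

```shell
# Validate an iSCSI target name against the iqn.YYYY-MM.<reversed-domain>:<id>
# naming convention described above.
iqn="iqn.2016-03.dev.iscsi-target:racdisk"
case "$iqn" in
    iqn.[0-9][0-9][0-9][0-9]-[0-9][0-9].*:*)
        echo "well-formed IQN" ;;
    *)
        echo "malformed IQN" ;;
esac
```

A shell `case` pattern is only a coarse check, but it catches the common mistakes (missing date field, missing identifier after the colon) before tgtd rejects the configuration.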

(4) Start the iSCSI target

[root@rac1 ~]# /etc/init.d/tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@rac1 ~]# chkconfig tgtd on
[root@rac1 ~]# netstat -tulnp | grep tgt
tcp   0   0 0.0.0.0:3260   0.0.0.0:*   LISTEN   3075/tgtd
tcp   0   0 :::3260        :::*        LISTEN   3075/tgtd

(5) Inspect the iSCSI target

[root@rac1 ~]# tgt-admin --show

LUN 0 is the controller; the size and backing path of each LUN are shown. The iSCSI target setup is now complete.

2. Configure the iSCSI initiator

(1) Install the initiator

[root@rac2 ~]# yum -y install iscsi-initiator-utils

(2) Enable it at boot

[root@rac2 ~]# chkconfig iscsid on
[root@rac2 ~]# chkconfig iscsi on

(3) Configuration files

The initiator's configuration files live in /etc/iscsi/: initiatorname.iscsi and iscsid.conf. iscsid.conf is the daemon's configuration file; initiatorname.iscsi holds the initiator's name, which defaults to something like InitiatorName=iqn.1994-05.com.redhat:b45be5af6021. It can be changed to something easier to recognize; here it is set to:

[root@rac2 ~]# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2016-03.dev.iscsi-target:racdisk

Because no access restrictions were set on the target, iscsid.conf does not need to be modified.

(4) Discover the target

[root@rac2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.77.13
Starting iscsid:                                           [  OK  ]
192.168.77.15:3260,1 iqn.2016-03.dev.iscsi-target:racdisk

Options:

-m discovery    discovery mode
-t sendtargets  use the sendtargets discovery method
-p IP:port      the target's IP and port; the port defaults to 3260 if omitted

(5) View the nodes

The results of iscsiadm discovery are written to /var/lib/iscsi/nodes/, so simply starting /etc/init.d/iscsi is enough to reconnect to the correct target automatically at the next boot.

[root@rac2 ~]# ll -R /var/lib/iscsi/nodes

(6) Connect to the target

List all targets currently known to the system:

[root@rac2 ~]# iscsiadm -m node
192.168.77.15:3260,1 iqn.2016-03.dev.iscsi-target:racdisk

Log in to the target:

[root@rac2 ~]# iscsiadm -m node -T iqn.2016-03.dev.iscsi-target:racdisk -l

(7) Check the disks

[root@rac2 ~]# fdisk -l

Disk /dev/vda: 107.4 GB, 107374182400 bytes    -- system disk
16 heads, 63 sectors/track, 208050 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00036309

Disk /dev/sda: 32.2 GB, 32212254720 bytes      -- database disk
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 107.4 GB, 107374182400 bytes    -- shared disk, sharing successful!
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4bdbd7d
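That /dev/sdb on rac2 reports the same disk identifier (0xb4bdbd7d) as on rac1 confirms both nodes see the same LUN. A sketch of that comparison over the captured fdisk output (the values are pasted in from the listings above):

```shell
# Compare the disk identifiers reported by fdisk -l on each node
# (values copied from the output above).
id_rac1="0xb4bdbd7d"
id_rac2="0xb4bdbd7d"

if [ "$id_rac1" = "$id_rac2" ]; then
    echo "same shared disk on both nodes"
else
    echo "disk identifiers differ; check the iSCSI session"
fi
```

On a live system the identifier can be extracted with `fdisk -l /dev/sdb | awk '/Disk identifier/ {print $3}'` on each node and compared the same way.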

3. Plan the shared storage

(1) Managed with ASM; partitions are needed for the following:

- OCR DISK: stores the CRS resource configuration
- VOTE DISK: the voting disk, which records node status
- Data Disk: holds datafiles, controlfiles, redo log files, the spfile, and other database files
- Recovery Area: holds flashback database logs, archive logs, RMAN backups, and other backup files

(2) The ASM disks are laid out as follows:

- OCR DISK: /dev/sdb1, 2 GB, primary partition
- VOTE DISK: /dev/sdb2 and /dev/sdb3, 2 GB each, primary partitions
- Data Disk: /dev/sdb5 and /dev/sdb6, 25 GB each, logical partitions
- Recovery Area: /dev/sdb7 and /dev/sdb8, 20 GB each, logical partitions

4. Create the partitions on the shared storage

The following steps need to be done only on rac1; because the disk is shared, rac2 simply picks up the same partition table.

(1) The individual partitioning steps are omitted; the result is as follows:

[root@rac1 ~]# fdisk -l

Disk /dev/sda: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4bdbd7d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         262     2104483+  83  Linux
/dev/sdb2             263         524     2104515   83  Linux
/dev/sdb3             525         786     2104515   83  Linux
/dev/sdb4             787       13054    98542710    5  Extended
/dev/sdb5             787        4051    26226081   83  Linux
/dev/sdb6            4052        7316    26226081   83  Linux
/dev/sdb7            7317        9928    20980858+  83  Linux
/dev/sdb8            9929       12540    20980858+  83  Linux

(2) Because the disk is shared, rac2 can simply view the partitions; they are already there.

[root@rac2 ~]# fdisk -l

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4bdbd7d

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         262     2104483+  83  Linux
/dev/sdb2             263         524     2104515   83  Linux
/dev/sdb3             525         786     2104515   83  Linux
/dev/sdb4             787       13054    98542710    5  Extended
/dev/sdb5             787        4051    26226081   83  Linux
/dev/sdb6            4052        7316    26226081   83  Linux
/dev/sdb7            7317        9928    20980858+  83  Linux
/dev/sdb8            9929       12540    20980858+  83  Linux

Before installation, the cluster prerequisites were verified from the command line with runcluvfy.sh; its output follows.

Checking hosts config file...

Node Name                             Status
------------------------------------  ------------------------
rac2                                  passed
rac1                                  passed

Verification of the hosts config file successful

Interface information for node "rac2"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.77.14 192.168.64.0 0.0.0.0 192.168.79.254 52:54:00:9D:7B:89 1500

eth1 10.0.0.14 10.0.0.0 0.0.0.0 192.168.79.254 52:54:00:86:C4:85 1500

virbr0 192.168.122.1 192.168.122.0 0.0.0.0 192.168.79.254 52:54:00:52:5E:9F 1500

Interface information for node "rac1"

Name IP Address Subnet Gateway Def. Gateway HW Address MTU

------ --------------- --------------- --------------- --------------- ----------------- ------

eth0 192.168.77.13 192.168.64.0 0.0.0.0 192.168.79.254 52:54:00:D8:9A:F3 1500

eth1 10.0.0.13 10.0.0.0 0.0.0.0 192.168.79.254 52:54:00:4A:01:9B 1500

virbr0 192.168.122.1 192.168.122.0 0.0.0.0 192.168.79.254 52:54:00:52:5E:9F 1500

Check: Node connectivity of subnet "192.168.64.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac2[192.168.77.14]             rac1[192.168.77.13]             yes
Result: Node connectivity passed for subnet "192.168.64.0"

Check: TCP connectivity of subnet "192.168.64.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac1:192.168.77.13              rac2:192.168.77.14              passed

Result: TCP connectivity check passed for subnet "192.168.64.0"

Check: Node connectivity of subnet "10.0.0.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac2[10.0.0.14]                 rac1[10.0.0.13]                 yes
Result: Node connectivity passed for subnet "10.0.0.0"

Check: TCP connectivity of subnet "10.0.0.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac1:10.0.0.13                  rac2:10.0.0.14                  passed
Result: TCP connectivity check passed for subnet "10.0.0.0"

Check: Node connectivity of subnet "192.168.122.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac2[192.168.122.1]             rac1[192.168.122.1]             yes
Result: Node connectivity passed for subnet "192.168.122.0"

Check: TCP connectivity of subnet "192.168.122.0"

Source                          Destination                     Connected?
------------------------------  ------------------------------  ----------------
rac1:192.168.122.1              rac2:192.168.122.1              failed

ERROR:
PRVF-7617 : Node connectivity between "rac1 : 192.168.122.1" and "rac2 : 192.168.122.1" failed
Result: TCP connectivity check failed for subnet "192.168.122.0"

Interfaces found on subnet "192.168.64.0" that are likely candidates for VIP are:
rac2 eth0:192.168.77.14
rac1 eth0:192.168.77.13

Interfaces found on subnet "10.0.0.0" that are likely candidates for a private interconnect are:
rac2 eth1:10.0.0.14
rac1 eth1:10.0.0.13

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.64.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed for subnet "192.168.122.0".
Subnet mask consistency check passed.

Result: Node connectivity check failed

Checking multicast communication...

Checking subnet "192.168.64.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.64.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "192.168.122.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.122.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.

Node Name                             Status
------------------------------------  ------------------------
rac2                                  passed
rac1                                  passed
Result: Check for ASMLib configuration passed.

Check: Total memory

Node Name  Available               Required              Status
---------  ----------------------  --------------------  ------
rac2       3.867GB (4054836.0KB)   1.5GB (1572864.0KB)   passed
rac1       3.867GB (4054836.0KB)   1.5GB (1572864.0KB)   passed
Result: Total memory check passed

Check: Available memory

Node Name  Available               Required              Status
---------  ----------------------  --------------------  ------
rac2       3.4717GB (3640296.0KB)  50MB (51200.0KB)      passed
rac1       3.5183GB (3689256.0KB)  50MB (51200.0KB)      passed
Result: Available memory check passed

Check: Swap space

Node Name  Available               Required               Status
---------  ----------------------  ---------------------  ------
rac2       8GB (8388600.0KB)       3.867GB (4054836.0KB)  passed
rac1       8GB (8388600.0KB)       3.867GB (4054836.0KB)  passed
Result: Swap space check passed

Check: Free disk space for "rac2:/tmp"

Path   Node Name  Mount point  Available  Required  Status
-----  ---------  -----------  ---------  --------  ------
/tmp   rac2       /tmp         2.4658GB   1GB       passed

Result: Free disk space check passed for "rac2:/tmp"

Check: Free disk space for "rac1:/tmp"

Path   Node Name  Mount point  Available  Required  Status
-----  ---------  -----------  ---------  --------  ------
/tmp   rac1       /tmp         1.4463GB   1GB       passed

Result: Free disk space check passed for "rac1:/tmp"

Check: User existence for "oracle"

Node Name  Status   Comment
---------  -------  -----------
rac2       passed   exists(200)
rac1       passed   exists(200)

Checking for multiple users with UID value 200
Result: Check for multiple users with UID value 200 passed
Result: User existence check passed for "oracle"

Check: Group existence for "oinstall"

Node Name  Status   Comment
---------  -------  -------
rac2       passed   exists
rac1       passed   exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"

Node Name  Status   Comment
---------  -------  -------
rac2       passed   exists
rac1       passed   exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]

Node Name  User Exists  Group Exists  User in Group  Primary  Status
---------  -----------  ------------  -------------  -------  ------
rac2       yes          yes           yes            yes      passed
rac1       yes          yes           yes            yes      passed

Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"

Node Name  User Exists  Group Exists  User in Group  Status
---------  -----------  ------------  -------------  ------
rac2       yes          yes           yes            passed
rac1       yes          yes           yes            passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level

Node Name  run level  Required  Status
---------  ---------  --------  ------
rac2       5          3,5       passed
rac1       5          3,5       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"

Node Name  Type  Available  Required  Status
---------  ----  ---------  --------  ------
rac2       hard  65536      65536     passed
rac1       hard  65536      65536     passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"

Node Name  Type  Available  Required  Status
---------  ----  ---------  --------  ------
rac2       soft  1024       1024      passed
rac1       soft  1024       1024      passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"

Node Name  Type  Available  Required  Status
---------  ----  ---------  --------  ------
rac2       hard  16384      16384     passed
rac1       hard  16384      16384     passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"

Node Name  Type  Available  Required  Status
---------  ----  ---------  --------  ------
rac2       soft  2047       2047      passed
rac1       soft  2047       2047      passed
Result: Soft limits check passed for "maximum user processes"

Check: Package existence for "libaio-devel(x86_64)"

Node Name  Available                            Required                      Status
---------  -----------------------------------  ----------------------------  ------
rac2       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.105  passed
rac1       libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.105  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Check: Package existence for "libgcc(x86_64)"

Node Name  Available                   Required              Status
---------  --------------------------  --------------------  ------
rac2       libgcc(x86_64)-4.4.7-4.el6  libgcc(x86_64)-3.4.6  passed
rac1       libgcc(x86_64)-4.4.7-4.el6  libgcc(x86_64)-3.4.6  passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"

Node Name  Available                      Required                 Status
---------  -----------------------------  -----------------------  ------
rac2       libstdc++(x86_64)-4.4.7-4.el6  libstdc++(x86_64)-3.4.6  passed
rac1       libstdc++(x86_64)-4.4.7-4.el6  libstdc++(x86_64)-3.4.6  passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"

Node Name  Available                            Required                       Status
---------  -----------------------------------  -----------------------------  ------
rac2       libstdc++-devel(x86_64)-4.4.7-4.el6  libstdc++-devel(x86_64)-3.4.6  passed
rac1       libstdc++-devel(x86_64)-4.4.7-4.el6  libstdc++-devel(x86_64)-3.4.6  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"

Node Name  Available             Required       Status
---------  --------------------  -------------  ------
rac2       sysstat-9.0.4-22.el6  sysstat-5.0.5  passed
rac1       sysstat-9.0.4-22.el6  sysstat-5.0.5  passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "pdksh"

Node Name  Available  Required      Status
---------  ---------  ------------  ------
rac2       missing    pdksh-5.2.14  failed
rac1       missing    pdksh-5.2.14  failed
Result: Package existence check failed for "pdksh"

Check: Package existence for "expat(x86_64)"

Node Name  Available                     Required               Status
---------  ----------------------------  ---------------------  ------
rac2       expat(x86_64)-2.0.1-11.el6_2  expat(x86_64)-1.95.7   passed
rac1       expat(x86_64)-2.0.1-11.el6_2  expat(x86_64)-1.95.7   passed
Result: Package existence check passed for "expat(x86_64)"

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

Check: Current group ID

Result: Current group ID check passed

Starting check for consistency of primary group of root user

Node Name                             Status
------------------------------------  ------------------------
rac2                                  passed
rac1                                  passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...

Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes

No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group

Node Name  Status   Comment
---------  -------  --------------
rac2       passed   does not exist
rac1       passed   does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask

Node Name  Available  Required  Comment
---------  ---------  --------  -------
rac2       0022       0022      passed
rac1       0022       0022      passed

Result: Default user file creation mask check passed

Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined...
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking DNS response time for an unreachable node

Node Name                             Status
------------------------------------  ------------------------
rac2                                  failed
rac1                                  failed

PRVF-5637 : DNS response time could not be checked on following nodes: rac2,rac1

File "/etc/resolv.conf" is not consistent across nodes

Check: Time zone consistency

Result: Time zone consistency check passed

Pre-check for cluster services setup was unsuccessful on all the nodes.

Check result: the following required package is not installed; the installation steps follow.
pdksh-5.2.14

2. Install the required packages

The following must be executed on both nodes; rac1 is shown as the example.

If installation via yum fails, the packages can be installed from the installation DVD; if a package truly cannot be installed, it may be skipped.

(1) Switch to the root user and configure yum; installing with yum resolves dependencies automatically.

[root@rac1 ~]# yum install libaio* -y
[root@rac1 ~]# yum install pdksh-5.2.14
Loaded plugins: aliases, changelog, downloadonly, fastestmirror, kabi, presto, refresh-packagekit, security, tmprepo,
              : verify, versionlock
Loading support for CentOS kernel ABI
Determining fastest mirrors
 * base: mirrors.btte.net
 * extras: mirrors.163.com
 * updates: mirrors.btte.net
base                  | 3.7 kB     00:00
base/primary_db       | 4.6 MB     00:00
extras                | 3.4 kB     00:00
extras/primary_db     |  34 kB     00:00
updates               | 3.4 kB     00:00
updates/primary_db    | 4.0 MB     00:00
Setting up Install Process
No package pdksh-5.2.14 available.
Error: Nothing to do

pdksh is not shipped in the CentOS 6 repositories; installing ksh instead (yum -y install ksh) provides a working /bin/ksh, and the pdksh failure in the pre-check can then be ignored.

VII. Install the Cluster Software

The following needs to be done on one node only, rac1 in this example; any exceptions will be called out.

1. Install the Grid software

(1) Switch to the grid user and, in the directory containing the Grid installation package, run ./runInstaller; the graphical installer appears after a short wait.

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ export DISPLAY=192.168.13.216:0.0
[grid@rac1 ~]$ cd /soft/grid/
[grid@rac1 grid]$ ls
doc  install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@rac1 grid]$ ./runInstaller
Starting Oracle Universal Installer...

(2) The installation begins.

(3) Choose the first option and click Next.

(4) Choose the advanced installation and click Next.

(5) Accept the default language; Next.

(6) Change the SCAN name and untick GNS; Next.

(7) Add the nodes; the node names and VIP names must match the entries in /etc/hosts.

(8) Verify connectivity: enter the password and click Setup.

(9) Set the network interface roles: change eth1 to the private network; Next.

(10) Use ASM for storage management; Next.

(11) Create the disk group: change the discovery path, select the disks, and click Next.

(12) Set the ASM passwords, here uniformly "beijing"; a warning that the password does not meet the complexity rules can be ignored.

(13) Do not use IPMI; Next.

(14) Accept the default operating system groups; Next.

(15) Specify the installation paths; they must match what is set in .bash_profile, and any mismatch should be corrected by hand. If unsure, check the .bash_profile file.

(16) Create the inventory directory; accept the default and click Next.

(17) The pre-installation checks run; if they pass, the installer moves on automatically, otherwise the problems are listed. Since the checks were already run from the command line beforehand, no errors are expected here.

(18) The following warnings can be ignored.

(19) The summary screen; click Finish to start the installation.

(20) Near the end of the installation a dialog appears asking for the corresponding scripts to be run.

(21) Run the scripts as root; script execution must not fail! The order is as follows:

(22) Script execution order:

Run the first script on rac1 first, then run it on rac2:

[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh

Then run the second script on rac1, and after it finishes, on rac2:

[root@rac1 ~]# /u01/grid/root.sh

Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
