
Veritas Cluster Server (VCS) HOWTO:

===================================

$Id: VCS-HOWTO,v 1.25 2002/09/30 20:05:38 pzi Exp $

Copyright (c) Peter Ziobrzynski, pzi@http://www.77cn.com.cn

Contents:

---------

- Copyright

- Thanks

- Overview

- VCS installation

- Summary of cluster queries

- Summary of basic cluster operations

- Changing cluster configuration

- Configuration of a test group and test resource type

- Installation of a test agent for a test resource

- Home directories service group configuration

- NIS service groups configuration

- Time synchronization services

- ClearCase configuration

Copyright:

----------

This HOWTO document may be reproduced and distributed in whole or in

part, in any medium physical or electronic, as long as this copyright

notice is retained on all copies. Commercial redistribution is allowed

and encouraged; however, the author would like to be notified of any

such distributions.

All translations, derivative works, or aggregate works incorporating
this HOWTO document must be covered under this copyright notice.

That is, you may not produce a derivative work from a HOWTO and impose

additional restrictions on its distribution. Exceptions to these rules

may be granted under certain conditions.

In short, I wish to promote dissemination of this information through

as many channels as possible. However, I do wish to retain copyright

on this HOWTO document, and would like to be notified of any plans to

redistribute the HOWTO.

If you have questions, please contact me: Peter Ziobrzynski

<pzi@http://www.77cn.com.cn>

Thanks:

-------

- Veritas Software provided numerous consultations that led to the
cluster configuration described in this document.

- Parts of this document are based on the work I have done for

Kestrel Solutions, Inc.

- Basis Inc. for assisting in selecting hardware components and for
help in resolving installation problems.

- comp.sys.sun.admin Usenet community.

Overview:

---------

This document describes the configuration of a two or more node Solaris
cluster using Veritas Cluster Server VCS 1.1.2 on Solaris 2.6. A number
of standard UNIX services are configured as cluster service groups:
user home directories, NIS naming services and time synchronization (NTP).
In addition, a popular Software Configuration Management system from
Rational - ClearCase - is configured as a set of cluster service groups.

Configuring the various software components as cluster service groups
allows for high availability of the application as well as load balancing
(fail-over or switch-over). Besides that, the cluster configuration allows
you to free a node in the network for upgrades, testing or reconfiguration
and then bring it back into service very quickly with little or no
additional work.

- Cluster topology.

The cluster topology used here is called clustered pairs. Two nodes
share a disk on a single shared SCSI bus. Both computers and the disk
are connected in a chain on the SCSI bus. Either differential or fast-wide
SCSI buses can be used. Each SCSI host adapter in each node is assigned
a different SCSI id (called the initiator id) so that both computers can
coexist on the same bus.

+ Two Node Cluster with single disk:

    Node    Node
     |       /
     |      /
     |     /
     |    /
     |   /
     |  /
     | /
     |/
    Disk

A single shared disk can be replaced by two disks each on its private

SCSI bus connecting both cluster nodes. This allows for disk mirroring

across disks and SCSI buses.

Note: the disk here can be understood as a disk array or a disk pack.

+ Two Node Cluster with disk pair:

    Node    Node
    |\      /|
    | \    / |
    |  \  /  |
    |  /  \  |
    | /    \ |
    |/      \|
    Disk    Disk

A single pair can be extended by chaining an additional node and connecting
it to the pair with additional disks and SCSI buses. One or more nodes
can be added, creating an N node configuration. The perimeter nodes have
two SCSI host adapters while the middle nodes have four.

+ Three Node Cluster:

    Node     Node     Node
    |\      /|  |\      /|
    | \    / |  | \    / |
    |  \  /  |  |  \  /  |
    |  /  \  |  |  /  \  |
    | /    \ |  | /    \ |
    |/      \|  |/      \|
    Disk   Disk Disk   Disk

+ N Node Cluster:

    Node     Node     Node     Node
    |\      /|  |\      /|\      /|
    | \    / |  | \    / | \    / |
    |  \  /  |  |  \  /  |..\  /  |
    |  /  \  |  |  /  \  |  /  \  |
    | /    \ |  | /    \ | /    \ |
    |/      \|  |/      \|/      \|
    Disk   Disk Disk   Disk Disk  Disk

- Disk configuration.

Management of the shared storage of the cluster is performed with the
Veritas Volume Manager (VM). The VM controls which disks on the shared
SCSI bus are assigned to (owned by) which system. In Volume Manager disks
are grouped into disk groups, and a disk group as a whole can be assigned
for access from one of the systems. The assignment can be changed quickly,
allowing for cluster fail-over or switch-over. Disks that compose a disk
group can be scattered across multiple disk enclosures (packs, arrays)
and SCSI buses. We used this feature to create disk groups that contain
VM volumes mirrored across devices. Below is a schematic of 3 cluster
nodes connected by SCSI buses to 4 disk packs (we use Sun MultiPacks).
Node 0 is connected to Disk Pack 0 and Node 1 on one SCSI bus and
to Disk Pack 1 and Node 1 on a second SCSI bus. Disks 0 in Packs 0 and 1
are put into Disk group 0, disks 1 in Packs 0 and 1 are put into Disk
group 1, and so on for all the disks in the Packs. We have 4 9 GB disks
in each Pack, so we have 4 disk groups between Nodes 0 and 1 that can be
switched from one node to the other.

Node 1 interfaces with Node 2 in the same way as with Node 0. Two disk
packs, Pack 2 and Pack 3, are configured with disk groups 4, 5,
6 and 7 as shared storage between the nodes. We have a total of 8 disk
groups in the cluster. Groups 0-3 can be visible from Node 0 or 1 and
groups 4-7 from Node 1 or 2. Node 1 is in a privileged situation and
can access all disk groups.

Node 0            Node 1            Node 2    ...    Node N
------            ------            ------           ------
  |\              /|  |\              /|
  | \            / |  | \            / |
  |  \          /  |  |  \          /  |
  |   \        /   |  |   \        /   |
  |    \      /    |  |    \      /    |
  |     \    /     |  |     \    /     |
  |      \  /      |  |      \  /      |
  |       \/       |  |       \/       |
  |       /\       |  |       /\       |
  |      /  \      |  |      /  \      |
  |     /    \     |  |     /    \     |
  |    /      \    |  |    /      \    |
  |   /        \   |  |   /        \   |
  |  /          \  |  |  /          \  |
  | /            \ |  | /            \ |
  |/              \|  |/              \|

Disk Pack 0:  Disk Pack 1:       Disk Pack 2:  Disk Pack 3:

Disk group 0:                    Disk group 4:
+----------------------+         +------------------------+
|  Disk0      Disk0    |         |  Disk0        Disk0    |
+----------------------+         +------------------------+
Disk group 1:                    Disk group 5:
+----------------------+         +------------------------+
|  Disk1      Disk1    |         |  Disk1        Disk1    |
+----------------------+         +------------------------+
Disk group 2:                    Disk group 6:
+----------------------+         +------------------------+
|  Disk2      Disk2    |         |  Disk2        Disk2    |
+----------------------+         +------------------------+
Disk group 3:                    Disk group 7:
+----------------------+         +------------------------+
|  Disk3      Disk3    |         |  Disk3        Disk3    |
+----------------------+         +------------------------+

- Hardware details:

Below is a detailed listing of the hardware configuration of two

nodes. Sun part numbers are included so you can order them directly
from SunStore and put it on your Visa:

- E250:

+ Base: A26-AA

+ 2xCPU: X1194A

+ 2x256MB RAM: X7004A,

+ 4xUltraSCSI 9.1GB hard drive: X5234A

+ 100BaseT Fast/Wide UltraSCSI PCI adapter: X1032A

+ Quad Fastethernet controller PCI adapter: X1034A

- MultiPack:

+ 4x9.1GB 10000RPM disk

+ StorEdge MultiPack: SG-XDSK040C-36G

- Connections:


+ SCSI:

E250: E250:

X1032A-------SCSI----->Multipack<----SCSI---X1032A

X1032A-------SCSI----->Multipack<----SCSI---X1032A

+ VCS private LAN 0:

hme0----------Ethernet--->HUB<---Ethernet---hme0

+ VCS private LAN 1:

X1034A(qfe0)--Ethernet--->HUB<---Ethernet---X1034A(qfe0)

+ Cluster private LAN:

X1034A(qfe1)--Ethernet--->HUB<---Ethernet---X1034A(qfe1)

+ Public LAN:

X1034A(qfe2)--Ethernet--->HUB<---Ethernet---X1034A(qfe2)

Installation of VCS-1.1.2

----------------------------

Two systems are put into the cluster: foo_c and bar_c

- Set the scsi-initiator-id boot PROM environment variable to 5 on one
of the systems (say bar_c):

ok setenv scsi-initiator-id 5

ok boot -r
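You can double-check the setting after the reconfiguration boot with the
standard Solaris eeprom utility (the other node keeps the default of 7):

# eeprom scsi-initiator-id
scsi-initiator-id=5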

- Install Veritas Foundation Suite 3.0.1.

Follow Veritas manuals.

- Add entries to your c-shell environment:

set veritas = /opt/VRTSvmsa

setenv VMSAHOME $veritas

setenv MANPATH ${MANPATH}:$veritas/man

set path = ( $path $veritas/bin )
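If you use a Bourne or Korn shell instead of csh, the equivalent profile
entries would look roughly like this (untested sketch):

VMSAHOME=/opt/VRTSvmsa
MANPATH=$MANPATH:$VMSAHOME/man
PATH=$PATH:$VMSAHOME/bin
export VMSAHOME MANPATH PATH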

- Configure the Ethernet connections to use hme0 and qfe0 as the Cluster
private interconnects. Do not create /etc/hostname.{hme0,qfe0}.
Configure qfe2 as the public LAN network and qfe1 as the main Cluster
private network. The configuration files on foo_c:

/etc/hosts:

127.0.0.1 localhost

# public network (192.168.0.0/16):

192.168.1.40 bar

192.168.1.51 foo

# Cluster private network (network address 10.2.0.0/16):

10.2.0.1 bar_c

10.2.0.3 foo_c loghost


/etc/hostname.qfe1:

foo_c

/etc/hostname.qfe2:

foo

The configuration files on bar_c:

/etc/hosts:

127.0.0.1 localhost

# Public network (192.168.0.0/16):

192.168.1.40 bar

192.168.1.51 foo

# Cluster private network (network address 10.2.0.0/16):

10.2.0.1 bar_c loghost

10.2.0.3 foo_c

/etc/hostname.qfe1:

bar_c

/etc/hostname.qfe2:

bar
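Before going further it is worth checking that name resolution and both
networks actually work. From foo_c:

# ping bar_c

# ping bar

and from bar_c:

# ping foo_c

# ping foo

Each command should report that the host is alive.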

- Configure at least two VM diskgroups on shared storage (Multipacks)

working from one of the systems (e.g. foo_c):

+ Create cluster disk groups spanning both Multipacks

using vxdiskadm '1. Add or initialize one or more disks':

cluster1: c1t1d0 c2t1d0

cluster2: c1t2d0 c2t2d0

...

Name the vmdisks like this:

cluster1: cluster101 cluster102

cluster2: cluster201 cluster202

...

You can do it for 4 disk groups with this script:

#!/bin/sh
for group in 1 2 3 4;do
    vxdisksetup -i c1t${group}d0
    vxdisksetup -i c2t${group}d0
    vxdg init cluster${group} cluster${group}01=c1t${group}d0
    vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
done

+ Create volumes in each group mirrored across both Multipacks.
You can do it for 4 disk groups with this script:

#!/bin/sh
for group in 1 2 3 4;do
    vxassist -b -g cluster${group} make vol01 8g layout=mirror cluster${group}01 cluster${group}02
done

+ Or do all disk groups and volumes in one script:

#!/bin/sh
for group in 1 2 3 4;do
    vxdisksetup -i c1t${group}d0
    vxdisksetup -i c2t${group}d0
    vxdg init cluster${group} cluster${group}01=c1t${group}d0
    vxdg -g cluster${group} adddisk cluster${group}02=c2t${group}d0
    vxassist -b -g cluster${group} make vol01 8g layout=mirror cluster${group}01 cluster${group}02
done

+ Create Veritas file systems on the volumes:

#!/bin/sh

for group in 1 2 3 4;do

mkfs -F vxfs /dev/vx/rdsk/cluster$group/vol01

done
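At this point you can confirm that each volume really has a plex on each
Multipack by listing the disk group configuration, e.g. for the first
group:

# vxprint -g cluster1 -ht

The output should show vol01 with two plexes, one on cluster101 and one
on cluster102.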

+ Deport a group from one system: stop the volume, then deport the group:

# vxvol -g cluster2 stop vol01

# vxdg deport cluster2

+ Import a group and start its volume on the other system to

see if this works:

# vxdg import cluster2

# vxrecover -g cluster2 -sb

- With the shared storage configured it is important to know how to
manually move the volumes from one node of the cluster to the other.
I use a cmount command to do that. It is like an rc script with an
additional argument for the disk group.

To stop (deport) the group 1 on a node do:

# cmount 1 stop

To start (import) the group 1 on the other node do:

# cmount 1 start

The cmount script is as follows:

#!/bin/sh

set -x

group=$1

case $2 in

start)


vxdg import cluster$group

vxrecover -g cluster$group -sb

mount -F vxfs /dev/vx/dsk/cluster$group/vol01 /cluster$group

;;

stop)

umount /cluster$group

vxvol -g cluster$group stop vol01

vxdg deport cluster$group

;;

esac
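The script assumes that the mount points /cluster1 ... /cluster4 already
exist on every node; if they do not, create them once on each system:

# for group in 1 2 3 4; do mkdir -p /cluster$group; done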

- To remove all shared storage volumes and groups do:

#!/bin/sh

for group in 1 2 3 4; do

vxvol -g cluster$group stop vol01

vxdg destroy cluster$group

done

- Install VCS software:

(from install server on athena)

# cd /net/athena/export/arch/VCS-1.1.2/vcs_1_1_2a_solaris

# pkgadd -d . VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp

+ correct the /etc/rc?.d scripts to be links:
If they are not symbolic links then it is hard to disable VCS
startup at boot. If they are links, you can just rename /etc/init.d/vcs
to stop it from starting and stopping at boot.

cd /etc

rm rc0.d/K10vcs rc3.d/S99vcs

cd rc0.d

ln -s ../init.d/vcs K10vcs

cd ../rc3.d

ln -s ../init.d/vcs S99vcs
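With the symbolic links in place you can disable VCS at boot by simply
renaming the real script, for example:

# mv /etc/init.d/vcs /etc/init.d/vcs.off

and rename it back when you want it started at boot again.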

+ add -evacuate option to /etc/init.d/vcs:

This is optional but I find it important to switch over
all service groups from the node that is being shut down.
When I take a cluster node down I expect the rest of the
cluster to pick up the responsibility to run all services.
The default VCS does not do that. The only way to move a
group from one node to another is to crash it or do a manual
switch-over using the hagrp command.

'stop')

$HASTOP -local -evacuate > /dev/null 2>&1

;;

- Add entry to your c-shell environment:

set vcs = /opt/VRTSvcs


setenv MANPATH ${MANPATH}:$vcs/man

set path = ( $vcs/bin $path )

- To remove the VCS software:

NOTE: required if demo installation fails.

# sh /opt/VRTSvcs/wizards/config/quick_start -b

# rsh bar_c 'sh /opt/VRTSvcs/wizards/config/quick_start -b'

# pkgrm VRTScsga VRTSgab VRTSllt VRTSperl VRTSvcs VRTSvcswz clsp

# rm -rf /etc/VRTSvcs /var/VRTSvcs

# init 6

- Configure /.rhosts on both nodes to allow each node transparent rsh

root access to the other:

/.rhosts:

foo_c

bar_c

- Run quick start script from one of the nodes:

NOTE: must run from /usr/openwin/bin/xterm - other xterms cause
terminal emulation problems

# /usr/openwin/bin/xterm &

# sh /opt/VRTSvcs/wizards/config/quick_start

Select hme0 and qfe0 network links for GAB and LLT connections.

The script will ask twice for the link interface names. Link 1 is hme0
and link 2 is qfe0 for both the foo_c and bar_c nodes.

You should see the heartbeat pings on the interconnection hubs.

The wizard creates the LLT and GAB configuration files /etc/llttab,
/etc/gabtab and /etc/llthosts on each system:

On foo_c:

/etc/llttab:

set-node foo_c

link hme0 /dev/hme:0

link qfe1 /dev/qfe:1

start

On bar_c:

/etc/llttab:

set-node bar_c

link hme0 /dev/hme:0

link qfe1 /dev/qfe:1

start

/etc/gabtab:


/sbin/gabconfig -c -n2

/etc/llthosts:

0 foo_c

1 bar_c

The LLT and GAB communication is started by the rc scripts S70llt and
S92gab installed in /etc/rc2.d.

- We can also configure the private interconnect by hand by creating the
above files ourselves.

- Check basic installation:

+ status of the gab:

# gabconfig -a

GAB Port Memberships

===============================================================

Port a gen 1e4c0001 membership 01

Port h gen dd080001 membership 01

+ status of the link:

# lltstat -n

LLT node information:

Node State Links

* 0 foo_c OPEN 2

1 bar_c OPEN 2

+ node parameters:

# hasys -display
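+ verbose link status (lltstat also accepts -nvv, which gives per-link
detail and is handy when one of the interconnects is suspect):

# lltstat -nvv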

- Set/update VCS super user password:

+ add root user:

# haconf -makerw

# hauser -add root

password:...

# haconf -dump -makero

+ change root password:

# haconf -makerw

# hauser -update root

password:...

# haconf -dump -makero

- Configure demo NFS service groups:


NOTE: You have to fix the VCS wizards first: the wizard perl scripts
have a bug that makes them core dump in the middle of filling out the
configuration forms. The solution is to provide a shell wrapper for one
binary and avoid running it with a specific set of parameters. Do the
following in VCS-1.1.2:

# cd /opt/VRTSvcs/bin

# mkdir tmp

# mv iou tmp

# cat << 'EOF' > iou
#!/bin/sh
echo "[$@]" >> /tmp/,.iou.log
case "$@" in
'-c 20 9 -g 2 2 3 -l 0 3') echo "skip bug" >> /tmp/,.iou.log ;;
*) /opt/VRTSvcs/bin/tmp/iou "$@" ;;
esac
EOF
# chmod 755 iou

(The here-document delimiter is quoted so that the $@ references are
written to the wrapper literally instead of being expanded by your
interactive shell.)

+ Create NFS mount point directories on both systems:

# mkdir /export1 /export2

+ Run the wizard on foo_c node:

NOTE: must run from /usr/openwin/bin/xterm - other xterms cause

terminal emulation problems

# /usr/openwin/bin/xterm &

# sh /opt/VRTSvcs/wizards/services/quick_nfs

Select for groupx:

- public network device: qfe2

- group name: groupx

- IP: 192.168.1.53

- VM disk group: cluster1

- volume: vol01

- mount point: /export1

- options: rw

- file system: vxfs

Select for groupy:

- public network device: qfe2

- group name: groupy

- IP: 192.168.1.54

- VM disk group: cluster2

- volume: vol01

- mount point: /export2

- options: rw

- file system: vxfs

You should see: Congratulations!...

The /etc/VRTSvcs/conf/config directory should have main.cf and

types.cf files configured.


+ Reboot both systems:

# init 6
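Once both nodes are back up and hastatus shows the groups online, the
demo NFS exports can be checked from any client on the public LAN using
the service IP addresses configured above, for example:

client# mount -F nfs 192.168.1.53:/export1 /mnt
client# ls /mnt
client# umount /mnt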

Summary of cluster queries:

----------------------------

- Cluster queries:

+ list cluster status summary:

# hastatus -summary

-- SYSTEM STATE
-- System               State                Frozen

A  foo_c                RUNNING              0
A  bar_c                RUNNING              0

-- GROUP STATE
-- Group           System           Probed     AutoDisabled    State

B  groupx          foo_c            Y          N               ONLINE
B  groupx          bar_c            Y          N               OFFLINE
B  groupy          foo_c            Y          N               OFFLINE
B  groupy          bar_c            Y          N               ONLINE

+ list cluster attributes:

# haclus -display

#Attribute Value

ClusterName my_vcs

CompareRSM 0

CounterInterval 5

DumpingMembership 0

Factor runque 5 memory 1 disk 10 cpu 25 network 5

GlobalCounter 16862

GroupLimit 200

LinkMonitoring 0

LoadSampling 0

LogSize 33554432

MajorVersion 1

MaxFactor runque 100 memory 10 disk 100 cpu 100 network 100

MinorVersion 10

PrintMsg 0

ReadOnly 1

ResourceLimit 5000

SourceFile ./main.cf

TypeLimit 100

UserNames root cDgqS68RlRP4k


- Resource queries:

+ list resources:

# hares -list

cluster1 foo_c

cluster1 bar_c

IP_192_168_1_53 foo_c

IP_192_168_1_53 bar_c

...

+ list resource dependencies:

# hares -dep

#Group Parent Child

groupx IP_192_168_1_53 groupx_qfe1

groupx IP_192_168_1_53 nfs_export1

groupx export1 cluster1_vol01

groupx nfs_export1 NFS_groupx_16

groupx nfs_export1 export1

groupx cluster1_vol01 cluster1

groupy IP_192_168_1_54 groupy_qfe1

groupy IP_192_168_1_54 nfs_export2

groupy export2 cluster2_vol01

groupy nfs_export2 NFS_groupy_16

groupy nfs_export2 export2

groupy cluster2_vol01 cluster2

+ list attributes of a resource:

# hares -display export1

#Resource Attribute System Value

export1 ConfidenceLevel foo_c 100

export1 ConfidenceLevel bar_c 0

export1 Probed foo_c 1

export1 Probed bar_c 1

export1 State foo_c ONLINE

export1 State bar_c OFFLINE

export1 ArgListValues foo_c /export1 /dev/vx/dsk/cluster1/vol01 vxfs rw ""

...

- Groups queries:

+ list groups:

# hagrp -list

groupx foo_c

groupx bar_c

groupy foo_c

groupy bar_c

+ list group resources:


# hagrp -resources groupx

cluster1

IP_192_168_1_53

export1

NFS_groupx_16

groupx_qfe1

nfs_export1

cluster1_vol01

+ list group dependencies:

# hagrp -dep groupx

+ list of group attributes:

# hagrp -display groupx

#Group Attribute System Value

groupx AutoFailOver global 1

groupx AutoStart global 1

groupx AutoStartList global foo_c

groupx FailOverPolicy global Priority

groupx Frozen global 0

groupx IntentOnline global 1

groupx ManualOps global 1

groupx OnlineRetryInterval global 0

groupx OnlineRetryLimit global 0

groupx Parallel global 0

groupx PreOnline global 0

groupx PrintTree global 1

groupx SourceFile global ./main.cf

groupx SystemList global foo_c 0 bar_c 1
groupx SystemZones global

groupx TFrozen global 0

groupx TriggerEvent global 1

groupx UserIntGlobal global 0

groupx UserStrGlobal global

groupx AutoDisabled foo_c 0

groupx AutoDisabled bar_c 0

groupx Enabled foo_c 1

groupx Enabled bar_c 1

groupx ProbesPending foo_c 0

groupx ProbesPending bar_c 0

groupx State foo_c |ONLINE|

groupx State bar_c |OFFLINE|

groupx UserIntLocal foo_c 0

groupx UserIntLocal bar_c 0

groupx UserStrLocal foo_c

groupx UserStrLocal bar_c

- Node queries:

+ list nodes in the cluster:

# hasys -list


foo_c

bar_c

+ list node attributes:

# hasys -display bar_c

#System Attribute Value

bar_c AgentsStopped 1

bar_c ConfigBlockCount 54

bar_c ConfigCheckSum 48400

bar_c ConfigDiskState CURRENT

bar_c ConfigFile /etc/VRTSvcs/conf/config
bar_c ConfigInfoCnt 0
bar_c ConfigModDate Wed Mar 29 13:46:19 2000
bar_c DiskHbDown

bar_c Frozen 0

bar_c GUIIPAddr

bar_c LinkHbDown

bar_c Load 0

bar_c LoadRaw runque 0 memory 0 disk 0 cpu 0 network 0

bar_c MajorVersion 1

bar_c MinorVersion 10

bar_c NodeId 1

bar_c OnGrpCnt 1

bar_c SourceFile ./main.cf

bar_c SysName bar_c

bar_c SysState RUNNING

bar_c TFrozen 0

bar_c UserInt 0

bar_c UserStr

- Resource types queries:

+ list resource types:

# hatype -list

CLARiiON

Disk

DiskGroup

ElifNone

FileNone

FileOnOff

FileOnOnly

IP

IPMultiNIC

Mount

MultiNICA

NFS

NIC

Phantom

Process

Proxy

ServiceGroupHB

Share

Volume

+ list all resources of a given type:


# hatype -resources DiskGroup

cluster1

cluster2

+ list attributes of the given type:

# hatype -display IP

#Type Attribute Value

IP AgentFailedOn

IP AgentReplyTimeout 130

IP AgentStartTimeout 60

IP ArgList Device Address NetMask Options ArpDelay IfconfigTwice

IP AttrChangedTimeout 60

IP CleanTimeout 60

IP CloseTimeout 60

IP ConfInterval 600

IP LogLevel error

IP MonitorIfOffline 1

IP MonitorInterval 60

IP MonitorTimeout 60

IP NameRule IP_ + resource.Address
IP NumThreads 10

IP OfflineTimeout 300

IP OnlineRetryLimit 0

IP OnlineTimeout 300

IP OnlineWaitLimit 2

IP OpenTimeout 60

IP Operations OnOff

IP RestartLimit 0

IP SourceFile ./types.cf

IP ToleranceLimit 0

- Agents queries:

+ list agents:

# haagent -list

CLARiiON

Disk

DiskGroup

ElifNone

FileNone

FileOnOff

FileOnOnly

IP

IPMultiNIC

Mount

MultiNICA

NFS

NIC

Phantom

Process

Proxy

ServiceGroupHB

Share

Volume

+ list status of an agent:

# haagent -display IP


#Agent Attribute Value

IP AgentFile

IP Faults 0

IP Running Yes

IP Started Yes

Summary of basic cluster operations:

------------------------------------

- Cluster Start/Stop:

+ stop VCS on all systems:

# hastop -all

+ stop VCS on bar_c and move all groups out:

# hastop -sys bar_c -evacuate

+ start VCS on local system:

# hastart

- Users:

+ add gui root user:

# haconf -makerw

# hauser -add root

# haconf -dump -makero

- Group:

+ group start, stop:

# hagrp -offline groupx -sys foo_c

# hagrp -online groupx -sys foo_c

+ switch a group to other system:

# hagrp -switch groupx -to bar_c

+ freeze a group:

# hagrp -freeze groupx

+ unfreeze a group:

# hagrp -unfreeze groupx

+ enable a group:

# hagrp -enable groupx

+ disable a group:

# hagrp -disable groupx

+ enable resources of a group:

# hagrp -enableresources groupx

+ disable resources of a group:

# hagrp -disableresources groupx

+ flush a group:

# hagrp -flush groupx -sys bar_c

- Node:


+ freeze node:

# hasys -freeze bar_c

+ thaw node:

# hasys -unfreeze bar_c

- Resources:

+ online a resource:

# hares -online IP_192_168_1_54 -sys bar_c

+ offline a resource:

# hares -offline IP_192_168_1_54 -sys bar_c

+ offline a resource and propagate to its children:

# hares -offprop IP_192_168_1_54 -sys bar_c

+ probe a resource:

# hares -probe IP_192_168_1_54 -sys bar_c

+ clear faulted resource:

# hares -clear IP_192_168_1_54 -sys bar_c

- Agents:

+ start agent:

# haagent -start IP -sys bar_c

+ stop agent:

# haagent -stop IP -sys bar_c

- Reboot a node with evacuation of all service groups
(groupy is running on bar_c):

# hastop -sys bar_c -evacuate

# init 6

Once bar_c is back up, move groupy back to it:

# hagrp -switch groupy -to bar_c

Changing cluster configuration:

--------------------------------

You cannot edit the configuration files directly while the
cluster is running. This can be done only if the cluster is down.

The configuration files are in: /etc/VRTSvcs/conf/config

To change the configuration you can:

+ use hagui

+ stop the cluster (hastop), edit main.cf and types.cf directly, regenerate main.cmd (hacf -generate .) and start the cluster (hastart)

+ use the following command line based procedure on a running cluster


To change the cluster while it is running do this:

- Dump current cluster configuration to files and generate main.cmd file:

# haconf -dump

# hacf -generate .

# hacf -verify .

- Create new configuration directory:

# mkdir -p ../new

- Copy existing *.cf files in there:

# cp main.cf types.cf ../new

- Add new stuff to it:

# vi main.cf types.cf

- Regenerate the main.cmd file with low level commands:

# cd ../new

# hacf -generate .

# hacf -verify .

- Catch the diffs:

# diff ../config/main.cmd main.cmd > ,.cmd

- Prepend this to the top of the file to make config rw:

# haconf -makerw

- Append the command to make configuration ro:

# haconf -dump -makero

- Apply the diffs you need:

# sh -x ,.cmd
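For a small change you do not need the diff procedure at all; the ha*
commands can modify the running configuration directly. A hypothetical
example that raises the OnlineRetryLimit attribute of groupx:

# haconf -makerw

# hagrp -modify groupx OnlineRetryLimit 3

# haconf -dump -makero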

Cluster logging:

-----------------------------------------------------

VCS logs all activities into /var/VRTSvcs/log directory.

The most important log is the engine log engine.log_A.

Each agent also has its own log file.

The logging parameters can be displayed with halog command:

# halog -info

Log on hades_c:

path = /var/VRTSvcs/log/engine.log_A
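To watch cluster activity live, for example during a switch-over test,
it is usually enough to follow the engine log on one of the nodes:

# tail -f /var/VRTSvcs/log/engine.log_A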
