AIX 7.1: Installing Oracle 11g RAC with ASM

Last updated: 2024-06-07


AIX 7.1, Oracle 11g (11.2.0.3) RAC on ASM.

How to check the rootvg mirror boot list?

# bootlist -o -m normal
hdisk0 blv=hd5 pathid=0
hdisk0 blv=hd5 pathid=1
hdisk1 blv=hd5 pathid=0
hdisk1 blv=hd5 pathid=1
cd0

How to check the number of CPU cores?

# lsdev -Cc processor
proc0  Available 00-00 Processor
proc4  Available 00-04 Processor
proc8  Available 00-08 Processor
proc12 Available 00-12 Processor
proc16 Available 00-16 Processor
proc20 Available 00-20 Processor
proc24 Available 00-24 Processor
proc28 Available 00-28 Processor
proc32 Available 00-32 Processor
proc36 Available 00-36 Processor
proc40 Available 00-40 Processor
proc44 Available 00-44 Processor
proc48 Available 00-48 Processor
proc52 Available 00-52 Processor
proc56 Available 00-56 Processor
proc60 Available 00-60 Processor

# bindprocessor -q
The available processors are: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63

# prtconf | grep Processors
Number Of Processors: 16
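As a sketch of how these numbers relate (assuming the output formats shown above): the physical-processor count from `lsdev` and the logical-CPU count from `bindprocessor -q` together give the SMT level. The sample strings below are abridged stand-ins for real command output:

```shell
# Abridged stand-ins for `lsdev -Cc processor` and `bindprocessor -q` output.
lsdev_out='proc0 Available 00-00 Processor
proc4 Available 00-04 Processor
proc8 Available 00-08 Processor
proc12 Available 00-12 Processor'
bindq='The available processors are: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15'

# One lsdev line per physical processor (core) device.
physical=$(printf '%s\n' "$lsdev_out" | grep -c '^proc')
# bindprocessor lists every logical CPU after the colon.
logical=$(printf '%s\n' "$bindq" | awk -F': ' '{print split($2, a, " ")}')
# SMT level = logical CPUs per physical processor.
smt=$((logical / physical))
echo "physical=$physical logical=$logical smt=$smt"
```

On the box above, 16 processors running SMT-4 is what produces the 64 logical CPUs reported by `bindprocessor -q`.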

How to check the HBA WWN (World Wide Name)?

# lsdev -Cc adapter -S a | grep fcs

fcs0 Available 04-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 04-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2 Available 05-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs3 Available 05-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)

# lscfg -vpl fcs0

fcs0 U78AA.001.WZSKJYT-P1-C2-T1 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)

        Part Number.................00E0806
        Serial Number...............1A4080061C
        Manufacturer................001A
        EC Level....................D77161
        Customer Card ID Number.....577D
        FRU Number..................00E0806
        Device Specific.(ZM)........3
        Network Address.............10000090FA67C1CA
        ROS Level and ID............027820B7
        Device Specific.(Z0)........31004549
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........09030909
        Device Specific.(Z4)........FF781150
        Device Specific.(Z5)........027820B7
        Device Specific.(Z6)........077320B7
        Device Specific.(Z7)........0B7C20B7
        Device Specific.(Z8)........20000120FA67C1CA
        Device Specific.(Z9)........US2.02X7
        Device Specific.(ZA)........U2D2.02X7
        Device Specific.(ZB)........U3K2.02X7
        Device Specific.(ZC)........00000000
        Hardware Location Code......U78AA.001.WZSKJYT-P1-C2-T1

PLATFORM SPECIFIC

Name: fibre-channel Model: 00E0806

Node: fibre-channel@0 Device Type: fcp

Physical Location: U78AA.001.WZSKJYT-P1-C2-T1

# lscfg -vpl fcs1

fcs1 U78AA.001.WZSKJYT-P1-C2-T2 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)

        Part Number.................00E0806
        Serial Number...............1A4080061C
        Manufacturer................001A
        EC Level....................D77161
        Customer Card ID Number.....577D
        FRU Number..................00E0806
        Device Specific.(ZM)........3
        Network Address.............10000090FA67C1CB
        ROS Level and ID............027820B7
        Device Specific.(Z0)........31004549
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........09030909
        Device Specific.(Z4)........FF781150
        Device Specific.(Z5)........027820B7
        Device Specific.(Z6)........077320B7
        Device Specific.(Z7)........0B7C20B7
        Device Specific.(Z8)........20000120FA67C1CB
        Device Specific.(Z9)........US2.02X7
        Device Specific.(ZA)........U2D2.02X7
        Device Specific.(ZB)........U3K2.02X7
        Device Specific.(ZC)........00000000
        Hardware Location Code......U78AA.001.WZSKJYT-P1-C2-T2

PLATFORM SPECIFIC

Name: fibre-channel Model: 00E0806

Node: fibre-channel@0,1 Device Type: fcp

Physical Location: U78AA.001.WZSKJYT-P1-C2-T2
#
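To pull just the WWPN out of the verbose `lscfg` output above, a sed filter on the dotted "Network Address" field works. A sketch, where the sample string is an abridged stand-in for real `lscfg -vpl fcs0` output:

```shell
# Abridged stand-in for `lscfg -vpl fcs0` output.
lscfg_out='  fcs0  U78AA.001.WZSKJYT-P1-C2-T1  8Gb PCI Express Dual Port FC Adapter
        Network Address.............10000090FA67C1CA
        Device Specific.(Z8)........20000120FA67C1CA'

# The WWPN is the 16 hex digits after the dotted "Network Address" label.
wwpn=$(printf '%s\n' "$lscfg_out" | \
    sed -n 's/^.*Network Address\.\{1,\}\([0-9A-F]\{16\}\).*$/\1/p')
echo "WWPN=$wwpn"
```

On a live host the same filter can be looped over every adapter, e.g. `for a in fcs0 fcs1 fcs2 fcs3; do lscfg -vpl $a | grep 'Network Address'; done`.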

Check the memory size:

# lsattr -El mem0

ent_mem_cap            I/O memory entitlement in Kbytes           False
goodsize        63232  Amount of usable physical memory in Mbytes False
mem_exp_factor         Memory expansion factor                    False
size            63232  Total amount of physical memory in Mbytes  False
var_mem_weight         Variable memory capacity weight            False
#

lsdev -Cc memory

select GROUP_NUMBER, DISK_NUMBER, TOTAL_MB, FREE_MB, PATH from V$ASM_DISK;

GROUP_NUMBER DISK_NUMBER   TOTAL_MB    FREE_MB PATH
------------ ----------- ---------- ---------- -------------------
           1           0     112640     108634 /dev/rhdiskpower1
           1           1     153600     148149 /dev/rhdiskpower10
           1           2     163840     158019 /dev/rhdiskpower11
           1           3     174080     167900 /dev/rhdiskpower12
           1           4     184320     177772 /dev/rhdiskpower13
           1           5     194560     187652 /dev/rhdiskpower14
           1           6     204800     197531 /dev/rhdiskpower15
           1           7     225280     217274 /dev/rhdiskpower16
           1           8     235520     227154 /dev/rhdiskpower17
           1           9     143360     138261 /dev/rhdiskpower2
           1          10     215040     207395 /dev/rhdiskpower3
           2           0       5120       4724 /dev/rhdiskpower5
           1          11     102400      98761 /dev/rhdiskpower7
           1          12     122880     118515 /dev/rhdiskpower8
           1          13     133120     128382 /dev/rhdiskpower9

17 rows selected.
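The per-disk listing above can be rolled up per disk group with a small awk pass over the query output. A sketch, with three sample rows and the column order assumed as printed (group, disk, total_mb, free_mb, path):

```shell
# Three sample rows in V$ASM_DISK column order.
rows='1 0 112640 108634 /dev/rhdiskpower1
1 1 153600 148149 /dev/rhdiskpower10
2 0 5120 4724 /dev/rhdiskpower5'

# Sum TOTAL_MB and FREE_MB per GROUP_NUMBER.
summary=$(printf '%s\n' "$rows" | awk '
    { total[$1] += $3; free[$1] += $4 }
    END { for (g in total)
              printf "group %s: total=%d MB free=%d MB\n", g, total[g], free[g] }')
echo "$summary"
```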

# powermt display dev=all

Pseudo name=hdiskpower0

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038007138E28678FEE311 [CRS2] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk4   SP A1  active  alive  0  0
1 fscsi2  hdisk28  SP B1  active  alive  0  0

Pseudo name=hdiskpower1

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800EA9BE11779FEE311 [DATA2] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk5   SP A1  active  alive  0  0
1 fscsi2  hdisk29  SP B1  active  alive  0  0

Pseudo name=hdiskpower2

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF0380008A51F3079FEE311 [DATA5] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk6   SP A1  active  alive  0  0
1 fscsi2  hdisk30  SP B1  active  alive  0  0

Pseudo name=hdiskpower3

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF0380080A25E6779FEE311 [DATA12] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk7   SP A1  active  alive  0  0
1 fscsi2  hdisk31  SP B1  active  alive  0  0

Pseudo name=hdiskpower4

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800FA516CC278FEE311 [ARCH2] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk8   SP A1  active  alive  0  0
1 fscsi2  hdisk32  SP B1  active  alive  0  0

Pseudo name=hdiskpower5

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038007038E28678FEE311 [CRS1] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk9   SP A1  active  alive  0  0
1 fscsi2  hdisk33  SP B1  active  alive  0  0

Pseudo name=hdiskpower6

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038007238E28678FEE311 [CRS3] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk10  SP A1  active  alive  0  0
1 fscsi2  hdisk34  SP B1  active  alive  0  0

Pseudo name=hdiskpower7

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038004A0E320E79FEE311 [DATA1] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk11  SP A1  active  alive  0  0
1 fscsi2  hdisk35  SP B1  active  alive  0  0

Pseudo name=hdiskpower8

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800A64A6E2079FEE311 [DATA3] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk12  SP A1  active  alive  0  0
1 fscsi2  hdisk36  SP B1  active  alive  0  0

Pseudo name=hdiskpower9

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800F601412879FEE311 [DATA4] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk13  SP A1  active  alive  0  0
1 fscsi2  hdisk37  SP B1  active  alive  0  0

Pseudo name=hdiskpower10

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038004C99743979FEE311 [DATA6] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk14  SP A1  active  alive  0  0
1 fscsi2  hdisk38  SP B1  active  alive  0  0

Pseudo name=hdiskpower11

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800228CE84279FEE311 [DATA7] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk15  SP A1  active  alive  0  0
1 fscsi2  hdisk39  SP B1  active  alive  0  0

Pseudo name=hdiskpower12

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF0380024FE984B79FEE311 [DATA8] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk16  SP A1  active  alive  0  0
1 fscsi2  hdisk40  SP B1  active  alive  0  0

Pseudo name=hdiskpower13

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038001A53695379FEE311 [DATA9] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk17  SP A1  active  alive  0  0
1 fscsi2  hdisk41  SP B1  active  alive  0  0

Pseudo name=hdiskpower14

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038003C27765A79FEE311 [DATA10] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk18  SP A1  active  alive  0  0
1 fscsi2  hdisk42  SP B1  active  alive  0  0

Pseudo name=hdiskpower15

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF03800F203C96079FEE311 [DATA11] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0 hdisk19 SP A1 active alive 0 0

1 fscsi2 hdisk43 SP B1 active alive 0 0

Pseudo name=hdiskpower16

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF0380010784C6E79FEE311 [DATA13] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk20  SP A1  active  alive  0  0
1 fscsi2  hdisk44  SP B1  active  alive  0  0

Pseudo name=hdiskpower17

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=600601607BF038006A15A77679FEE311 [DATA14] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk21  SP A1  active  alive  0  0
1 fscsi2  hdisk45  SP B1  active  alive  0  0

Pseudo name=hdiskpower19

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=6006016090B03800484C45D6C5FEE311 [ARCH1] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk3   SP A1  active  alive  0  0
1 fscsi2  hdisk27  SP B1  active  alive  0  0

Pseudo name=hdiskpower20

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=6006016090B0380076D3B562F300E411 [Backup1] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk46  SP A1  active  alive  0  0
1 fscsi2  hdisk48  SP B1  active  alive  0  0

Pseudo name=hdiskpower21

VNX ID=FCN00141200036 [IBM POWER740]

Logical device ID=6006016090B038002EC8566FF300E411 [Backup2] state=alive; policy=CLAROpt; queued-IOs=0

Owner: default=SP A, current=SP A Array failover mode: 4

==============================================================================

--------------- Host --------------- - Stor - -- I/O Path -- -- Stats ---

### HW Path I/O Paths Interf. Mode State Q-IOs Errors

==============================================================================

0 fscsi0  hdisk47  SP A1  active  alive  0  0
1 fscsi2  hdisk49  SP B1  active  alive  0  0
#
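A quick way to audit a dump like the one above is to count LUNs and path states: every pseudo device should show two alive paths and no dead ones. A sketch, where the excerpt is a stand-in with one deliberately dead path:

```shell
# Stand-in excerpt of `powermt display dev=all`, one path marked dead.
pm='Pseudo name=hdiskpower0
0 fscsi0  hdisk4   SP A1  active  alive  0  0
1 fscsi2  hdisk28  SP B1  active  alive  0  0
Pseudo name=hdiskpower1
0 fscsi0  hdisk5   SP A1  active  alive  0  0
1 fscsi2  hdisk29  SP B1  active  dead   0  0'

luns=$(printf '%s\n' "$pm" | grep -c '^Pseudo name=')
alive=$(printf '%s\n' "$pm" | grep -c ' alive ')
dead=$(printf '%s\n' "$pm" | grep -c ' dead ')
echo "luns=$luns alive_paths=$alive dead_paths=$dead"
```

A healthy two-path layout should report alive_paths equal to twice the LUN count and dead_paths equal to zero.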

Disk-to-PowerPath-device mapping:

crs1    hdiskpower5
crs2    hdiskpower0
crs3    hdiskpower6
data1   hdiskpower7
data2   hdiskpower1
data3   hdiskpower8
data4   hdiskpower9
data5   hdiskpower2
data6   hdiskpower10
data7   hdiskpower11
data8   hdiskpower12
data9   hdiskpower13
data10  hdiskpower14
data11  hdiskpower15
data12  hdiskpower3
data13  hdiskpower16
data14  hdiskpower17
arch1   hdiskpower19
arch2   hdiskpower4
backup1 hdiskpower20
backup2 hdiskpower21

lsattr -E -l hdiskpower0

rmdev -dl hdiskpower19

chdev -l hdiskpower0 -a pv=yes
chdev -l hdiskpower1 -a pv=yes
chdev -l hdiskpower2 -a pv=yes
chdev -l hdiskpower3 -a pv=yes
chdev -l hdiskpower4 -a pv=yes
chdev -l hdiskpower5 -a pv=yes
chdev -l hdiskpower6 -a pv=yes
chdev -l hdiskpower7 -a pv=yes
chdev -l hdiskpower8 -a pv=yes
chdev -l hdiskpower9 -a pv=yes
chdev -l hdiskpower10 -a pv=yes
chdev -l hdiskpower11 -a pv=yes
chdev -l hdiskpower12 -a pv=yes
chdev -l hdiskpower13 -a pv=yes
chdev -l hdiskpower14 -a pv=yes
chdev -l hdiskpower15 -a pv=yes
chdev -l hdiskpower16 -a pv=yes
chdev -l hdiskpower17 -a pv=yes
chdev -l hdiskpower19 -a pv=yes
chdev -l hdiskpower20 -a pv=yes
chdev -l hdiskpower21 -a pv=yes

chdev -l hdiskpower19 -a pv=yes
chdev -l hdiskpower19 -a pv=clear

chdev -l hdiskpower0 -a reserve_policy=no_reserve
chdev -l hdiskpower1 -a reserve_policy=no_reserve
chdev -l hdiskpower2 -a reserve_policy=no_reserve
chdev -l hdiskpower3 -a reserve_policy=no_reserve
chdev -l hdiskpower4 -a reserve_policy=no_reserve
chdev -l hdiskpower5 -a reserve_policy=no_reserve
chdev -l hdiskpower6 -a reserve_policy=no_reserve
chdev -l hdiskpower7 -a reserve_policy=no_reserve
chdev -l hdiskpower8 -a reserve_policy=no_reserve
chdev -l hdiskpower9 -a reserve_policy=no_reserve
chdev -l hdiskpower10 -a reserve_policy=no_reserve
chdev -l hdiskpower11 -a reserve_policy=no_reserve
chdev -l hdiskpower12 -a reserve_policy=no_reserve
chdev -l hdiskpower13 -a reserve_policy=no_reserve
chdev -l hdiskpower14 -a reserve_policy=no_reserve
chdev -l hdiskpower15 -a reserve_policy=no_reserve
chdev -l hdiskpower16 -a reserve_policy=no_reserve
chdev -l hdiskpower17 -a reserve_policy=no_reserve
chdev -l hdiskpower19 -a reserve_policy=no_reserve
chdev -l hdiskpower20 -a reserve_policy=no_reserve
chdev -l hdiskpower21 -a reserve_policy=no_reserve
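The per-disk chdev chains above can be generated from one device list (note hdiskpower18 is absent on this host, so it is skipped). A sketch that only echoes the commands so the list can be reviewed first; drop the `echo` to actually run them:

```shell
# hdiskpower devices used for ASM on this host (18 does not exist here).
devs='0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 19 20 21'

cmds=$(for n in $devs; do
    # Write a PVID onto the disk, then release SCSI reservations for RAC.
    echo "chdev -l hdiskpower$n -a pv=yes"
    echo "chdev -l hdiskpower$n -a reserve_policy=no_reserve"
done)
count=$(printf '%s\n' "$cmds" | grep -c 'chdev')
echo "generated $count commands"
```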

chown grid:asmadmin /dev/rhdiskpower0
chown grid:asmadmin /dev/rhdiskpower1
chown grid:asmadmin /dev/rhdiskpower2
chown grid:asmadmin /dev/rhdiskpower3
chown grid:asmadmin /dev/rhdiskpower5
chown grid:asmadmin /dev/rhdiskpower6
chown grid:asmadmin /dev/rhdiskpower7
chown grid:asmadmin /dev/rhdiskpower8
chown grid:asmadmin /dev/rhdiskpower9
chown grid:asmadmin /dev/rhdiskpower10
chown grid:asmadmin /dev/rhdiskpower11
chown grid:asmadmin /dev/rhdiskpower12
chown grid:asmadmin /dev/rhdiskpower13
chown grid:asmadmin /dev/rhdiskpower14
chown grid:asmadmin /dev/rhdiskpower15
chown grid:asmadmin /dev/rhdiskpower16
chown grid:asmadmin /dev/rhdiskpower17
chmod 660 /dev/rhdiskpower0
chmod 660 /dev/rhdiskpower1
chmod 660 /dev/rhdiskpower2
chmod 660 /dev/rhdiskpower3
chmod 660 /dev/rhdiskpower5
chmod 660 /dev/rhdiskpower6
chmod 660 /dev/rhdiskpower7
chmod 660 /dev/rhdiskpower8
chmod 660 /dev/rhdiskpower9
chmod 660 /dev/rhdiskpower10
chmod 660 /dev/rhdiskpower11
chmod 660 /dev/rhdiskpower12
chmod 660 /dev/rhdiskpower13
chmod 660 /dev/rhdiskpower14
chmod 660 /dev/rhdiskpower15
chmod 660 /dev/rhdiskpower16
chmod 660 /dev/rhdiskpower17

chdev -l hdiskpower5 -a reserve_policy=no_reserve
/usr/sbin/lsattr -E -l hdiskpower5

# /usr/sbin/lsattr -E -l hdiskpower5

PR_key_value    none                             Reserve Key.                                   True
clr_q           no                               Clear Queue (RS/6000)                          True
location                                         Location                                       True
lun_id          0x6000000000000                  LUN ID                                         False
lun_reset_spt   yes                              FC Forced Open LUN                             True
max_coalesce    0x100000                         Maximum coalesce size                          True
max_retries     5                                Maximum Retries                                True
max_transfer    0x100000                         Maximum transfer size                          True
pvid            00f933f9c497fa560000000000000000 Physical volume identifier                     False
pvid_takeover   yes                              Takeover PVIDs from hdisks                     True
q_err           yes                              Use QERR bit                                   True
q_type          simple                           Queue TYPE                                     False
queue_depth     32                               Queue DEPTH                                    True
reassign_to     120                              REASSIGN time out value                        True
reserve_policy  no_reserve                       Reserve Policy used to reserve device on open. True
reset_delay     2                                Reset Delay                                    True
rw_timeout      30                               READ/WRITE time out                            True
scsi_id         0x10600                          SCSI ID                                        False
start_timeout   60                               START unit time out                            True
ww_name         0x500601693ee0648b               World Wide Name                                False

reserve_policy no_reserve

lsattr -E -l /dev/rhdiskpower0

lsattr -E -l /dev/rhdisk3

Check:

lsattr -E -l rhdiskpower0|grep reserve

# lsattr -El sys0 -a realmem

realmem 64749568 Amount of usable physical memory in Kbytes False

# lsps -a
Page Space  Physical Volume  Volume Group  Size     %Used  Active  Auto  Type  Chksum
hd6         hdisk0           rootvg        65536MB  0      yes     yes   lv    0
#

# lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00f933f900004c0000000146e2225531
VG STATE:           active                   PP SIZE:        512 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      1116 (571392 megabytes)
MAX LVs:            256                      FREE PPs:       808 (413696 megabytes)
LVs:                13                       USED PPs:       308 (157696 megabytes)
OPEN LVs:           12                       QUORUM:         1 (Disabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 1024 kilobyte(s)         AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512

# lsvg -l rootvg
rootvg:

LV NAME    TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
hd5        boot     1    2    2    closed/syncd  N/A
hd6        paging   128  256  2    open/syncd    N/A
hd8        jfs2log  1    2    2    open/syncd    N/A
hd4        jfs2     2    4    2    open/syncd    /
hd2        jfs2     6    12   2    open/syncd    /usr
hd9var     jfs2     1    2    2    open/syncd    /var
hd3        jfs2     4    8    2    open/syncd    /tmp
hd1        jfs2     1    2    2    open/syncd    /home
hd10opt    jfs2     1    2    2    open/syncd    /opt
hd11admin  jfs2     1    2    2    open/syncd    /admin
fwdump     jfs2     3    6    2    open/syncd    /var/adm/ras/platform
lg_dumplv  sysdump  8    8    1    open/syncd    N/A
livedump   jfs2     1    2    2    open/syncd    /var/adm/ras/livedump
#

chps -s 32 hdisk0

chfs -a size=30G /tmp
chfs -a size=10G /home
chfs -a size=5G /

chfs -a size=+10G /u01

mklv -t jfs2 -y u01lv rootvg 200
crfs -v jfs2 -d /dev/u01lv -m /u01
mount /u01
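mklv allocates in physical partitions, not bytes; with rootvg's 512 MB PP size (from the `lsvg rootvg` output above), the 200 partitions requested for u01lv work out as:

```shell
pp_mb=512   # PP SIZE reported by `lsvg rootvg`
pps=200     # partition count passed to mklv
u01_mb=$((pp_mb * pps))
echo "u01lv: $u01_mb MB ($((u01_mb / 1024)) GB)"
```

That gives /u01 roughly 100 GB, a common sizing for a combined Grid Infrastructure and Oracle home.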

lsattr -El /dev/hdiskpower0
lsattr -El /dev/rhdiskpower0

ioo -o aio_maxreqs

vmo -p -o minperm%=3
vmo -p -o maxperm%=90
vmo -p -o maxclient%=90
vmo -p -o lru_file_repage=0
vmo -p -o strict_maxclient=1
vmo -p -o strict_maxperm=0

lru_file_repage

vmo -p -o lru_file_repage=0

vi + /etc/security/limits

fsize = -1
core = 2097151
cpu = -1
data = -1
rss = -1
stack = -1
nofiles = -1

no -r -o ipqmaxlen=512
no -p -o rfc1323=1
no -p -o sb_max=1500000
no -p -o tcp_recvspace=65536
no -p -o tcp_sendspace=65536
no -p -o udp_recvspace=1351680
no -p -o udp_sendspace=13516

/usr/sbin/no -r -o ipqmaxlen=512
/usr/sbin/no -po rfc1323=1
/usr/sbin/no -po sb_max=131072
/usr/sbin/no -po tcp_recvspace=65536
/usr/sbin/no -po tcp_sendspace=65536
/usr/sbin/no -po udp_recvspace=65530
/usr/sbin/no -po udp_sendspace=65536
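To keep both RAC nodes identical, the `no` tunables can be driven from a single list. A sketch using the values from the second block above (commands echoed for review rather than executed; ipqmaxlen is restricted and takes `-r`, the rest take `-p`):

```shell
# Network tunables for the RAC nodes, one per line.
tunables='ipqmaxlen=512
rfc1323=1
sb_max=131072
tcp_recvspace=65536
tcp_sendspace=65536
udp_recvspace=65530
udp_sendspace=65536'

cmds=$(printf '%s\n' "$tunables" | while read -r t; do
    case "$t" in
        ipqmaxlen=*) echo "/usr/sbin/no -r -o $t" ;;  # reboot-persistent only
        *)           echo "/usr/sbin/no -p -o $t" ;;  # runtime + persistent
    esac
done)
echo "$cmds"
```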

#public ip
192.168.100.103  sqwsjdb01
192.168.100.104  sqwsjdb02
#private ip
10.10.10.1       sqwsjdb01-priv
10.10.10.2       sqwsjdb02-priv
#virtual ip
192.168.100.101  sqwsjdb01-vip
192.168.100.102  sqwsjdb02-vip
#scan ip
192.168.100.100  rac-scan
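A quick sanity check on the address block above: every entry should be a unique IP/name pair before cluvfy runs. A sketch over the same data:

```shell
# The RAC /etc/hosts block from above.
hosts='192.168.100.103 sqwsjdb01
192.168.100.104 sqwsjdb02
10.10.10.1 sqwsjdb01-priv
10.10.10.2 sqwsjdb02-priv
192.168.100.101 sqwsjdb01-vip
192.168.100.102 sqwsjdb02-vip
192.168.100.100 rac-scan'

entries=$(printf '%s\n' "$hosts" | grep -c .)
# Any name or IP appearing twice is a configuration error.
dup_names=$(printf '%s\n' "$hosts" | awk '{print $2}' | sort | uniq -d | wc -l | tr -d ' ')
dup_ips=$(printf '%s\n' "$hosts" | awk '{print $1}' | sort | uniq -d | wc -l | tr -d ' ')
echo "entries=$entries duplicate_names=$dup_names duplicate_ips=$dup_ips"
```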

lsdev -Cc adapter

Check the sshd service status:

# lssrc -s sshd
Subsystem         Group            PID          Status
 sshd             ssh              3866742      active

Stop and start the ssh service:

# stopsrc -s sshd
# startsrc -s sshd

Tip: do this from the console; once sshd stops, any ssh session to the host is cut off.

bash-3.2# stopsrc -s sshd

0513-044 The sshd Subsystem was requested to stop.
bash-3.2# lssrc -s sshd
Subsystem         Group            PID          Status
 sshd             ssh                           inoperative
bash-3.2# startsrc -s sshd
0513-059 The sshd Subsystem has been started. Subsystem PID is 19660822.
bash-3.2# lssrc -s sshd
Subsystem         Group            PID          Status
 sshd             ssh              19660822     active

mkgroup -'A' id='1001' adms='root' oinstall
mkgroup -'A' id='1002' adms='root' dba
mkgroup -'A' id='1003' adms='root' asmdba
mkgroup -'A' id='1004' adms='root' asmadmin
mkgroup -'A' id='1005' adms='root' asmoper

mkuser id='1001' pgrp='oinstall' groups='asmdba,asmadmin,asmoper,dba' home='/home/grid' grid

mkuser id='1002' pgrp='oinstall' groups='dba,asmdba' home='/home/oracle' oracle

passwd grid
passwd oracle

lsuser -a capabilities grid
lsuser -a capabilities oracle

/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid
/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

chown grid:asmadmin /dev/rhdiskpower0
chown grid:asmadmin /dev/rhdiskpower1
chown grid:asmadmin /dev/rhdiskpower2
chown grid:asmadmin /dev/rhdiskpower3
chown grid:asmadmin /dev/rhdiskpower5
chown grid:asmadmin /dev/rhdiskpower6
chown grid:asmadmin /dev/rhdiskpower7
chown grid:asmadmin /dev/rhdiskpower8
chown grid:asmadmin /dev/rhdiskpower9
chown grid:asmadmin /dev/rhdiskpower10
chown grid:asmadmin /dev/rhdiskpower11
chown grid:asmadmin /dev/rhdiskpower12
chown grid:asmadmin /dev/rhdiskpower13
chown grid:asmadmin /dev/rhdiskpower14
chown grid:asmadmin /dev/rhdiskpower15
chown grid:asmadmin /dev/rhdiskpower16
chown grid:asmadmin /dev/rhdiskpower17
chmod 660 /dev/rhdiskpower0
chmod 660 /dev/rhdiskpower1
chmod 660 /dev/rhdiskpower2
chmod 660 /dev/rhdiskpower3
chmod 660 /dev/rhdiskpower5
chmod 660 /dev/rhdiskpower6
chmod 660 /dev/rhdiskpower7
chmod 660 /dev/rhdiskpower8
chmod 660 /dev/rhdiskpower9
chmod 660 /dev/rhdiskpower10
chmod 660 /dev/rhdiskpower11
chmod 660 /dev/rhdiskpower12
chmod 660 /dev/rhdiskpower13
chmod 660 /dev/rhdiskpower14
chmod 660 /dev/rhdiskpower15
chmod 660 /dev/rhdiskpower16
chmod 660 /dev/rhdiskpower17

chown grid:asmadmin /dev/hdiskpower5
chmod 660 /dev/hdiskpower5

vi /etc/security/limits

default:
        fsize = 2097151
        core = 2097151
        cpu = -1
        data = 262144
        rss = 65536
        stack = 65536
        nofiles = 2000

root:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1

daemon:

bin:

sys:

adm:

uucp:

guest:

nobody:

lpd:

pconsole:
#       stack_hard = 131072
#       data = 1280000
#       data_hard = 1280000

esaadmin:
#       stack = 393216
#       stack_hard = 393216

oracle:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1

grid:
        fsize = -1
        core = -1
        cpu = -1
        data = -1
        rss = -1
        stack = -1
        nofiles = -1

A note on X11: a few days ago, while installing the WebLogic and Tuxedo middleware on a p780 running AIX 6.1, Xmanager could not bring up the graphical interface. The fix, found online, is as follows:

① export DISPLAY=192.168.1.18:0.0   (192.168.1.18 is the local PC's IP)

② vi /etc/ssh/sshd_config
   Find the line "#X11Forwarding no" and change it to "X11Forwarding yes" (remove the leading #).

③ Restart ssh:
   stopsrc -s sshd
   startsrc -s sshd

④ Disconnect and log back in; the graphical interface now comes up.

startsrc -s xntpd

export JAVA_HOME=/home/weblogic/jdk1.6.0_20
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin

./runcluvfy.sh stage -pre crsinst -n sqwsjdb01,sqwsjdb02 -fixup -verbose

/grid/sshsetup# ./sshUserSetup.sh -hosts "sqwsjdb02" -user grid -advanced -noPromptPassphrase

/grid/sshsetup# ./sshUserSetup.sh -hosts "sqwsjdb02" -user oracle -advanced -noPromptPassphrase

