1. Check the OCR disk group and RAC status: both have failed. ocrcheck reports PROC-22 on both nodes, indicating a corrupted OCR.
[root@rac1 ~]# ocrcheck
PROT-601: Failed to initialize ocrcheck
PROC-22: The OCR backend has an invalid format
[root@rac2 ~]# ocrcheck
PROT-601: Failed to initialize ocrcheck
PROC-22: The OCR backend has an invalid format
[root@rac1 ~]# crsctl status res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
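Before tearing the stack down, it is worth checking whether Clusterware kept an automatic OCR backup; if a recent one exists, restoring it may be a lighter fix than the full rebuild documented here. A minimal sketch, assuming the same Grid home path used throughout this note:

```shell
# Hypothetical check; adjust GRID_HOME to your environment.
GRID_HOME=/u01/app/grid

if [ -x "$GRID_HOME/bin/ocrconfig" ]; then
  # Lists the automatic (4-hourly/daily/weekly) OCR backups, if any.
  "$GRID_HOME/bin/ocrconfig" -showbackup
else
  echo "ocrconfig not found under $GRID_HOME/bin (run this on a cluster node)"
fi
```

If `-showbackup` lists a usable backup, Oracle's documented restore path (stop CRS on all nodes, `ocrconfig -restore <backup_file>`, restart) can be attempted before resorting to a rebuild.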
2. Stop the clusterware stack on every node.
[root@rac1 ~]# ps -ef | grep d.bin
root 3491 1 1 10:02 ? 00:00:01 /u01/app/grid/bin/ohasd.bin reboot
grid 3712 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/oraagent.bin
grid 3723 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/mdnsd.bin
grid 3734 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/gpnpd.bin
grid 3744 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/gipcd.bin
root 3746 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/orarootagent.bin
root 3758 1 1 10:02 ? 00:00:01 /u01/app/grid/bin/osysmond.bin
root 3772 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/cssdmonitor
root 3791 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/cssdagent
grid 3802 1 0 10:02 ? 00:00:00 /u01/app/grid/bin/ocssd.bin
root 3862 1 1 10:02 ? 00:00:01 /u01/app/grid/bin/ologgerd -M -d /u01/app/grid/crf/db/rac1
root 4037 3692 0 10:04 pts/0 00:00:00 grep d.bin
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac2 ~]# ps -ef | grep d.bin
root 4783 3626 0 10:13 pts/0 00:00:00 grep d.bin
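With the stack stopped on rac1 and nothing left running on rac2, a quick loop can confirm that no `d.bin` daemons survive anywhere before deconfiguring. A sketch, assuming passwordless ssh between the nodes; the node list is this cluster's, adjust for yours:

```shell
# Hypothetical node list for this cluster.
for node in rac1 rac2; do
  echo "--- $node ---"
  # The [d] trick keeps grep from matching itself; any hit means the stack is not fully down.
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$node" \
      'ps -ef | grep "[d]\.bin" || echo "clusterware stack is down"' \
    || echo "could not reach $node"
done
```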
3. As root on each node, run rootcrs.pl under the Grid home's crs/install directory to deconfigure the clusterware. The PRCR-1070/CRS-4544 errors below are expected, since crsd is already down; the closing "Successfully deconfigured" line is what matters.
[root@rac1 ~]# cd /u01/app/grid/crs/install
[root@rac1 install]# ll
total 8308
-rwxr-xr-x 1 grid oinstall 1269 Nov 8 21:49 cmdllroot.sh
-r-xr-xr-x 1 grid oinstall 797 May 13 2008 crsconfig_addparams.sbs
-rwxr-xr-x 1 root oinstall 497715 Jul 22 2013 crsconfig_lib.pm
-rwxr-xr-x 1 grid oinstall 4100 Nov 8 21:39 crsconfig_params
-rwxr-xr-x 1 grid oinstall 4877 Mar 21 2011 crsconfig_params.sbs
-rwxr-xr-x 1 root oinstall 41420 Jul 2 2013 crsdelete.pm
-rwxr-xr-x 1 root oinstall 24317 Jun 4 2013 crspatch.pm
-rwxr-xr-x 1 root oinstall 8334 Jan 30 2013 hasdconfig.pl
-rw-r--r-- 1 grid oinstall 68 Jul 30 2007 inittab
-rwxr-xr-x 1 grid oinstall 115 Jun 4 2013 install.excl
-rwxr-xr-x 1 grid oinstall 0 Feb 23 2005 install.incl
-rwxr-xr-x 1 grid oinstall 17 Jun 21 2009 installRemove.excl
-r-xr-xr-- 1 grid oinstall 2128 Nov 8 21:49 onsconfig
-rwxr-xr-x 1 root oinstall 25147 Jun 10 2013 oraacfs.pm
-rw-r--r-- 1 grid oinstall 220 Apr 6 2011 oracle-ohasd.conf
-rwxr-xr-x 1 root oinstall 13478 Jul 4 2013 oracss.pm
-rw-r--r-- 1 grid oinstall 414 Jun 2 2005 paramfile.crs
-rw-r--r-- 1 root oinstall 50 Nov 8 21:49 ParentDirPerm_rac1.txt
-rwxr-xr-x 1 root oinstall 5316 Nov 8 21:49 preupdate.sh
-rwxr-xr-x 1 root oinstall 36870 Jul 14 2013 rootcrs.pl
-rwxr-xr-x 1 root oinstall 17679 Jan 23 2013 roothas.pl
-rwxr-xr-x 1 root oinstall 915 Jan 5 2007 rootofs.sh
-rwxr-xr-x 1 grid oinstall 3278 Dec 26 2012 s_crsconfig_defs
-rwxr-xr-x 1 root oinstall 102572 Feb 22 2013 s_crsconfig_lib.pm
-rwxr-x--- 1 root oinstall 387 Nov 8 21:49 s_crsconfig_rac1_env.txt
-rwxr-xr-x 1 root oinstall 7636861 Aug 2 2013 tfa_setup.sh
[root@rac1 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
[root@rac2 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
4. As root on each node, run root.sh under the Grid installation directory. On the first node this re-creates the OCR and the voting disk; on the second node, CSS detects the active cluster (CRS-4402) and simply rejoins it.
[root@rac1 install]# cd /u01/app/grid
[root@rac1 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
ASM created and started successfully.
Disk Group asmocr created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Successful addition of voting disk 87b6dd543f7b4fffbf78739fd2054094.
Successfully replaced voting disk group with +asmocr.
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 87b6dd543f7b4fffbf78739fd2054094 (/dev/raw/raw1) [ASMOCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.ASMOCR.dg' on 'rac1'
CRS-2676: Start of 'ora.ASMOCR.dg' on 'rac1' succeeded
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac2 install]# cd /u01/app/grid
[root@rac2 grid]# ./root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 grid]# crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMOCR.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
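Now that the registry has been re-created, taking a manual OCR backup gives a known-good restore point before any further changes. A sketch, to be run as root on a cluster node; by default `ocrconfig -manualbackup` writes into the Grid home's cdata directory:

```shell
# Hypothetical path; adjust GRID_HOME to your environment.
GRID_HOME=/u01/app/grid

if [ -x "$GRID_HOME/bin/ocrconfig" ]; then
  # Creates an on-demand OCR backup and prints its location.
  "$GRID_HOME/bin/ocrconfig" -manualbackup
else
  echo "ocrconfig not found under $GRID_HOME/bin (run this as root on a cluster node)"
fi
```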
5. Mount the remaining ASM disk groups on each node (root.sh only re-created and mounted the OCR disk group).
[grid@rac1 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Wed Nov 16 10:30:36 2016
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select name,state from v$asm_diskgroup;
NAME STATE
------------------------------ -----------
ASMOCR MOUNTED
ASMDATA DISMOUNTED
ASMFRA DISMOUNTED
SQL> alter diskgroup asmdata mount;
Diskgroup altered.
SQL> alter diskgroup asmfra mount;
Diskgroup altered.
SQL> select name,state from v$asm_diskgroup;
NAME STATE
------------------------------ -----------
ASMOCR MOUNTED
ASMDATA MOUNTED
ASMFRA MOUNTED
SQL> exit
[grid@rac2 ~]$ sqlplus / as sysasm
SQL*Plus: Release 11.2.0.4.0 Production on Wed Nov 16 10:32:07 2016
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
SQL> select name,state from v$asm_diskgroup;
NAME STATE
------------------------------ -----------
ASMOCR MOUNTED
ASMFRA DISMOUNTED
ASMDATA DISMOUNTED
SQL> alter diskgroup asmdata mount;
Diskgroup altered.
SQL> alter diskgroup asmfra mount;
Diskgroup altered.
SQL> select name,state from v$asm_diskgroup;
NAME STATE
------------------------------ -----------
ASMOCR MOUNTED
ASMFRA MOUNTED
ASMDATA MOUNTED
SQL> exit
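The same two MOUNT commands have to be issued on every node; `asmcmd` can do this without opening a SQL*Plus session. A sketch, assuming the grid user's environment and this cluster's disk group names (use ORACLE_SID=+ASM2 on the second node):

```shell
# Hypothetical environment; adjust for your Grid home and ASM SID.
export ORACLE_HOME=/u01/app/grid
export ORACLE_SID=+ASM1   # +ASM2 on the second node

for dg in ASMDATA ASMFRA; do
  if [ -x "$ORACLE_HOME/bin/asmcmd" ]; then
    # Mounts the disk group on the local ASM instance.
    "$ORACLE_HOME/bin/asmcmd" mount "$dg"
  else
    echo "asmcmd not available here; would mount $dg"
  fi
done
```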
6. Register the database and instance resources back into the cluster: the rebuilt OCR no longer knows about them.
[oracle@rac1 ~]$ srvctl add database -d ol11g -o /u01/app/oracle/db_1
[oracle@rac1 ~]$ srvctl status database -d ol11g
Database is not running.
[oracle@rac1 ~]$ srvctl add instance -i ol11g1 -d ol11g -n rac1
[oracle@rac1 ~]$ srvctl add instance -i ol11g2 -d ol11g -n rac2
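Before starting the database, it is worth confirming what srvctl actually recorded in the new OCR. A sketch, assuming the same database home and name used above:

```shell
# Hypothetical path; this is the database home registered above.
ORACLE_HOME=/u01/app/oracle/db_1

if [ -x "$ORACLE_HOME/bin/srvctl" ]; then
  # Prints the registered home, instances, and node assignments for ol11g.
  "$ORACLE_HOME/bin/srvctl" config database -d ol11g
else
  echo "srvctl not available here; run as the oracle user on a cluster node"
fi
```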
[grid@rac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMDATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ASMFRA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ASMOCR.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.ol11g.db
1 OFFLINE OFFLINE
2 OFFLINE OFFLINE
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[root@rac1 grid]# srvctl start database -d ol11g
[root@rac1 grid]# crsctl status res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMDATA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ASMFRA.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ASMOCR.dg
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.asm
ONLINE ONLINE rac1 Started
ONLINE ONLINE rac2 Started
ora.gsd
OFFLINE OFFLINE rac1
OFFLINE OFFLINE rac2
ora.net1.network
ONLINE ONLINE rac1
ONLINE ONLINE rac2
ora.ons
ONLINE ONLINE rac1
ONLINE ONLINE rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac1
ora.cvu
1 ONLINE ONLINE rac1
ora.oc4j
1 ONLINE ONLINE rac1
ora.ol11g.db
1 ONLINE ONLINE rac1 Open
2 ONLINE ONLINE rac2 Open
ora.rac1.vip
1 ONLINE ONLINE rac1
ora.rac2.vip
1 ONLINE ONLINE rac2
ora.scan1.vip
1 ONLINE ONLINE rac1
[root@rac1 grid]# srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node rac1
[root@rac1 grid]# srvctl status scan_listener
SCAN Listener LISTENER_SCAN1 is enabled
SCAN listener LISTENER_SCAN1 is running on node rac1
[grid@rac1 ~]$ lsnrctl status listener_scan1
LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 16-NOV-2016 10:47:02
Copyright (c) 1991, 2013, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
STATUS of the LISTENER
------------------------
Alias LISTENER_SCAN1
Version TNSLSNR for Linux: Version 11.2.0.4.0 - Production
Start Date 16-NOV-2016 10:24:45
Uptime 0 days 0 hr. 22 min. 16 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/grid/network/admin/listener.ora
Listener Log File /u01/app/grid/log/diag/tnslsnr/rac1/listener_scan1/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_SCAN1)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.220)(PORT=1521)))
Services Summary...
Service "ol11g" has 2 instance(s).
Instance "ol11g1", status READY, has 1 handler(s) for this service...
Instance "ol11g2", status READY, has 1 handler(s) for this service...
Service "ol11gXDB" has 2 instance(s).
Instance "ol11g1", status READY, has 1 handler(s) for this service...
Instance "ol11g2", status READY, has 1 handler(s) for this service...
The command completed successfully
[root@rac1 grid]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262120
Used space (kbytes) : 2932
Available space (kbytes) : 259188
ID : 784216217
Device/File Name : +asmocr
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@rac1 grid]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 87b6dd543f7b4fffbf78739fd2054094 (/dev/raw/raw1) [ASMOCR]
Located 1 voting disk(s).
At this point, the rebuild of the OCR and voting disk is complete.