Chongqing Sizhuang Oracle & RedHat Certification Study Forum

Title: CentOS 7.4: installing 11g RAC, root.sh fails with "ohasd failed to start"

Author: 郑全    Posted: 2017-9-20 12:28
Title: CentOS 7.4: installing 11g RAC, root.sh fails with "ohasd failed to start"
[root@ceph-rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2017-09-20 11:43:51.185:
[client(13860)]CRS-2101:The OLR was formatted using version 3.
2017-09-20 11:51:26.202:
[client(14870)]CRS-2101:The OLR was formatted using version 3.


Author: 郑全    Posted: 2017-9-20 12:44
The cause of this error is that CentOS 7 and later replaced the init system: it used to be SysV init, but is now systemd. Oracle 11g still tries to register its startup entry the SysV way (via /etc/inittab), so ohasd never starts and root.sh fails.
Once the cause is understood there are several ways to fix it; they all come down to creating an oracle-ohasd.service systemd unit.
Oracle provides a patch for exactly this problem: patch 18370031.
Apply the patch before running root.sh:
$GRID_HOME/OPatch/opatch napply -oh $GRID_HOME -local <UNZIPPED_PATCH_LOCATION>/18370031

After that, run root.sh as usual.
Note that the patch must be applied on every node.

If root.sh has already been run, you must deconfigure and remove the CRS installation first, then apply the patch.
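For reference, the workaround the patch automates is to run init.ohasd under systemd instead of from /etc/inittab. A minimal unit file along the following lines illustrates the idea. This is a hedged sketch, not the exact unit the patch installs: the ExecStart command is inferred from the init.ohasd process visible in the systemctl status output further down this thread, and the remaining settings are assumptions.

```ini
# /etc/systemd/system/oracle-ohasd.service -- illustrative sketch only;
# the unit actually installed by patch 18370031 is authoritative.
[Unit]
Description=Oracle High Availability Services
After=syslog.target network.target

[Service]
# init.ohasd is the SysV script shipped with Grid Infrastructure;
# "run" keeps it in the foreground, so Type=simple applies.
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
```

After creating the file you would reload systemd and enable the unit (systemctl daemon-reload, then systemctl enable --now oracle-ohasd.service). Applying the patch remains the supported fix.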

Author: 郑全    Posted: 2017-9-20 12:46
This post was last edited by 郑全 at 2017-9-20 12:48
郑全 posted at 2017-9-20 12:44:
The cause of this error is that CentOS 7 and later replaced the init system: it used to be SysV init, but is now systemd, while Oracle 11g still looks for the SysV-related entries ...

Below is the full procedure:
--------------------------------
1. Apply patch 18370031
---------------------------------

[root@ceph-rac1 ~]# su - oracle
Last login: Wed Sep 20 12:17:34 CST 2017 on pts/0
[oracle@ceph-rac1 ~]$ /u01/app/11.2.0/grid/OPatch/opatch napply -oh /u01/app/11.2.0/grid/ -local /home/oracle/setup/patch/18370031/
Oracle Interim Patch Installer version 11.2.0.3.4
Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/11.2.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/11.2.0/grid//oraInst.loc
OPatch version    : 11.2.0.3.4
OUI version       : 11.2.0.4.0
Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_12-31-49PM_1.log
Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   18370031  
Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/11.2.0/grid')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18370031' to OH '/u01/app/11.2.0/grid'
Patching component oracle.crs, 11.2.0.4.0...
Verifying the update...
Patch 18370031 successfully applied.
Log file location: /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2017-09-20_12-31-49PM_1.log
OPatch succeeded.
[oracle@ceph-rac1 ~]$ exit

------------------
2. Run root.sh
------------------

[root@ceph-rac1 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to oracle-ohasd.service
CRS-2672: Attempting to start 'ora.mdnsd' on 'ceph-rac1'
CRS-2676: Start of 'ora.mdnsd' on 'ceph-rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'ceph-rac1'
CRS-2676: Start of 'ora.gpnpd' on 'ceph-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'ceph-rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'ceph-rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'ceph-rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'ceph-rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'ceph-rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'ceph-rac1'
CRS-2676: Start of 'ora.diskmon' on 'ceph-rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'ceph-rac1' succeeded
ASM created and started successfully.
Disk Group dgocr created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 84092d3bc6ff4f8cbfbfdee2765b5b2a.
Successful addition of voting disk 524bb4a777ef4f26bfb9eabd7542ec73.
Successful addition of voting disk 95243aaa57f84f73bfe62b6837e156ad.
Successfully replaced voting disk group with +dgocr.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   84092d3bc6ff4f8cbfbfdee2765b5b2a (/dev/rbd0) [DGOCR]
2. ONLINE   524bb4a777ef4f26bfb9eabd7542ec73 (/dev/rbd1) [DGOCR]
3. ONLINE   95243aaa57f84f73bfe62b6837e156ad (/dev/rbd2) [DGOCR]
Located 3 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'ceph-rac1'
CRS-2676: Start of 'ora.asm' on 'ceph-rac1' succeeded
CRS-2672: Attempting to start 'ora.DGOCR.dg' on 'ceph-rac1'
CRS-2676: Start of 'ora.DGOCR.dg' on 'ceph-rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@ceph-rac1 ~]#


Author: 郑全    Posted: 2017-9-20 12:51
[root@ceph-rac1 ~]# systemctl status oracle-ohasd.service
● oracle-ohasd.service - Oracle High Availability Services
   Loaded: loaded (/etc/systemd/system/oracle-ohasd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2017-09-20 12:38:34 CST; 12min ago
Main PID: 12520 (init.ohasd)
   CGroup: /system.slice/oracle-ohasd.service
           └─12520 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

Sep 20 12:38:34 ceph-rac1 systemd[1]: Started Oracle High Availability Services.
Sep 20 12:38:34 ceph-rac1 systemd[1]: Starting Oracle High Availability Services...
[root@ceph-rac1 ~]#




Welcome to the Chongqing Sizhuang Oracle & RedHat Certification Study Forum (http://bbs.cqsztech.com/) Powered by Discuz! X3.2