Chongqing Sizhuang Oracle / RedHat Certification Learning Forum

RedHat 7.3 Oracle 12.2.0.1 RAC Installation Guide

Posted 2017-06-02 13:25:29. Last edited by 郑全 on 2017-06-02 13:25.

1 Preparation

1.1 What Has Changed in GRID

1.1.1 Simplified image-based Oracle Grid Infrastructure installation
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Grid Infrastructure software is provided as an image file that you download and extract. This greatly simplifies the Oracle Grid Infrastructure installation process.
Note: you must extract the GRID software into the directory where you want the Grid home to be, and then run the gridSetup.sh script from there to start the Oracle Grid Infrastructure installation.
1.1.2 Support for Oracle Domain Services Clusters and Oracle Member Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the installer supports deploying Oracle Domain Services Clusters and Oracle Member Clusters.
For details, see the official documentation:
http://docs.oracle.com/database/122/CWLIN/understanding-cluster-configuration-options.htm#GUID-4D6C2B52-9845-48E2-AD68-F0586AA20F48
1.1.3 Support for Oracle Extended Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the installer can configure cluster nodes in different locations as an Oracle Extended Cluster. An Oracle Extended Cluster consists of nodes spread across multiple locations, called sites.
1.1.4 Grid Infrastructure Management Repository (GIMR)
Oracle Grid Infrastructure deployments now support a global Grid Infrastructure Management Repository (GIMR). This repository is a multitenant database with one pluggable database (PDB) holding the GIMR for each cluster. The global GIMR runs in an Oracle Domain Services Cluster. A global GIMR frees local clusters from dedicating disk group space to this data, and allows long-term historical data to be kept for diagnostics and performance analysis.
During the GRID installation later, you will be asked whether to create a separate disk group to hold the GIMR data.
1.2 Minimum Hardware Requirements

No. | Component                                | Memory
1   | Oracle Grid Infrastructure installations | at least 4 GB
2   | Oracle Database installations            | minimum 1 GB, 2 GB or more recommended

1.3 RAC Plan

Item              | Node 1         | Node 2
Hostname          | rac1           | rac2
Public IP (eth0)  | 192.168.56.121 | 192.168.56.123
Virtual IP (eth0) | 192.168.56.122 | 192.168.56.124
Private IP (eth1) | 192.168.57.121 | 192.168.57.123
Oracle RAC SID    | cndba1         | cndba2

Cluster database name: cndba
SCAN IP:               192.168.56.125
Operating system:      RedHat 7.3
Oracle version:        12.2.0.1

1.4 Disk Layout
12c R2 demands more disk group space than earlier releases: the OCR disk group needs at least 40 GB with EXTERNAL redundancy and at least 80 GB with NORMAL redundancy.

Disk group | Disk        | Size | Redundancy
DATAFILE   | data01      | 40G  | NORMAL
DATAFILE   | data02      | 40G  | NORMAL
OCR        | OCRVOTING01 | 30G  | NORMAL
OCR        | OCRVOTING02 | 30G  | NORMAL
OCR        | OCRVOTING03 | 30G  | NORMAL

1.5 Operating System Installation
Install RedHat 7.3 as usual (details omitted).
Note how the hostname and IP address are configured on RedHat 7.3. For reference:

Change the hostname (Linux 7):
hostnamectl set-hostname rac1

Manage the firewall (Linux 7):
systemctl stop firewalld.service
systemctl disable firewalld.service
1.6 Configure /etc/hosts
Edit on all nodes:

[root@rac1 ~]# cat /etc/hosts
127.0.0.1   localhost
192.168.56.121 rac1
192.168.57.121 rac1-priv
192.168.56.122 rac1-vip
192.168.56.123 rac2
192.168.57.123 rac2-priv
192.168.56.124 rac2-vip
192.168.56.125 rac-scan
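Before going further, it is worth confirming on each node that every name in the plan is actually present in the hosts file. A minimal sketch (the `check_hosts` helper is just for illustration, not a standard tool):

```shell
# Verify that each required cluster name appears as a hostname field
# in a hosts file; prints any missing names.
check_hosts() {
  local hosts_file=$1; shift
  local missing=0
  for name in "$@"; do
    # exact match against the hostname fields (field 2 onward)
    awk -v n="$name" 'BEGIN{found=1} {for(i=2;i<=NF;i++) if($i==n) found=0} END{exit found}' "$hosts_file" \
      || { echo "missing: $name"; missing=1; }
  done
  return $missing
}
# Usage on each node:
# check_hosts /etc/hosts rac1 rac1-priv rac1-vip rac2 rac2-priv rac2-vip rac-scan
```

If the function prints nothing, all the names are present.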
1.7 Add Users and Groups

/usr/sbin/groupadd -g 54321 oinstall
/usr/sbin/groupadd -g 54322 dba
/usr/sbin/groupadd -g 54323 oper
/usr/sbin/groupadd -g 54324 backupdba
/usr/sbin/groupadd -g 54325 dgdba
/usr/sbin/groupadd -g 54326 kmdba
/usr/sbin/groupadd -g 54327 asmdba
/usr/sbin/groupadd -g 54328 asmoper
/usr/sbin/groupadd -g 54329 asmadmin
/usr/sbin/groupadd -g 54330 racdba
/usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,oper oracle
/usr/sbin/useradd -u 54322 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,asmoper,asmadmin,racdba grid

Set the passwords:

[root@rac1 ~]# passwd grid
[root@rac1 ~]# passwd oracle

Verify the users:

[root@rac1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54327(asmdba)
[root@rac1 ~]# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)
[root@rac1 ~]#
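Both users must have oinstall as their primary group, or the installer's permission checks fail later. To script that check, the primary group can be parsed out of the `id` output (the `primary_group` helper below is illustrative, not an Oracle tool):

```shell
# Extract the primary group name from `id` output, e.g.
# "uid=54321(oracle) gid=54321(oinstall) groups=..." -> "oinstall"
primary_group() {
  sed -n 's/.*gid=[0-9]*(\([^)]*\)).*/\1/p'
}
# Usage:
# id oracle | primary_group   # expect: oinstall
# id grid   | primary_group   # expect: oinstall
```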
1.8 Disable the Firewall and SELinux
Firewall:

[root@rac1 ~]# systemctl stop firewalld.service
[root@rac1 ~]# systemctl disable firewalld.service
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'

SELinux (set SELINUX=disabled; a reboot is required for this to take effect):

[root@rac1 ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

1.9 Configure Time Synchronization
Disable NTP:

[root@rac1 ~]# systemctl stop ntpd.service
[root@rac1 ~]# systemctl disable ntpd.service

Disable chrony:

[root@rac1 etc]# systemctl stop chronyd.service
[root@rac1 etc]# systemctl disable chronyd.service
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.

With no OS time service configured, the Oracle Cluster Time Synchronization Service (CTSS) will run in active mode and keep the cluster nodes synchronized.
1.10 Create Directories

mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle/product/12.2.0/dbhome_1
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
1.11 Configure User Environment Variables

1.11.1 oracle user

[root@rac1 ~]# cat /home/oracle/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

ORACLE_SID=cndba1; export ORACLE_SID
#ORACLE_SID=cndba2; export ORACLE_SID       # use on node 2
ORACLE_UNQNAME=cndba; export ORACLE_UNQNAME
JAVA_HOME=/usr/local/java; export JAVA_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1; export ORACLE_HOME
ORACLE_TERM=xterm; export ORACLE_TERM
NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"; export NLS_DATE_FORMAT
NLS_LANG=american_america.ZHS16GBK; export NLS_LANG
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11
PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin
export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib
LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib
export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib
CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib
export CLASSPATH
THREADS_FLAG=native; export THREADS_FLAG
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
1.11.2 grid user

[root@rac1 ~]# cat /home/grid/.bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export ORACLE_SID=+ASM1
#export ORACLE_SID=+ASM2       # use on node 2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.
export TEMP=/tmp
export TMP=/tmp
export TMPDIR=/tmp
umask 022
export PATH
1.12 Raise Resource Limits

1.12.1 Add the following to /etc/security/limits.conf

[root@rac1 ~]# vi /etc/security/limits.conf
grid    soft  nproc   2047
grid    hard  nproc   16384
grid    soft  nofile  1024
grid    hard  nofile  65536
grid    soft  stack   10240
grid    hard  stack   32768
oracle  soft  nproc   2047
oracle  hard  nproc   16384
oracle  soft  nofile  1024
oracle  hard  nofile  65536
oracle  soft  stack   10240
oracle  hard  stack   32768
oracle  soft  memlock 3145728
oracle  hard  memlock 3145728
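The memlock values are in KB, so 3145728 KB is 3 GB, sized here for locking the SGA (e.g. with HugePages). A common rule of thumb is to allow slightly less than physical RAM; a hypothetical sizing helper:

```shell
# Suggest a memlock limit (KB) as 90% of physical RAM.
# mem_kb would normally come from: awk '/MemTotal/ {print $2}' /proc/meminfo
memlock_kb() {
  local mem_kb=$1
  echo $(( mem_kb * 90 / 100 ))
}
memlock_kb 4194304   # e.g. a 4 GB node -> 3774873
```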



1.13 Configure NOZEROCONF
Add the following to /etc/sysconfig/network:

[root@rac1 ~]# vi /etc/sysconfig/network
NOZEROCONF=yes
1.14 Set Kernel Parameters

[root@rac1 ~]# vim /etc/sysctl.conf
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Load the new settings:
[root@rac1 ~]# sysctl -p
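After `sysctl -p` it is worth comparing what the file says against the running kernel. A tiny illustrative helper that pulls one key out of sysctl.conf-style text:

```shell
# Read one parameter's value from sysctl.conf-style text on stdin.
conf_value() {
  awk -F' *= *' -v k="$1" '$1 == k { print $2 }'
}
# Usage: compare the file value with the runtime value
# conf_value fs.aio-max-nr < /etc/sysctl.conf
# sysctl -n fs.aio-max-nr
```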
1.15 Install Required Packages
For yum repository configuration, see the "YUM repository setup on Linux" guide in the related forum posts.

yum install binutils compat-libstdc++-33 gcc gcc-c++ glibc glibc.i686 glibc-devel ksh libgcc.i686 libstdc++-devel libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel zlib-devel zlib-devel.i686 compat-libcap1 -y
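With a list this long it is easy to miss a package that failed to install. One way to confirm (a sketch; `missing_pkgs` is not a standard tool) is to filter `rpm -q` output for the not-installed lines:

```shell
# Print only the packages `rpm -q` reports as absent.
# `rpm -q pkg` prints "package pkg is not installed" for missing ones.
missing_pkgs() {
  grep 'is not installed' | awk '{ print $2 }'
}
# Usage on each RAC node:
# rpm -q binutils gcc gcc-c++ glibc ksh libaio libaio-devel make sysstat compat-libcap1 | missing_pkgs
```

No output means everything checked is installed.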
1.16 Install cvuqdisk
The cvuqdisk package lives under cv/rpm in the Oracle database installation media; extract the database media to find it:

export CVUQDISK_GRP=asmadmin
[root@rac1 rpm]# pwd
/software/database/rpm
[root@rac1 rpm]# ll
total 12
-rwxr-xr-x 1 root root 8860 Jan  5 17:36 cvuqdisk-1.0.10-1.rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm
Preparing...                          ################################# [100%]
Using default group oinstall to install package
Updating / installing...
   1:cvuqdisk-1.0.10-1                ################################# [100%]
[root@rac1 rpm]#

Copy the package to the other node and install it there as well.
1.17 Configure Shared Disks
Generate udev rules for the shared disks with the following script (the quotes and dollar signs inside the rule text must be escaped so they reach udev literally):

[root@rac1 ~]# for i in b c d e f ; do
echo "KERNEL==\"sd*\",ENV{DEVTYPE}==\"disk\",SUBSYSTEM==\"block\",PROGRAM==\"/usr/lib/udev/scsi_id -g -u -d \$devnode\",RESULT==\"`/usr/lib/udev/scsi_id -g -u /dev/sd$i`\", RUN+=\"/bin/sh -c 'mknod /dev/asmdisk$i b  \$major \$minor; chown grid:asmadmin /dev/asmdisk$i; chmod 0660 /dev/asmdisk$i'\""
done

The output:

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"

Create the rules file /etc/udev/rules.d/99-oracle-asmdevices.rules and add the lines above to it:

[root@rac1 rules.d]# cat 99-oracle-asmdevices.rules
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"
KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"
Apply the rules:

[root@rac1 ~]# /sbin/udevadm trigger --type=devices --action=change

If the permissions do not change, try rebooting.

[root@rac1 rules.d]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Mar 21 22:01 /dev/asmdiskb
brw-rw---- 1 grid asmadmin 8, 32 Mar 21 22:01 /dev/asmdiskc
brw-rw---- 1 grid asmadmin 8, 48 Mar 21 22:01 /dev/asmdiskd
brw-rw---- 1 grid asmadmin 8, 64 Mar 21 22:01 /dev/asmdiske
brw-rw---- 1 grid asmadmin 8, 80 Mar 21 22:01 /dev/asmdiskf
1.17.1 Set the Disk I/O Scheduler
(1) Set the scheduler to deadline:

echo deadline > /sys/block/sdb/queue/scheduler
echo deadline > /sys/block/sdc/queue/scheduler
echo deadline > /sys/block/sdd/queue/scheduler
echo deadline > /sys/block/sde/queue/scheduler
echo deadline > /sys/block/sdf/queue/scheduler

(2) Verify the change, for example:

[root@rac1 dev]# more /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq
[root@rac1 dev]# more /sys/block/sdc/queue/scheduler
noop anticipatory [deadline] cfq
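These echo commands do not survive a reboot, so re-apply them at boot (for example from /etc/rc.local or a udev rule). To script the verification, note that the active scheduler is the bracketed entry in the sysfs file; a small illustrative extractor:

```shell
# Print the active I/O scheduler from a /sys/block/*/queue/scheduler line,
# e.g. "noop anticipatory [deadline] cfq" -> "deadline"
active_sched() {
  grep -o '\[[a-z_-]*\]' | tr -d '[]'
}
# Usage:
# active_sched < /sys/block/sdb/queue/scheduler   # expect: deadline
```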
2 Install GRID
Download:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html
2.1 Upload and Extract the Media
Note: unlike earlier releases, 12cR2 GRID installs by direct extraction. Copy the installation media into the GRID home first, then unzip it there; the archive must be extracted inside the GRID home.

About Image-Based Oracle Grid Infrastructure Installation
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and configuration of Oracle Grid Infrastructure software is simplified with image-based installation.

[grid@rac1 ~]$ echo $ORACLE_HOME
/u01/app/12.2.0/grid
[grid@rac1 ~]$ cd $ORACLE_HOME
[grid@rac1 grid]$ ll linuxx64_12201_grid_home.zip
-rw-r--r-- 1 grid oinstall 2994687209 Mar 21 22:10 linuxx64_12201_grid_home.zip
[grid@rac1 grid]$
[grid@rac1 grid]$ unzip linuxx64_12201_grid_home.zip
After extraction, the software tree is already complete; all that remains is to run the setup script. There is no longer a separate installation copy phase.

[grid@rac1 grid]$ ll
total 2924572
drwxr-xr-x  2 grid oinstall        102 Jan 27 00:12 addnode
drwxr-xr-x 11 grid oinstall        118 Jan 27 00:10 assistants
drwxr-xr-x  2 grid oinstall       8192 Jan 27 00:12 bin
drwxr-xr-x  3 grid oinstall         23 Jan 27 00:12 cdata
drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 cha
drwxr-xr-x  4 grid oinstall         87 Jan 27 00:12 clone
drwxr-xr-x 16 grid oinstall        191 Jan 27 00:12 crs
drwxr-xr-x  6 grid oinstall         53 Jan 27 00:12 css
drwxr-xr-x  7 grid oinstall         71 Jan 27 00:10 cv
drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 dbjava
drwxr-xr-x  2 grid oinstall         22 Jan 27 00:11 dbs
drwxr-xr-x  2 grid oinstall         32 Jan 27 00:12 dc_ocm
drwxr-xr-x  5 grid oinstall        191 Jan 27 00:12 deinstall
drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 demo
drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 diagnostics
drwxr-xr-x  8 grid oinstall        179 Jan 27 00:11 dmu
-rw-r--r--  1 grid oinstall        852 Aug 19  2015 env.ora
drwxr-xr-x  7 grid oinstall         65 Jan 27 00:12 evm
drwxr-xr-x  5 grid oinstall         49 Jan 27 00:10 gpnp
2.2 Run the Installer
Run the setup script on node 1. It needs a graphical display, so use X forwarding (e.g. Xshell) or VNC.
(See the related post: Linux VNC installation and configuration.)


[grid@rac1 grid]$ pwd
/u01/app/12.2.0/grid
[grid@rac1 grid]$ ll *.sh
-rwxr-x--- 1 grid oinstall 5395 Jul 21  2016 gridSetup.sh
-rwx------ 1 grid oinstall  603 Jan 27 00:12 root.sh
-rwx------ 1 grid oinstall  612 Jan 27 00:12 rootupgrade.sh
-rwxr-x--- 1 grid oinstall  628 Sep  5  2015 runcluvfy.sh

[grid@rac1 grid]$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

(Installer screenshots omitted.)

Add the second node and configure SSH connectivity between the nodes.

Note: there is a new redundancy type, FLEX, and disk group space requirements are higher than before.
From the official documentation:
A FLEX redundancy disk group lets each database specify its own redundancy after the disk group is created; file redundancy can also be changed later. This disk group type supports Oracle ASM file groups and quota groups. A flex disk group requires at least three failure groups. With fewer than five failure groups it tolerates the loss of one failure group; otherwise it tolerates the loss of two. To create a flex disk group, the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 12.2 or higher.


If the prerequisite checks raise warnings about NTP, memory, or the avahi-daemon, they can be ignored.


Start the installation.


Run the root scripts:




[root@rac1 etc]# /u01/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-21_11-50-15PM.log
2017/03/21 23:50:20 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/03/21 23:50:20 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/03/21 23:50:53 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/03/21 23:50:53 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/03/21 23:50:57 CLSRSC-363: User ignored prerequisites during installation
2017/03/21 23:50:58 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/03/21 23:51:00 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/03/21 23:51:01 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/03/21 23:51:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/03/21 23:51:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/03/21 23:51:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/03/21 23:51:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/03/21 23:51:58 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/03/21 23:51:58 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/03/21 23:52:04 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/03/21 23:52:19 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/03/21 23:52:42 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/03/21 23:52:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing.
The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed
[root@rac1 etc]#

While running root.sh, the following message appeared:

2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing. The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed
Per the official documentation, you must reboot the server and then run the root scripts again. This takes a fair amount of time...

If you hit the error CLSRSC-1102: failed to start resource 'qosmserver', it may be that the node has too little memory, so there were not enough resources to start that service. Add memory and re-run the root.sh script.
A successful root.sh run ends like this:

CRS-6016: Resource auto-start has completed for server rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/21 14:12:39 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/03/21 14:12:39 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2017/03/21 14:16:10 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/03/21 14:17:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
That output indicates success.
The full log is long; it is reproduced here for reference:

[root@rac1 ~]# /u01/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-22_00-00-32AM.log
2017/03/22 00:00:37 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/03/22 00:00:37 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/03/22 00:00:37 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/03/22 00:00:37 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/03/22 00:00:40 CLSRSC-363: User ignored prerequisites during installation
2017/03/22 00:00:40 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/03/22 00:00:42 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2017/03/22 00:00:43 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/03/22 00:00:45 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/03/22 00:00:47 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/03/22 00:00:47 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/03/22 00:00:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/03/22 00:00:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/03/22 00:01:37 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2017/03/22 00:01:38 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/03/22 00:01:53 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2017/03/22 00:02:16 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/03/22 00:02:20 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/22 00:02:52 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2017/03/22 00:02:57 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170322AM120336.log for details.

2017/03/22 00:04:39 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 07f57bf9f7634f5abfb849735e86d3aa.
Successful addition of voting disk 3c930c3a19f34f25bfddc3a5a41bbb4e.
Successful addition of voting disk 4fab95ab67ed4f07bf4e9aa67e3e095e.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   07f57bf9f7634f5abfb849735e86d3aa (/dev/asmdiskb) [OCR]
 2. ONLINE   3c930c3a19f34f25bfddc3a5a41bbb4e (/dev/asmdiskd) [OCR]
 3. ONLINE   4fab95ab67ed4f07bf4e9aa67e3e095e (/dev/asmdiskc) [OCR]
Located 3 voting disk(s).
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2017/03/22 00:06:15 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'
CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac1
CRS-6016: Resource auto-start has completed for server rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/22 00:09:08 CLSRSC-343: Successfully started Oracle Clusterware stack
2017/03/22 00:09:08 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
2017/03/22 00:14:18 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2017/03/22 00:17:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac1 ~]#

2.3 Verify the Cluster

[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.OCR_VOTE.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
[grid@rac1 ~]$
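Beyond `crsctl stat res -t`, the commands `crsctl check cluster -all` and `olsnodes -n -s` confirm the clusterware stack and node state on every node. As a scripted smoke test, counting the ONLINE resource lines is a crude but quick signal (the `count_online` helper is just an illustration):

```shell
# Count lines where both Target and State are ONLINE
# in `crsctl stat res -t` output.
count_online() {
  grep -c 'ONLINE  ONLINE'
}
# Usage (as the grid user):
# crsctl stat res -t | count_online
# crsctl check cluster -all
# olsnodes -n -s
```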
3 Create Disk Groups with ASMCA
The interface is much cleaner than in previous releases.

(ASMCA screenshot omitted.)


4 Install the Database Software
This works the same way as in previous releases:
./runInstaller
The installation steps are omitted here; they are basically SSH setup, disk group selection, and so on.



The OS groups are now divided more finely, with a clearer separation of duties.

(Installer screenshots omitted.)


5 Create the Database with DBCA
(Steps omitted.)

6 Verification

6.1 Check the container database
SQL> select name,cdb from v$database;

NAME      CDB
--------  ---------
CNDBA     YES
6.2 Check the existing pluggable databases
SQL> col pdb_name for a30
SQL> select pdb_id,pdb_name,dbid,status,creation_scn from dba_pdbs;

    PDB_ID PDB_NAME                             DBID STATUS     CREATION_SCN
---------- ------------------------------ ---------- ---------- ------------
         3 lei                            3459708341 NORMAL          1456419
         2 PDB$SEED                       3422473700 NORMAL          1408778



