How To Change the Public IP / Network For an Existing ODA HA After Deployment (DCS) (Doc ID 2638458.1)

In this Document
Goal
Solution

Applies to:
Oracle Database Appliance - Version All Versions and later
Information in this document applies to any platform.
Goal
You need to change the public IP address of an ODA after deployment.
What is the supported way to change an existing ODA network config, within ODA v18.x using the DCS stack?
Use Case
"...We have two Oracle Database Appliances using the DCS stack.
While both ODAs are up and running and working fine, the two machines are currently in the same computer room, and one of them has to move to a different (physical) location.
Not only will the physical location change, but also the network (IP).
What is the procedure?
..."
Solution
The solution below addresses the requirement to change the IPs/subnet/gateway on ODA HA models.
This note doesn't cover ODAs configured with tagged VLANs.
Please verify that the new IPs/subnet/gateway are correct before starting.
Take a backup of the ODA with ODABR on both nodes
Refer to Note 2466177.1 - ODA (Oracle Database Appliance): ODABR a System Backup/Restore Utility
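For reference, a minimal ODABR snapshot backup might look like the following; this sketch assumes ODABR is installed under /opt/odabr as described in Note 2466177.1, and the exact syntax may vary by ODABR version.
as root OS user, on both nodes
# /opt/odabr/odabr backup -snap
# /opt/odabr/odabr infosnap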
Verify that the clusterware is up
   
as root OS user
# /u01/app/18.0.0.0/grid/bin/crsctl check cluster -all
**************************************************************
[hostname1]:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[hostname2]:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Check the current interface configuration of the Grid Infrastructure
   
as grid OS user
$ /u01/app/18.0.0.0/grid/bin/oifcfg getif
priv0 192.168.16.16 global cluster_interconnect,asm
btbond1 <old subnet> global public

Add the new subnet to the configuration
   
as grid OS user
$ /u01/app/18.0.0.0/grid/bin/oifcfg setif -global btbond1/<new subnet>:public

Remove the old subnet
as grid OS user
$ /u01/app/18.0.0.0/grid/bin/oifcfg delif -global btbond1/<old subnet>
  
Verify the changes
   
as grid OS user
$ /u01/app/18.0.0.0/grid/bin/oifcfg getif
priv0 192.168.16.16 global cluster_interconnect,asm
btbond1 <new subnet> global public

Stop the Grid Infrastructure
   
as root OS user
# /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
  
Change the hostip/subnet mask/gateway in the interface configuration file on both nodes
   
as root OS user
# vi /etc/sysconfig/network-scripts/ifcfg-btbond1
# This file is automatically created by the ODA software.
DEVICE=btbond1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
TYPE=BOND
IPV6INIT=no
NM_CONTROLLED=no
PEERDNS=no
BONDING_OPTS="mode=active-backup miimon=100 primary=p7p1"
IPADDR=<new hostip of the local host>
NETMASK=<new netmask>
GATEWAY=<new gateway>

Restart the network on both nodes. Please note that you will lose connectivity to the ODA at this point
   
as root OS user
# service network restart

Ask the network administrators to implement all the network related changes on switch and DNS side
Reconnect to the host via an ssh terminal
Verify that the new IP/subnet/gateway is configured on both nodes
   
as root OS user
# ifconfig -a
# netstat -rn
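As an additional sanity check, you can confirm that the new gateway is reachable from each node; <new gateway> is the placeholder used throughout this note.
as root OS user
# ping -c 3 <new gateway>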
  
Change the DNS configuration on both nodes if required
   
# vi /etc/resolv.conf
search mydomain.com
nameserver <ip of dns1>
nameserver <ip of dns2>
nameserver <ip of dns3>
Test the DNS resolution
# nslookup <hostname1>
Server: <DNS server>
Address: <DNS server>#53
Name: <hostname1>.mydomain.com
Address: <new hostip1>

Verify that DNS and reverse DNS lookups return the right IPs, hostnames, VIPs, SCAN.
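For example, forward and reverse lookups for node 1, a VIP, and the SCAN could be checked like this (placeholders as used throughout this note); running nslookup against an IP address performs the reverse lookup.
# nslookup <vipname1>
# nslookup <scan name>
# nslookup <new hostip1>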

Change NTP configuration on both nodes if required
   
as root OS user
# vi /etc/ntp.conf
server <ip of ntp1> prefer
server <ip of ntp2>
Validate ntp resolution
# service ntpd stop
Shutting down ntpd: [ OK ]
Use "-q" for the test, as it only queries the time without setting it
# ntpdate -q <ip of ntp1>
server <ip of ntp1>, stratum 2, offset -0.010862, delay 0.02570
11 Feb 21:23:18 ntpdate[85613]: adjust time server <ntp1> offset -0.010862 sec
# service ntpd start
Starting ntpd: [ OK ]
   
Change the IPs in /etc/hosts on both ODA nodes
as root OS user
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 oak1
<new hostip1> <hostname1>.mydomain.com <hostname1>
192.168.16.24 <hostname1>-priv.mydomain.com <hostname1>-priv
<new vip1> <vipname1>.mydomain.com <vipname1>
<new hostip2> <hostname2>.mydomain.com <hostname2>
192.168.16.25 <hostname2>-priv.mydomain.com <hostname2>-priv
<new vip2> <vipname2>.mydomain.com <vipname2>
Please note that 127.0.0.1 is oak1 on node0 and oak2 on node1 in the /etc/hosts file
  
Remove known_hosts file for grid and oracle users on all ODA nodes
   
as root OS user
# rm ~oracle/.ssh/known_hosts
# rm ~grid/.ssh/known_hosts
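Afterwards you may want to re-validate passwordless ssh between the nodes so the new host keys get accepted; this is a simple sketch, and the CVU step further below re-establishes full user equivalence anyway.
as root OS user
# su - grid -c "ssh <hostname2> hostname"
# su - oracle -c "ssh <hostname2> hostname"
Accept the new host key when prompted.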
  
Start the Grid Infrastructure
as root OS user
# /u01/app/18.0.0.0/grid/bin/crsctl start cluster -all
  
Change the VIPs for both nodes
as root OS user
Check the current configuration
# /u01/app/18.0.0.0/grid/bin/srvctl config nodeapps -viponly
Network 1 exists
Subnet IPv4: <old subnet>/<old netmask>/btbond1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node <hostname1>
VIP Name: <vipname1>
VIP IPv4 Address: <old vip1>
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node <hostname2>
VIP Name: <vipname2>
VIP IPv4 Address: <old vip2>
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
Change the VIPs
# /u01/app/18.0.0.0/grid/bin/srvctl modify nodeapps -node <hostname1> -address <vipname1>/<new netmask>/btbond1
# /u01/app/18.0.0.0/grid/bin/srvctl modify nodeapps -node <hostname2> -address <vipname2>/<new netmask>/btbond1
Verify the changes
# /u01/app/18.0.0.0/grid/bin/srvctl config nodeapps -viponly
Network 1 exists
Subnet IPv4: <new subnet>/<new netmask>/btbond1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
VIP exists: network number 1, hosting node <hostname1>
VIP Name: <vipname1>
VIP IPv4 Address: <new vip1>
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 1, hosting node <hostname2>
VIP Name: <vipname2>
VIP IPv4 Address: <new vip2>
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
  
Verify the configuration of the network resources.
   
# /u01/app/18.0.0.0/grid/bin/srvctl config network
Network 1 exists
Subnet IPv4: <new subnet>/<new netmask>/btbond1, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
  
Change the SCAN IP(s)  
# /u01/app/18.0.0.0/grid/bin/srvctl config scan
SCAN name: <scan name>, Network: 1
Subnet IPv4: <old subnet>/<old netmask>/btbond1, static
Subnet IPv6:
SCAN 1 IPv4 VIP: <old scanip1>
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
Change the SCAN IPs. The modify command re-resolves the SCAN name in DNS, so make sure DNS already returns the new SCAN IPs.
# /u01/app/18.0.0.0/grid/bin/srvctl modify scan -scanname <scan name> -netnum 1
# /u01/app/18.0.0.0/grid/bin/srvctl config scan
SCAN name: <scan name>, Network: 1
Subnet IPv4: <new subnet>/<new netmask>/btbond1, static
Subnet IPv6:
SCAN 1 IPv4 VIP: <new scanip1>
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
   
You might need to change the configuration files under /u01/app/18.0.0.0/grid/network/admin
   
as grid OS user
$ cd /u01/app/18.0.0.0/grid/network/admin
$ ls -l
total 24
-rw-r--r-- 1 grid oinstall 504 Feb 10 18:38 listener.ora
drwxr-xr-x 2 grid oinstall 4096 Feb 7 2018 samples
-rw-r--r-- 1 grid oinstall 1441 Aug 26 2015 shrept.lst
-rw-r----- 1 grid oinstall 178 Feb 10 18:38 sqlnet.ora
$ more listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))
ASMNET1LSNR_ASM=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=ASMNET1LSNR_ASM))))
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_ASMNET1LSNR_ASM=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_ASMNET1LSNR_ASM=SUBNET    # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1=OFF    # line added by Agent - Disabled by Agent because REMOTE_REGISTRATION_ADDRESS is set
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON    # line added by Agent
VALID_NODE_CHECKING_REGISTRATION_LISTENER=SUBNET    # line added by Agent
$ more sqlnet.ora
# sqlnet.ora Network Configuration File: /u01/app/18.0.0.0/grid/network/admin/sqlnet.ora
# Generated by Oracle configuration tools.
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
  
With the default configuration you don’t have to touch anything.
Restart the Grid Infrastructure to verify that all resources can startup fine
   
as root OS user
# /u01/app/18.0.0.0/grid/bin/crsctl stop cluster -all
# /u01/app/18.0.0.0/grid/bin/crsctl start cluster -all
# /u01/app/18.0.0.0/grid/bin/crsctl stat res -t
Please keep in mind that it might take some time for the clusterware to start up all the resources, including the databases.
Run CVU to verify the health of the Grid Infrastructure
as grid OS user
Set up ssh user equivalence
$ /u01/app/18.0.0.0/grid/oui/prov/resources/scripts/sshUserSetup.sh -user grid -hosts "<hostname1> <hostname2>" -noPromptPassphrase -confirm -advanced
Run CVU
$ cluvfy stage -post crsinst -n all -verbose
Post-check for cluster services setup was successful.
Warnings were encountered during execution of CVU verification request "stage -post crsinst".
Verifying OCR Integrity ...WARNING
PRVG-6017 : OCR backup is located in the same disk group "+DATA" as OCR.
Verifying Single Client Access Name (SCAN) ...WARNING
[hostname1]: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
addresses, but SCAN "<scan name>" resolves to only
"/<SCAN IP(s)>"
[hostname2]: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP
addresses, but SCAN "<scan name>" resolves to only
"/<SCAN IP(s)>"
CVU operation performed: stage -post crsinst
Date: Feb 12, 2020 8:38:16 PM
CVU home: /u01/app/18.0.0.0/grid/
User: grid
  
The above warnings are expected.
Verify the connectivity to your database(s). You might need to change the configuration files under the RDBMS home's network/admin folder
   
as oracle OS user
$ export ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_SID=mydb1
$ cat /u01/app/oracle/product/18.0.0.0/dbhome_1/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/18.0.0.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.
MYDB =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = <scan name>)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = mydb.mydomain.com)
)
)
With the default configuration there is no need to change anything in the tnsnames.ora
$ srvctl status database -database mydb
Instance mydb1 is running on node <hostname1>
Instance mydb2 is running on node <hostname2>
$ sqlplus sys@mydb as sysdba
SQL*Plus: Release 18.0.0.0.0 - Production on Tue Feb 11 21:36:22 2020
Version 18.8.0.0.0
Copyright (c) 1982, 2018, Oracle. All rights reserved.
Enter password:
Connected to:
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.8.0.0.0
SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.8.0.0.0

Query the current network configuration in the Derby database (the DCS metadata repository)
   
as root OS user
# odacli list-networks -u 0
ID Name NIC InterfaceType IP Address Subnet Mask Gateway VlanId
---------------------------------------- -------------------- ---------- ---------- ------------------ ------------------ ------------------ ----------
5af19d28-a389-4261-9300-b9a11c66e60e Private-network icbond0 BOND 192.168.16.24 255.255.255.0
b1903d4d-447d-4a35-b811-293039fdb7fd Public-network btbond1 BOND <old hostip1> <old netmask> <old gateway>
# odacli list-networks -u 1
ID Name NIC InterfaceType IP Address Subnet Mask Gateway VlanId
---------------------------------------- -------------------- ---------- ---------- ------------------ ------------------ ------------------ ----------
ce32e105-7bd3-4b09-a74a-38f52b73d4b9 Private-network icbond0 BOND 192.168.16.25 255.255.255.0
d3b90af7-53ad-4b0b-a6a0-3888a7dd918f Public-network btbond1 BOND <old hostip2> <old netmask> <old gateway>

Download the zip file for Derby 10.14.2.0 from the following link and extract it under /tmp on both nodes
http://db.apache.org/derby/derby_downloads.html
  
# unzip -d /tmp db-derby-10.14.2.0-lib.zip
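A quick check that the three jar files needed by the ij tool were extracted (paths as used in the connect step below):
# ls /tmp/db-derby-10.14.2.0-lib/lib/derby.jar /tmp/db-derby-10.14.2.0-lib/lib/derbytools.jar /tmp/db-derby-10.14.2.0-lib/lib/derbyclient.jar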

Stop the DCS agent on both nodes
   
as root OS user
# initctl stop initdcsagent
initdcsagent stop/waiting
  
Take a backup of the Derby db on both nodes
as root OS user
node0:
# cp -r /opt/oracle/dcs/repo/node_0/ /opt/oracle/dcs/repo/node_0.bkp
node1:
# cp -r /opt/oracle/dcs/repo/node_1/ /opt/oracle/dcs/repo/node_1.bkp
  
Connect to the Derby DB and change the IP/subnet/gateway/VIPs/SCANs on both nodes
   
as root OS user
# cd /opt/oracle/dcs/repo/
# java -cp /tmp/db-derby-10.14.2.0-lib/lib/derby.jar:/tmp/db-derby-10.14.2.0-lib/lib/derbytools.jar:/tmp/db-derby-10.14.2.0-lib/lib/derbyclient.jar org.apache.derby.tools.ij
ij version 10.14
on node0:
ij> connect 'jdbc:derby:node_0';
on node1:
ij> connect 'jdbc:derby:node_1';
Change the public IP, subnet, gateway
ij> select ipaddress,subnetmask,gateway,NICNAME,NODENUMBER from network;
IPADDRESS |SUBNETMASK |GATEWAY |NICNAME |NODENUMBER
-----------------------------------------------------------------------
192.168.16.25 |255.255.255.0 | |icbond0 |1
192.168.16.24 |255.255.255.0 | |icbond0 |0
<old hostip1> |<old netmask> |<new gateway> |btbond1 |0
<old hostip2> |<old netmask> |<new gateway> |btbond1 |1
Update the IP/network/gateway
ij> update network set IPADDRESS='<new hostip1>',subnetmask='<new netmask>',gateway='<new gateway>' where NICNAME='btbond1' and NODENUMBER='0';
ij> update network set IPADDRESS='<new hostip2>',subnetmask='<new netmask>',gateway='<new gateway>' where NICNAME='btbond1' and NODENUMBER='1';
Verify the changes
ij> select ipaddress,subnetmask,gateway,NICNAME,NODENUMBER from network;
IPADDRESS |SUBNETMASK |GATEWAY |NICNAME |NODENUMBER
-------------------------------------------------------------------------
192.168.16.25 |255.255.255.0 | |icbond0 |1
192.168.16.24 |255.255.255.0 | |icbond0 |0
<new hostip1> |<new netmask> |<new gateway> |btbond1 |0
<new hostip2> |<new netmask> |<new gateway> |btbond1 |1
Change SCAN IPs
ij> select * from NETWORK_IPADDRESSES ;
NETWORK_ID |IPADDRESSES
---------------------------------------------------
44009d74-f56a-4382-821d-a42d22ac5451 |<old scanip2>
44009d74-f56a-4382-821d-a42d22ac5451 |<old scanip1>
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<old scanip2>
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<old scanip1>
ij> update NETWORK_IPADDRESSES set IPADDRESSES='<new scanip1>' where IPADDRESSES='<old scanip1>';
ij> update NETWORK_IPADDRESSES set IPADDRESSES='<new scanip2>' where IPADDRESSES='<old scanip2>';
ij> select * from NETWORK_IPADDRESSES ;
NETWORK_ID |IPADDRESSES
---------------------------------------------------
44009d74-f56a-4382-821d-a42d22ac5451 |<new scanip2>
44009d74-f56a-4382-821d-a42d22ac5451 |<new scanip1>
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<new scanip2>
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<new scanip1>
Change VIPs
ij> select * from VIPS;
NW_ID |IPADDRESS |NODENUMBER |VIPNAME
------------------------------------------------------------------------
44009d74-f56a-4382-821d-a42d22ac5451 |<old vip1> |0 |vipname1
44009d74-f56a-4382-821d-a42d22ac5451 |<old vip2> |1 |vipname2
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<old vip1> |0 |vipname1
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<old vip2> |1 |vipname2
ij> update VIPS set IPADDRESS='<new vip1>' where IPADDRESS='<old vip1>';
ij> update VIPS set IPADDRESS='<new vip2>' where IPADDRESS='<old vip2>';
ij> select * from VIPS;
NW_ID |IPADDRESS |NODENUMBER |VIPNAME
------------------------------------------------------------------------
44009d74-f56a-4382-821d-a42d22ac5451 |<new vip1> |0 |vipname1
44009d74-f56a-4382-821d-a42d22ac5451 |<new vip2> |1 |vipname2
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<new vip1> |0 |vipname1
16bf166b-ae0a-4cca-9e5e-a42b8a7edc2b |<new vip2> |1 |vipname2
ij> commit;
ij> quit;

Start the DCS agent on both nodes
   
# initctl start initdcsagent
initdcsagent start/running, process 89333
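You can confirm the agent state with upstart; the expected state is start/running.
# initctl status initdcsagent
initdcsagent start/running, process 89333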
  
Validate the Derby update by using "odacli list-networks"
   
as root OS user
# odacli list-networks -u 0
ID Name NIC InterfaceType IP Address Subnet Mask Gateway VlanId
---------------------------------------- -------------------- ---------- ---------- ------------------ ------------------ ------------------ ----------
5af19d28-a389-4261-9300-b9a11c66e60e Private-network icbond0 BOND 192.168.16.24 255.255.255.0
b1903d4d-447d-4a35-b811-293039fdb7fd Public-network btbond1 BOND <new hostip1> <new netmask> <new gateway>
# odacli list-networks -u 1
ID Name NIC InterfaceType IP Address Subnet Mask Gateway VlanId
---------------------------------------- -------------------- ---------- ---------- ------------------ ------------------ ------------------ ----------
ce32e105-7bd3-4b09-a74a-38f52b73d4b9 Private-network icbond0 BOND 192.168.16.25 255.255.255.0
d3b90af7-53ad-4b0b-a6a0-3888a7dd918f Public-network btbond1 BOND <new hostip2> <new netmask> <new gateway>
  
Sync up the DNS and NTP related metadata in the Derby database in case any of them has changed. Run it only on node0.
as root OS user
Update the registry (only available in 18.7)
# odacli update-registry -n system -f
Check the configuration before and after the registry update
# odacli describe-system
System Information
----------------------------------------------------------------
Name: testoda-c
Domain Name: mydomain.com
Time Zone: "Europe/Budapest"
DB Edition: EE
DNS Servers: <dns1> <dns2> <dns3>
NTP Servers: <ntp1> <ntp2>
  
Open the BUI and try to create a dummy database to validate that the DCS stack works with the new network configuration (see the command-line sketch below)
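As a command-line alternative, something like the following could be used to create and drop a dummy database; treat this as a sketch, since the exact odacli options depend on the DCS version (check odacli create-database -h).
as root OS user
# odacli list-dbhomes
# odacli create-database -n dummy -dh <dbhome id>
# odacli describe-job -i <job id>
# odacli delete-database -in dummy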
Change the subnet/netmask in the crsconfig_params file on both nodes
as root OS user
# vi /u01/app/18.0.0.0/grid/crs/install/crsconfig_params
CRS_NODEVIPS='<vipname1>.mydomain.com/<new netmask>/btbond1,<vipname2>.mydomain.com/<new netmask>/btbond1'
NETWORKS="icbond0"/192.168.16.0:asm,"icbond0"/192.168.16.0:cluster_interconnect,"btbond1"/<new subnet>:public
...
NEW_NODEVIPS='<vipname1>.mydomain.com/<new netmask>/btbond1,<vipname2>.mydomain.com/<new netmask>/btbond1'
Any additional interfaces that require IP/subnet/gateway changes have to be reconfigured with the ODA tooling; a sketch follows below.
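A sketch of such a reconfiguration with odacli is shown here; the network id comes from odacli list-networks, and the exact flags depend on the DCS version (check odacli update-network -h).
as root OS user
# odacli list-networks
# odacli update-network -i <network id> -p <new ip> -s <new netmask> -g <new gateway>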