Oracle 12c Flex Cluster Series: Node Role Conversion
Woqu Technology (沃趣科技), Zhou Tianpeng
In my previous translated article introducing the Leaf Node, I wrote:
**Although a leaf node is not required to have direct access to shared storage, it is still best to connect it to shared storage, because that leaf node may well need to be converted into a hub node some day.**
That statement is not entirely accurate. In 12cR1 a leaf node cannot run even a read-only database instance, so leaving it disconnected from shared storage does not affect its use at all. In 12cR2, however, a leaf node can run read-only database instances, and once a database runs on it, the leaf node (strictly speaking, it should then be called a reader node) must be connected to shared storage.
This article walks through converting a node's role between hub node and leaf node. Since my test environment already contains a leaf node, I will start with the leaf-to-hub conversion.
Initial state:
```
[root@rac1 ~]# crsctl get cluster mode status
Cluster is running in "flex" mode
[root@rac1 ~]# srvctl status srvpool -detail
Server pool name: Free
Active servers count: 0
Active server names:
Server pool name: Generic
Active servers count: 0
Active server names:
Server pool name: RF1POOL
Active servers count: 1
Active server names: rac3
NAME=rac3 STATE=ONLINE
Server pool name: ztp_pool
Active servers count: 2
Active server names: rac1,rac2
NAME=rac1 STATE=ONLINE
NAME=rac2 STATE=ONLINE
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
```
# Converting a leaf node to a hub node
A database named orcl runs on this cluster. Before converting the role, check its status:
```
ora.orcl.db
      1        ONLINE  ONLINE       rac3                     Open,Readonly,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      3        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
```
Clearly, because rac3 is currently a leaf node, the database instance on rac3 can only be opened read-only.
The following command converts rac3 from a leaf node to a hub node:
**crsctl set node role {hub | leaf}**
```
[root@rac3 ~]# crsctl set node role hub
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
```
Check the role information of each node:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub', but active role is 'leaf'.
Restart Oracle High Availability Services for the new role to take effect.
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf', but configured role is 'hub'.
Restart Oracle High Availability Services for the new role to take effect.
```
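This transitional state can also be detected mechanically: crsctl itself flags the mismatch with a ", but configured role is" clause. A minimal sketch that parses that output, assuming the exact wording shown above (the helper name `role_mismatch_nodes` is my own, not Oracle tooling):

```shell
# Print nodes whose active role differs from the configured role.
# Reads the output of `crsctl get node role status -all` on stdin and
# relies on the ", but configured role is" wording shown above.
role_mismatch_nodes() {
    awk -F"'" '/but configured role is/ { print $2 }'
}

# Usage (on a cluster node):
#   crsctl get node role status -all | role_mismatch_nodes
```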
As the command output indicates, the node's CRS stack must be restarted before the configuration takes effect; in other words, **the role conversion cannot be done online.**
Stop the CRS stack on rac3:
```
[root@rac3 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac3'
CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac3'
CRS-2677: Stop of 'ora.orcl.db' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.LISTENER_LEAF.lsnr' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_LEAF.lsnr' on 'rac3' succeeded
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac2'
CRS-2676: Start of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac3'
CRS-2677: Stop of 'ora.net1.network' on 'rac3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac3'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
CRS-2677: Stop of 'ora.driver.afd' on 'rac3' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
```
Check each node's role information again:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
```
Start the CRS stack on rac3:
```
[root@rac3 ~]# crsctl start crs -wait
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac3'
CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac3'
CRS-2676: Start of 'ora.drivers.acfs' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-6017: Processing resource auto-start for servers: rac3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3'
CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
CRS-2672: Attempting to start 'ora.ons' on 'rac3'
CRS-2677: Stop of 'ora.rac3.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac3'
CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac3'
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3' succeeded
CRS-2676: Start of 'ora.rac3.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac3'
CRS-2676: Start of 'ora.ons' on 'rac3' succeeded
CRS-2676: Start of 'ora.scan2.vip' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac3'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac3' succeeded
CRS-2679: Attempting to clean 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac3' succeeded
CRS-2681: Clean of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac3'
CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac3'
CRS-2672: Attempting to start 'ora.FLEXDG.dg' on 'rac3'
CRS-2676: Start of 'ora.FLEXDG.dg' on 'rac3' succeeded
CRS-2676: Start of 'ora.DATA.dg' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.orcl.db' on 'rac3'
CRS-2672: Attempting to start 'ora.prod1.db' on 'rac3'
CRS-2676: Start of 'ora.orcl.db' on 'rac3' succeeded
CRS-2676: Start of 'ora.prod1.db' on 'rac3' succeeded
CRS-6016: Resource auto-start has completed for server rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
```
After startup completes, check each node's role information once more:
```
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub'
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'hub'
```
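The conversion just performed is mechanical enough to script. Below is a hedged sketch of the three-step procedure (set role, restart CRS, verify); `convert_node_role` and its `DRY_RUN` switch are my own helpers, not Oracle tooling, and the real commands must be run as root on the node being converted:

```shell
# run: execute a command, or just print it when DRY_RUN=1.
run() {
    if [ "${DRY_RUN:-0}" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# convert_node_role hub|leaf -- sketch of the offline role conversion.
convert_node_role() {
    role=$1
    case $role in
        hub|leaf) ;;
        *) echo "role must be hub or leaf" >&2; return 1 ;;
    esac
    # 1. Change the configured role; it takes effect only after a restart.
    run crsctl set node role "$role"
    # 2. Restart Oracle High Availability Services on this node.
    run crsctl stop crs
    run crsctl start crs -wait
    # 3. Verify that the active role now matches the configured role.
    run crsctl get node role status
}
```

A dry run (`DRY_RUN=1 convert_node_role leaf`) prints the commands without touching the cluster.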
Now observe the state of the whole cluster:
```
[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.FLEXDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.OCR.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
               OFFLINE OFFLINE      rac3                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac3                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        ONLINE  ONLINE       rac3                     Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.orcl.db
      1        ONLINE  ONLINE       rac3                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      3        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
ora.prod1.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      3        ONLINE  ONLINE       rac3                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac3.vip
      1        ONLINE  ONLINE       rac3                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac3                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
```
The orcl instance on rac3 is now open in read-write mode, instead of the earlier Open,Readonly.
# Converting a hub node to a leaf node
**In 12cR2, to set a node's role to leaf node, the cluster's SCAN must be resolved through GNS.**
The cluster status above shows that my test environment does have GNS configured (the ora.gns resource is online). If GNS were not configured, the crsctl set node role leaf command would fail with an error.
```
[root@rac3 ~]# crsctl set node role leaf
CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.
```
As before, rac3 requires a CRS restart for the configuration to take effect.
(The restart output is omitted.)
After the restart, the role information of each node is as follows:
```
[root@rac1 ~]# crsctl get node role status -all
Node 'rac1' active role is 'hub'
Node 'rac2' active role is 'hub'
Node 'rac3' active role is 'leaf'
[root@rac1 ~]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
```
The state of the whole cluster is now:
```
[root@rac1 ~]# crsctl status res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.DATA.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.FLEXDG.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.LISTENER_LEAF.lsnr
               OFFLINE OFFLINE      rac3                     STABLE
ora.OCR.dg
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
               ONLINE  ONLINE       rac3                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.proxy_advm
               OFFLINE OFFLINE      rac1                     STABLE
               OFFLINE OFFLINE      rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.MGMTLSNR
      1        OFFLINE OFFLINE                               STABLE
ora.asm
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        ONLINE  OFFLINE                               Instance Shutdown,STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gns
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gns.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.orcl.db
      1        ONLINE  ONLINE       rac3                     Open,Readonly,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      3        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
ora.prod1.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
      3        ONLINE  OFFLINE                               Instance Shutdown,STABLE
ora.qosmserver
      1        OFFLINE OFFLINE                               STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac3.vip
      1        ONLINE  ONLINE       rac3                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
```
Notice that after rac3 switched to a leaf node, a new resource, ora.LISTENER_LEAF.lsnr, appeared; the ASM instance on rac3 no longer starts, and the database instance is once again opened read-only.
One thing to note: a read-only database instance on a leaf node registers its services with the LISTENER_LEAF listener, not with LISTENER, so the output of lsnrctl status never shows any registered services.
```
[root@rac3 ~]# srvctl start listener -listener LISTENER_LEAF
[grid@rac3 ~]$ lsnrctl status
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:01
Copyright (c) 1991, 2016, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                27-JUL-2017 16:24:27
Uptime                    0 days 0 hr. 21 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.186)(PORT=1521)))
The listener supports no services
The command completed successfully
[grid@rac3 ~]$ lsnrctl status listener_leaf
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:02
Copyright (c) 1991, 2016, Oracle.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_LEAF)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER_LEAF
Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date                27-JUL-2017 16:44:31
Uptime                    0 days 0 hr. 1 min. 31 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener_leaf/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_LEAF)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1525)))
Services Summary...
Service "5491bed1838610f0e05366460a0a5736" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "5507ca8c0abd4747e05365460a0a8d01" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orcl" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orclXDB" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
Service "ztp" has 1 instance(s).
Instance "orcl_1", status READY, has 1 handler(s) for this service...
The command completed successfully
```
Finally, note that the default listener port on a leaf node is 1525.
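Because the reader-node listener sits on this non-default port, clients that want to reach the read-only instance directly must target it explicitly. A hypothetical tnsnames.ora entry for doing so; the alias ORCL_LEAF and host rac3-vip are illustrative, while port 1525 and service orcl come from the listener output above:

```
ORCL_LEAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac3-vip)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```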
# Conclusions
* Converting a node's role requires restarting CRS on that node.
* In 12cR2, converting a node to a leaf node requires GNS to be configured.
* The ASM instance on a leaf node does not start, and database instances there can only open read-only.
* 12cR1 additionally required a manual inventory update; 12cR2 no longer does, which greatly simplifies role changes.