Oracle 19c RAC on Linux 7.6安装手册
Table of Contents
Oracle 19c RAC on Linux Installation Guide
Notes
1 OS Environment Checks
2 Disabling THP and Enabling HugePages
2.1 Disabling Transparent HugePages
2.2 Enabling HugePages
3 Installing Software Packages
3.1 Red Hat Enterprise Linux 7 Packages
3.2 Other Packages
4 Kernel Parameters
4.1 Configuring Kernel Parameters with the Preinstall RPM
4.2 Configuring Parameters Manually
4.3 CVU (optional)
5 Network Configuration
5.1 Static Configuration
5.2 GNS + Static Configuration
6 Other Configuration
6.1 Miscellaneous OS Configuration
6.2 Clock Synchronization
6.3 Additional Configuration for NAS Storage
6.4 I/O Scheduler
6.5 SSH Timeout Limit
6.6 Users, Groups, and Directories
6.7 GUI Configuration
6.8 limits.conf
6.9 Disabling X11 Forwarding
6.10 Direct NFS
6.11 Oracle Member Cluster
6.12 Configuring ASM Disks Manually with UDEV
7 gridSetup.sh
7.1 gridSetup.sh
7.2 runInstaller
7.3 Patching 19.3 to 19.5.1
7.4 DBCA
Oracle 19c RAC on Linux Installation Guide

Notes

Starting with Oracle Grid Infrastructure 12c Release 1 (12.1), as part of an Oracle Flex Cluster installation, Oracle ASM is configured within Oracle Grid Infrastructure to provide storage services.
Starting with Oracle Grid Infrastructure 19c (19.3), with Oracle Standalone Clusters, you can again place OCR and voting disk files directly on shared file systems.
Oracle Flex Clusters
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle Grid Infrastructure cluster configurations are Oracle Flex Cluster deployments.
Starting with 12.2, cluster deployments come in two modes: Standalone Cluster and Domain Services Cluster.
Standalone Cluster:
• Supports up to 64 nodes
• Every node is directly attached to the shared storage
• Each node mounts the shared storage through its own ASM instance or a shared file system
• GIMR is managed locally
• In 19c, configuring a GIMR for a Standalone Cluster is optional
• VIPs and the SCAN can be configured through GNS, or assigned manually
Domain Services Cluster:
• One or more nodes form the Domain Services Cluster (DSC)
• One or more nodes form a Database Member Cluster
• (Optional) One or more nodes form an Application Member Cluster
• A centralized Grid Infrastructure Management Repository (providing an MGMTDB for each cluster in the Oracle Cluster Domain)
• A Trace File Analyzer (TFA) service, for targeted diagnostic data collection for Oracle Clusterware and Oracle Database
• Consolidated Oracle ASM storage management services
• An optional Rapid Home Provisioning (RHP) service for installing clusters, and for provisioning, patching, and upgrading Oracle Grid Infrastructure and Oracle Database homes. When you configure an Oracle Domain Services Cluster, you can also optionally configure a Rapid Home Provisioning Server.
These centralized services can be consumed by the member clusters in the Cluster Domain (Database Member Clusters or Application Member Clusters).
Storage access in a Domain Services Cluster:
ASM on the DSC provides centralized storage management services. A Member Cluster can access the shared storage on the DSC in either of two ways:
• Directly, through a physical connection to the shared storage
• Over a network path, using the ASM IO Service
All nodes of a single Member Cluster must access the shared storage the same way. One Domain Services Cluster can serve multiple Member Clusters.
1 OS Environment Checks

Item: RAM
Requirement: at least 8 GB
Check: # grep MemTotal /proc/meminfo

Item: Run level
Requirement: 3 or 5
Check: # runlevel

Item: Linux version
Requirement:
• Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 4: 4.1.12-112.16.7.el7uek.x86_64 or later
• Oracle Linux 7.4 with the Unbreakable Enterprise Kernel 5: 4.14.35-1818.1.6.el7uek.x86_64 or later
• Oracle Linux 7.4 with the Red Hat Compatible kernel: 3.10.0-693.5.2.0.1.el7.x86_64 or later
• Red Hat Enterprise Linux 7.4: 3.10.0-693.5.2.0.1.el7.x86_64 or later
• SUSE Linux Enterprise Server 12 SP3: 4.4.103-92.56-default or later
Check: # uname -mr
       # cat /etc/redhat-release

Item: /tmp
Requirement: at least 1 GB free
Check: # df -h /tmp

Item: Swap
Requirement: RAM between 4 GB and 16 GB: swap equal to RAM; RAM above 16 GB: 16 GB of swap. If HugePages are enabled, subtract the memory allocated to HugePages from RAM before sizing swap.
Check: # grep SwapTotal /proc/meminfo
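The swap sizing rule above can be sketched as a small helper. This is illustrative only; the function name and GB-based units are assumptions, not Oracle tooling:

```shell
#!/bin/sh
# Recommended swap size per the rule above:
#   usable RAM (RAM minus HugePages, which are never swapped) up to 16 GB -> swap = usable RAM
#   usable RAM above 16 GB -> swap = 16 GB
recommended_swap_gb() {
    ram_gb=$1
    hugepages_gb=${2:-0}
    usable=$(( ram_gb - hugepages_gb ))
    if [ "$usable" -gt 16 ]; then
        echo 16
    else
        echo "$usable"
    fi
}

recommended_swap_gb 8 0    # 8 GB RAM, no HugePages
recommended_swap_gb 64 0   # large RAM caps the recommendation at 16
recommended_swap_gb 20 8   # HugePages memory is subtracted first
```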
Item: /dev/shm
Requirement: check the mount type and permissions of /dev/shm
Check: # df -h /dev/shm

Item: Software disk space
Requirement: at least 12 GB for Grid and at least 10 GB for Oracle Database; reserving 100 GB is recommended. Starting with 19c, the GIMR is optional for a Standalone Cluster installation.
Check: # df -h /u01
2 Disabling THP and Enabling HugePages

If you use Oracle Linux, the operating system can be configured with the Preinstallation RPM. If you install an Oracle Domain Services Cluster, the GIMR is mandatory and its SGA consumes 1 GB of huge pages, which must be added to the HugePages count; for a Standalone Cluster the GIMR is optional.
2.1 Disabling Transparent HugePages

# Check whether Transparent HugePages are enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# Check whether THP defragmentation is enabled
[root@db-oracle-node1 ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never
Append "transparent_hugepage=never" to the GRUB_CMDLINE_LINUX option:
# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap ...
transparent_hugepage=never"
Back up /boot/grub2/grub.cfg, then rebuild it with grub2-mkconfig -o:
On BIOS-based machines: ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines: ~]# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
Reboot the system:
# shutdown -r now
Verify the setting:
# cat /proc/cmdline
Note: if THP is still not disabled, see http://blog.itpub.net/31439444/viewspace-2674001/ for the remaining steps.
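Until the next reboot, THP can also be turned off at runtime. This is a sketch; the paths are the standard RHEL 7 sysfs locations, so verify they exist on your kernel:

```shell
# Disable THP and THP defragmentation immediately (as root),
# without waiting for the grub change plus reboot
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
# Confirm: "never" should now be the bracketed (active) value
cat /sys/kernel/mm/transparent_hugepage/enabled
```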
2.2 Enabling HugePages

# vim /etc/sysctl.conf
vm.nr_hugepages = xxxx
# sysctl -p
# vim /etc/security/limits.conf
oracle soft memlock xxxxxxxxxxx
oracle hard memlock xxxxxxxxxxx
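The two placeholder values above can be derived from the planned SGA size. A sketch, assuming the default 2 MB huge page size (check Hugepagesize in /proc/meminfo) and using hypothetical helper names:

```shell
#!/bin/sh
# Number of 2 MB huge pages needed to hold a given SGA (rounded up)
hugepages_for_sga() {
    sga_mb=$1
    page_mb=2
    echo $(( (sga_mb + page_mb - 1) / page_mb ))
}

# memlock value (in KB, the unit limits.conf uses) covering those huge pages
memlock_kb_for_sga() {
    echo $(( $(hugepages_for_sga "$1") * 2 * 1024 ))
}

hugepages_for_sga 8192     # vm.nr_hugepages for an 8 GB SGA
memlock_kb_for_sga 8192    # oracle soft/hard memlock for an 8 GB SGA
```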
3 Installing Software Packages

3.1 Red Hat Enterprise Linux 7 Packages

openssh
bc
binutils
compat-libcap1
compat-libstdc++
elfutils-libelf
elfutils-libelf-devel
fontconfig-devel
glibc
glibc-devel
ksh
libaio
libaio-devel
libX11
libXau
libXi
libXtst
libXrender
libXrender-devel
libgcc
librdmacm-devel
libstdc++
libstdc++-devel
libxcb
make
net-tools (for Oracle RAC and Oracle Clusterware)
nfs-utils (for Oracle ACFS)
python (for Oracle ACFS Remote)
python-configshell (for Oracle ACFS Remote)
python-rtslib (for Oracle ACFS Remote)
python-six (for Oracle ACFS Remote)
targetcli (for Oracle ACFS Remote)
smartmontools
sysstat
3.2 Other Packages

Additional drivers and packages are optional; you may also configure PAM, OCFS2, ODBC, and LDAP.
4 Kernel Parameters

4.1 Configuring Kernel Parameters with the Preinstall RPM

On Oracle Linux or Red Hat Enterprise Linux, the OS can be configured with the preinstall RPM:
# cd /etc/yum.repos.d/
# wget http://yum.oracle.com/public-yum-ol7.repo
# yum repolist
# yum install oracle-database-preinstall-19c
The preinstall RPM can also be downloaded manually:
http://yum.oracle.com/repo/OracleLinux/OL6/latest/x86_64/
http://yum.oracle.com/repo/OracleLinux/OL7/latest/x86_64
The preinstall RPM does the following:
• Creates the oracle user, and creates the oraInventory (oinstall) and OSDBA (dba) groups
• Sets sysctl.conf entries, system startup parameters, and driver parameters to Oracle-recommended values
• Sets hard and soft user resource limits
• Sets other recommended parameters, depending on the kernel version
• Sets numa=off
4.2 Configuring Parameters Manually

If you do not use the preinstall RPM, the kernel parameters can also be configured by hand:
# vi /etc/sysctl.d/97-oracle-database-sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Apply the values to the running system:
# /sbin/sysctl --system
# /sbin/sysctl -a
Set the network port range:
$ cat /proc/sys/net/ipv4/ip_local_port_range
# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range
# /etc/rc.d/init.d/network restart
4.3 CVU (optional)

If you do not use the Oracle Preinstallation RPM, you can use the Cluster Verification Utility. Install CVU as follows:
• Locate the cvuqdisk RPM package, which is in the directory Grid_home/cv/rpm, where Grid_home is the Oracle Grid Infrastructure home directory.
• Copy the cvuqdisk package to each node of the cluster. Ensure that each node is running the same version of Linux.
• Log in as root.
• Check whether an existing version of the cvuqdisk package is installed:
# rpm -qi cvuqdisk
• If an existing version is installed, deinstall it:
# rpm -e cvuqdisk
• Set the environment variable CVUQDISK_GRP to the group that owns cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
• In the directory where you saved the cvuqdisk RPM, install it with rpm -iv. For example:
# rpm -iv cvuqdisk-1.0.10-1.rpm
• Run the pre-installation verification:
$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2,node3
5 Network Configuration

Notes:
(1) Use either all IPv4 or all IPv6 addresses; GNS can generate IPv6 addresses.
(2) VIP: starting with Oracle Grid Infrastructure 18c, using VIPs is optional for Oracle Clusterware deployments. You can specify VIPs for all or none of the cluster nodes; specifying VIPs for only some of the cluster nodes is not supported.
(3) Private: during installation you can configure up to four private interfaces as HAIP (Highly Available IP); if more than four interfaces are configured, the extras automatically become redundant standbys. The private network does not need NIC bonding; the cluster provides high availability by itself.
(4) Public/VIP names: letters, digits, and the hyphen ("-") are allowed; the underscore ("_") is not.
(5) Public, VIP, and SCAN VIP addresses must be in the same subnet.
(6) Public addresses must be statically configured on each node's NIC. VIPs, private IPs, and the SCAN can all be handed over to GNS. Otherwise, the SCAN needs three fixed IPs and each of the others needs one fixed IP, which does not have to be bound to a NIC but must resolve consistently.
5.1 Static Configuration

The SCAN resolves only through DNS; public, private, and VIP addresses are all statically configured by hand and entered manually during installation.

5.2 GNS + Static Configuration

To enable GNS, configure DHCP plus DNS. The DNS forward and reverse zones do not need to resolve the VIPs or the SCAN; it is enough that the VIP and SCAN names fall inside the subdomain delegated to GNS.
/etc/hosts
#public ip
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
#private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
#vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
#scan-vip
#192.168.204.33 scan19-vip.rac.libai
#192.168.204.34 scan19-vip.rac.libai
#192.168.204.35 scan19-vip.rac.libai
#gns-vip
192.168.204.10 gns19-vip.rac.libai
DNS configuration:
[root@19c-node2 limits.d]# yum install -y bind bind-chroot
[root@19c-node2 limits.d]# vi /etc/named.conf
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-query { any; }; # "any" may be narrowed to the subnet allowed to query this DNS server
recursion yes;
allow-transfer { none; };
};
zone "." IN {
type hint;
file "named.ca";
};
zone "rac.libai" IN { # forward zone rac.libai
type master;
file "named.rac.libai";
};
zone "204.168.192.in-addr.arpa" IN { # reverse zone 204.168.192.in-addr.arpa
type master;
file "named.192.168.204";
};
zone "40.40.40.in-addr.arpa" IN { # reverse zone 40.40.40.in-addr.arpa
type master;
file "named.40.40.40";
};
/* Edit the forward zone for the public and VIP names
[root@pub19-node2 ~]# vi /var/named/named.rac.libai
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
0 ; serial number
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS master
master IN A 192.168.204.12
priv19-node1.rac.libai. IN A 40.40.40.41
priv19-node2.rac.libai. IN A 40.40.40.42
pub19-node1.rac.libai. IN A 192.168.204.11
pub19-node2.rac.libai. IN A 192.168.204.12
vip.rac.libai. IN NS gns.rac.libai.
gns.rac.libai. IN A 192.168.204.10
# The last two lines delegate the subdomain vip.rac.libai to the name server gns.rac.libai, whose address is 192.168.204.10.
This is the key to configuring GNS.
# On the SCAN page of gridSetup.sh, the SCAN name scan19.vip.rac.libai must fall within the subdomain delegated to GNS, i.e. scan19.vip.rac.libai must end in vip.rac.libai.
# In gridSetup.sh, set the GNS VIP to 192.168.204.10 and the subdomain to vip.rac.libai.
# Combined with DHCP, the VIPs, private addresses, and SCAN can all obtain their IPs from GNS.
Source: http://blog.sina.com.cn/s/blog_701a48e70102w6gv.html
# The SCAN and VIPs need no DNS entries of their own; GNS handles them, provided DHCP is enabled.
[root@19c-node2 named]# vi named.192.168.204
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
10 ; serial
3H ; refresh
15M ; retry
1W ; expire
1D ) ; minimum
@ IN NS master.rac.libai.
12 IN PTR master.rac.libai.
11 IN PTR pub19-node1.rac.libai.
12 IN PTR pub19-node2.rac.libai.
10 IN PTR gns.rac.libai.
[root@19c-node2 named]# vi named.40.40.40
$TTL 600
@ IN SOA rac.libai. admin.rac.libai. (
10 ; serial
3H ; refresh
15M ; retry
1W ; expire
1D ) ; minimum
@ IN NS master.rac.libai.
41 IN PTR priv19-node1.rac.libai.
42 IN PTR priv19-node2.rac.libai.
[root@19c-node2 named]# systemctl restart named
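After restarting named, resolution can be spot-checked. A sketch using this lab's addresses; dig comes from the bind-utils package:

```shell
# Forward lookups against the lab DNS server (192.168.204.12)
dig +short @192.168.204.12 pub19-node1.rac.libai
dig +short @192.168.204.12 pub19-node2.rac.libai
# Reverse lookup
dig +short @192.168.204.12 -x 192.168.204.11
# The delegated GNS subdomain: the NS record should point at gns.rac.libai
dig +short @192.168.204.12 NS vip.rac.libai
```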
[root@19c-node1 software]# yum install -y dhcp
[root@19c-node1 software]# vi /etc/dhcp/dhcpd.conf
# see /usr/share/doc/dhcp*/dhcpd.conf.example
# see dhcpd.conf(5) man page
#
ddns-update-style interim;
ignore client-updates;
subnet 192.168.204.0 netmask 255.255.255.0 {
option routers 192.168.204.1;
option subnet-mask 255.255.255.0;
option nis-domain "rac.libai";
option domain-name "rac.libai";
option domain-name-servers 192.168.204.12;
option time-offset -18000; # Eastern Standard Time
range dynamic-bootp 192.168.204.21 192.168.204.26;
default-lease-time 21600;
max-lease-time 43200;
}
[root@19c-node2 ~]# systemctl enable dhcpd
[root@19c-node2 ~]# systemctl restart dhcpd
[root@19c-node2 ~]# systemctl status dhcpd
/* View the lease file
/var/lib/dhcp/dhcpd.leases
/* Re-acquire a DHCP address for enp0s10
# dhclient -d enp0s10
/* Release the lease
# dhclient -r enp0s10
6 Other Configuration

6.1 Miscellaneous OS Configuration

(1) Cluster name:
Case-insensitive; alphanumeric characters and hyphens (-) are allowed, underscores (_) are not; at most 15 characters. After installation, the cluster name can only be changed by reinstalling Grid Infrastructure.
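The naming rule can be checked with a small validator. Illustrative only: the function name is made up, and the regex simply encodes the rule above:

```shell
#!/bin/sh
# valid: 1-15 characters, letters/digits/hyphens only, no underscores
valid_cluster_name() {
    if printf '%s' "$1" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9-]{0,14}$'; then
        echo valid
    else
        echo invalid
    fi
}

valid_cluster_name rac19-cluster              # allowed
valid_cluster_name rac_cluster                # underscore rejected
valid_cluster_name a-very-long-cluster-name   # longer than 15 characters, rejected
```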
(2)/etc/hosts
#public Ip
192.168.204.11 pub19-node1.rac.libai
192.168.204.12 pub19-node2.rac.libai
#private ip
40.40.40.41 priv19-node1.rac.libai
40.40.40.42 priv19-node2.rac.libai
#vip
192.168.204.21 vip19-node1.rac.libai
192.168.204.22 vip19-node2.rac.libai
#scan-vip
#192.168.204.33 scan19.vip.rac.libai
#192.168.204.34 scan19.vip.rac.libai
#192.168.204.35 scan19.vip.rac.libai
#gns-vip
192.168.204.10 gns.rac.libai
(3) Hostnames
hostnamectl set-hostname pub19-node1.rac.libai --static
hostnamectl set-hostname pub19-node2.rac.libai --static
6.2 Clock Synchronization

Make sure all nodes synchronize time with NTP or CTSS.
Before installation, ensure the node clocks agree. If you use CTSS, disable the NTP service that ships with Linux 7 as follows:
By default, the NTP service available on Oracle Linux 7 and Red Hat Linux 7 is chronyd.
Deactivating the chronyd Service
To deactivate the chronyd service, you must stop the existing chronyd service, and disable it from the initialization sequences.
Complete this step on Oracle Linux 7 and Red Hat Linux 7:
1. Run the following commands as the root user:
# systemctl stop chronyd
# systemctl disable chronyd
Confirming Oracle Cluster Time Synchronization Service After Installation
To confirm that ctssd is active after installation, enter the following command as the
Grid installation owner:
$ crsctl check ctss
6.3 Additional Configuration for NAS Storage

If you use NAS, enable the Name Service Cache Daemon (nscd) so that Oracle Clusterware better tolerates failures of the NAS device and of the network used by the NAS mounts.
# chkconfig --list nscd
# chkconfig --level 35 nscd on
# service nscd start
# service nscd restart
# systemctl --all | grep nscd
6.4 I/O Scheduler

For best performance for Oracle ASM, Oracle recommends that you use the Deadline I/O Scheduler.
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
If the default disk I/O scheduler is not Deadline, then set it using a rules file:
1. Using a text editor, create a UDEV rules file for the Oracle ASM devices:
# vi /etc/udev/rules.d/60-oracle-schedulers.rules
2. Add the following line to the rules file and save it:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0",
ATTR{queue/scheduler}="deadline"
3. On clustered systems, copy the rules file to all other nodes on the cluster. For
example:
$ scp 60-oracle-schedulers.rules root@node2:/etc/udev/rules.d/
4. Load the rules file and restart the UDEV service. For example:
Oracle Linux and Red Hat Enterprise Linux
# udevadm control --reload-rules
5. Verify that the disk I/O scheduler is set as Deadline.
# cat /sys/block/${ASM_DISK}/queue/scheduler
noop [deadline] cfq
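The bracketed entry marks the active scheduler; it can be extracted for scripted checks across all ASM disks. A sketch; the helper name is made up:

```shell
#!/bin/sh
# Print the active (bracketed) scheduler from a queue/scheduler line read on stdin
active_scheduler() {
    sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# Example against a literal line; on a real host, pipe in
#   cat /sys/block/sdb/queue/scheduler
echo 'noop [deadline] cfq' | active_scheduler
```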
6.5 SSH Timeout Limit

To prevent ssh failures in some circumstances, set the login grace time to unlimited in /etc/ssh/sshd_config on all cluster nodes:
# vi /etc/ssh/sshd_config
LoginGraceTime 0
6.6 Users, Groups, and Directories

Check whether an inventory and the groups already exist:
# more /etc/oraInst.loc
$ grep oinstall /etc/group
Do not place the inventory directory under the Oracle base directory, to prevent permission changes during installation from causing errors.
User and group IDs must be identical on all nodes.
# groupadd -g 54421 oinstall
# groupadd -g 54322 dba
# groupadd -g 54323 oper
# groupadd -g 54324 backupdba
# groupadd -g 54325 dgdba
# groupadd -g 54326 kmdba
# groupadd -g 54327 asmdba
# groupadd -g 54328 asmoper
# groupadd -g 54329 asmadmin
# groupadd -g 54330 racdba
# /usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,oper,racdba oracle
# useradd -u 54322 -g oinstall -G asmadmin,asmdba,racdba grid
# id oracle
# id grid
# passwd oracle
# passwd grid
Use the OFA directory structure, and make sure the Oracle home path contains only ASCII characters.
Only a standalone Grid installation may place Grid under the Oracle Database software's ORACLE_BASE; in all other cases it must not be.
# mkdir -p /u01/app/19.0.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle/product/19.0.0/dbhome_1/
# chown -R grid:oinstall /u01
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
grid user's .bash_profile:
# su - grid
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/19.0.0/grid
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ./.bash_profile
oracle user's .bash_profile:
# su - oracle
$ vi ~/.bash_profile
umask 022
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19.0.0/dbhome_1
export PATH=$PATH:$ORACLE_HOME/bin
export NLS_DATE_FORMAT='yyyy-mm-dd hh24:mi:ss'
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
$ . ./.bash_profile
6.7 GUI Configuration

$ xhost + hostname
$ export DISPLAY=local_host:0.0
6.8 limits.conf

The preinstall RPM configures only the oracle user. For the GI installation, copy the oracle entries and adapt them for the grid user.
Check the following for both the oracle and grid users:
file descriptor:
$ ulimit -Sn
$ ulimit -Hn
number of processes:
$ ulimit -Su
$ ulimit -Hu
stack:
$ ulimit -Ss
$ ulimit -Hs
6.9 Disabling X11 Forwarding

To ensure that X11 forwarding does not cause the installation to fail, create a .ssh/config in the oracle and grid users' home directories:
$ vi ~/.ssh/config
Host *
ForwardX11 no
6.10 Direct NFS

If you use Direct NFS (dNFS), configure it according to the documentation.
6.11 Oracle Member Cluster

To create an Oracle Member Cluster, first create a Member Cluster Manifest File on the Oracle Domain Services Cluster; see the chapter "Creating Member Cluster Manifest File for Oracle Member Clusters" in the Oracle Grid Infrastructure Installation and Upgrade Guide.
6.12 Configuring ASM Disks Manually with UDEV

/* Get the disk UUID
# /usr/lib/udev/scsi_id -g -u /dev/sdb
/* Write the UDEV rules file
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB9c33adf6-29245311",RUN+="/bin/sh -c 'mknod /dev/asmocr1 b $major $minor;chown grid:asmadmin /dev/asmocr1;chmod 0660 /dev/asmocr1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBb008c422-c636d509",RUN+="/bin/sh -c 'mknod /dev/asmdata1 b $major $minor;chown grid:asmadmin /dev/asmdata1;chmod 0660 /dev/asmdata1'"
KERNEL=="sd*", ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB7d37c0f6-8f45f264",RUN+="/bin/sh -c 'mknod /dev/asmfra1 b $major $minor;chown grid:asmadmin /dev/asmfra1;chmod 0660 /dev/asmfra1'"
/* Copy the UDEV rules file to the other cluster nodes
# scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
/* Reload the udev configuration and test
/sbin/udevadm trigger --type=devices --action=change
/sbin/udevadm control --reload
/sbin/udevadm test /sys/block/sdb
7 gridSetup.sh

$ su root
# export ORACLE_HOME=/u01/app/19.0.0/grid
Use the Oracle ASM command line tool (ASMCMD) to provision the disk devices for use with Oracle ASM Filter Driver.
[root@19c-node1 grid]# asmcmd afd_label DATA1 /dev/sdb --init
[root@19c-node1 grid]# asmcmd afd_label DATA2 /dev/sdc --init
[root@19c-node1 grid]# asmcmd afd_label DATA3 /dev/sdd --init
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdb
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdc
[root@19c-node1 grid]# asmcmd afd_lslbl /dev/sdd
7.1 gridSetup.sh

$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ /u01/app/19.0.0/grid/gridSetup.sh
Problem encountered:
When the GUI reached the page for creating the OCR ASM disk group, no ASM disks were discovered. The UDEV configuration checked out fine, but the cfgtoollogs log showed the following errors:
[root@19c-node1 ~]# su - grid
[grid@19c-node1 ~]$ cd $ORACLE_HOME/cfgtoollogs/out/GridSetupActions2020-03-09_01-02-16PM
[grid@19c-node1 ~]$ vi gridSetupActions2020-03-09_01-02-16PM.log
INFO: [Mar 9, 2020 1:15:03 PM] Executing [/u01/app/19.0.0/grid/bin/kfod.bin, nohdr=true, verbose=true, disks=all, op=disks, shallow=true, asm_diskstring='/dev/asm*']
INFO: [Mar 9, 2020 1:15:03 PM] Starting Output Reader Threads for process /u01/app/19.0.0/grid/bin/kfod.bin
INFO: [Mar 9, 2020 1:15:03 PM] Parsing Error 49802 initializing ADR
INFO: [Mar 9, 2020 1:15:03 PM] Parsing ERROR!!! could not initialize the diag context
The same kfod error appears again later in the log for the disk-status query, so the ASM disk string lookup itself was failing.
Solution:
Run the command from the failing log entry by hand:
$ /u01/app/19.0.0/grid/bin/kfod.bin nohdr=true verbose=true disks=all status=true op=disks asm_diskstring='/dev/asm*'
It reported an NLS DATA error, clearly related to the NLS variables set in .bash_profile. After commenting out the NLS_LANG variable and re-sourcing the profile, the command ran normally.
[root@pub19-node1 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node2 ~]# /u01/app/oraInventory/orainstRoot.sh
[root@pub19-node1 ~]# /u01/app/19.0.0/grid/root.sh
[root@pub19-node2 ~]# /u01/app/19.0.0/grid/root.sh
7.2 runInstaller

[oracle@pub19-node1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
[oracle@pub19-node1 dbhome_1]$ ./runInstaller
[oracle@pub19-node1 dbhome_1]$ dbca
Problem encountered:
CRS-5017: The resource action "ora.czhl.db start" encountered the following error:
ORA-12547: TNS:lost contact
For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/pub19-node2/crs/trace/crsd_oraagent_oracle.trc".
Solution:
Two directory levels of the ORACLE_HOME on node 2 had incorrect ownership; after fixing the permissions, the database started manually without issue.
[root@pub19-node2 oracle]# chown oracle:oinstall product/
[root@pub19-node2 product]# chown oracle:oinstall 19.0.0
[root@pub19-node2 19.0.0]# chown oracle:oinstall dbhome_1/
[grid@pub19-node2 ~]$ srvctl start instance -node pub19-node2.rac.libai
starting database instances on nodes "pub19-node2.rac.libai" ...
started resources "ora.czhl.db" on node "pub19-node2"
7.3 Patching 19.3 to 19.5.1

grid user (patch both nodes):
# su - grid
$ unzip LINUX.X64_193000_grid_home.zip -d /u01/app/19.0.0/grid/
$ unzip p30464035_190000_Linux-x86-64.zip
oracle user (patch both nodes):
# su - oracle
$ unzip -o p6880880_190000_Linux-x86-64.zip -d /u01/app/oracle/product/19.0.0/dbhome_1/
root user:
/* Check the patch level. At this point only node 1's GI had been patched; continue with node 2's GI, then node 1's DB and node 2's DB. Be careful with opatchauto: patching GI requires the opatchauto under the GI ORACLE_HOME, and patching the DB requires the opatchauto under the DB ORACLE_HOME.
Node 1:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/
Node 2:
# /u01/app/19.0.0/grid/OPatch/opatchauto apply -oh /u01/app/19.0.0/grid /software/30464035/
Node 1:
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1
Node 2:
# ls -l /u01/app/oraInventory/ContentsXML/oui-patch.xml # Check this file's permissions first; otherwise the error below occurs, the patch ends up corrupt, and it can be neither rolled back nor re-applied. Fix the permissions and patch again; if it errors out, run opatchauto resume to continue applying the patch.
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
Caution:
[Mar 11, 2020 8:56:05 PM] [WARNING] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /u01/app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)'
Solution:
/* Grant permissions as indicated by the log output
# chmod 664 /u01/app/oraInventory/ContentsXML/oui-patch.xml
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto resume /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
If the recovery suggested by the log does not work, the patching problem can be resolved as follows:
/* Running restore.sh ultimately failed as well, so the only option was to copy the software files back by hand and roll back
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto rollback /software/30464035/ -oh /u01/app/oracle/product/19.0.0/dbhome_1
/* Following the failure messages, copy whichever missing files they name from the unzipped patch directory into the indicated ORACLE_HOME locations, then continue the rollback until it succeeds.
Patch node 2's oracle software again:
# /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatchauto apply /software/30464035/ -oh /u01/app/19.0.0/grid,/u01/app/oracle/product/19.0.0/dbhome_1
Verify the patches:
$ /u01/app/19.0.0/grid/OPatch/opatch lsinv
$ /u01/app/oracle/product/19.0.0/dbhome_1/OPatch/opatch lsinv
# su - grid
$ kfod op=patches
$ kfod op=patchlvl
7.4 DBCA