A quick test of Oracle 11gR2 RAC installation and configuration - prerequisite configuration stage
Installing an Oracle 11gR2 (11.2.0.4) RAC cluster on Linux Red Hat 6.4 under VMware vCenter Server
The public and private networks should be on different network segments to keep interconnect traffic isolated and secure.
[root@Zracnode1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.2.13.80 zracnode1
10.2.13.81 zracnode2
10.2.13.82 zracnode1-vip
10.2.13.83 zracnode2-vip
10.2.12.140 zracnode1-priv
10.2.12.141 zracnode2-priv
10.2.13.142 zrac-scan
10.2.13.143 zrac-scan
10.2.13.144 zrac-scan
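Note that Oracle recommends resolving the SCAN name to its three addresses through DNS round-robin; when the SCAN is placed in /etc/hosts as above, name resolution typically returns only the first matching entry, and the installer will warn about it. A minimal sketch that counts the SCAN entries, run against a local copy of the lines above so it works on any machine:

```shell
# Count how many addresses map to the SCAN name in a hosts-style file.
# (Uses a throwaway copy of the entries above, so no root access is needed.)
hosts_copy=$(mktemp)
cat > "$hosts_copy" <<'EOF'
10.2.13.142 zrac-scan
10.2.13.143 zrac-scan
10.2.13.144 zrac-scan
EOF
scan_count=$(grep -cw 'zrac-scan' "$hosts_copy")
echo "zrac-scan entries: $scan_count"
rm -f "$hosts_copy"
```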
*. The installed operating system is Linux Red Hat 6.4
*. Disk partition configuration

| Partition | Disk size |
| / | 30GB |
| swap | 16GB |
| /u01 | 100GB |
| Option | Description |
| None | The virtual disk cannot be shared by other virtual machines. |
| Virtual | Virtual machines on the same server can share the virtual disk. |
| Physical | Virtual machines on any server can share the virtual disk. |
3.2 Create a new SCSI controller (SCSI controller 1) and set the relevant parameters. The VMware vCenter operation page is as follows:
3.3 Create a new hard disk of type [Thick provision eager zeroed], set the virtual device node to [SCSI(1:0)], and set the disk mode to [Independent - Persistent]. The operation page is as follows:
3.4 Add the existing disk on the ZRAC02 node. The operation page is as follows:
Mount the ISO image and configure yum
mount -o loop -t iso9660 /u01/software/rhel-server-6.4-x86_64-dvd.iso /u01/iso
[root@Zracnode1 u01]# cat /etc/yum.repos.d/rhel-source.repo
[Server]
name=Server
baseurl=file:///u01/iso
gpgcheck=0
gpgkey=file:///u01/iso/RPM-GPG-KEY-redhat-release
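As a quick sanity check, the baseurl can be pulled out of the repo file and compared with the mount point. A sketch against a temporary copy of the file above, so it runs even where the ISO is not mounted:

```shell
# Extract the baseurl from a copy of the repo file shown above.
repo_copy=$(mktemp)
printf '[Server]\nname=Server\nbaseurl=file:///u01/iso\ngpgcheck=0\n' > "$repo_copy"
baseurl=$(awk -F= '$1 == "baseurl" {print $2}' "$repo_copy")
echo "repo baseurl: $baseurl"
rm -f "$repo_copy"
```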
VNC installation on Linux
# yum install tigervnc-server
# vncserver    # Starts a VNC process on the server and allows one VNC viewer to connect; to let multiple viewers connect, run the command once per viewer.
Password:    # The first time the VNC server starts, you are asked to set a password so that not just anyone can control this machine remotely.
Verify:    # Confirm the password.
Enter the hidden .vnc directory under root's home directory and edit the xstartup file:
# cd /root/.vnc
# vi xstartup
# twm &    (comment out this line)
startkde &    (add this line)
# killall Xvnc
# vncserver
[root@rac01 network-scripts]# vi /etc/hosts
(add the same host entries shown above on all nodes)
4.3 Add groups and users (all nodes)
groupadd -g 500 oinstall
groupadd -g 501 dba
groupadd -g 502 oper
groupadd -g 503 asmadmin
groupadd -g 504 asmoper
groupadd -g 505 asmdba
useradd -g oinstall -G dba,asmdba,oper oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
Check the oracle and grid users:
[root@rac1 ~]# id oracle
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),505(asmdba)
[root@rac1 ~]# id grid
uid=501(grid) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),503(asmadmin),504(asmoper),505(asmdba)
Set passwords for the oracle and grid users:
[root@rac1 ~]# passwd oracle
[root@rac1 ~]# passwd grid
4.4 Create directories (all nodes)
mkdir /u01/app
chown -R grid:oinstall /u01/app/
chmod -R 775 /u01/app/
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory/
chmod -R 775 /u01/app/oraInventory/
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01/app/grid/
chown -R oracle:oinstall /u01/app/oracle/
chmod -R 775 /u01/app/grid/
chmod -R 775 /u01/app/oracle/
[root@rac01 ~]# vi /etc/sysctl.conf
# for oracle11g
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2147483648
kernel.shmmax = 68719476736
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
Make the modified parameters take effect immediately:
[root@rac01~]# /sbin/sysctl -p
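The kernel.shmmax value above (64 GB) assumes a large-memory host; Oracle's installation guides commonly suggest starting from half of physical RAM. A hypothetical helper to compute that starting point (assumes a Linux host with /proc/meminfo):

```shell
# Compute half of physical RAM, in bytes, as a candidate kernel.shmmax value.
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
shmmax=$(( mem_kb / 2 * 1024 ))   # KiB -> bytes, halved
echo "candidate kernel.shmmax = $shmmax bytes"
```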
[root@rac01 ~]# vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
[root@rac01 ~]# vi /etc/pam.d/login
session required pam_limits.so
[root@rac01 ~]# vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
service iptables stop
chkconfig iptables off
chkconfig iptables --list
setenforce 0
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
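The sed edit can be rehearsed on a throwaway copy before touching the real /etc/selinux/config; the anchored pattern below (^SELINUX=) also rules out any accidental match elsewhere in the file:

```shell
# Rehearse the SELinux edit on a temp copy of the config file.
cfg_copy=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg_copy"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg_copy"
selinux_line=$(grep '^SELINUX=' "$cfg_copy")
echo "$selinux_line"
rm -f "$cfg_copy"
```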
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils
compat-libstdc++-33
elfutils-libelf
elfutils-libelf-devel
gcc
gcc-c++
glibc
glibc-common
glibc-devel
glibc-headers
ksh
libaio
libaio-devel
libgcc
libstdc++
libstdc++-devel
make
sysstat
unixODBC
grid user:
[grid@rac01 ~]$ vi .bash_profile
export ORACLE_SID=+ASM1   # +ASM2 on node 2
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=/u01/app/grid/11.2
export PATH=$PATH:$ORACLE_HOME/bin
oracle user:
[oracle@rac01 ~]$ vi .bash_profile
export ORACLE_SID=racdb1   # racdb2 on node 2
export ORACLE_UNQNAME=$ORACLE_SID
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/11.2/db_1
export PATH=$PATH:$ORACLE_HOME/bin
[root@rac01 ~]# fdisk /dev/sdb
The partitioning effect is as follows:
----------------------------------------------------------------------------
Device Boot      Start      End       Blocks   Id  System
/dev/sdb1            1      132      1060258   83  Linux     // CRS1  ~1GB
/dev/sdb2          133      264      1060290   83  Linux     // CRS2  ~1GB
/dev/sdb3          265      396      1060290   83  Linux     // CRS3  ~1GB
/dev/sdb4          397    13054    101675385    5  Extended
/dev/sdb5          397     3008     20980858   83  Linux     // DATA1 ~20GB
/dev/sdb6         3009     5620     20980858   83  Linux     // DATA2 ~20GB
/dev/sdb7         5621     8232     20980858   83  Linux     // DATA3 ~20GB
/dev/sdb8         8233     9538     10490413   83  Linux     // REC1  ~10GB
/dev/sdb9         9539    13054     28242238   83  Linux     // REC2  ~27GB
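fdisk reports sizes in 1 KiB blocks, so the per-partition comments can be cross-checked with simple arithmetic; for example, DATA1's 20980858 blocks from the table above work out to roughly 20 GiB:

```shell
# Convert an fdisk 1 KiB block count to whole GiB (DATA1 from the table above).
blocks=20980858
gib=$(( blocks / 1024 / 1024 ))
echo "DATA1 is about $gib GiB"
```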
[root@rac02 software]# rpm -ivh kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
warning: kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:kmod-oracleasm         ########################################### [100%]
[root@rac02 software]# rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
warning: oracleasm-support-2.1.8-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasm-support      ########################################### [100%]
[root@rac02 software]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
warning: oracleasmlib-2.0.4-1.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]
Perform the following operations on RAC01:
[root@rac1 ~]# /etc/init.d/oracleasm configure
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Perform the following operations on RAC02:
[root@rac02 software]# /etc/init.d/oracleasm configure
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
Perform the following operations on one of the RAC nodes; here, on rac1:
[root@rac1 ~]# /etc/init.d/oracleasm createdisk CRS1 /dev/sdb1
[root@rac1 ~]# /etc/init.d/oracleasm createdisk CRS2 /dev/sdb2
[root@rac1 ~]# /etc/init.d/oracleasm createdisk CRS3 /dev/sdb3
[root@rac1 ~]# /etc/init.d/oracleasm createdisk DATA1 /dev/sdb5
[root@rac1 ~]# /etc/init.d/oracleasm createdisk DATA2 /dev/sdb6
[root@rac1 ~]# /etc/init.d/oracleasm createdisk DATA3 /dev/sdb7
[root@rac1 ~]# /etc/init.d/oracleasm createdisk REC1 /dev/sdb8
[root@rac1 ~]# /etc/init.d/oracleasm createdisk REC2 /dev/sdb9
Perform the following operations on the other node, RAC02:
[root@rac02 software]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@Zracnode2 software]# oracleasm listdisks
CRS1
CRS2
CRS3
DATA1
DATA2
DATA3
REC1
REC2
Create symlinks for ssh and scp:
ls -l /usr/local/bin/ssh
ls -l /usr/local/bin/scp
Create them if they do not exist:
[root@rac01 ~]# /bin/ln -s /usr/bin/ssh /usr/local/bin/ssh
[root@rac01 ~]# /bin/ln -s /usr/bin/scp /usr/local/bin/scp
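The same ln -s pattern can be rehearsed in a scratch directory to confirm the link points where intended, without needing root access:

```shell
# Create the ssh symlink in a temp dir and read back its target.
tmp_bin=$(mktemp -d)
ln -s /usr/bin/ssh "$tmp_bin/ssh"
link_target=$(readlink "$tmp_bin/ssh")
echo "link target: $link_target"
rm -rf "$tmp_bin"
```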
Configure SSH for the grid user:
On each node:
[root@rac01 ~]# su - grid
[grid@rac01 ~]$ mkdir ~/.ssh
[grid@rac01 ~]$ cd ~/.ssh
[grid@rac01 .ssh]$ ssh-keygen -t rsa
[grid@rac01 .ssh]$ ssh-keygen -t dsa
On node 1:
[grid@rac01 ~]$ touch authorized_keys
[grid@rac01 ~]$ ssh rac01 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys
[grid@rac01 ~]$ ssh rac02 cat /home/grid/.ssh/id_rsa.pub >> authorized_keys
[grid@rac01 ~]$ ssh rac01 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys
[grid@rac01 ~]$ ssh rac02 cat /home/grid/.ssh/id_dsa.pub >> authorized_keys
[grid@rac01 ~]$ scp authorized_keys rac02:/home/grid/.ssh/
On each node:
[grid@rac01 ~]$ ssh rac01 date
[grid@rac01 ~]$ ssh rac02 date
[grid@rac01 ~]$ ssh-agent $SHELL
[grid@rac01 ~]$ ssh-add
Configure SSH for the oracle user:
On each node:
[root@rac01 ~]# su - oracle
[oracle@rac01 ~]$ mkdir ~/.ssh
[oracle@rac01 ~]$ cd ~/.ssh
[oracle@rac01 .ssh]$ ssh-keygen -t rsa
[oracle@rac01 .ssh]$ ssh-keygen -t dsa
On node 1:
[oracle@rac01 ~]$ touch authorized_keys
[oracle@rac01 ~]$ ssh rac01 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 ~]$ ssh rac02 cat /home/oracle/.ssh/id_rsa.pub >> authorized_keys
[oracle@rac01 ~]$ ssh rac01 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 ~]$ ssh rac02 cat /home/oracle/.ssh/id_dsa.pub >> authorized_keys
[oracle@rac01 ~]$ scp authorized_keys rac02:/home/oracle/.ssh/
On each node:
[oracle@rac01 ~]$ ssh rac01 date
[oracle@rac01 ~]$ ssh rac02 date
[oracle@rac01 ~]$ ssh-agent $SHELL
[oracle@rac01 ~]$ ssh-add
[root@rac01 ~]# vi /etc/ntp.conf
...
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
server xxx.xxx.xxx.xxx
#server 127.127.1.0 # local clock
#fudge 127.127.1.0 stratum 10
[root@rac01 ~]# vi /etc/sysconfig/ntpd
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# Set to 'yes' to sync hw clock after successful ntpdate
SYNC_HWCLOCK=yes
# Additional options for ntpdate
NTPDATE_OPTIONS=""
Start the NTP service:
[root@rac01 ~]# chkconfig ntpd on
[root@rac01 ~]# service ntpd start
[root@rac01 ~]# ntpdate -d -u xxx.xxx.xxx.xxx
Enable name service cache daemon
[root@rac01 ~]# chkconfig --level 35 nscd on
[root@rac01 ~]# service nscd restart
----------- The next two stages are GI (Grid Infrastructure) installation and Oracle database installation. To be continued!