Oracle RAC Study Notes – CRS Installation (10gR2)
10gR2 CRS Install

10gR2 CRS and RDBMS installation – main steps:
- Install the CRS software 10.2.0.1 (base version) - install from DVD
- Install the database software 10.2.0.1 (base version) - choose a software-only installation, from DVD
- Install the Oracle 10.2.0.3 patchset. The 10.2.0.3 patchset covers both the CRS and the database software:
  - first upgrade CRS to 10.2.0.3
  - then upgrade the database software to 10.2.0.3
  - refer to the 10.2.0.3 patchset readme file
- Create the RAC database with dbca from the database software's $ORACLE_HOME/bin directory
- Install CRS and database bundle patches or one-off patches

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

Creating required OS groups and users
- Users:
  - oracle - make sure the uid is identical on all nodes
  - nobody
- Groups:
  - dba - make sure the gid is identical on all nodes
  - oinstall (primary group) - make sure the gid is identical on all nodes
  - hagsuser - only if using IBM HACMP

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

Configuring the oracle user's environment
- ORACLE_BASE
- ORACLE_HOME
- ORA_ASM_HOME (if separate from the database home)
- ORA_CRS_HOME
- ORA_NLS10 - not ORA_NLS33 (ORA_NLS33 is used for Oracle 8, 8i and 9); it points to $OH/nls/data, not $OH/ocommon/nls/admin/data
- LD_LIBRARY_PATH
- PATH
- umask 022
- Configure shell limits for the oracle user (check with $ ulimit -a)

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

- Configure the hangcheck timer (Linux)
- Configure SSH on all cluster nodes
- Hardware verification/configuration: RAM (1 GB), disk space, swap, /tmp (400 MB)
- Software verification/configuration: OS version, packages, patches
- Set up/verify kernel parameters:
  - shmmax etc. - for the database
  - UDP parameters - for the RAC interconnect
  - LLT parameters (only for VCS) - for the RAC interconnect

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

Setting up the network
- Network switch: use a dedicated switch for the RAC interconnect; a crossover cable is not supported for the RAC interconnect
- /etc/hosts
- NICs: the interface names associated with the network adapters for each network must be the same on all nodes (e.g. eth0, eth1)
- Public IP
- Virtual IP (RAC VIP) - must be on the same subnet as the public IP (an application VIP (usrvip) supports multiple VIPs/multiple listeners)
- Private IP - usually 10.10.x.x or 192.168.x.x

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

Setting up the network - optional but recommended: NIC bonding
Set up redundancy for the public and private networks if multiple NICs are available:
- IBM: EtherChannel (commonly used), HACMP Swap Adapter
- HP: Auto Port Aggregation (APA) (commonly used), MC/ServiceGuard Local Switch
- Sun: Trunking, IP Multipathing (IPMP) (commonly used)
- Linux: NIC bonding (commonly used)

10gR2 CRS and RDBMS installation – main preparation before installing CRS and RDBMS:

- Install and configure OCFS2 (Linux)
- Install and configure ASMLib (Linux) - is it required on Linux? No, it is used for ASM performance
- Set up shared storage (CFS, ASM, raw devices):
  - OCR (does not support ASM)
  - Voting disks (do not support ASM)
  - Datafiles
  - Flash Recovery Area (does not support raw devices)
- Run CVU to check the various requirements

Set up the DISPLAY environment variable for display of the GUI. Verify that DISPLAY is properly set by running xclock. Do not proceed unless xclock displays properly.

Click Next.

Verify that Path is properly pointing to the intended $CRS_HOME location, then click Next. NOTE: There is a typo in the screenshot above; the path should be /u01/app/crs/product/10.2.0/crs.

Click Next.

NOTE: The installer seems to make some assumptions regarding naming conventions for the private interconnect and VIP names. It assumes that the local install node is going to be part of the cluster and adds it to the Cluster Nodes list with the default naming convention. In this case, the defaults didn't match the networking setup of the system. Click Edit and change the names appropriately.

After entering the correct names, click OK. Then click Add to add additional cluster nodes with the appropriate network names. The public node and virtual host names should resolve properly in DNS and in the /etc/hosts file. Use nslookup to verify that the names resolve properly in DNS. Verify both the fully qualified and the short names; all should resolve properly. Also verify reverse lookups using nslookup to resolve the IP addresses back to the proper host names.
Click OK.

Once all the cluster nodes have been added, click Next. Note about cluster naming conventions: adopt a naming convention for clusters throughout the organization with thought toward monitoring these clusters using Grid Control. If cluster names are defined in a haphazard way, this can cause problems later when configuring Grid Control; for instance, no duplicate cluster names should be used. Consult the Grid Control documentation for more details on the cluster configuration process.

The installer automatically identifies the system's Ethernet interfaces and differentiates between private and public IP subnets (10.x and 192.x networks are assumed to be private). This should be set up the same on all the servers that will be in the cluster, and they should all be on the same subnet. In this case we want to use eth0 as the public interface and eth1 for the private interconnect. Veth31 will not be used as part of the configuration: click its line in the list and then click Edit.

Indicate "Do not use" by selecting the proper radio button, then click OK.

Click Next.

In 10gR2 the OCR and voting disk can be multiplexed. In this case, 5 raw devices have been set up on the shared storage prior to installation in order to provide locations for these files (raw3 through raw7). After filling out the locations for each appropriately, click Next.

Click Next.

Click Install.

Wait for completion of the installation. The CRS software will be automatically copied to the other cluster nodes (assuming that ssh has been properly configured beforehand).

Once the software has been copied to all nodes in the cluster, you are prompted to run a couple of scripts as root on each node. It is very important that they not be run simultaneously. Run orainstRoot.sh on each node one at a time, then run root.sh first on the install node. root.sh formats the OCR and voting disks and brings up the CRS stack on the first node; this can take a few minutes. Wait until the root.sh script finishes before running it on subsequent nodes, and run root.sh on subsequent nodes one at a time. On each node where root.sh is run, it updates the OCR and voting disks with information about that node and then brings up the CRS stack. Below is the output of these scripts on the install node and the second node.

Install node orainstRoot.sh and root.sh output:

    root@atlegen5$ cd /u01/app/oracle/product/oraInventory
    root@atlegen5$ ./orainstRoot.sh
    Changing permissions of /u01/app/oracle/product/oraInventory to 770.
    Changing groupname of /u01/app/oracle/product/oraInventory to oinstall.
    The execution of the script is complete
    root@atlegen5$ cd /u01/app/crs/product/10.2.0/crs/
    root@atlegen5$ ./root.sh
    .
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    .
    assigning default hostname atlegen5 for node 1.
    assigning default hostname atlegen6 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: atlegen5 atlegen5i atlegen5
    node 2: atlegen6 atlegen6i atlegen6
    Creating OCR keys for user root, privgrp root
    Operation successful.
    Now formatting voting device: /dev/raw/raw5
    Now formatting voting device: /dev/raw/raw6
    Now formatting voting device: /dev/raw/raw7
    Format of 3 voting devices complete.
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
            atlegen5
    CSS is inactive on these nodes.
            atlegen6
    Local node checking complete.
    Run root.sh on remaining nodes to start CRS daemons.
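Before running root.sh on the remaining nodes, the state of the stack on the install node can be double-checked. A minimal sketch, assuming the CRS home used above (/u01/app/crs/product/10.2.0/crs):

    # Run as root on the install node after its root.sh has finished
    /u01/app/crs/product/10.2.0/crs/bin/crsctl check crs   # CSS, CRS and EVM daemons should report as healthy
    /u01/app/crs/product/10.2.0/crs/bin/olsnodes -n        # lists the cluster nodes registered with CRS and their node numbers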
Non-install node orainstRoot.sh and root.sh output:

    root@atlegen6$ cd /u01/app/oracle/product/oraInventory
    root@atlegen6$ ./orainstRoot.sh
    Changing permissions of /u01/app/oracle/product/oraInventory to 770.
    Changing groupname of /u01/app/oracle/product/oraInventory to oinstall.
    The execution of the script is complete
    root@atlegen6$ cd /u01/app/crs/product/10.2.0/crs/
    root@atlegen6$ ./root.sh
    .
    Checking to see if Oracle CRS stack is already configured
    Setting the permissions on OCR backup directory
    Setting up NS directories
    Oracle Cluster Registry configuration upgraded successfully
    .
    clscfg: EXISTING configuration version 3 detected.
    clscfg: version 3 is 10G Release 2.
    assigning default hostname atlegen5 for node 1.
    assigning default hostname atlegen6 for node 2.
    Successfully accumulated necessary OCR keys.
    Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
    node <nodenumber>: <nodename> <private interconnect name> <hostname>
    node 1: atlegen5 atlegen5i atlegen5
    node 2: atlegen6 atlegen6i atlegen6
    clscfg: Arguments check out successfully.
    NO KEYS WERE WRITTEN. Supply -force parameter to override.
    -force is destructive and will destroy any previous cluster configuration.
    Oracle Cluster Registry for cluster has already been initialized
    Startup will be queued to init within 90 seconds.
    Adding daemons to inittab
    Expecting the CRS daemons to be up within 600 seconds.
    CSS is active on these nodes.
            atlegen5
            atlegen6
    CSS is active on all nodes.
    Waiting for the Oracle CRSD and EVMD to start
    Oracle CRS stack installed and running under init(1M)
    Running vipca(silent) for configuring nodeapps
    Creating VIP application resource on (2) nodes.
    Creating GSD application resource on (2) nodes.
    Creating ONS application resource on (2) nodes.
    Starting VIP application resource on (2) nodes.
    Starting GSD application resource on (2) nodes.
    Starting ONS application resource on (2) nodes.
    Done.

Note that once root.sh has been run on the last node and the CRS stack is up on all the nodes, the root.sh script invokes the vipca utility to configure the nodeapps (including the VIPs) on all the nodes.

Several configuration assistants are run
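Once root.sh has completed on every node and vipca has created the nodeapps, the new resources can be checked from any node. A minimal sketch, reusing the CRS home path and node names from the output above:

    # Run as the oracle user
    /u01/app/crs/product/10.2.0/crs/bin/srvctl status nodeapps -n atlegen5   # reports the VIP, GSD and ONS status on that node
    /u01/app/crs/product/10.2.0/crs/bin/srvctl status nodeapps -n atlegen6
    /u01/app/crs/product/10.2.0/crs/bin/crs_stat -t                          # tabular view of all CRS-managed resources and their state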
