
GPFS Installation, Configuration, and Management

Contents

1 GPFS Installation, Configuration, Management, and Maintenance
  1.1 GPFS Installation
  1.2 GPFS Configuration
    1.2.1 Cluster Node Configuration
    1.2.2 GPFS NSD Disk Configuration
    1.2.3 Tuning the GPFS Cluster Configuration
    1.2.4 Creating the GPFS File System
    1.2.5 Configuration Changes
  1.3 GPFS Management and Maintenance

1 GPFS Installation, Configuration, Management, and Maintenance

This part describes the GPFS installation and configuration process, as well as GPFS management and maintenance methods, based on the actual environment of company XX's HPIS system.

1.1 GPFS Installation

GPFS 3.3 and its fix packs are installed with the standard AIX installp method. Example installation command:

# installp -agYXd . all

Check the installation result:

# lslpp -l | grep gpfs
  gpfs.base       3.3.0.16  COMMITTED  GPFS File Manager
  gpfs.base       3.3.0.16  COMMITTED  GPFS File Manager
  gpfs.docs.data  3.3.0.1   COMMITTED  GPFS Server Manpages and

1.2 GPFS Configuration

1.2.1 Cluster Node Configuration

Prepare the node list file. The node attributes are manager or client, and quorum or nonquorum:

# more /home/GPFS/nodes
hpis1:manager-quorum
hpis2:manager-quorum

Create the two-node GPFS cluster for RAC:

# mmcrcluster -N /home/GPFS/nodes -p hpis1 -s hpis2 -r /usr/bin/rsh -R /usr/bin/rcp -C cls_hpis

-p and -s specify the primary and secondary cluster configuration servers. -r /usr/bin/rsh -R /usr/bin/rcp selects rsh and rcp for cluster administration; ssh and scp can be used instead once ssh is configured.

View the cluster configuration:

# mmlscluster

1.2.2 GPFS NSD Disk Configuration

Because the RAC has only two server nodes, both servers are designated quorum nodes to keep the GPFS file system highly available, and three tiebreaker quorum disks are used: one disk from each of the two storage subsystems, plus one local disk of one server.

Prepare the NSD descriptor file, formatted as follows:

# more /home/GPFS/nsd
hdisk3:hpis2:descOnly:1:nsd00:
hdisk29:dataAndMetadata:2:nsd01:
hdisk30:dataAndMetadata:2:nsd02:
hdisk31:dataAndMetadata:2:nsd03:
hdisk32:dataAndMetadata:2:nsd04:
hdisk33:dataAndMetadata:2:nsd05:
hdisk59:dataAndMetadata:3:nsd06:
hdisk60:dataAndMetadata:3:nsd07:
hdisk61:dataAndMetadata:3:nsd08:
hdisk62:dataAndMetadata:3:nsd09:
hdisk63:dataAndMetadata:3:nsd10:

Notes:
1. The local disk of node hpis2 and the disks of the two storage subsystems are assigned to failure groups 1, 2, and 3, respectively.
2. The local disk hdisk3 is set to descOnly; the storage disks are set to dataAndMetadata.

Create the NSDs:

# mmcrnsd -F /home/GPFS/nsd -v yes

After the NSDs are created, /home/GPFS/nsd is rewritten, for example:

# hdisk3:descOnly:1:nsd00:
nsd00:descOnly:1:
# hdisk29:dataAndMetadata:2:nsd01:
nsd01:dataAndMetadata:2:
# hdisk30:dataAndMetadata:2:nsd02:
nsd02:dataAndMetadata:2:
# hdisk31:dataAndMetadata:2:nsd03:
nsd03:dataAndMetadata:2:
# hdisk32:dataAndMetadata:2:nsd04:
nsd04:dataAndMetadata:2:
# hdisk33:dataAndMetadata:2:nsd05:
nsd05:dataAndMetadata:2:
# hdisk59:dataAndMetadata:3:nsd06:
nsd06:dataAndMetadata:3:
# hdisk60:dataAndMetadata:3:nsd07:
nsd07:dataAndMetadata:3:
# hdisk61:dataAndMetadata:3:nsd08:
nsd08:dataAndMetadata:3:
# hdisk62:dataAndMetadata:3:nsd09:
nsd09:dataAndMetadata:3:
# hdisk63:dataAndMetadata:3:nsd10:
nsd10:dataAndMetadata:3:

1.2.3 Tuning the GPFS Cluster Configuration

View the current cluster configuration:

# mmlscluster

GPFS cluster information
========================
  GPFS cluster name:         cls_hpis.hpis1
  GPFS cluster id:           752142207565323869
  GPFS UID domain:           cls_hpis.hpis1
  Remote shell command:      /usr/bin/rsh
  Remote file copy command:  /usr/bin/rcp

GPFS cluster configuration servers:
-----------------------------------
  Primary server:    hpis2
  Secondary server:  hpis1

 Node  Daemon node name  IP address  Admin node name  Designation
-------------------------------------------------------------------
   1   hpis1             10.1.1.90   hpis1            quorum-manager
   2   hpis2             10.1.1.91   hpis2            quorum-manager

# mmlsconfig
Configuration data for cluster cls_hpis.hpis1:
----------------------------------------------
clusterName cls_hpis.hpis1
clusterId 752142207565323869
autoload yes
minReleaseLevel 3.3.0.2
dmapiFileHandleSize 32
maxblocksize 8M
maxFilesToCache 16384
maxStatCache 65536
maxMBpS 8192
pagepool 2048M
pagepoolMaxPhysMemPct 80
tiebreakerDisks nsd00;nsd01;nsd06
failureDetectionTime 10
adminMode central

File systems in cluster cls_hpis.hpis1:
---------------------------------------
/dev/oradata

The pagepool and tiebreakerDisks parameters are the most important ones here.

The cluster configuration is modified with mmchconfig; some settings require the GPFS cluster to be shut down first (a full sequence is sketched below). Note that the semicolon-separated tiebreaker list must be quoted, otherwise the shell treats the semicolons as command separators:

# mmchconfig pagepool=3072M
# mmchconfig tiebreakerDisks="nsd00;nsd01;nsd06"
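A minimal sketch of the full change sequence, assuming GPFS 3.3 behavior: at this level tiebreakerDisks cannot be changed while the daemons are running, and a new pagepool size only takes effect when the daemons restart (which parameters require a shutdown varies by GPFS release, so check the mmchconfig documentation for your level):

# mmshutdown -a                                     (stop GPFS on all nodes)
# mmchconfig pagepool=3072M                         (new page pool size)
# mmchconfig tiebreakerDisks="nsd00;nsd01;nsd06"    (quoted semicolon list)
# mmstartup -a                                      (restart GPFS on all nodes)
# mmlsconfig                                        (confirm the new values)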
1.2.4 Creating the GPFS File System

# mmcrfs oradata -F /home/GPFS/nsd -T /oradata -A yes -K always -B 2m -E no -m 2 -M 2 -n 32 -Q no -r 2 -R 2 -S yes -v no

Notes:
- The mount point is /oradata and the block size is 2m. The block size cannot be changed once the file system has been created, and it should generally follow the LUN configuration on the storage side to obtain optimal performance.
- -m 2 -M 2 writes two replicas of the metadata; -r 2 -R 2 writes two replicas of the data.
- Since the data disks were assigned to failure groups 2 and 3, GPFS automatically balances the data writes across the different failure groups.

1.2.5 Configuration Changes

1.2.5.1 Node Changes

Cluster and file system manager roles. The current manager roles:

# mmlsmgr
file system   manager node
------------- ------------------
oradata       10.1.1.90 (hpis1)

Cluster manager node: 10.1.1.90 (hpis1)

Move the file system or cluster manager role to hpis2:

# mmchmgr oradata hpis2
# mmchmgr -c hpis2

Add a node with the mmaddnode command, for example:

# mmaddnode -N othernode1

Change the quorum and manager attributes of a node:

# mmchnode --quorum -N othernode1
# mmchnode --nonquorum -N othernode2
# mmchnode --manager -N othernode1
# mmchnode --client -N othernode1

Note: the two current RAC nodes should both be set to manager and quorum. A newly added server node can likewise be set to manager and quorum; a newly added client node should normally be set to client and nonquorum.

1.2.5.2 NSD Disk Changes

Adding NSD disks. When adding NSDs, pick disks from both storage subsystems and add them in pairs. On AIX, lscfg -vp | grep hdisk shows which storage subsystem an hdisk belongs to, for example:

# lscfg -vp | grep hdisk
hdisk4   U78AA.001.WZSGP8Z-P1-C4-T1-W20140080E518F286-L1000000000000  MPIO DS5020 Disk
hdisk34  U78AA.001.WZSGP8Z-P1-C4-T1-W20140080E518E3DA-L1000000000000  MPIO DS5020 Disk

The WWN segment of the location code (W20140080E518F286 vs. W20140080E518E3DA above) identifies the source storage subsystem. When adding an hdisk, set its failure group to match the existing disks from the same storage subsystem.

Prepare a descriptor file for the NSDs to be added, for example:

# more /home/GPFS/nsd2
hdisk28:dataAndMetadata:2:nsd11:
hdisk58:dataAndMetadata:3:nsd12:

# mmcrnsd -F /home/GPFS/nsd2 -v yes
mmcrnsd: Processing disk hdisk28
mmcrnsd: Processing disk hdisk58
mmcrnsd: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

# more /home/GPFS/nsd2
# hdisk28:dataAndMetadata:2:nsd11:
nsd11:dataAndMetadata:2:
# hdisk58:dataAndMetadata:3:nsd12:
nsd12:dataAndMetadata:3:

Add the new NSDs to the file system (rebalancing is sketched after the disk listings below):

# mmadddisk oradata -F /home/GPFS/nsd2

The following disks of oradata will be formatted on node hpis1:
    nsd11: size 209715200 KB
    nsd12: size 209715200 KB
Extending Allocation Map
Checking Allocation Map for storage pool system
Completed adding disks to file system oradata.
mmadddisk: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Current NSD list:

# mmlsnsd -aL

 File system   Disk name   NSD volume ID      NSD servers
------------------------------------------------------------------
 oradata       nsd00       0A70255B4E8260E1   hpis2
 oradata       nsd01       0A70255B4E826051   (directly attached)
 oradata       nsd02       0A70255B4E826052   (directly attached)
 oradata       nsd03       0A70255B4E826053   (directly attached)
 oradata       nsd04       0A70255B4E826054   (directly attached)
 oradata       nsd05       0A70255B4E826055   (directly attached)
 oradata       nsd06       0A70255B4E826056   (directly attached)
 oradata       nsd07       0A70255B4E826057   (directly attached)
 oradata       nsd08       0A70255B4E826058   (directly attached)
 oradata       nsd09       0A70255B4E826059   (directly attached)
 oradata       nsd10       0A70255B4E82605A   (directly attached)
 oradata       nsd11       0A70255B4E855BFB   (directly attached)
 oradata       nsd12       0A70255B4E855BFC   (directly attached)

Current disk list of the /oradata file system:

# mmlsdisk oradata -L

disk   driver sector failure holds    holds                             storage
name   type   size   group   metadata data  status availability disk id pool    remarks
------ ------ ------ ------- -------- ----- ------ ------------ ------- ------- -------
nsd00  nsd    512    1       no       no    ready  up           1       system  desc
nsd01  nsd    512    2       yes      yes   ready  up           2       system  desc
nsd02  nsd    512    2       yes      yes   ready  up           3       system
nsd03  nsd    512    2       yes      yes   ready  up           4       system
nsd04  nsd    512    2       yes      yes   ready  up           5       system
nsd05  nsd    512    2       yes      yes   ready  up           6       system
nsd06  nsd    512    3       yes      yes   ready  up           7       system  desc
nsd07  nsd    512    3       yes      yes   ready  up           8       system
nsd08  nsd    512    3       yes      yes   ready  up           9       system
nsd09  nsd    512    3       yes      yes   ready  up           10      system
nsd10  nsd    512    3       yes      yes   ready  up           11      system
nsd11  nsd    512    2       yes      yes   ready  up           12      system
nsd12  nsd    512    3       yes      yes   ready  up           13      system
Number of quorum disks: 3
Read quorum value:      2
Write quorum value:     2
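Note that in the mmdf output below, the newly added nsd11 and nsd12 are still nearly 100% free: GPFS stripes new writes across all disks, but it does not move existing data onto new disks on its own. A minimal rebalancing sketch, as an optional follow-up not taken from the original text (mmrestripefs is I/O-intensive, so run it in a maintenance window if the existing data should be spread evenly right away):

# mmrestripefs oradata -b

The -b option rebalances all existing files evenly across all disks of the file system; the result can be checked afterwards with mmdf oradata.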
Current size and status of the /oradata file system:

# mmdf oradata
disk             disk size  failure holds    holds        free KB              free KB
name                 in KB    group metadata data   in full blocks         in fragments
--------------- ----------  ------- -------- ----- -------------------  ------------------
Disks in storage pool: system (Maximum disk size allowed is 4.5 TB)
nsd00            143373952        1 no       no            0 (  0%)             0 ( 0%)
nsd01            209715200        2 yes      yes   208898048 (100%)          5248 ( 0%)
nsd02            209715200        2 yes      yes   208902144 (100%)          4352 ( 0%)
nsd03            209715200        2 yes      yes   208889856 (100%)          5888 ( 0%)
nsd04            209715200        2 yes      yes   208887808 (100%)          4416 ( 0%)
nsd05            209715200        2 yes      yes   208896000 (100%)          5632 ( 0%)
nsd11            209715200        2 yes      yes   209711104 (100%)          1984 ( 0%)
nsd07            209715200        3 yes      yes   208902144 (100%)          6400 ( 0%)
nsd08            209715200        3 yes      yes   208891904 (100%)          4544 ( 0%)
nsd09            209715200        3 yes      yes   208885760 (100%)          3072 ( 0%)
nsd10            209715200        3 yes      yes   208893952 (100%)          7872 ( 0%)
nsd06            209715200        3 yes      yes   208900096 (100%)          3648 ( 0%)
nsd12            209715200        3 yes      yes   209711104 (100%)          1984 ( 0%)
                -----------                        -------------------  ------------------
(pool total)    2659956352                         2508369920 ( 94%)         55040 ( 0%)
                ===========                        ===================  ==================
(total)         2659956352                         2508369920 ( 94%)         55040 ( 0%)

Inode Information
-----------------
Number of used inodes:         4095
Number of free inodes:       528385
Number of allocated inodes:  532480
Maximum number of inodes:   2188019

1.3 GPFS Management and Maintenance

Start GPFS on all nodes / a single node:

# mmstartup -a
# mmstartup -N hpis1
# mmstartup -N hpis2

Mount the file systems on all nodes / the local node:

# mmmount all -a
# mmmount all

Unmount the file systems on all nodes / the local node:

# mmumount all -a
# mmumount all

Stop GPFS on all nodes / a single node:

# mmshutdown -a
# mmshutdown -N hpis1
# mmshutdown -N hpis2

View the GPFS state:

# mmgetstate -Las

 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state  Remarks
-----------------------------------------------------------------------------
      1       hpis1      1*      2         2            active      quorum node
      2       hpis2      1*      2         2            active      quorum node

 Summary information
---------------------
Number of nodes defined in the cluster:         2
Number of local nodes active in the cluster:    2
Number of remote nodes joined in this cluster:  0
Number of quorum nodes defined in the cluster:  2
Number of quorum nodes active in the cluster:   2
Quorum = 1*, Quorum achieved

View the file system mount state:

# mmlsmount all -L

File system oradata is mounted on 2 nodes:
  10.1.1.90  hpis1
  10.1.1.91  hpis2

Monitor GPFS file system I/O with mmpmon, for example (a sample input file is sketched at the end of this section):

# mmpmon -i /home/mon_gpfs -d 2000 -r 1000 -s -t 60

Manually resynchronizing the file system after a storage failure is repaired. After a storage subsystem fails, its NSDs change to the down state; example mmlsdisk output:

# mmlsdisk oradata -L

disk   driver sector failure holds    holds                             storage
name   type   size   group   metadata data  status availability disk id pool    remarks
------ ------ ------ ------- -------- ----- ------ ------------ ------- ------- -------
nsd00  nsd    512    1       no       no    ready  up           1       system  desc
nsd01  nsd    512    2       yes      yes   ready  up           2       system  desc
nsd02  nsd    512    2       yes      yes   ready  up           3       system  desc
nsd03  nsd    512    2       yes      yes   ready  up           4       system
nsd04  nsd    512    2       yes      yes   ready  up           5       system
nsd05  nsd    512    2       yes      yes   ready  up           6       system
nsd06  nsd    512    3       yes      yes   ready  down         7       system
nsd07  nsd    512    3       yes      yes   ready  down         8       system
nsd08  nsd    512    3       yes      yes   ready  down         9       system
nsd09  nsd    512    3       yes      yes   ready  down         10      system
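Once the failed storage is reachable again, the standard GPFS procedure is to start the down disks so that the replicas that went stale during the outage are resynchronized. A minimal sketch, assuming this standard procedure (the exact commands are not shown in the text above):

# mmchdisk oradata start -a

When the command completes, GPFS has copied the data that changed during the outage back onto the recovered disks, and mmlsdisk oradata -L should again show availability "up" for all disks.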
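As referenced at the mmpmon command above, mmpmon reads its requests from the file passed with -i, one request per line. A minimal sketch of such an input file (the path /home/mon_gpfs comes from the text above; its contents here are an assumption, using two standard mmpmon requests):

# more /home/mon_gpfs
fs_io_s
io_s

fs_io_s reports I/O statistics per mounted file system and io_s reports aggregate I/O statistics for the node; with -d 2000 -r 1000, the request list is replayed every 2000 milliseconds for 1000 cycles.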
