Learning a Cloud Environment on Virtual Machines, Part 2: Hadoop and HBase Deployment
Goal: on my T450 laptop running Windows 7, become familiar with a cloud-computing environment and, on top of the open-source Nutch project, implement fast word segmentation and retrieval over large volumes of text, using Hadoop, HBase, ZooKeeper, Solr, and Nutch. The previous article covered building the base Linux environment and setting up passwordless SSH login within the cluster; this article covers installing the Hadoop software.

1. Create the cluster directories

    mkdir -p /data/cluster
    cd /data/cluster
    mkdir tmp
    mkdir -p hdfs/data
    mkdir -p hdfs/name
    mkdir -p hdfs/journal

2. Install ZooKeeper

On the MNODE node, copy zookeeper-3.4.9.tar.gz to /data/cluster, then run:

    cd /data/cluster
    tar vzxf zookeeper-3.4.9.tar.gz
    cd /data/cluster/zookeeper-3.4.9/conf
    cp zoo_sample.cfg zoo.cfg

Edit zoo.cfg so that after the change it reads:

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # The directory where the snapshot is stored.
    # Do not use /tmp for storage; /tmp here is just an example.
    dataDir=/data/cluster/zookeeper-3.4.9/data
    # The port at which the clients will connect
    clientPort=2181
    server.1=MNODE:2888:3888
    server.2=SNODE:2888:3888
    # The maximum number of client connections.
    # Increase this if you need to handle more clients.
    #maxClientCnxns=60
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours; set to 0 to disable auto purge
    #autopurge.purgeInterval=1

Create the data directory and the myid file, then copy the installation to SNODE:

    cd /data/cluster/zookeeper-3.4.9
    mkdir data
    cd /data/cluster/zookeeper-3.4.9/data
    touch myid
    echo 1 > myid
    scp -r /data/cluster/zookeeper-3.4.9 SNODE:/data/cluster

On SNODE, run:

    echo 2 > /data/cluster/zookeeper-3.4.9/data/myid

On MNODE and SNODE, run:

    cd /data/cluster/zookeeper-3.4.9/bin
    ./zkServer.sh start
    ./zkServer.sh status

Note: the number in each node's myid file must match that node's server.X entry in zoo.cfg.

3. Install Hadoop

On MNODE, copy hadoop-2.7.3.tar.gz to /data/cluster, then run:

    cd /data/cluster
    tar vzxf hadoop-2.7.3.tar.gz
    cd /data/cluster/hadoop-2.7.3/etc/hadoop

In hadoop-env.sh, yarn-env.sh, and mapred-env.sh, set:

    export JAVA_HOME=/usr/java/jdk1.8.0_121

In core-site.xml, set the following properties (each one a <property> element with a <name> and a <value>):

    fs.defaultFS = hdfs://mycluster
    hadoop.tmp.dir = /data/cluster/tmp
    dfs.journalnode.edits.dir = /data/cluster/hdfs/journal
    ha.zookeeper.quorum = MNODE:2181,SNODE:2181

In hdfs-site.xml:

    dfs.namenode.name.dir = file:/data/cluster/hdfs/name
    dfs.datanode.data.dir = file:/data/cluster/hdfs/data
    dfs.replication = 2
    dfs.nameservices = mycluster
    dfs.ha.namenodes.mycluster = MNODE,SNODE
    dfs.namenode.rpc-address.mycluster.MNODE = MNODE:9000
    dfs.namenode.rpc-address.mycluster.SNODE = SNODE:9000
    dfs.namenode.http-address.mycluster.MNODE = MNODE:50070
    dfs.namenode.http-address.mycluster.SNODE = SNODE:50070
    dfs.namenode.shared.edits.dir = qjournal://MNODE:8485;SNODE:8485/mycluster
    dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
    dfs.ha.fencing.methods = sshfence
    dfs.ha.fencing.ssh.private-key-files = /root/.ssh/id_rsa
    dfs.ha.automatic-failover.enabled = true

For mapred-site.xml, first create it from the template:

    cp mapred-site.xml.template mapred-site.xml

and set:

    mapreduce.framework.name = yarn

In yarn-site.xml:

    yarn.resourcemanager.ha.enabled = true
    yarn.resourcemanager.ha.rm-ids = rm1,rm2
    yarn.resourcemanager.hostname.rm1 = MNODE
    yarn.resourcemanager.hostname.rm2 = SNODE
    yarn.resourcemanager.recovery.enabled = true
    yarn.resourcemanager.store.class = org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
    yarn.resourcemanager.zk-address = MNODE:2181,SNODE:2181
    yarn.resourcemanager.cluster-id = yarn-ha
    yarn.resourcemanager.ha.automatic-failover.enabled = true
    yarn.client.failover-proxy-provider = org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider
    yarn.nodemanager.aux-services = mapreduce_shuffle
    yarn.nodemanager.aux-services.mapreduce.shuffle.class = org.apache.hadoop.mapred.ShuffleHandler
    yarn.nodemanager.resource.memory-mb = 10240
    yarn.resourcemanager.address.rm1 = MNODE:8032
    yarn.resourcemanager.scheduler.address.rm1 = MNODE:8030
    yarn.resourcemanager.resource-tracker.address.rm1 = MNODE:8031
    yarn.resourcemanager.admin.address.rm1 = MNODE:8033
    yarn.resourcemanager.webapp.address.rm1 = MNODE:8088
    yarn.resourcemanager.address.rm2 = SNODE:8032
    yarn.resourcemanager.scheduler.address.rm2 = SNODE:8030
    yarn.resourcemanager.resource-tracker.address.rm2 = SNODE:8031
    yarn.resourcemanager.admin.address.rm2 = SNODE:8033
    yarn.resourcemanager.webapp.address.rm2 = SNODE:8088

In the slaves file, list the nodes one per line:

    MNODE
    SNODE

Copy the configured Hadoop distribution to the corresponding location on each node with scp:

    scp -r /data/cluster/hadoop-2.7.3 SNODE:/data/cluster/

On MNODE and SNODE, run:

    cd /data/cluster/zookeeper-3.4.9/bin
    ./zkServer.sh start
    ./zkServer.sh status
    cd /data/cluster/hadoop-2.7.3
    sbin/hadoop-daemon.sh start journalnode

On MNODE:

    cd /data/cluster/hadoop-2.7.3
    bin/hdfs zkfc -formatZK
    bin/hdfs namenode -format mycluster
    sbin/hadoop-daemon.sh start namenode

On SNODE (the standby must be initialized with -bootstrapStandby before its NameNode is started; zkfc -formatZK and namenode -format are run only once, on MNODE):

    cd /data/cluster/hadoop-2.7.3
    bin/hdfs namenode -bootstrapStandby
    sbin/hadoop-daemon.sh start namenode

On MNODE:

    cd /data/cluster/hadoop-2.7.3
    sbin/start-all.sh

On SNODE (to start the standby ResourceManager):

    cd /data/cluster/hadoop-2.7.3
    sbin/start-yarn.sh

On MNODE, verify the service states:

    bin/hdfs haadmin -getServiceState MNODE    # NameNode state on MNODE
    bin/hdfs haadmin -getServiceState SNODE    # NameNode state on SNODE
    bin/yarn rmadmin -getServiceState rm1      # ResourceManager state of rm1
    bin/yarn rmadmin -getServiceState rm2      # ResourceManager state of rm2

On MNODE and SNODE, run jps to confirm the expected processes have started. Then open the ResourceManager web UI on port 8088 (MNODE:8088, as configured above) in a browser to inspect the cluster.

4. Install HBase

On MNODE, copy hbase-1.3.0-bin.tar.gz to /data/cluster, then run:

    cd /data/cluster
    tar vzxf hbase-1.3.0-bin.tar.gz
    cd /data/cluster/hbase-1.3.0/conf

In hbase-env.sh, set:

    export JAVA_HOME=/usr/java/jdk1.8.0_121
    export HBASE_MANAGES_ZK=false
    export HBASE_CLASSPATH=/data/cluster/hadoop-2.7.3/etc/hadoop

In hbase-site.xml:

    hbase.rootdir = hdfs://mycluster/hbase
    hbase.cluster.distributed = true
    hbase.zookeeper.quorum = MNODE,SNODE
    hbase.zookeeper.property.dataDir = /data/cluster/zookeeper-3.4.9/data
    hbase.master = 60000
    hbase.zookeeper.property.clientPort = 2181

In the regionservers file:

    MNODE
    SNODE

Copy HBase to the other node:

    scp -r /data/cluster/hbase-1.3.0 SNODE:/data/cluster/

On MNODE, run:

    cd /d
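All of the *-site.xml edits above follow Hadoop's standard configuration layout: each name/value pair becomes one <property> element inside a top-level <configuration> element. As a rough sketch of that mapping (the make_site_xml helper below is my own illustration, not part of the original walkthrough), the core-site.xml properties could be generated like this:

```shell
# Sketch only: turn "name = value" lines into Hadoop's
# <configuration>/<property> XML layout.
make_site_xml() {
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<configuration>'
  while IFS='=' read -r name value; do
    name=$(echo "$name" | xargs)    # trim whitespace around the key
    value=$(echo "$value" | xargs)  # trim whitespace around the value
    [ -n "$name" ] || continue      # skip blank lines
    printf '  <property>\n    <name>%s</name>\n    <value>%s</value>\n  </property>\n' \
      "$name" "$value"
  done
  echo '</configuration>'
}

# The four core-site.xml properties from the walkthrough above:
make_site_xml > core-site.xml <<'EOF'
fs.defaultFS = hdfs://mycluster
hadoop.tmp.dir = /data/cluster/tmp
dfs.journalnode.edits.dir = /data/cluster/hdfs/journal
ha.zookeeper.quorum = MNODE:2181,SNODE:2181
EOF
```

The same function applied to the hdfs-site.xml, mapred-site.xml, and yarn-site.xml property lists produces the corresponding files; note that a value containing an unquoted apostrophe would trip up the xargs-based trimming, but none of the values in this deployment do.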

温馨提示

  • 1. 本站所有资源如无特殊说明,都需要本地电脑安装OFFICE2007和PDF阅读器。图纸软件为CAD,CAXA,PROE,UG,SolidWorks等.压缩文件请下载最新的WinRAR软件解压。
  • 2. 本站的文档不包含任何第三方提供的附件图纸等,如果需要附件,请联系上传者。文件的所有权益归上传用户所有。
  • 3. 本站RAR压缩包中若带图纸,网页内容里面会有图纸预览,若没有图纸预览就没有图纸。
  • 4. 未经权益所有人同意不得将文件中的内容挪作商业或盈利用途。
  • 5. 人人文库网仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对用户上传分享的文档内容本身不做任何修改或编辑,并不能对任何下载内容负责。
  • 6. 下载文件中如有侵权或不适当内容,请与我们联系,我们立即纠正。
  • 7. 本站不保证下载资源的准确性、安全性和完整性, 同时也不承担用户因使用这些下载资源对自己和他人造成任何形式的伤害或损失。

评论

0/150

提交评论