




Nutch Framework Video Tutorial

Lecture 13

1. Redistributing the load

New role assignment across the three machines:
host2 (NameNode, DataNode, TaskTracker)
host6 (SecondaryNameNode, DataNode, TaskTracker)
host8 (JobTracker, DataNode, TaskTracker)

Designate host6 as the SecondaryNameNode:
vi conf/masters        (set the content to host6)
scp conf/masters host6:/home/hadoop/hadoop-1.1.2/conf/masters
scp conf/masters host8:/home/hadoop/hadoop-1.1.2/conf/masters
vi conf/hdfs-site.xml
  <property>
    <name>dfs.http.address</name>
    <value>host2:50070</value>
  </property>
  <property>
    <name>dfs.secondary.http.address</name>
    <value>host6:50090</value>
  </property>
scp conf/hdfs-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/hdfs-site.xml
scp conf/hdfs-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/hdfs-site.xml

Designate host8 as the JobTracker:
vi conf/mapred-site.xml
  <property>
    <name>mapred.job.tracker</name>
    <value>host8:9001</value>
  </property>
scp conf/mapred-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/mapred-site.xml
scp conf/mapred-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/mapred-site.xml
vi conf/core-site.xml
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/hadoop/dfs/filesystem/namesecondary</value>
  </property>
scp conf/core-site.xml host6:/home/hadoop/hadoop-1.1.2/conf/core-site.xml
scp conf/core-site.xml host8:/home/hadoop/hadoop-1.1.2/conf/core-site.xml

Configure host8. The start-mapred.sh script on host8 starts the TaskTrackers on host2 and host6, so host8 needs passwordless SSH to them:
ssh-keygen -t rsa        (empty passphrase, default path)
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host2
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host6
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host8
host8 can now log in to host2 and host6 without a password:
ssh host2
ssh host6
ssh host8
Append to /home/hadoop/.bashrc:
export PATH=/home/hadoop/hadoop-1.1.2/bin:$PATH
On host2, run start-dfs.sh; on host8, run start-mapred.sh.

2. SecondaryNameNode
ssh host6
Stop the SecondaryNameNode:
hadoop-1.1.2/bin/hadoop-daemon.sh stop secondarynamenode
Force a merge of the fsimage and edits files:
hadoop-1.1.2/bin/hadoop secondarynamenode -checkpoint force
Start the SecondaryNameNode again:
hadoop-1.1.2/bin/hadoop-daemon.sh start secondarynamenode

3. Enabling the trash
In conf/core-site.xml:
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>

Lecture 14

1. Dynamically adding a DataNode and TaskTracker node

host226 serves as the example. On host226:
Set the hostname:
vi /etc/hostname
Map the hostname to an IP address:
vi /etc/hosts
Create the user and group:
addgroup hadoop
adduser -ingroup hadoop hadoop
Open up the temporary directory:
chmod 777 /tmp
On host2:
vi conf/slaves        (add host226)
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host226
scp -r /home/hadoop/hadoop-1.1.2 hadoop@host226:/home/hadoop/hadoop-1.1.2
On host8:
vi conf/slaves        (add host226)
ssh-copy-id -i .ssh/id_rsa.pub hadoop@host226
On host226:
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
In /etc/hosts, the localhost-to-IP mapping must come after all of the other IPv4 entries.

Lecture 15

1. Restricting which nodes may connect to the cluster

NameNode:
vi conf/hdfs-site.xml
  <property>
    <name>dfs.hosts</name>
    <value>/home/hadoop/hadoop-1.1.2/conf/include</value>
  </property>
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/home/hadoop/hadoop-1.1.2/conf/exclude</value>
  </property>
List the cluster's nodes:
vi /home/hadoop/hadoop-1.1.2/conf/include

JobTracker:
vi conf/mapred-site.xml
  <property>
    <name>mapred.hosts</name>
    <value>/home/hadoop/hadoop-1.1.2/conf/include</value>
  </property>
  <property>
    <name>mapred.hosts.exclude</name>
    <value>/home/hadoop/hadoop-1.1.2/conf/exclude</value>
  </property>
List the cluster's nodes:
vi /home/hadoop/hadoop-1.1.2/conf/include
Restart the cluster.

2. Dynamically removing a DataNode and TaskTracker node

Removing the DataNode:
vi /home/hadoop/hadoop-1.1.2/conf/exclude        (add the node to remove, host226)
On the NameNode, run:
hadoop dfsadmin -refreshNodes
vi hadoop-1.1.2/conf/slaves        (remove host226)
vi hadoop-1.1.2/conf/include        (remove host226)
hadoop dfsadmin -refreshNodes        (makes the include change take effect)
rm hadoop-1.1.2/conf/exclude
The exclude file exists mainly so that a DataNode can be decommissioned safely.

Removing the TaskTracker, method 1:
vi /home/hadoop/hadoop-1.1.2/conf/exclude        (add the node to remove, host226)
On the JobTracker, run:
hadoop mradmin -refreshNodes
vi hadoop-1.1.2/conf/slaves        (remove host226)
vi hadoop-1.1.2/conf/include        (remove host226)
hadoop mradmin -refreshNodes        (makes the include change take effect)
rm hadoop-1.1.2/conf/exclude

Removing the TaskTracker, method 2:
vi /home/hadoop/hadoop-1.1.2/conf/include        (delete host226 from the file)
On the JobTracker, run:
hadoop mradmin -refreshNodes
vi hadoop-1.1.2/conf/slaves        (remove host226)

Lecture 16

1. Running the benchmarks
hadoop jar hadoop-test-1.1.2.jar
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -write -nrFiles 12 -fileSize 100000 -resFile test
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -read -nrFiles 12 -fileSize 100000 -resFile test
hadoop jar hadoop-test-1.1.2.jar DFSCIOTest -clear

Lecture 17

Ganglia monitors the performance of large distributed systems: CPU, memory, disk, load, network traffic, and so on. It is accessed through a browser, and its charts make it easy to see the state of each node and of the whole cluster at a glance, which helps when tuning cluster parameters and raising overall resource utilization.

1. Configuring the server

host6 acts as the server.
Create the user and group:
addgroup ganglia
adduser -ingroup ganglia ganglia
Install:
apt-get install gmetad
apt-get install rrdtool
apt-get install ganglia-webfrontend
apt-get install ganglia-monitor
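The include/exclude bookkeeping used for decommissioning in Lecture 15 is easy to get wrong, because the exclude entry must be removed again after the second -refreshNodes. The sketch below simulates the three file edits for decommissioning host226, using local scratch files in place of the real conf/include and conf/exclude (the hadoop commands themselves are shown only as comments):

```shell
# Work in a scratch directory; the file names mirror conf/include and conf/exclude.
cd "$(mktemp -d)"
printf 'host2\nhost6\nhost8\nhost226\n' > include   # current cluster membership
printf 'host226\n' > exclude                        # step 1: mark host226 for decommission
# step 2: hadoop dfsadmin -refreshNodes runs here and waits for decommission to finish
grep -v '^host226$' include > include.tmp && mv include.tmp include   # step 3: drop host226
rm exclude                                          # step 4: remove the exclude file
# step 5: hadoop dfsadmin -refreshNodes again, so the include change takes effect
cat include                                         # prints host2, host6, host8
```

The same sequence applies to mapred.hosts/mapred.hosts.exclude with `hadoop mradmin -refreshNodes`.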
Configure gmond:
vi /etc/ganglia/gmond.conf
Find setuid = yes and change it to setuid = no; then find name in the cluster block and change it to name = "hadoop-cluster".
Configure gmetad:
vi /etc/ganglia/gmetad.conf
Add a data source and a grid name to this file:
data_source "hadoop-cluster" 10 host2 host6 host8
gridname "Hadoop"
Link the web folder:
ln -s /usr/share/ganglia-webfrontend /var/www/ganglia
Specify the server name:
vi /etc/apache2/apache2.conf        (add: ServerName host6)
Restart the services:
/etc/init.d/gmetad restart
/etc/init.d/ganglia-monitor restart
/etc/init.d/apache2 restart

2. Configuring the clients

Install the data-collection service on host2 and host8.
Create the user and group:
addgroup ganglia
adduser -ingroup ganglia ganglia
Install:
apt-get install ganglia-monitor
Configure gmond:
vi /etc/ganglia/gmond.conf
Find setuid = yes and change it to setuid = no; then find name in the cluster block and change it to name = "hadoop-cluster".
Restart the service:
/etc/init.d/ganglia-monitor restart

3. Accessing the page
http://host6/ganglia
If "Choose a Source" offers only unspecified, restarting gmetad fixes it:
/etc/init.d/gmetad restart

4. Integrating Hadoop
vi conf/hadoop-metrics2.properties
Set the content to:
# versions after 0.20 use ganglia31
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# default for supportsparse is false
*.sink.ganglia.supportsparse=true
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
# multicast address
namenode.sink.ganglia.servers=239.2.11.71:8649
datanode.sink.ganglia.servers=239.2.11.71:8649
jobtracker.sink.ganglia.servers=239.2.11.71:8649
tasktracker.sink.ganglia.servers=239.2.11.71:8649
maptask.sink.ganglia.servers=239.2.11.71:8649
reducetask.sink.ganglia.servers=239.2.11.71:8649
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=239.2.11.71:8649
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=239.2.11.71:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=239.2.11.71:8649
Copy the file to the other cluster nodes, then restart the cluster.

Lecture 18

1. Preparing data to compress
Download the URL database from dmoz:
wget /rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Prepare nutch 1.6:
svn co /repos/asf/nutch/tags/release-1.6/
cp release-1.6/conf/nutch-site.xml.template release-1.6/conf/nutch-site.xml
vi release-1.6/conf/nutch-site.xml
Add:
  <property>
    <name>http.agent.name</name>
    <value>nutch</value>
  </property>
cd release-1.6
ant
cd ..
Use DmozParser to parse the dmoz URL database into plain text:
release-1.6/runtime/local/bin/nutch org.apache.nutch.tools.DmozParser content.rdf.u8 > urls &
Put the url text onto HDFS:
hadoop fs -put urls urls

2. Injecting URLs with different compression methods
Enter the nutch main directory:
cd release-1.6
Inject the URLs without compression:
runtime/deploy/bin/nutch inject data_no_compress/crawldb urls
Inject the URLs with the default codec:
vi conf/nutch-site.xml
  <property>
    <name>mapred.output.compression.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapred.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.output.compression.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
  <property>
    <name>mapred.map.output.compression.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
ant
runtime/deploy/bin/nutch inject data_default_compress/crawldb urls
Inject the URLs with Gzip compression: edit conf/nutch-site.xml again, setting both codec properties to org.apache.hadoop.io.compress.GzipCodec, then:
ant
runtime/deploy/bin/nutch inject data_gzip_compress/crawldb urls
Inject the URLs with BZip2 compression: set both codec properties to org.apache.hadoop.io.compress.BZip2Codec, then:
ant
runtime/deploy/bin/nutch inject data_bzip2_compress/crawldb urls
Inject the URLs with Snappy compression: set both codec properties to org.apache.hadoop.io.compress.SnappyCodec, then:
ant
runtime/deploy/bin/nutch inject data_snappy_compress/crawldb urls
Compare the effect of the compression type and of the block size.

3. Configuring Snappy compression for Hadoop
Download and unpack:
wget /files/snappy-1.1.0.tar.gz
tar -xzvf snappy-1.1.0.tar.gz
cd snappy-1.1.0
Build and install:
./configure
make
make install
Copy the library files:
scp /usr/local/lib/libsnappy* host2:/home/hadoop/hadoop-1.1.2/lib/native/Linux-amd64-64/
scp /usr/local/lib/libsnappy* host6:/home/hadoop/hadoop-1.1.2/lib/native/Linux-amd64-64/
scp /usr/local/lib/libsnappy* host8:/home/hadoop/hadoop-1.1.2/lib/native/Linux-amd64-64/
On every cluster machine, edit the environment variables:
vi /home/hadoop/.bashrc
Append:
export LD_LIBRARY_PATH=/home/hadoop/hadoop-1.1.2/lib/native/Linux-amd64-64

Lecture 19

1. Ganglia monitoring several clusters on the same subnet via multicast
vi /etc/ganglia/gmetad.conf
data_source "cluster1" 10 host2
data_source "cluster2" 10 host6
data_source "cluster3" 10 host8
/etc/init.d/gmetad restart
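Each data_source line in gmetad.conf names a cluster, a polling interval in seconds, and one or more hosts to poll. Since the lines only differ in the cluster index and host, they can be generated rather than typed; a small sketch (cluster-to-host mapping taken from the listing above):

```shell
# Emit one data_source line per cluster: name, 10-second polling interval, host to poll.
lines=$(
  i=1
  for h in host2 host6 host8; do
    printf 'data_source "cluster%d" 10 %s\n' "$i" "$h"
    i=$((i + 1))
  done
)
echo "$lines"
```

Appending the output to /etc/ganglia/gmetad.conf (and restarting gmetad) reproduces the configuration shown above.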
Each cluster's nodes must then be given their own port.
cluster1:
vi /etc/ganglia/gmond.conf
Set the cluster name:
cluster {
  name = "cluster1"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
Specify the port:
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8661
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8661
  bind = 239.2.11.71
}
/etc/init.d/ganglia-monitor restart
cluster2 and cluster3 are configured the same way: name = "cluster2" with port 8662 on cluster2's nodes, name = "cluster3" with port 8663 on cluster3's nodes, each followed by /etc/init.d/ganglia-monitor restart.

2. Ganglia monitoring several clusters on the same subnet via unicast
vi /etc/ganglia/gmetad.conf
data_source "cluster1" 10 host2
data_source "cluster2" 10 host6
data_source "cluster3" 10 host8
/etc/init.d/gmetad restart
cluster1:
vi /etc/ganglia/gmond.conf
Set the cluster name:
cluster {
  name = "cluster1"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
Specify the node that receives the data:
udp_send_channel {
  # mcast_join = 239.2.11.71
  host = host2
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  # bind = 239.2.11.71
}
/etc/init.d/ganglia-monitor restart
cluster2 and cluster3 are configured the same way, except that cluster2's nodes send to host = host6 and cluster3's nodes send to host = host8, each followed by /etc/init.d/ganglia-monitor restart.

3. Ganglia monitoring clusters that span subnets
Hosts on different subnets that belong to the same cluster cannot use ganglia's multicast configuration; unicast must be used instead. The following adds host226 to cluster1.
Install the data-collection service on host226.
Create the user and group:
addgroup ganglia
adduser -ingroup ganglia ganglia
Install:
apt-get install ganglia-monitor
Configure gmond:
vi /etc/ganglia/gmond.conf
Find setuid = yes and change it to setuid = no; then find name in the cluster block and change it to name = "cluster1".
Specify the port (note that the multicast demonstration in step 1 already changed the UDP port to 8661):
udp_send_channel {
  mcast_join = 239.2.11.71
  port = 8661
  ttl = 1
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8661
  bind = 239.2.11.71
}
Restart the service:
/etc/init.d/ganglia-monitor restart

Lecture 20

1. Ganglia monitoring a single cluster that spans subnets, via unicast
vi /etc/ganglia/gmetad.conf
data_source "hadoop-cluster" 10 host6
/etc/init.d/gmetad restart
On every node of the cluster:
vi /etc/ganglia/gmond.conf
Set the cluster name:
cluster {
  name = "hadoop-cluster"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
Specify the node that receives the data:
udp_send_channel {
  # mcast_join = 239.2.11.71
  host = host6
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71
  port = 8649
  # bind = 239.2.11.71
}
/etc/init.d/ganglia-monitor restart

2. Configuring the Hadoop cluster to use the unicast address
vi conf/hadoop-metrics2.properties
Set the content to:
# versions after 0.20 use ganglia31
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# default for supportsparse is false
*.sink.ganglia.supportsparse=true
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
namenode.sink.ganglia.servers=host6
datanode.sink.ganglia.servers=host6
jobtracker.sink.ganglia.servers=host6
tasktracker.sink.ganglia.servers=host6
maptask.sink.ganglia.servers=
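The per-daemon sink entries above all follow one pattern: each daemon name followed by .sink.ganglia.servers and the unicast receiver. A small sketch that generates the full set (daemon names and the host6 receiver are taken from the listing; output goes to stdout only):

```shell
# One sink entry per Hadoop 1.x daemon, all pointing at the unicast gmond on host6.
entries=$(
  for d in namenode datanode jobtracker tasktracker maptask reducetask; do
    echo "$d.sink.ganglia.servers=host6"
  done
)
echo "$entries"
```

As in the multicast case, the edited hadoop-metrics2.properties is copied to every node and the cluster is restarted.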