




浪曦 Hadoop 1.2.1 cluster setup notes.

Environment: the local Windows machine (desktop/laptop, used to upload the installation files) plus three Linux machines: a master (IP ending in 3.3) and two slaves, slave1 and slave2 (IPs ending in 3.100 and 3.101).

Upload tool: make sure the FTP service is running.

[root@localhost ~]# /etc/init.d/vsftpd restart
Shutting down vsftpd:            [FAILED]
Starting vsftpd for vsftpd:      [ OK ]

The configuration files live in /etc/vsftpd:

[root@localhost vsftpd]# ls
ftpusers  user_list  vsftpd.conf  vsftpd_conf_migrate.sh

Comment out the root user in both ftpusers and user_list, then restart the service:

[root@localhost vsftpd]# /etc/init.d/vsftpd restart

Configure the hostnames. All three machines get a hostname:

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
[root@localhost ~]# hostname master

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave1
[root@localhost ~]# hostname slave1

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave2
[root@localhost ~]# hostname slave2

Disable the firewall and SELinux.

[root@master ~]# vim /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#   enforcing  - SELinux security policy is enforced.
#   permissive - SELinux prints warnings instead of enforcing.
#   disabled   - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#   targeted - Only targeted network daemons are protected.
#   strict   - Full SELinux protection.
SELINUXTYPE=targeted

[root@master ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target  prot  opt  source  destination
Chain FORWARD (policy ACCEPT)
num  target  prot  opt  source  destination
Chain OUTPUT (policy ACCEPT)
num  target  prot  opt  source  destination

Name resolution, so that master, slave1 and slave2 can reach each other by name. Do this on all three machines: add one line per node mapping its IP address to its hostname (substitute each node's actual address for the placeholders).

[root@master ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
<master-ip>     master
<slave1-ip>     slave1
<slave2-ip>     slave2

Install the JDK. Create a directory on all three machines:

[root@master ~]# mkdir installer

Upload jdk-6u27-linux-i586-rpm.bin (E:\开发工具\jdk-6u27-linux-i586-rpm.bin on the Windows machine) into the installer directory, make it executable and run it:

[root@master installer]# ll
total 78876
-rw-r--r-- 1 root root 80680219 12-01 12:50 jdk-6u27-linux-i586-rpm.bin
[root@master installer]# chmod a+x jdk-6u27-linux-i586-rpm.bin
[root@master installer]# ./jdk-6u27-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP.
  inflating: jdk-6u27-linux-i586.rpm
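The preparation above (hostname, SELinux, iptables) is done by editing files interactively on each node. A minimal script that applies the same changes non-interactively is sketched below; it is an assumption-laden convenience, not part of the original procedure, and it relies on the RHEL/CentOS-style tools used in the transcript (/etc/sysconfig/network, setenforce, service, chkconfig).

#!/bin/bash
# prep-node.sh: a sketch of the node preparation steps above.
# Run as root on each node, passing the hostname to assign, e.g.  ./prep-node.sh slave1
NEW_HOSTNAME="$1"

# Set the hostname for the running system and persist it across reboots.
hostname "$NEW_HOSTNAME"
sed -i "s/^HOSTNAME=.*/HOSTNAME=$NEW_HOSTNAME/" /etc/sysconfig/network

# Disable SELinux immediately (ignore the error if it is already disabled) and permanently.
setenforce 0 2>/dev/null || true
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

# Stop the firewall now and keep it off after reboot.
service iptables stop
chkconfig iptables off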
You can open another connection and copy the JDK package to the other nodes:

[root@master installer]# scp jdk-6u27-linux-i586-rpm.bin slave1:/root/installer
root@slave1's password:
jdk-6u27-linux-i586-rpm.bin                                   100%

Once copied, install the JDK on all three machines and verify it:

[root@master installer]# java -version
java version "1.6.0_27"

Add the hadoop user. Do this on all three machines:

[root@master ~]# useradd hadoop
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

On slave1 and slave2, create the installer directory as the hadoop user:

[root@slave1 ~]# su - hadoop
[hadoop@slave1 ~]$ mkdir installer

Configure SSH equivalence (passwordless SSH) for the hadoop user. Generate an RSA key pair, pressing Enter at every prompt; run this as the hadoop user on all three machines, since the following steps work on hadoop's ~/.ssh:

[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
5b:d6:30:95:6f:37:b1:d8:e0:ec:b4:cb:94:cc:3f:cc hadoop@master

On master, append the public key to authorized_keys:

[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@master .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@master .ssh]$ ls
authorized_keys  id_rsa  id_rsa.pub

Copy the resulting authorized_keys file to slave1:

[hadoop@master .ssh]$ scp authorized_keys slave1:~/.ssh/
The authenticity of host 'slave1' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1' (RSA) to the list of known hosts.
hadoop@slave1's password:
authorized_keys                               100%  395   0.4KB/s   00:00

Then, on slave1, append slave1's own public key and check the file:

[hadoop@slave1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave1 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw8t... hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqhi... hadoop@slave1

Copy the file from slave1 on to slave2:

[hadoop@slave1 .ssh]$ scp authorized_keys slave2:~/.ssh/
The authenticity of host 'slave2' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2' (RSA) to the list of known hosts.
hadoop@slave2's password:
authorized_keys                               100%  790   0.8KB/s   00:00
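The remaining hops (appending slave2's key and copying the finished file back) continue below. As an aside, the same all-to-all trust can be built with much less manual copying by letting ssh-copy-id append each node's key for you; a minimal sketch, assuming the openssh-clients ssh-copy-id helper is installed and the loop is run once per node as the hadoop user:

# A sketch, not the exact procedure used above; run as the hadoop user on every node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa            # non-interactive key generation
for h in master slave1 slave2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$h       # appends this node's key to $h's authorized_keys
done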
On slave2, append slave2's own public key as well:

[hadoop@slave2 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave2 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw8t... hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqhi... hadoop@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzyF... hadoop@slave2

Then copy the complete file back to master and slave1:

[hadoop@slave2 .ssh]$ scp authorized_keys master:~/.ssh/
The authenticity of host 'master' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master' (RSA) to the list of known hosts.
hadoop@master's password:
authorized_keys                               100% 1185   1.2KB/s   00:00
[hadoop@slave2 .ssh]$ scp authorized_keys slave1:~/.ssh/
hadoop@slave1's password:
authorized_keys                               100% 1185   1.2KB/s   00:00

Fix the permissions on all three machines:

[hadoop@master .ssh]$ chmod 600 authorized_keys

The SSH configuration is now complete: ssh slave1 connects directly, with no password prompt.

Configure Hadoop. Upload the Hadoop tarball and unpack it as the hadoop user:

[hadoop@master installer]$ tar xzf hadoop-1.2.1.tar.gz
[hadoop@master installer]$ ll
total 62428
drwxr-xr-x 15 hadoop hadoop     4096 2013-07-23 hadoop-1.2.1
-rw-r--r--  1 hadoop hadoop 63851630 12-01 13:20 hadoop-1.2.1.tar.gz

Move it to the home directory and create a symbolic link:

[hadoop@master installer]$ mv hadoop-1.2.1 ..
[hadoop@master installer]$ cd ..
[hadoop@master ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@master ~]$ ll
total 8
lrwxrwxrwx  1 hadoop hadoop   13 12-01 13:22 hadoop -> hadoop-1.2.1/
drwxr-xr-x 15 hadoop hadoop 4096 2013-07-23 hadoop-1.2.1
drwxrwxr-x  2 hadoop hadoop 4096 12-01 13:22 installer

Configure the environment variables:

[hadoop@master ~]$ vim .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# User specific aliases and functions
# Hadoop 1.0
export JAVA_HOME=/usr/java/jdk1.6.0_27
export HADOOP1_HOME=/home/hadoop/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP1_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib

Copy it to slave1 and slave2:

[hadoop@master ~]$ scp .bashrc slave1:
.bashrc                                       100%  308   0.3KB/s   00:00
[hadoop@master ~]$ scp .bashrc slave2:
The authenticity of host 'slave2' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2' (RSA) to the list of known hosts.
.bashrc                                       100%  308   0.3KB/s   00:00
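With passwordless SSH and the environment file on every node, a quick loop can confirm that each machine is reachable without a password and sees the same JDK. A minimal sketch (run on master as the hadoop user; it assumes the .bashrc above has already been copied to each node):

for h in master slave1 slave2; do
    echo "== $h =="
    # no password prompt should appear; .bashrc is sourced explicitly for the non-interactive shell
    ssh $h 'source ~/.bashrc; hostname; java -version 2>&1 | head -1'
done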
Configure the Hadoop configuration files on master:

[hadoop@master ~]$ cd hadoop
[hadoop@master hadoop]$ cd conf
[hadoop@master conf]$ vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_27

[hadoop@master conf]$ vim core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>

[hadoop@master conf]$ vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

[hadoop@master conf]$ vim mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>

[hadoop@master conf]$ vim masters
master
[hadoop@master conf]$ vim slaves
slave1
slave2

Create the data directories on the slave nodes as root:

[hadoop@slave1 ~]$ su - root
Password:
[root@slave1 ~]# mkdir -p /data/hadoop
[root@slave1 ~]# chown hadoop.hadoop /data/hadoop/

[root@slave2 ~]# mkdir -p /data/hadoop
[root@slave2 ~]# chown hadoop.hadoop /data/hadoop/

Copy the Hadoop tree to the slaves and create the symbolic link on each:

[hadoop@master ~]$ scp -r hadoop-1.2.1/ slave1:
[hadoop@slave1 ~]$ ln -s hadoop-1.2.1/ hadoop

[hadoop@master ~]$ scp -r hadoop-1.2.1/ slave2:
[hadoop@slave2 ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@slave2 ~]$ ll
total 8
lrwxrwxrwx  1 hadoop hadoop   13 12-01 13:51 hadoop -> hadoop-1.2.1/
drwxr-xr-x 11 hadoop hadoop 4096 12-01 13:51 hadoop-1.2.1
drwxrwxr-x  2 hadoop hadoop 4096 12-01 13:06 installer

Format the NameNode on master:

[hadoop@master ~]$ hadoop namenode -format
14/12/01 13:49:36 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   java = 1.6.0_27
************************************************************/
14/12/01 13:49:37 INFO namenode.FSNamesystem: fsOwner=hadoop
14/12/01 13:49:37 INFO namenode.FSNamesystem: supergroup=supergroup
14/12/01 13:49:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/12/01 13:49:37 INFO common.Storage: Image file /home/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/12/01 13:49:38 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
14/12/01 13:49:38 INFO namenode.NameNode: SHUTDOWN_MSG: Shutting down NameNode at master

Start the cluster:

[hadoop@master ~]$ start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave1.out
The authenticity of host 'master' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master' (RSA) to the list of known hosts.
master: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-master.out
slave1: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave1.out
slave2: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave2.out
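Once start-all.sh finishes, the daemons are checked with jps below. An additional HDFS-level check, using standard Hadoop 1.x commands and the default web UI ports (a sketch, not part of the original transcript):

# On master, as the hadoop user:
hadoop dfsadmin -report        # should report 2 live datanodes (slave1 and slave2)
# Web interfaces, if the default ports are unchanged:
#   http://master:50070   NameNode / HDFS status
#   http://master:50030   JobTracker / MapReduce status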
Check the running daemons on each node:

[hadoop@master ~]$ jps
15276 NameNode
15630 Jps
15447 SecondaryNameNode
15519 JobTracker

[hadoop@slave1 ~]$ jps
15216 DataNode
15390 Jps
15312 TaskTracker

[hadoop@slave2 ~]$ jps
15244 TaskTracker
15322 Jps
15149 DataNode

Verification. Create a local input directory with two test files, then put them into HDFS:

[hadoop@master input]$ echo "hello world" > test2.txt
[hadoop@master input]$ echo "hello hadoop" > test1.txt
[hadoop@master input]$ ls
test1.txt  test2.txt
[hadoop@master input]$ hadoop fs -ls
ls: Cannot access .: No such file or directory.
[hadoop@master input]$ hadoop fs -mkdir input
[hadoop@master input]$ hadoop fs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2014-12-01 13:59 /user/hadoop/input
[hadoop@master input]$ hadoop fs -put test1.txt input
[hadoop@master input]$ hadoop fs -ls input
Found 1 items
-rw-r--r--   2 hadoop supergroup         13 2014-12-01 14:00 /user/hadoop/input/test1.txt

Upload test2.txt the same way, so that input holds both test files (the job below processes two input paths).

Test MapReduce with the bundled wordcount example:

[hadoop@master input]$ cd
[hadoop@master ~]$ cd hadoop
[hadoop@master hadoop]$ hadoop jar hadoop-examples-1.2.1.jar wordcount input output
14/12/01 14:08:17 INFO input.FileInputFormat: Total input paths to process : 2
14/12/01 14:08:17 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/12/01 14:08:17 WARN snappy.LoadSnappy: Snappy native library not loaded
14/12/01 14:08:17 INFO mapred.JobClient: Running job: job_201412011351_0001
14/12/01 14:08:18 INFO mapred.JobClient:  map 0% reduce 0%
14/12/01 14:08:34 INFO mapred.JobClient:  map 50% reduce 0%
14/12/01 14:08:37 INFO mapred.JobClient:  map 100% reduce 0%
14/12/01 14:08:43 INFO mapred.JobClient:  map 100% reduce 100%
14/12/01 14:08:44 INFO mapred.JobClient: Job complete: job_201412011351_0001
14/12/01 14:08:44 INFO mapred.JobClient: Counters: 29
14/12/01 14:08:44 INFO mapred.JobClient:   Job Counters
14/12/01 14:08:44 INFO mapred.JobClient:     Launched reduce tasks=1
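The word counts land in the output directory in HDFS. A sketch of how to inspect them (part-r-00000 is the usual file name for a single-reducer wordcount result; confirm the actual name with -ls first):

hadoop fs -ls output                    # list the job's output files
hadoop fs -cat output/part-r-00000      # expected counts for the two test files: hadoop 1, hello 2, world 1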