Setting Up an Eclipse Development Environment for Hadoop 2.2.0
Instruction Manual

Cloud Computing Group
Shaanxi Key Laboratory of Satellite-Terrestrial Network Technology
Xi'an Jiaotong University

Author: XXXXXX
Reviewer:
Created: 2014-02-24
Last updated: 2014-02-24

Contents

1 Preface
1.1 Revision history
2 System requirements
2.1 Hardware
2.2 Software
2.3 Environment
3 Installation and configuration
3.1 Preparation
3.1.1 Installing ant
3.1.2 Installing Eclipse
3.2 Installing the Hadoop plugin for Eclipse
3.3 Running WordCount in Eclipse
3.3.1 Creating a MapReduce project
3.3.2 Creating the WordCount class
3.3.3 Running WordCount
3.3.4 Viewing the WordCount results
3.3.5 Fixing missing console output in Eclipse
3.3.6 Fixing MapReduce jobs that are not submitted to the cluster
3.4 Running WordCount from the terminal
3.4.1 Preparation
3.4.2 Running WordCount

1 Preface

This manual explains how to set up an Eclipse development environment for Hadoop 2.2.0 on a single compute node: installing Eclipse, importing the Hadoop plugin, and running the WordCount example program. It is intended for testing and development purposes.

1.1 Revision history

2014-02-24: initial version.
2014-02-25: added solutions for the log4j console-output problem and for MapReduce jobs not being uploaded to the cluster; added instructions for running WordCount from the terminal.

2 System requirements

The following are the basic requirements for a Hadoop 2.2.0 installation.

2.1 Hardware

An Intel x86_64 server.

2.2 Software

Ubuntu 12.04.3 LTS Server x64 (or another long-term-support release) is recommended. Use the latest Hadoop release, 2.2.0; this manual is based on that version. For installing Hadoop itself, see the companion document "Hadoop 2.2.0 single-node installation". The JDK jdk-7u45-linux-x64 is recommended because of its good compatibility with Hadoop 2.2.0; this manual assumes it is already installed.

2.3 Environment

Before installing, bring the system up to date:

$ sudo apt-get update
$ sudo apt-get upgrade

3 Installation and configuration

Before setting up the Eclipse development environment for Hadoop 2.2.0, a few prerequisite components must be installed.

3.1 Preparation

3.1.1 Installing ant

Download ant 1.9.3 from the Apache site, or from the symnds mirror:
/software/apache/ant/binaries/apache-ant-1.9.3-bin.tar.gz

After downloading, unpack and move it:

$ tar zxvf apache-ant-1.9.3-bin.tar.gz
$ mv apache-ant-1.9.3 /opt

Then configure ANT_HOME in /etc/profile:

$ sudo vim /etc/profile

Append at the end:

export ANT_HOME=/opt/apache-ant-1.9.3
export PATH=$ANT_HOME/bin:$PATH

Apply the change:

$ source /etc/profile

Test:

$ ant -version

3.1.2 Installing Eclipse

From the official /downloads/ page, download the Linux 64-bit build of Eclipse IDE for Java EE Developers, eclipse-jee-kepler-SR1-linux-gtk-x86_64.tar.gz. After downloading, unpack and move it:

$ tar zxvf eclipse-jee-kepler-SR1-linux-gtk-x86_64.tar.gz
$ mv eclipse /opt

Start Eclipse:

$ cd /opt/eclipse
$ ./eclipse

Create a desktop shortcut:

$ sudo vim /usr/share/applications/eclipse.desktop

with the following content:

[Desktop Entry]
Name=Eclipse
Comment=Eclipse SDK
Encoding=UTF-8
Exec=/opt/eclipse/eclipse
Icon=/opt/eclipse/icon.xpm
Terminal=false
Type=Application
Categories=Application;Development;

3.2 Installing the Hadoop plugin for Eclipse

Step 1: download the plugin hadoop-eclipse-plugin-2.2.0.jar from the Baidu netdisk link /s/1o68X8L0, copy it into the plugins directory of the Eclipse installation, and restart Eclipse for it to take effect:

$ cp hadoop-eclipse-plugin-2.2.0.jar /opt/eclipse/plugins

After the restart (see figure), a DFS Locations entry appears under Project Explorer on the left, which shows that Eclipse has recognized the newly installed Hadoop plugin.

Step 2: open Window > Preferences. In the option list on the left of the dialog there is now a Hadoop Map/Reduce entry; select it and set the Hadoop installation directory (in our case /opt/hadoop-2.2.0). The result is shown in the figure.

Step 3: switch to the Map/Reduce perspective. There are two ways:
1) From the Window menu choose Open Perspective, then select Map/Reduce in the dialog.
2) Click the perspective icon in the upper-right corner of Eclipse, choose Other, select Map/Reduce in the dialog, and click OK.
The Map/Reduce perspective then looks as shown in the figure.

Step 4: create the connection to the Hadoop cluster. Right-click in the Map/Reduce Locations view at the bottom of Eclipse and choose New Hadoop Location; a dialog appears. Note the fields marked in red in the figure:

1. Location Name: any name that identifies this Map/Reduce location.
2. Map/Reduce Master: Host: happy (the master's hostname; its IP address also works). Port: 9001.
3. DFS Master: check "Use M/R Master host" (our NameNode and JobTracker run on the same machine). Port: 9000.
4. User name: happy (by default, the Linux user name).

Then open Advanced parameters, find hadoop.tmp.dir, and change it to the value configured for our Hadoop cluster, /hadoop/tmp, as set in core-site.xml. After clicking Finish, the new location appears as an entry in the Map/Reduce Locations view.

Step 5: browse the HDFS file system, and try creating a folder and uploading a file. Click the location (Win8ToHadoop) under DFS Locations on the left of Eclipse to display the file tree on HDFS. The Hadoop Eclipse development environment is now fully configured.

3.3 Running WordCount in Eclipse

3.3.1 Creating a MapReduce project

From the File menu choose New > Other, find Map/Reduce Project, and select it. Name the project WordCountProject and click Finish. The newly created project now appears on the left side of Eclipse.

3.3.2 Creating the WordCount class

Right-click the WordCountProject project, choose New, then Class, and fill in the dialog. Because we use the WordCount program shipped with Hadoop 2.2.0, the package name must match the code, org.apache.hadoop.examples, and the class name must likewise be WordCount.

Unpack hadoop-mapreduce-examples-2.2.0.jar from /opt/hadoop/share/hadoop/mapreduce, find the file WordCount.java under /opt/hadoop/share/hadoop/mapreduce/sources/org/apache/hadoop/examples, open it with a text editor such as vim, and copy the code into the newly created Java file:

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

3.3.3 Running WordCount

Select WordCount.java, right-click, and choose Run As > Run Configurations; set up the configuration as shown in the figure. Then select WordCount.java again, right-click, and choose Run As > Run on Hadoop; the result is shown in the figure.

Note: make sure the output directory does not already exist in HDFS, otherwise the job throws an exception.

3.3.4 Viewing the WordCount results

On the left side of Eclipse, right-click the Win8ToHadoop folder under DFS Locations and click Refresh; the output folder produced by the run appears. Remember that the output folder is created automatically when the program runs; if a folder with the same name already exists, either point the program at a new output folder or delete the existing one in HDFS, otherwise the job fails. Open the output folder and the part-r-00000 file inside it to see the results of the run.

3.3.5 Fixing missing console output in Eclipse

Download log4j.properties from the 115 netdisk link /8Fm6TeS (gift code: 5lbcqltwwdw1) and copy it into the src directory of WordCountProject. Delete the output folder, select WordCount.java, right-click, choose Run As > Run on Hadoop, and run again; the console now shows the job log:

2014-02-25 10:04:21,048 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2014-02-25 10:04:21,895 INFO org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-02-25 10:04:21,896 INFO org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-02-25 10:04:22,189 WARN org.apache.hadoop.mapreduce.JobSubmitter - No job jar file set. User classes may not be found. See Job or Job#setJar(String).
2014-02-25 10:04:22,250 INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 2
2014-02-25 10:04:22,347 INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:2
2014-02-25 10:04:22,361 INFO org.apache.hadoop.conf.Configuration.deprecation - user.name is deprecated. Instead, use mapreduce.job.user.name
2014-02-25 10:04:22,364 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.job.name is deprecated. Instead, use mapreduce.job.name
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
2014-02-25 10:04:22,365 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
2014-02-25 10:04:22,366 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2014-02-25 10:04:22,366 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
2014-02-25 10:04:22,368 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
2014-02-25 10:04:22,646 INFO org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_local1572888905_0001
2014-02-25 10:04:22,851 WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-happy/mapred/staging/happy1572888905/.staging/job_local1572888905_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-02-25 10:04:22,851 WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-happy/mapred/staging/happy1572888905/.staging/job_local1572888905_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-02-25 10:04:23,276 WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-happy/mapred/local/localRunner/happy/job_local1572888905_0001/job_local1572888905_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2014-02-25 10:04:23,277 WARN org.apache.hadoop.conf.Configuration - file:/tmp/hadoop-happy/mapred/local/localRunner/happy/job_local1572888905_0001/job_local1572888905_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2014-02-25 10:04:23,285 INFO org.apache.hadoop.mapreduce.Job - The url to track the job: http://localhost:8080/
2014-02-25 10:04:23,285 INFO org.apache.hadoop.mapreduce.Job - Running job: job_local1572888905_0001
2014-02-25 10:04:23,288 INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter set in config null
2014-02-25 10:04:23,293 INFO org.apache.hadoop.mapred.LocalJobRunner - OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
2014-02-25 10:04:23,357 INFO org.apache.hadoop.mapred.LocalJobRunner - Waiting for map tasks
2014-02-25 10:04:23,357 INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1572888905_0001_m_000000_0
2014-02-25 10:04:23,414 INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree :
2014-02-25 10:04:23,417 INFO org.apache.hadoop.mapred.MapTask - Processing split: hdfs://happy:9000/input/test2.txt:0+13
2014-02-25 10:04:23,426 INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-02-25 10:04:23,667 INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2014-02-25 10:04:23,667 INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2014-02-25 10:04:23,667 INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2014-02-25 10:04:23,667 INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2014-02-25 10:04:23,667 INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2014-02-25 10:04:23,745 INFO org.apache.hadoop.mapred.LocalJobRunner -
2014-02-25 10:04:23,748 INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2014-02-25 10:04:23,748 INFO org.apache.hadoop.mapred.MapTask - Spilling map output
2014-02-25 10:04:23,748 INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 21; bufvoid = 104857600
2014-02-25 10:04:23,748 INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2014-02-25 10:04:23,759 INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2014-02-25 10:04:23,761 INFO org.apache.hadoop.mapred.Task - Task:attempt_local1572888905_0001_m_000000_0 is done. And is in the process of committing
2014-02-25 10:04:23,771 INFO org.apache.hadoop.mapred.LocalJobRunner - map
2014-02-25 10:04:23,772 INFO org.apache.hadoop.mapred.Task - Task attempt_local1572888905_0001_m_000000_0 done.
2014-02-25 10:04:23,772 INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1572888905_0001_m_000000_0
2014-02-25 10:04:23,772 INFO org.apache.hadoop.mapred.LocalJobRunner - Starting task: attempt_local1572888905_0001_m_000001_0
2014-02-25 10:04:23,776 INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree :
2014-02-25 10:04:23,777 INFO org.apache.hadoop.mapred.MapTask - Processing split: hdfs://happy:9000/input/test1.txt:0+12
2014-02-25 10:04:23,777 INFO org.apache.hadoop.mapred.MapTask - Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2014-02-25 10:04:24,016 INFO org.apache.hadoop.mapred.MapTask - (EQUATOR) 0 kvi 26214396(104857584)
2014-02-25 10:04:24,016 INFO org.apache.hadoop.mapred.MapTask - mapreduce.task.io.sort.mb: 100
2014-02-25 10:04:24,016 INFO org.apache.hadoop.mapred.MapTask - soft limit at 83886080
2014-02-25 10:04:24,016 INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufvoid = 104857600
2014-02-25 10:04:24,016 INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396; length = 6553600
2014-02-25 10:04:24,022 INFO org.apache.hadoop.mapred.LocalJobRunner -
2014-02-25 10:04:24,022 INFO org.apache.hadoop.mapred.MapTask - Starting flush of map output
2014-02-25 10:04:24,022 INFO org.apache.hadoop.mapred.MapTask - Spilling map output
2014-02-25 10:04:24,022 INFO org.apache.hadoop.mapred.MapTask - bufstart = 0; bufend = 20; bufvoid = 104857600
2014-02-25 10:04:24,022 INFO org.apache.hadoop.mapred.MapTask - kvstart = 26214396(104857584); kvend = 26214392(104857568); length = 5/6553600
2014-02-25 10:04:24,024 INFO org.apache.hadoop.mapred.MapTask - Finished spill 0
2014-02-25 10:04:24,026 INFO org.apache.hadoop.mapred.Task - Task:attempt_local1572888905_0001_m_000001_0 is done. And is in the process of committing
2014-02-25 10:04:24,028 INFO org.apache.hadoop.mapred.LocalJobRunner - map
2014-02-25 10:04:24,028 INFO org.apache.hadoop.mapred.Task - Task attempt_local1572888905_0001_m_000001_0 done.
2014-02-25 10:04:24,029 INFO org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1572888905_0001_m_000001_0
2014-02-25 10:04:24,029 INFO org.apache.hadoop.mapred.LocalJobRunner - Map task executor complete.
2014-02-25 10:04:24,041 INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorProcessTree :
2014-02-25 10:04:24,046 INFO org.apache.hadoop.mapred.Merger - Merging 2 sorted segments
2014-02-25 10:04:24,049 INFO org.apache.hadoop.mapred.Merger - Down to the last merge-pass, with 2 segments left of total size: 36 bytes
2014-02-25 10:04:24,050 INFO org.apache.hadoop.mapred.LocalJobRunner -
2014-02-25 10:04:24,066 INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
2014-02-25 10:04:24,170 INFO org.apache.hadoop.mapred.Task - Task:attempt_local1572888905_0001_r_000000_0 is done. And is in the process of committing
2014-02-25 10:04:24,173 INFO org.apache.hadoop.mapred.LocalJobRunner -
2014-02-25 10:04:24,173 INFO org.apache.hadoop.mapred.Task - Task attempt_local1572888905_0001_r_000000_0 is allowed to commit now
2014-02-25 10:04:24,186 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter - Saved output of task attempt_local1572888905_0001_r_000000_0 to hdfs://happy:9000/output/_temporary/0/task_local1572888905_0001_r_000000
2014-02-25 10:04:24,187 INFO org.apache.hadoop.mapred.LocalJobRunner - reduce reduce
2014-02-25 10:04:24,187 INFO org.apache.hadoop.mapred.Task - Task attempt_local1572888905_0001_r_000000_0 done.
2014-02-25 10:04:24,288 INFO org.apache.hadoop.mapreduce.Job - Job job_local1572888905_0001 running in uber mode : false
2014-02-25 10:04:24,288 INFO org.apache.hadoop.mapreduce.Job - map 100% reduce 100%
2014-02-25 10:04:24,290 INFO org.apache.hadoop.mapreduce.Job - Job job_local1572888905_0001
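Before submitting the job, it can be useful to predict what part-r-00000 should contain. WordCount's three phases (the mapper tokenizes, the shuffle sorts by key, the reducer sums each group) can be mimicked locally with standard Unix tools. This is only an illustrative sketch, not part of the manual's setup: the file /tmp/wc_input.txt and its two sample lines are made up for the example and stand in for the HDFS input files.

```shell
# Hypothetical local input, standing in for the files under hdfs://.../input
printf 'hello hadoop\nhello world\n' > /tmp/wc_input.txt

# map: emit one word per line; shuffle: sort; reduce: count each group.
# uniq -c prints "count word", so awk swaps the columns into WordCount's
# "word<TAB>count" layout as seen in part-r-00000.
tr -s '[:space:]' '\n' < /tmp/wc_input.txt | grep -v '^$' | sort \
  | uniq -c | awk '{print $2 "\t" $1}'
```

For this sample input the pipeline prints hadoop 1, hello 2, world 1 (tab-separated), which is exactly the format the job writes to part-r-00000.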
