hadoop学习心得 (Hadoop study notes)
1. FileInputFormat splits only large files. Here "large" means larger than an HDFS block. The split size is normally the size of an HDFS block, which is appropriate for most applications; however, it is possible to control this value by setting various Hadoop properties (see the split-size sketch after this list).

2. So, by default, the split size is blockSize.

3. Making the minimum split size greater than the block size increases the split size, but at the cost of locality.

4. Hadoop copes poorly with lots of small files. One reason is that FileInputFormat generates splits in such a way that each split is all or part of a single file. If the files are very small ("small" means significantly smaller than an HDFS block) and there are a lot of them, then each map task will process very little input, and there will be a lot of map tasks (one per file), each of which imposes extra bookkeeping overhead.

Hadoop does not handle large numbers of small data files well. It processes data in blocks, 64 MB per block by default, and a small file (say 2-3 MB) that falls far short of a block is still handled as one block. This has two consequences: (1) storing large numbers of small files is inefficient, and retrieving them is slower than retrieving large files; (2) such small files waste computing power in MapReduce jobs, since map tasks are assigned per block by default (this is probably the main drawback of small files). How can the problem be solved? (1) Use Hadoop's HAR (Hadoop Archive) files; the Hadoop command manual describes how to archive small files. (2) Preprocess the data yourself, merging the small files into large files of more than 64 MB (see the packing sketch below).
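Points 1-3 above say the split size can be tuned through Hadoop properties. Below is a minimal sketch of doing so with the old API; the property name mapred.min.split.size is the 0.20-era name, and the class name and path are hypothetical, so treat this as an illustration rather than a recipe from the notes themselves.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

public class SplitSizeDemo {
    public static void main(String[] args) {
        JobConf conf = new JobConf(SplitSizeDemo.class);

        // The effective split size is roughly
        //   max(minimumSize, min(maximumSize, blockSize)),
        // so raising the minimum above the block size forces larger
        // splits, at the cost of locality (point 3 above).
        conf.setLong("mapred.min.split.size", 128 * 1024 * 1024L); // 128 MB (hypothetical value)

        FileInputFormat.setInputPaths(conf, new Path("input")); // placeholder path
    }
}

Leaving the properties alone gives one split per block, which preserves locality; overriding them mainly makes sense when one map task per block produces too many small tasks.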
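And here is a minimal sketch of solution (2): packing a directory of small files into one SequenceFile whose records are (filename, file bytes), so that downstream jobs read a single large, splittable file. The SequenceFile container is my choice here, and SmallFilePacker and the paths are hypothetical; the file-system calls are the standard org.apache.hadoop ones.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SmallFilePacker {
    public static void pack(Path inputDir, Path packedFile, Configuration conf)
            throws IOException {
        FileSystem fs = inputDir.getFileSystem(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, packedFile, Text.class, BytesWritable.class);
        try {
            for (FileStatus status : fs.listStatus(inputDir)) {
                if (status.isDir()) {
                    continue; // only pack plain files
                }
                byte[] contents = new byte[(int) status.getLen()];
                FSDataInputStream in = fs.open(status.getPath());
                try {
                    IOUtils.readFully(in, contents, 0, contents.length);
                } finally {
                    IOUtils.closeStream(in);
                }
                // Key = original filename, value = whole file contents.
                writer.append(new Text(status.getPath().getName()),
                              new BytesWritable(contents));
            }
        } finally {
            writer.close();
        }
    }
}

Once packed, a job can read the result with SequenceFileInputFormat and get block-sized splits instead of one tiny split per file.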
FileInputFormat is the base class for all implementations of InputFormat that use files as their data source (see Figure 7-2 of Hadoop: The Definitive Guide). It provides two things: a place to define which files are included as the input to a job, and an implementation for generating splits for the input files. The job of dividing splits into records is performed by subclasses.

Input splits are represented by the Java interface InputSplit (which, like all of the classes mentioned in this section, is in the org.apache.hadoop.mapred package):

public interface InputSplit extends Writable {
    long getLength() throws IOException;
    String[] getLocations() throws IOException;
}

An InputSplit has a length in bytes and a set of storage locations, which are just hostname strings. Notice that a split doesn't contain the input data; it is just a reference to the data. The storage locations are used by the MapReduce system to place map tasks as close to the split's data as possible, and the size is used to order the splits so that the largest get processed first, in an attempt to minimize the job runtime (this is an instance of a greedy approximation algorithm).

As a MapReduce application writer, you don't need to deal with InputSplits directly, as they are created by an InputFormat. An InputFormat is responsible for creating the input splits and dividing them into records. Before we see some concrete examples, let's briefly examine how it is used in MapReduce. Here's the interface:

public interface InputFormat<K, V> {
    InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
    RecordReader<K, V> getRecordReader(InputSplit split,
                                       JobConf job,
                                       Reporter reporter) throws IOException;
}

The JobClient calls the getSplits() method. Having calculated the splits, the client sends them to the jobtracker, which uses their storage locations to schedule map tasks to process them on the tasktrackers. On a tasktracker, the map task passes the split to the getRecordReader() method on the InputFormat to obtain a RecordReader for that split.

A related requirement that sometimes crops up is for mappers to have access to the full contents of a file. Not splitting the file gets you part of the way there, but you also need a RecordReader that delivers the file contents as the value of the record.

Example 7-2. An InputFormat for reading a whole file as a record

public class WholeFileInputFormat
        extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(FileSystem fs, Path filename) {
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> getRecordReader(
            InputSplit split, JobConf job, Reporter reporter) throws IOException {
        return new WholeFileRecordReader((FileSplit) split, job);
    }
}

We implement getRecordReader() to return a custom implementation of RecordReader.

Example 7-3. The RecordReader used by WholeFileInputFormat for reading a whole file as a record

class WholeFileRecordReader implements RecordReader<NullWritable, BytesWritable> {

    private FileSplit fileSplit;
    private Configuration conf;
    private boolean processed = false;

    public WholeFileRecordReader(FileSplit fileSplit, Configuration conf)
            throws IOException {
        this.fileSplit = fileSplit;
        this.conf = conf;
    }

    @Override
    public NullWritable createKey() {
        return NullWritable.get();
    }

    @Override
    public BytesWritable createValue() {
        return new BytesWritable();
    }

    @Override
    public long getPos() throws IOException {
        return processed ? fileSplit.getLength() : 0;
    }

    @Override
    public float getProgress() throws IOException {
        return processed ? 1.0f : 0.0f;
    }

    @Override
    public boolean next(NullWritable key, BytesWritable value) throws IOException {
        if (!processed) {
            byte[] contents = new byte[(int) fileSplit.getLength()];
            Path file = fileSplit.getPath();
            FileSystem fs = file.getFileSystem(conf);
            FSDataInputStream in = null;
            try {
                in = fs.open(file);
                IOUtils.readFully(in, contents, 0, contents.length);
                value.set(contents, 0, contents.length);
            } finally {
                IOUtils.closeStream(in);
            }
            processed = true;
            return true;
        }
        return false;
    }

    @Override
    public void close() throws IOException {
        // do nothing
    }
}

A path may represent a file, a directory, or, by using a glob, a collection of files and directories. A path representing a directory includes all the files in the directory as input to the job (both forms appear in the driver sketch below).
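To show how Examples 7-2 and 7-3 plug into a job, here is a minimal old-API driver sketch. It assumes WholeFileInputFormat from Example 7-2 is on the classpath in the same package; the class name, the input paths (one directory, one glob, echoing the paragraph above), and the reliance on the default identity mapper and reducer are all illustrative assumptions.

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class WholeFileDriver {
    public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(WholeFileDriver.class);
        conf.setJobName("whole file per record");

        // Each map input record is the entire contents of one file.
        conf.setInputFormat(WholeFileInputFormat.class);
        conf.setOutputKeyClass(NullWritable.class);
        conf.setOutputValueClass(BytesWritable.class);

        // A path may be a file, a directory (all files in it), or a glob.
        // Both paths below are placeholders.
        FileInputFormat.setInputPaths(conf, new Path("input/smallfiles"));
        FileInputFormat.addInputPath(conf, new Path("input/2009-*"));

        FileOutputFormat.setOutputPath(conf, new Path("output"));

        // No mapper or reducer is set, so the identity classes are used.
        JobClient.runJob(conf);
    }
}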
