(Repost) Parallel map processing of LZO files

This article describes how to enable LZO compression in a Hadoop cluster and how to create indexes so that LZO files can be processed by parallel map tasks, which significantly speeds up Hive queries. It also covers the Hadoop configuration needed to support LZO-compressed files.

After LZO is enabled on a Hadoop cluster, some additional configuration is still required before the cluster can run parallel map tasks over a single LZO file and speed up job execution.
First, create an index for the LZO files. The following command indexes the LZO files in a directory:
 
 
    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.LzoIndexer /log/source/cd/
Creating the index takes a while: for a 7.5 GB file it took about 2 minutes 30 seconds. There is also a second indexer, com.hadoop.compression.lzo.DistributedLzoIndexer. Both options are documented at https://github.com/kevinweil/hadoop-lzo (the README is reproduced below): LzoIndexer indexes in-process, while DistributedLzoIndexer runs the indexing as a MapReduce job. In my case the distributed indexer reduced the time spent building the index and had no effect on how the MapReduce jobs that later read the files behaved. For example:
 
 
    $HADOOP_HOME/bin/hadoop jar $HADOOP_HOME/lib/hadoop-lzo-0.4.10.jar com.hadoop.compression.lzo.DistributedLzoIndexer /log/source/cd/
Second, when defining the table in Hive you must specify the INPUTFORMAT and OUTPUTFORMAT explicitly; otherwise the cluster still will not run parallel map tasks over the LZO data. Add the following file-format clause to the table definition:
 
 
    SET FILEFORMAT
    INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
    OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";
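For context, here is a fuller DDL sketch (the table name and single string column are illustrative, not from the original post). The SET FILEFORMAT clause above is the form ALTER TABLE uses for an existing table; a new table declares the same classes with STORED AS:

    CREATE TABLE access_log (line STRING)
    STORED AS
      INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
      OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";

    -- or, for a table that already exists:
    ALTER TABLE access_log SET FILEFORMAT
      INPUTFORMAT "com.hadoop.mapred.DeprecatedLzoTextInputFormat"
      OUTPUTFORMAT "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat";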
With these two steps in place, the improvement in Hive execution speed is substantial: in our test, a moderately complex Hive query against a 7.5 GB LZO file finished in 34 seconds with this setup, compared with 180 seconds before.



Appendix: the hadoop-lzo README.md, reproduced below for reference

Hadoop-LZO

Hadoop-LZO is a project to bring splittable LZO compression to Hadoop. LZO is an ideal compression format for Hadoop due to its combination of speed and compression size. However, LZO files are not natively splittable, meaning the parallelism that is the core of Hadoop is gone. This project re-enables that parallelism with LZO compressed files, and also comes with standard utilities (input/output streams, etc) for working with LZO files.

Origins

This project builds off the great work done at http://code.google.com/p/hadoop-gpl-compression. As of issue 41, the differences in this codebase are the following.

  • it fixes a few bugs in hadoop-gpl-compression -- notably, it allows the decompressor to read small or uncompressible lzo files, and also fixes the compressor to follow the lzo standard when compressing small or uncompressible chunks. it also fixes a number of inconsistently caught and thrown exception cases that can occur when the lzo writer gets killed mid-stream, plus some other smaller issues (see commit log).
  • it adds the ability to work with Hadoop streaming via the com.hadoop.mapred.DeprecatedLzoTextInputFormat class
  • it adds an easier way to index lzo files (com.hadoop.compression.lzo.LzoIndexer)
  • it adds an even easier way to index lzo files, in a distributed manner (com.hadoop.compression.lzo.DistributedLzoIndexer)

Hadoop and LZO, Together at Last

LZO is a wonderful compression scheme to use with Hadoop because it's incredibly fast, and (with a bit of work) it's splittable. Gzip is decently fast, but cannot take advantage of Hadoop's natural map splits because it's impossible to start decompressing a gzip stream starting at a random offset in the file. LZO's block format makes it possible to start decompressing at certain specific offsets of the file -- those that start new LZO block boundaries. In addition to providing LZO decompression support, these classes provide an in-process indexer (com.hadoop.compression.lzo.LzoIndexer) and a map-reduce style indexer which will read a set of LZO files and output the offsets of LZO block boundaries that occur near the natural Hadoop block boundaries. This enables a large LZO file to be split into multiple mappers and processed in parallel. Because it is compressed, less data is read off disk, minimizing the number of IOPS required. And LZO decompression is so fast that the CPU stays ahead of the disk read, so there is no performance impact from having to decompress data as it's read off disk.

Building and Configuring

To get started, see http://code.google.com/p/hadoop-gpl-compression/wiki/FAQ. This project is built exactly the same way; please follow the answer to "How do I configure Hadoop to use these classes?" on that page.

You can read more about Hadoop, LZO, and how we're using it at Twitter at http://www.cloudera.com/blog/2009/11/17/hadoop-at-twitter-part-1-splittable-lzo-compression/.

Once the libs are built and installed, you may want to add them to the class paths and library paths. That is, in hadoop-env.sh, set

    export HADOOP_CLASSPATH=/path/to/your/hadoop-lzo-lib.jar
    export JAVA_LIBRARY_PATH=/path/to/hadoop-lzo-native-libs:/path/to/standard-hadoop-native-libs

Note that there seems to be a bug in /path/to/hadoop/bin/hadoop; comment out the line

    JAVA_LIBRARY_PATH=''

because it prevents Hadoop from keeping the change you made to JAVA_LIBRARY_PATH above. (Update: see https://issues.apache.org/jira/browse/HADOOP-6453.) Make sure you restart your jobtrackers and tasktrackers after uploading and changing configs so that they take effect.
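The codecs also need to be registered with Hadoop, which is part of what the FAQ linked above walks through. As a sketch, a core-site.xml fragment for a 0.20/1.x-era cluster typically looks something like the following; treat the property names and the codec list as assumptions to check against your distribution:

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
    </property>
    <property>
      <name>io.compression.codec.lzo.class</name>
      <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>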

Using Hadoop and LZO

Reading and Writing LZO Data

The project provides LzoInputStream and LzoOutputStream wrapping regular streams, to allow you to easily read and write compressed LZO data.
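A minimal sketch of what that looks like in practice, going through the standard Hadoop CompressionCodec API with com.hadoop.compression.lzo.LzopCodec rather than constructing the streams directly, and assuming the hadoop-lzo jar and the native LZO library are installed:

    // Sketch: read and write lzop-compressed data through the Hadoop codec API.
    // Assumes hadoop-lzo is on the classpath and the native LZO library is on
    // java.library.path; createOutputStream/createInputStream are part of the
    // stock CompressionCodec interface, not project-specific additions.
    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;

    import org.apache.hadoop.conf.Configuration;
    import com.hadoop.compression.lzo.LzopCodec;

    public class LzoStreamSketch {
        public static void main(String[] args) throws Exception {
            LzopCodec codec = new LzopCodec();
            codec.setConf(new Configuration()); // the codec needs a Configuration to load the native bindings

            // Write: wrap a plain OutputStream; everything written is lzop-compressed.
            try (OutputStream out = codec.createOutputStream(new FileOutputStream("sample.lzo"))) {
                out.write("hello lzo\n".getBytes("UTF-8"));
            }

            // Read: wrap a plain InputStream; bytes come back decompressed.
            try (InputStream in = codec.createInputStream(new FileInputStream("sample.lzo"))) {
                byte[] buf = new byte[4096];
                for (int n; (n = in.read(buf)) != -1; ) {
                    System.out.write(buf, 0, n);
                }
                System.out.flush();
            }
        }
    }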

Indexing LZO Files

At this point, you should also be able to use the indexer to index lzo files in Hadoop (recall: this makes them splittable, so that they can be analyzed in parallel in a mapreduce job). Imagine that big_file.lzo is a 1 GB LZO file. You have two options:

  • index it in-process via:

    hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.LzoIndexer big_file.lzo
    
  • index it in a map-reduce job via:

    hadoop jar /path/to/your/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer big_file.lzo
    

Either way, after 10-20 seconds there will be a file named big_file.lzo.index. The newly-created index file tells the LzoTextInputFormat's getSplits function how to break the LZO file into splits that can be decompressed and processed in parallel. Alternatively, if you specify a directory instead of a filename, both indexers will recursively walk the directory structure looking for .lzo files, indexing any that do not already have corresponding .lzo.index files.

Running MR Jobs over Indexed Files

Now run any job, say wordcount, over the new file. In Java-based M/R jobs, just replace any uses of TextInputFormat by LzoTextInputFormat. In streaming jobs, add "-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat" (streaming still uses the old APIs, and needs a class that inherits from org.apache.hadoop.mapred.InputFormat). For Pig jobs, email me or check the pig list -- I have custom LZO loader classes that work but are not (yet) contributed back.
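As a rough sketch, a streaming run over the indexed file could look like this; the streaming jar path and the trivial cat/wc mapper and reducer are placeholders, and only the -inputformat class comes from this README:

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
        -input big_file.lzo \
        -output /tmp/wordcount_out \
        -mapper /bin/cat \
        -reducer /usr/bin/wc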

Note that if you forget to index an .lzo file, the job will work but will process the entire file in a single split, which will be less efficient.

