Resources
lzo.jar link: https://pan.baidu.com/s/13PtZPMvmRLXn243hS1X-pQ
Extraction code: v6t6
Place the jar in a fixed location
Put the compiled hadoop-lzo-0.4.20.jar into hadoop-3.1.3/share/hadoop/common/
[scorpion@warehouse102 02_hadoop]$ mv hadoop-lzo-0.4.20.jar /opt/module/hadoop-3.1.3/share/hadoop/common/
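As a quick sanity check (using the same path as above), listing the directory should now show the jar:
[scorpion@warehouse102 common]$ ls /opt/module/hadoop-3.1.3/share/hadoop/common/ | grep lzo
hadoop-lzo-0.4.20.jar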
Add configuration to core-site.xml to support LZO compression
[scorpion@warehouse102 hadoop]$ pwd
/opt/module/hadoop-3.1.3/etc/hadoop
[scorpion@warehouse102 hadoop]$ vim core-site.xml
<property>
    <name>io.compression.codecs</name>
    <value>
        org.apache.hadoop.io.compress.GzipCodec,
        org.apache.hadoop.io.compress.DefaultCodec,
        org.apache.hadoop.io.compress.BZip2Codec,
        org.apache.hadoop.io.compress.SnappyCodec,
        com.hadoop.compression.lzo.LzoCodec,
        com.hadoop.compression.lzo.LzopCodec
    </value>
</property>
<property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
</property>
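After editing, the value can be read back with hdfs getconf as a quick sanity check (this only confirms the key is set in the local config, nothing LZO-specific):
[scorpion@warehouse102 hadoop]$ hdfs getconf -confKey io.compression.codecs
// should print the codec list configured above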
Distribute and sync
// Distribute lzo.jar
[scorpion@warehouse102 02_hadoop]$ cd /opt/module/hadoop-3.1.3/share/hadoop/common/
[scorpion@warehouse102 common]$ pwd
/opt/module/hadoop-3.1.3/share/hadoop/common
[scorpion@warehouse102 common]$ xsync ./hadoop-lzo-0.4.20.jar
// Distribute the core-site.xml config file
[scorpion@warehouse102 hadoop]$ cd /opt/module/hadoop-3.1.3/etc/hadoop/
[scorpion@warehouse102 hadoop]$ xsync ./core-site.xml
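Once the config has been synced and the Hadoop daemons restarted, a rough way to confirm the codecs are actually loaded is to run a small job with LZO output compression turned on; the /wcinput and /wcoutput-lzo paths below are just placeholders for an existing input directory and a new output directory:
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.output.fileoutputformat.compress=true -Dmapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzopCodec /wcinput /wcoutput-lzo
// if the codec was picked up, the part files under /wcoutput-lzo end in .lzo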
Choosing compression
- Map input side: consider the data volume. For small data, pick a codec that compresses and decompresses fast; for large data, pick one that supports splitting (bzip2 is splittable as-is, LZO is splittable once an index is created).
- Map output side: speed matters most here; LZO is the most commonly used (see the sketch after this list).
- Reduce output side: if the data is kept permanently, the smaller the compressed output the better; if it is the input of the next MapReduce job, apply the same data-volume and splittability considerations as for the map input side.
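For the map output side, compression is normally switched on per job (or in mapred-site.xml); a minimal sketch using the examples jar, with /wcinput and /wcoutput as placeholder paths:
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.map.output.compress=true -Dmapreduce.map.output.compress.codec=com.hadoop.compression.lzo.LzoCodec /wcinput /wcoutput
// map output compression only affects the intermediate shuffle data, so the final output is unchanged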
Creating an LZO index
The splittability of an LZO-compressed file depends on its index, so we have to create the index manually. Without an index, the whole LZO file is read as a single split.
Test data link: https://pan.baidu.com/s/1aWTmRtgdTmUp7kz8jYCYiw
Extraction code: kjea
// The file suffix must be .lzo
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop fs -mkdir /input
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop fs -put bigtable.lzo /input
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /input/bigtable.lzo
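If the indexer ran successfully, an index file should now sit next to the original file (the .index suffix is what hadoop-lzo's DistributedLzoIndexer produces):
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop fs -ls /input
// expect both /input/bigtable.lzo and /input/bigtable.lzo.index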
Test
// The log line "INFO mapreduce.JobSubmitter: number of splits:2" shows that there are 2 splits
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount -Dmapreduce.job.inputformat.class=com.hadoop.mapreduce.LzoTextInputFormat /input /output2
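Once the job finishes successfully (see the error fix below if containers get killed), the output can be inspected; part-r-00000 assumes the default single reducer and output naming:
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop fs -ls /output2
[scorpion@warehouse102 hadoop-3.1.3]$ hadoop fs -cat /output2/part-r-00000 | head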
Fixing the job error
Container [pid=8468,containerID=container_1594198338753_0001_01_000002] is running 318740992B beyond the 'VIRTUAL' memory limit. Current usage: 111.5 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1594198338753_0001_01_000002 :
Solution:
Modify yarn-site.xml on every node of the cluster, add the following configuration, and then restart the cluster.
<!-- Whether to start a thread that checks the physical memory each task is using; if a task exceeds its allocation it is killed. Default is true -->
<property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
</property>
<!-- Whether to start a thread that checks the virtual memory each task is using; if a task exceeds its allocation it is killed. Default is true -->
<property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
</property>
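The change only takes effect after the file is synced to every node and YARN is restarted; a rough sequence, assuming the same xsync script as above and that the ResourceManager runs on warehouse103 (an assumption, adjust to your cluster):
[scorpion@warehouse102 hadoop]$ xsync ./yarn-site.xml
[scorpion@warehouse103 hadoop-3.1.3]$ sbin/stop-yarn.sh
[scorpion@warehouse103 hadoop-3.1.3]$ sbin/start-yarn.sh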