1. Dependencies
<dependency>
    <groupId>org.anarres.lzo</groupId>
    <artifactId>lzo-core</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.anarres.lzo</groupId>
    <artifactId>lzo-hadoop</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.0.3</version>
</dependency>
If the Maven plugin in Eclipse cannot download these dependencies, install the three JARs into your local Maven repository by hand; see "Installing JARs and sources into a local Maven repository manually".
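One way to do the manual install is Maven's `install:install-file` goal; a sketch is below, where the `-Dfile` paths are placeholders for wherever you actually downloaded the JARs:

```shell
# Install each downloaded JAR into the local repository (~/.m2/repository);
# the -Dfile values are placeholder paths, not real download locations.
mvn install:install-file -Dfile=lzo-core-1.0.0.jar \
    -DgroupId=org.anarres.lzo -DartifactId=lzo-core \
    -Dversion=1.0.0 -Dpackaging=jar
mvn install:install-file -Dfile=lzo-hadoop-1.0.0.jar \
    -DgroupId=org.anarres.lzo -DartifactId=lzo-hadoop \
    -Dversion=1.0.0 -Dpackaging=jar
mvn install:install-file -Dfile=hadoop-core-1.0.3.jar \
    -DgroupId=org.apache.hadoop -DartifactId=hadoop-core \
    -Dversion=1.0.3 -Dpackaging=jar
```

After this, the `<dependency>` entries above resolve from the local repository without network access.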
2. Test code
(1) Compression
import java.io.*;
import org.anarres.lzo.*;

OutputStream out = new FileOutputStream(new File("D:\\log\\123.lzo"));
LzoAlgorithm algorithm = LzoAlgorithm.LZO1X;
// null = no extra constraint on the compressor; 256 is the block buffer size
LzoCompressor compressor = LzoLibrary.getInstance().newCompressor(algorithm, null);
LzoOutputStream stream = new LzoOutputStream(out, compressor, 256);
stream.write("我是中国人".getBytes("UTF-8"));
stream.close();
(2) Decompression
InputStream in = new FileInputStream(new File("D:\\log\\123.lzo"));
LzoAlgorithm algorithm = LzoAlgorithm.LZO1X;
LzoDecompressor decompressor = LzoLibrary.getInstance().newDecompressor(algorithm, null);
LzoInputStream stream = new LzoInputStream(in, decompressor);
OutputStream outputStream = new FileOutputStream(new File("D:\\log\\data.txt"));
int read = 0;
byte[] bytes = new byte[1024];
// copy the decompressed bytes out to a plain text file
while ((read = stream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, read);
}
outputStream.close();
stream.close();
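Putting the two halves together, the snippet below is a minimal in-memory round trip using the same lzo-core API as the file-based examples above (the class name `LzoRoundTrip` is just for illustration); it checks that decompression restores the original bytes:

```java
import java.io.*;
import java.util.Arrays;
import org.anarres.lzo.*;

public class LzoRoundTrip {
    public static void main(String[] args) throws IOException {
        byte[] original = "我是中国人".getBytes("UTF-8");

        // Compress into an in-memory buffer instead of a file
        ByteArrayOutputStream compressed = new ByteArrayOutputStream();
        LzoCompressor compressor =
                LzoLibrary.getInstance().newCompressor(LzoAlgorithm.LZO1X, null);
        LzoOutputStream lzoOut = new LzoOutputStream(compressed, compressor, 256);
        lzoOut.write(original);
        lzoOut.close();

        // Decompress back and compare against the original bytes
        LzoDecompressor decompressor =
                LzoLibrary.getInstance().newDecompressor(LzoAlgorithm.LZO1X, null);
        LzoInputStream lzoIn = new LzoInputStream(
                new ByteArrayInputStream(compressed.toByteArray()), decompressor);
        ByteArrayOutputStream restored = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = lzoIn.read(buf)) != -1) {
            restored.write(buf, 0, n);
        }
        lzoIn.close();

        System.out.println("round trip ok: "
                + Arrays.equals(original, restored.toByteArray()));
    }
}
```

The same stream classes are used as in the file examples; only the endpoints change from `FileOutputStream`/`FileInputStream` to in-memory byte arrays.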

This article shows how to compress and decompress files with the LZO library in a Hadoop environment. Example code covers the Maven dependency setup, the compression flow, and the decompression flow. It is aimed at developers who want to learn or practice efficient data compression with Hadoop.