HDFS supports multiple compression algorithms; the choice is a trade-off between storage footprint, processing speed, and CPU cost:
- Snappy: very fast compression and decompression, moderate ratio; a common default for intermediate data.
- Gzip: good ratio, slower, not splittable.
- Bzip2: highest ratio, slow, splittable.
- LZO: fast, splittable once indexed, but must be installed separately for licensing reasons.
- Zstandard (zstd): good balance of ratio and speed.
Install the dependencies for the chosen algorithm (CentOS as an example):
sudo yum install snappy snappy-devel    # Snappy
sudo yum install lzo lzo-devel          # LZO (Hadoop must be built with LZO support)
sudo yum install zstd zstd-devel        # Zstandard

core-site.xml (global compression settings)

This file defines the compression codecs available to the Hadoop framework; add the following configuration:
<property>
<name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.SnappyCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.ZStandardCodec</value>
</property>
<property>
<name>io.compression.codec.snappy.class</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
<name>io.compression.codec.default</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
- io.compression.codecs: comma-separated list of all supported codecs.
- io.compression.codec.default: default compression algorithm (optional).

hdfs-site.xml (HDFS-specific settings)

This file tunes HDFS for compressed data; adjust the following parameters:
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>134217728</value> <!-- 128 MB (default); adjust to the typical data size -->
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.datanode.handler.count</name>
<value>100</value>
</property>
<property>
<name>io.compression.codec.gzip.level</name>
<value>6</value> <!-- Gzip compression level (1-9, default 6) -->
</property>
- dfs.blocksize: a larger block size means fewer blocks per compressed file and better parallel processing efficiency.
- dfs.namenode.handler.count / dfs.datanode.handler.count: more handler threads to absorb the extra RPC traffic generated by compression and decompression.

Files already stored in HDFS can be compressed from the command line. One common approach is a pass-through Hadoop Streaming job (identity mapper, no reducers) with output compression enabled:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
  -D mapreduce.output.fileoutputformat.compress=true \
  -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.SnappyCodec \
  -D mapreduce.job.reduces=0 \
  -mapper cat \
  -input /input/path/file.txt \
  -output /output/path/compressed
The output directory then contains part-*.snappy files. To decompress, hadoop fs -text detects the codec from the file extension:
hadoop fs -text /input/path/file.snappy | hadoop fs -put - /output/path/file.txt
Alternatively, combine commands with a pipe (Gzip example):
hadoop fs -cat /input/path/file.gz | gunzip | hadoop fs -put - /output/path/file.txt
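The same compression can also be done programmatically through the codec API. A minimal Java sketch (the class name and paths are illustrative; the codec is resolved from the .snappy extension of the output path):

import java.io.InputStream;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class HdfsCompressFile {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path in = new Path("/input/path/file.txt");    // illustrative paths
        Path out = new Path("/output/path/file.txt.snappy");

        // Resolve the codec from the output file extension (must be registered in io.compression.codecs)
        CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(out);

        try (InputStream is = fs.open(in);
             OutputStream os = codec.createOutputStream(fs.create(out))) {
            IOUtils.copyBytes(is, os, conf);            // stream-copy while compressing
        }
        // Decompression is symmetric: codec.createInputStream(fs.open(compressedPath))
    }
}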
To compress intermediate map output, configure mapred-site.xml:
<property>
<name>mapreduce.map.output.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.map.output.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
To compress the final job output, also in mapred-site.xml:
<property>
<name>mapreduce.output.fileoutputformat.compress</name>
<value>true</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.codec</name>
<value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
<name>mapreduce.output.fileoutputformat.compress.type</name>
<value>BLOCK</value> <!-- SequenceFile compression type: RECORD (per record, the default) or BLOCK (groups of records, better ratio) -->
</property>
The same settings can also be applied programmatically in the job driver (the class name here is arbitrary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CompressedJobDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Compress intermediate map output with Snappy
        conf.set("mapreduce.map.output.compress", "true");
        conf.set("mapreduce.map.output.compress.codec", "org.apache.hadoop.io.compress.SnappyCodec");
        // Compress the final job output with Snappy; BLOCK applies to SequenceFile output
        conf.set("mapreduce.output.fileoutputformat.compress", "true");
        conf.set("mapreduce.output.fileoutputformat.compress.codec", "org.apache.hadoop.io.compress.SnappyCodec");
        conf.set("mapreduce.output.fileoutputformat.compress.type", "BLOCK");
        Job job = Job.getInstance(conf, "Compressed MapReduce Job");
        // Other job configuration (mapper, reducer, input/output paths)...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
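Equivalently, the FileOutputFormat / SequenceFileOutputFormat helper methods set the same output properties. A brief sketch (the wrapper class and method name are arbitrary):

import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class OutputCompressionHelper {
    // Same effect as the mapreduce.output.fileoutputformat.compress* properties above
    static void enableSnappyBlockOutput(Job job) {
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
        // Only meaningful when the job writes SequenceFile output
        SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
    }
}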
To verify that compression is working:
- hdfs dfs -ls /output/path/ : file extensions such as .snappy or .gz indicate the output was compressed.
- Compare hdfs dfs -du -h /input/path/ (original size) with hdfs dfs -du -h /output/path/ (compressed size).
- hadoop checknative -a : confirm that the native libraries for the configured codecs (snappy, zstd, etc.) are loaded.
- Tune the compression level (e.g., the level parameter above) to balance speed against compression ratio.
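The codec registration can also be checked from client code. A small sketch (class name is arbitrary) that prints every codec loaded from io.compression.codecs:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class ListCodecs {
    public static void main(String[] args) {
        Configuration conf = new Configuration();   // loads core-site.xml from the classpath
        List<Class<? extends CompressionCodec>> codecs =
                CompressionCodecFactory.getCodecClasses(conf);
        // Expected to include SnappyCodec, GzipCodec, BZip2Codec, ... as configured above
        for (Class<? extends CompressionCodec> c : codecs) {
            System.out.println(c.getName());
        }
    }
}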