Parameter description
/*
 * <ul>
* <li>read or write test</li>
* <li>date and time the test finished</li>
* <li>number of files</li>
* <li>total number of bytes processed</li>
* <li>throughput in mb/sec (total number of bytes / sum of processing times)</li>
* <li>average i/o rate in mb/sec per file</li>
* <li>standard deviation of i/o rate </li>
* </ul>
*/
- Number of files (the number of files processed; each file is handled by one map task)
- throughput (computed as follows)
$$Throughput(N)=\frac{\sum_{i=0}^{N}{filesize_i}}{\sum_{i=0}^{N}{time_i}}$$
- average i/o rate (computed as follows); as the formula shows, this is an average of the per-file rates
$$Average\ IO\ rate(N)=\frac{\sum_{i=0}^{N}{rate_i}}{N}=\frac{\sum_{i=0}^{N}{\frac{filesize_i}{time_i}}}{N}$$
- concurrent throughput (aggregate throughput under concurrency)
$$Concurrent\ Throughput=Throughput(N)*N=\frac{\sum_{i=0}^{N}{filesize_i}}{\sum_{i=0}^{N}{time_i}}*N$$
- concurrent average IO rate (aggregate IO rate under concurrency)
$$Concurrent\ Average\ IO\ rate=Average\ IO\ rate(N)*N=\sum_{i=0}^{N}{\frac{filesize_i}{time_i}}$$
In the two concurrent formulas above, N is the number of map slots in the cluster; in this test N was 6 (a small worked example follows after the N formula below).
- Formula for N (mapred.tasktracker.map.tasks.maximum defaults to 2, mapred.tasktracker.reduce.tasks.maximum defaults to 2, and maxreduces is the number of machines in your MapReduce cluster)
$$N=mapSlots=mapred.tasktracker.map.tasks.maximum*\frac{maxreduces}{mapred.tasktracker.reduce.tasks.maximum}$$
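To make the relationship between the four metrics concrete, here is a minimal, self-contained sketch (not part of TestDFSIO or HiBench); the file sizes, per-file times, and class name are hypothetical:
// DfsioMetricsExample.java: hypothetical numbers plugged into the formulas above
public class DfsioMetricsExample {
    public static void main(String[] args) {
        // 6 files of 1000 MB, one per map task; per-file times are invented for illustration
        double[] fileSizeMb = {1000, 1000, 1000, 1000, 1000, 1000};
        double[] timeSec    = {  80,   90,  100,  110,  120,  100};
        int n = fileSizeMb.length;                      // N = number of files / map slots

        double totalSize = 0, totalTime = 0, rateSum = 0;
        for (int i = 0; i < n; i++) {
            totalSize += fileSizeMb[i];
            totalTime += timeSec[i];
            rateSum   += fileSizeMb[i] / timeSec[i];    // per-file i/o rate
        }

        System.out.printf("Throughput(N)              = %.2f MB/s%n", totalSize / totalTime);
        System.out.printf("Average IO rate(N)         = %.2f MB/s%n", rateSum / n);
        System.out.printf("Concurrent Throughput      = %.2f MB/s%n", totalSize / totalTime * n);
        System.out.printf("Concurrent Average IO rate = %.2f MB/s%n", rateSum);
    }
}
As for N itself: with the defaults above (mapred.tasktracker.map.tasks.maximum = 2, mapred.tasktracker.reduce.tasks.maximum = 2) and, say, a 6-machine cluster (maxreduces = 6), N = 2 * 6 / 2 = 6, consistent with the map slot count of this test.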
Environment
- HDP 3.1.4
- OS: CentOS 7.6
- HiBench
The jump from hadoop-2.7.3 to hadoop-3.1.1 is large and several interfaces changed, so HiBench has to be patched to support this test.
// pom.xml: change hadoop.mr2.version to 3.1.1
<hadoop.mr2.version>3.1.1</hadoop.mr2.version>
// TestDFSIOEnh.java: remove the FileUtil.copyMerge call
- FileUtil.copyMerge(fs, DfsioeConfig.getInstance().getReportDir(fsConfig), fs, DfsioeConfig.getInstance().getReportTmp(fsConfig), false, fsConfig, null);
- LOG.info("remote report file " + DfsioeConfig.getInstance().getReportTmp(fsConfig) + " merged.");
+ BufferedReader lines = new BufferedReader(new InputStreamReader(fs.open(new Path(DfsioeConfig.getInstance().getReportDir(fsConfig),"part-r-00000"))));
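For context, here is a minimal sketch of the replacement read path, assuming it runs inside TestDFSIOEnh (so fs, fsConfig, DfsioeConfig and LOG are in scope) and that the aggregated report ends up in a single reducer output file part-r-00000; the processing loop is a placeholder, not HiBench's actual parsing code:
// Sketch: read the reducer output directly instead of merging it with FileUtil.copyMerge
// (no longer available in Hadoop 3); needs java.io.BufferedReader and java.io.InputStreamReader
Path report = new Path(DfsioeConfig.getInstance().getReportDir(fsConfig), "part-r-00000");
try (BufferedReader lines = new BufferedReader(new InputStreamReader(fs.open(report)))) {
    String line;
    while ((line = lines.readLine()) != null) {
        // handle each aggregated record the same way the old merged report was handled
        LOG.info(line);
    }
}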
// TestDFSIO.java: FileInputFormat.LOG was an org.apache.commons.logging.Log in Hadoop 2.7 but is an org.slf4j.Logger in 3.1.1, so the log handling has to change
- private static final Log LOG = FileInputFormat.LOG;
+ private static final Log LOG = (Log) FileInputFormat.LOG;
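Since org.slf4j.Logger does not implement org.apache.commons.logging.Log, the cast above can fail at runtime with a ClassCastException depending on the logging bindings on the classpath; an alternative sketch (not the patch HiBench ships) is to give the class its own commons-logging Log instead of reusing FileInputFormat.LOG:
// Alternative sketch: obtain a commons-logging Log for the class directly
// (needs org.apache.commons.logging.Log and org.apache.commons.logging.LogFactory)
private static final Log LOG = LogFactory.getLog(TestDFSIO.class);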
Build:
~/apache-maven-3.5.4/bin/mvn -Phadoopbench -Dspark=2.2 -Dscala=2.11 clean package -X

This article walks through the key metrics reported by Hadoop performance testing, including the read/write test type, finish time, number of files, total bytes processed, throughput, average IO rate and its standard deviation, and shows how concurrent throughput and concurrent average IO are computed, which is useful for understanding a Hadoop cluster's performance bottlenecks.