HBase数据在HDFS下是如何存储的?
HBase中每张Table在HBase根目录(/hbase)下对应一个以表名命名的文件夹;Table文件夹下,每个Region又各自对应一个文件夹;Region文件夹下,每个列族同样对应一个文件夹;列族文件夹下存放的就是一个个HFile文件。HFile就是HBase数据在HDFS上的存储格式,其整体目录结构如下:
/hbase/<tablename>/<encoded-regionname>/<column-family>/<filename>
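例如,假设表名为 TRAVEL、列族为 f1(仅为示意),则一个HFile的路径形如 /hbase/TRAVEL/<encoded-regionname>/f1/<filename>,其中 <encoded-regionname> 是Region名的编码值。另外需要说明的是,在较新版本的HBase(0.96及之后)中,表目录位于 /hbase/data/<namespace>/<tablename>/ 之下,层级结构与此类似。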
HBase数据写路径
(图来自Cloudera)
在put数据时,HBase会先把这次更新的操作信息和数据写入WAL(Write Ahead Log);写完WAL后,数据才会被放入MemStore;当MemStore写满后,数据会被flush到磁盘,形成HFile文件。这个过程中涉及的flush、split、compaction等操作都容易造成节点不稳定、数据导入慢、资源消耗大等问题,在海量数据导入时会极大地消耗系统性能。避免这些问题最好的方法,就是使用BulkLoad的方式把数据加载到HBase中。
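作为对照,下面是普通put写入的一个最小示意(表名TRAVEL、列族f1、列名和值均为示意),这条写入路径正是上面说的 WAL → MemStore → flush 流程,也就是后面BulkLoad要绕开的部分:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class NormalPutDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("TRAVEL"))) {
            // 普通写入:每条Put都会先写WAL,再进入MemStore,最后由flush生成HFile
            Put put = new Put(Bytes.toBytes("rowkey-001"));
            put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("name"), Bytes.toBytes("tom"));
            table.put(put);
        }
    }
}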
原理
利用HBase数据按照HFile格式存储在HDFS上的原理,先使用MapReduce直接生成HFile格式的文件,再把HFile文件移动到相应的Region目录下,由RegionServer加载上线。
其流程如下图:
(图来自Cloudera)
导入过程
1.使用MapReduce生成HFile文件
GenerateHFile类
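这里给出GenerateFile这个Mapper的一个最小示意(输入行格式、列族和列名均为假设,需按实际数据调整):它把每行输入解析成RowKey和各列的值,以ImmutableBytesWritable为Key、Put为Value输出,交给后面的HFileOutputFormat2生成HFile。

import java.io.IOException;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// 示意代码:假设输入每行格式为 "rowkey,值1,值2",列族为 f1
public class GenerateHFile extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        String rowKey = fields[0];
        Put put = new Put(Bytes.toBytes(rowKey));
        put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("col1"), Bytes.toBytes(fields[1]));
        put.addColumn(Bytes.toBytes("f1"), Bytes.toBytes("col2"), Bytes.toBytes(fields[2]));
        // 输出Key必须是包含RowKey的ImmutableBytesWritable,Value为Put
        context.write(new ImmutableBytesWritable(Bytes.toBytes(rowKey)), put);
    }
}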
GenerateHFileMain类
public class GenerateHFileMain {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        final String INPUT_PATH = "hdfs://master:9000/INFO/Input";
        final String OUTPUT_PATH = "hdfs://master:9000/HFILE/Output";
        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);
        Table table = connection.getTable(TableName.valueOf("TRAVEL"));
        Job job = Job.getInstance(conf);
        // 预先将程序打包,再将jar分发到集群上
        job.getConfiguration().set("mapred.jar", "/home/hadoop/TravelProject/out/artifacts/Travel/Travel.jar");
        job.setJarByClass(GenerateHFileMain.class);
        job.setMapperClass(GenerateHFile.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        job.setOutputFormatClass(HFileOutputFormat2.class);
        // 根据表的Region分布自动配置Partitioner和Reducer
        HFileOutputFormat2.configureIncrementalLoad(job, table, connection.getRegionLocator(TableName.valueOf("TRAVEL")));
        FileInputFormat.addInputPath(job, new Path(INPUT_PATH));
        FileOutputFormat.setOutputPath(job, new Path(OUTPUT_PATH));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
注意
1.Mapper的输出Key类型必须是包含RowKey的ImmutableBytesWritable,Value类型必须是KeyValue或Put:导入的数据有多列时使用Put,只有一列时可以使用KeyValue
2.job.setMapOutputValueClass的值决定了排序用的Reducer,这里的Reduce主要起到对数据进行排序的作用:Map输出Value类型为Put.class时对应PutSortReducer,为KeyValue.class时对应KeyValueSortReducer(configureIncrementalLoad会自动完成这个设置)
3.在创建表时对表进行预分区,再结合MapReduce的并行计算机制,能加快HFile文件的生成;如果对表进行了预分区,Reduce数会等于Region数
4.在多列族的情况下需要进行多次context.write(见下面的示意)
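下面是多列族场景下的一个示意片段(列族cf1、cf2与输入格式均为假设,import与上文示例相同):为每个列族各构造一个Put,分别调用一次context.write。

// 示意:假设输入每行格式为 "rowkey,v1,v2",表有cf1、cf2两个列族
static class MultiFamilyMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split(",");
        byte[] rowKey = Bytes.toBytes(fields[0]);
        ImmutableBytesWritable outKey = new ImmutableBytesWritable(rowKey);

        Put putCf1 = new Put(rowKey);
        putCf1.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("col1"), Bytes.toBytes(fields[1]));
        context.write(outKey, putCf1);   // 第一次write:列族cf1的数据

        Put putCf2 = new Put(rowKey);
        putCf2.addColumn(Bytes.toBytes("cf2"), Bytes.toBytes("col2"), Bytes.toBytes(fields[2]));
        context.write(outKey, putCf2);   // 第二次write:列族cf2的数据
    }
}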
2.通过BulkLoad方式加载HFile文件
public class LoadIncrementalHFileToHBase {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Connection connection = ConnectionFactory.createConnection(conf);
        Admin admin = connection.getAdmin();
        Table table = connection.getTable(TableName.valueOf("TRAVEL"));
        // 将上一步MapReduce生成的HFile加载进TRAVEL表
        LoadIncrementalHFiles load = new LoadIncrementalHFiles(conf);
        load.doBulkLoad(new Path("hdfs://master:9000/HFILE/Output"),
                admin, table, connection.getRegionLocator(TableName.valueOf("TRAVEL")));
    }
}
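除了在代码里调用doBulkLoad,也可以使用HBase自带的completebulkload工具(或直接运行LoadIncrementalHFiles类)来完成加载这一步,具体用法以所用版本的文档为准。下面再给出一个把“生成HFile”和“BulkLoad加载”两步合在同一个程序里的完整示例,表名为people,列族为f1: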
package hbase_mr;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
public class HbaseBulkLoad {
    static class MyMapper extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
        private static final String COLUMNNAME1 = "name";
        private static final String COLUMNNAME2 = "age";
        private static final String FAMILYNAME = "f1";

        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // 输入行格式形如: rowkey,name:xxx,age:yyy
            String[] split = value.toString().split(",");
            String rowKey = split[0];
            String name_value = split[1].split(":")[1];
            String age_value = split[2].split(":")[1];
            Put p = new Put(Bytes.toBytes(rowKey));
            p.addColumn(FAMILYNAME.getBytes(), COLUMNNAME1.getBytes(), name_value.getBytes());
            p.addColumn(FAMILYNAME.getBytes(), COLUMNNAME2.getBytes(), age_value.getBytes());
            // 输出Key必须是包含RowKey的ImmutableBytesWritable,Value为Put
            context.write(new ImmutableBytesWritable(rowKey.getBytes()), p);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();   // 加载hbase-site.xml等HBase配置
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf("people"));
        Admin admin = conn.getAdmin();
        String input = args[0];
        String output = args[1];
        Path inPath = new Path(input);
        Path outPath = new Path(output);
        Job job = Job.getInstance(conf, "Bulkload");
        job.setJarByClass(HbaseBulkLoad.class);
        job.setMapperClass(MyMapper.class);
        job.setMapOutputKeyClass(ImmutableBytesWritable.class);
        job.setMapOutputValueClass(Put.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(HFileOutputFormat2.class);
        // 输出目录已存在时先递归删除,否则job会失败
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(outPath)) {
            fs.delete(outPath, true);
        }
        FileInputFormat.setInputPaths(job, inPath);
        FileOutputFormat.setOutputPath(job, outPath);
        // 根据表的Region分布配置Partitioner和Reducer,保证生成的HFile与Region边界对应
        HFileOutputFormat2.configureIncrementalLoad(job, table, conn.getRegionLocator(TableName.valueOf("people")));
        boolean b = job.waitForCompletion(true);
        if (b) {
            // MapReduce生成HFile成功后,再把HFile加载进HBase
            LoadIncrementalHFiles loadIncrementalHFiles = new LoadIncrementalHFiles(conf);
            loadIncrementalHFiles.doBulkLoad(outPath, admin, table, conn.getRegionLocator(TableName.valueOf("people")));
        }
        System.exit(b ? 0 : 1);
    }
}
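上面的示例假设输入文件中每一行的格式形如(内容仅为示意):
1001,name:tom,age:18
即第一个字段是RowKey,后面的字段以“列名:值”的形式给出,与MyMapper中split(",")、split(":")的解析逻辑对应;运行时通过args[0]传入输入目录、args[1]传入HFile输出目录。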
由于BulkLoad绕过了Write to WAL、Write to MemStore及Flush to disk的过程,数据不会写入WAL,因此默认情况下无法通过WAL把这部分数据复制(Replication)到其他集群。
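补充一点:较新版本的HBase提供了hbase.replication.bulkload.enabled等相关配置,可以让BulkLoad进来的数据也参与集群间复制(具体以所用版本的文档为准);在老版本中,则需要自行把HFile同步到备集群,或在备集群上再执行一次BulkLoad。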
优点:
1.导入过程不经过RegionServer的写入路径(WAL、MemStore),基本不占用Region的写入资源
2.能快速导入海量数据
3.节省内存