HBase distributed installation

HBase supports standalone, pseudo-distributed, and fully distributed installation:
1. Standalone
Runs on a single machine against the local file system (not HDFS); generally used for practice.
2. Pseudo-distributed
A single machine, with HBase and Hadoop installed on the same host; usable for development.
3. Fully distributed
Multiple machines; you can decide how many machines to allocate to HBase and how many to Hadoop. I am running virtual machines on my own host, so I prepared three: 192.168.197.131, 192.168.197.130, 192.168.197.132. 131 acts as the HBase master and a RegionServer, and also as the Hadoop NameNode and a DataNode; the other two machines likewise act as RegionServers and Hadoop DataNodes.
For the Hadoop installation itself, see http://blog.youkuaiyun.com/skyering/article/details/6457466

[color=red][b]1. Passwordless SSH login[/b][/color]
Create hadoop and hbase users on all three machines and install Hadoop and HBase under them respectively. For setting up passwordless login, see: http://dien.iteye.com/admin/blogs/2163246
Here the hadoop and hbase users share one key pair (method one in that post); a rough sketch of the commands follows.
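As a minimal sketch (the exact steps are in the linked post; the shared-key approach and usernames below are assumptions for illustration):

# on 131, as the hadoop user (repeat or share the key pair for the hbase user, per "method one")
ssh-keygen -t rsa -P ""                  # generate a key pair under ~/.ssh
ssh-copy-id hadoop@hadoop-namenode       # authorize login to itself
ssh-copy-id hadoop@hadoop-datanode       # append the public key to the other nodes
ssh-copy-id hadoop@hadoop-datanode2
ssh hadoop-datanode                      # verify: should log in without a password prompt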

[color=red][b]2. Change the hostnames[/b][/color]
192.168.197.131 hadoop-namenode
192.168.197.130 hadoop-datanode
192.168.197.132 hadoop-datanode2
Change each machine's hostname according to the assignment above. In /etc/hosts on every machine, add entries resolving the hostnames of the other two machines, and make sure the 127.0.0.1 line is commented out (see the screenshot and the sample below); otherwise, during HBase startup, the RegionServers will not be able to connect to the master.

[img]http://dl2.iteye.com/upload/attachment/0104/2358/f06126b8-b67f-3ead-b06e-aa16f3d7f019.jpg[/img]
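For reference, /etc/hosts on each of the three machines should look roughly like this, with the loopback line commented out as described above:

# /etc/hosts (same on all three machines)
#127.0.0.1   localhost
192.168.197.131 hadoop-namenode
192.168.197.130 hadoop-datanode
192.168.197.132 hadoop-datanode2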

[color=red][b]3. Confirm compatible versions[/b][/color]
Before installing, confirm which HBase and Hadoop versions match each other.
Table 2.1. Hadoop version support matrix
[table]
| | HBase-0.92.x|HBase-0.94.x|HBase-0.96|
|Hadoop-0.20.205| S |X |X |
|Hadoop-0.22.x | S |X |X |
|Hadoop-1.0.x |S |S |S |
|Hadoop-1.1.x |NT |S |S |
|Hadoop-0.23.x |X |S |NT |
|Hadoop-2.x |X |S |S |
[/table]
S = supported and tested
X = not supported
NT = not tested enough (may run, but not thoroughly tested)
Because HBase depends on Hadoop, it bundles a Hadoop jar under its lib directory. That bundled jar is only for standalone mode. In distributed mode, the Hadoop version on the cluster must match the one under HBase: replace the Hadoop jar in HBase's lib directory with the jar of the Hadoop version you actually run, to avoid version-mismatch problems, and make sure you replace it under HBase on every node in the cluster. Hadoop version mismatches show up in different ways, but they all look as if HBase has simply hung or died.

For this installation I chose hadoop-0.20.205 and hbase-0.92.0.

[color=red][b]4. ulimit and nproc[/b][/color]
HBase is a database and keeps many file handles open at the same time. The default limit of 1024 on most Linux systems is not enough; it leads to the FAQ "Why do I see 'java.io.IOException...(Too many open files)' in my logs?" and may also produce errors like these:

2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Exception in createBlockOutputStream java.io.EOFException
2010-04-06 03:04:37,542 INFO org.apache.hadoop.hdfs.DFSClient: Abandoning block blk_-6935524980745310745_1391901

So you need to raise the maximum number of file handles; setting it to around 10k is reasonable. The rough math: each column family has at least one StoreFile, and possibly 5-6 when the region is under load. Multiply the average number of StoreFiles per column family by the average number of regions per RegionServer. For example, with a schema of 3 column families, roughly 3 StoreFiles per column family, and 100 regions per RegionServer, the JVM will open 3 * 3 * 100 = 900 file descriptors (not counting open jar files, configuration files, and so on).

You also need to raise the nproc limit for the hbase user; if it is too low, an OutOfMemoryError can occur under load. [3] [4]

To be clear, these two settings belong to the operating system, not to HBase itself. A common mistake is that the user actually running HBase is not the user whose limits were raised. When HBase starts, the first log line shows the ulimit in effect; check that it is correct. [5]

Make this change on all three machines:
vi /etc/security/limits.conf

[img]http://dl2.iteye.com/upload/attachment/0104/2365/2605ed9d-d775-3a32-a687-a4f7127a00af.jpg[/img]
hadoop - nofile 32768
This means the hadoop user may open at most 32768 files.
hadoop hard nproc 16384
This means the hadoop user may start at most 16384 processes.
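Since both a hadoop and an hbase user were created here, and HBase itself runs as the hbase user, the limits should presumably be raised for both users. A sketch of the limits.conf entries (the hbase lines are an assumption beyond what the screenshot shows):

hadoop - nofile 32768
hadoop hard nproc 16384
hbase - nofile 32768
hbase hard nproc 16384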
vi /etc/pam.d/login

[img]http://dl2.iteye.com/upload/attachment/0104/2367/873c948d-b021-342c-9d72-ec5ea5ef0f4f.jpg[/img]
The pam_limits.so module applies resource limits to ordinary applications. For example, if you want to restrict ssh access for particular users on an SSH server (to keep one user from opening too many login sessions), you can invoke this module by adding a line to /etc/pam.d/sshd:
session required pam_limits.so

You also have to log out and log back in before these settings take effect!

[color=red][b]5. Upload hbase-0.92.0.tar.gz[/b][/color]
On each of the three machines, upload hbase-0.92.0.tar.gz to the hbase user's home directory (/home/hbase), extract it, and set the environment variables:

[img]http://dl2.iteye.com/upload/attachment/0104/2401/4311190f-16b0-3520-91b0-6586b3e2ac3c.jpg[/img]
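A sketch of the steps on each machine (the exact paths and the profile file are assumptions based on the screenshot):

# as the hbase user, on each of the three machines
cd /home/hbase
tar -zxvf hbase-0.92.0.tar.gz

# append to ~/.bash_profile, then re-source it
export HBASE_HOME=/home/hbase/hbase-0.92.0
export PATH=$PATH:$HBASE_HOME/bin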

[color=red][b]6. Create temporary directories for HBase and ZooKeeper[/b][/color]
Create temporary directories for HBase and ZooKeeper on all three machines.

[img]http://dl2.iteye.com/upload/attachment/0104/2403/e7da58a1-3b22-3d49-9eab-11c9a0be2e42.jpg[/img]
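For example (the directory names below are assumptions; they just need to match what hbase-site.xml points at later):

# as the hbase user, on each machine
mkdir -p /home/hbase/tmp/hbase        # for hbase.tmp.dir
mkdir -p /home/hbase/tmp/zookeeper    # for hbase.zookeeper.property.dataDir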

[color=red][b]7. Configure ${HBASE_HOME}/conf/hbase-env.sh[/b][/color]
Configure this on all three machines:

[img]http://dl2.iteye.com/upload/attachment/0104/2411/5e73f1a5-2a24-3894-a619-9c50078b2174.jpg[/img]

[img]http://dl2.iteye.com/upload/attachment/0104/2413/04fe65e9-c4a7-319d-aaad-5a75140a4cc2.jpg[/img]
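The screenshots above are authoritative; for this layout, hbase-env.sh would typically contain something along these lines (the JAVA_HOME path is an assumption, and HBase managing its own ZooKeeper quorum is inferred from the zookeeper temp directory created above):

# ${HBASE_HOME}/conf/hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_30      # adjust to the local JDK installation
export HBASE_MANAGES_ZK=true                # let HBase start and stop its own ZooKeeper quorum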

[color=red][b]8. Configure ${HBASE_HOME}/conf/hbase-site.xml[/b][/color]
Configure this on all three machines.
Note: use the hostname rather than the IP address in hbase.rootdir, otherwise startup will fail.
[img]http://dl2.iteye.com/upload/attachment/0104/2431/c2948004-f8fc-3f19-9fea-a022cb9e0cea.jpg[/img]
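A sketch of what the screenshot likely contains (the HDFS port 9000 and the local directory paths are assumptions; hbase.rootdir must use the hostname, as noted above):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop-namenode:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop-namenode,hadoop-datanode,hadoop-datanode2</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hbase/tmp/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hbase/tmp/zookeeper</value>
  </property>
</configuration>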

[color=red][b]9. Configure Hadoop's ${HADOOP_HOME}/conf/hdfs-site.xml[/b][/color]
Configure this on all three machines:

[img]http://dl2.iteye.com/upload/attachment/0104/2435/62d0272f-c80c-3970-ae38-9b7454fa4c90.jpg[/img]
A Hadoop HDFS DataNode has an upper bound on the number of files it can serve at the same time. The parameter is called xcievers (yes, the Hadoop developers misspelled the word). Before loading any data, make sure the xceivers parameter in conf/hdfs-site.xml is set to at least 4096, as shown below.
Remember to restart HDFS after changing this configuration.
Without this setting you may run into strange failures: the DataNode log reports "xcievers exceeded", while clients report missing blocks, for example: 02/12/12 20:10:31 INFO hdfs.DFSClient: Could not obtain block blk_XXXXXXXXXXXXXXXXXXXXXX_YYYYYYYY from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
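The property in question looks like this in hdfs-site.xml (note that the property name keeps the misspelling):

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>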

[color=red][b]10. Configure the ${HBASE_HOME}/conf/regionservers file on 131[/b][/color]
192.168.197.131
192.168.197.130
192.168.197.132

[color=red][b]11. Start HDFS[/b][/color]
On 131, as the hadoop user, start HDFS: start-dfs.sh

[img]http://dl2.iteye.com/upload/attachment/0104/2443/8ccfc305-3302-35c1-9e71-026c1688b12b.jpg[/img]
Log in to each machine and check the logs to confirm the daemons started successfully.
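Besides the logs, jps gives a quick check (the exact set of daemons depends on how Hadoop was configured; this assumes start-dfs.sh on the layout described above):

jps    # on 131, expect NameNode, SecondaryNameNode and DataNode processes
jps    # on 130/132, expect a DataNode process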

[color=red][b]12. Start HBase[/b][/color]
On 131, as the hbase user, start HBase: start-hbase.sh

[img]http://dl2.iteye.com/upload/attachment/0104/2445/a1e6d4cd-d2a6-385c-9469-1ff77ee2b4ac.jpg[/img]
Likewise, check the logs on each machine. You can also open http://192.168.197.131:60010 to view the master status (it shows whether all three RegionServers came up), and http://192.168.197.131:60030, http://192.168.197.130:60030, http://192.168.197.132:60030 to view each RegionServer.
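Again, jps and the HBase shell give a quick sanity check (assuming HBase manages its own ZooKeeper, as configured above):

jps           # on 131, expect HMaster, HRegionServer and HQuorumPeer processes
jps           # on 130/132, expect HRegionServer and HQuorumPeer
hbase shell
status        # inside the shell: should report 3 live servers and 0 dead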

[color=red][b]13. Client code test[/b][/color]
Provide the following hbase-site.xml under the client's resources directory:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>hbase.master</name>
<value>192.168.197.131:60000</value>
</property>

<property>
<name>hbase.zookeeper.quorum</name>
<value>192.168.197.131,192.168.197.130,192.168.197.132</value>
</property>

</configuration>
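The helper class below needs the HBase client jar (and its Hadoop/ZooKeeper dependencies) plus this hbase-site.xml on the classpath. If the project is built with Maven, a dependency along these lines should work (the coordinates are an assumption for the 0.92 line); otherwise copy hbase-0.92.0.jar and the jars under its lib/ directory onto the classpath manually:

<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase</artifactId>
  <version>0.92.0</version>
</dependency>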


import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HBaseHelper {
private static final Logger log = LoggerFactory
.getLogger(HBaseHelper.class);
private Configuration conf = null;
private HTablePool tablePool = null;

public HBaseHelper() {
this.conf = HBaseConfiguration.create();
tablePool = new HTablePool(this.conf, 1000);
}

public HBaseHelper(Configuration conf) {
this.conf = HBaseConfiguration.create(conf);
tablePool = new HTablePool(this.conf, 1000);
}

public void creatTable(String tableName, String[] familys) throws Exception {
HBaseAdmin admin = new HBaseAdmin(conf);
if (admin.tableExists(tableName)) {
log.info("table already exists!");
} else {
HTableDescriptor tableDesc = new HTableDescriptor(tableName);
for (int i = 0; i < familys.length; i++) {
tableDesc.addFamily(new HColumnDescriptor(familys[i]));
}
admin.createTable(tableDesc);
log.info("create table {} ok.", tableName);
}
admin.close();
}

public void deleteTable(String tableName) throws Exception {
HBaseAdmin admin = new HBaseAdmin(conf);
admin.disableTable(tableName);
admin.deleteTable(tableName);
log.info("delete table {} ok.", tableName);
admin.close();
}

public void addRecord(String tableName, String rowKey, String family,
String qualifier, String value) throws Exception {
HTable table = (HTable)tablePool.getTable(tableName);
Put put = new Put(Bytes.toBytes(rowKey));
put.add(Bytes.toBytes(family), Bytes.toBytes(qualifier),
Bytes.toBytes(value));
table.put(put);
log.info("insert recored {} to table {} ok.", rowKey, tableName);
table.close();
}

/**
* Delete one row.
*/
public void delRecord(String tableName, String rowKey) throws IOException {
HTable table = (HTable)tablePool.getTable(tableName);
List<Delete> list = new ArrayList<Delete>();
Delete del = new Delete(rowKey.getBytes());
list.add(del);
table.delete(list);
log.info("del recored {} ok.", rowKey);
table.close();
}

/**
* Get one row (returns its first KeyValue).
*/
public KeyValue getOneRecord(String tableName, String rowKey)
throws Exception {
KeyValue result = null;
HTable table = (HTable)tablePool.getTable(tableName);
Get get = new Get(rowKey.getBytes());
Result rs = table.get(get);
if (rs != null && !rs.isEmpty()) {
result = rs.raw()[0];
}
/*
* for (KeyValue kv : rs.raw()) { System.out.print(new
* String(kv.getRow()) + " "); System.out.print(new
* String(kv.getFamily()) + ":"); System.out.print(new
* String(kv.getQualifier()) + " "); System.out.print(kv.getTimestamp()
* + " "); System.out.println(new String(kv.getValue())); }
*/
table.close();
return result;
}

/**
* List all rows in the table.
*/
public List<KeyValue> getAllRecord(String tableName) throws Exception {
List<KeyValue> result = null;
HTable table = (HTable)tablePool.getTable(tableName);
Scan s = new Scan();
ResultScanner ss = table.getScanner(s);
if (ss != null) {
result = new ArrayList<KeyValue>();
for (Result r : ss) {
for (KeyValue kv : r.raw()) {
result.add(kv);
}
}
}
table.close();
return result;
}

public static void main(String[] args) {
// TODO Auto-generated method stub
try {
String tablename = "scores";
String[] familys = { "grade", "course" };
HBaseHelper hbaseHelper = new HBaseHelper();

hbaseHelper.creatTable(tablename, familys);

// add record zkb
hbaseHelper.addRecord(tablename, "zkb", "grade", "", "5");
hbaseHelper.addRecord(tablename, "zkb", "course", "", "90");
hbaseHelper.addRecord(tablename, "zkb", "course", "math", "97");
hbaseHelper.addRecord(tablename, "zkb", "course", "art", "87");
// add record baoniu
hbaseHelper.addRecord(tablename, "baoniu", "grade", "", "4");
hbaseHelper.addRecord(tablename, "baoniu", "course", "math", "89");

System.out.println("===========get one record========");
KeyValue kv1 = hbaseHelper.getOneRecord(tablename, "zkb");
System.out.print(new String(kv1.getRow()) + " ");
System.out.print(new String(kv1.getFamily()) + ":");
System.out.print(new String(kv1.getQualifier()) + " ");
System.out.print(kv1.getTimestamp() + " ");
System.out.println(new String(kv1.getValue()));
System.out.println("===========show all record========");
List<KeyValue> kvList = hbaseHelper.getAllRecord(tablename);
if (kvList != null) {
for (KeyValue kv : kvList) {
System.out.print(new String(kv.getRow()) + " ");
System.out.print(new String(kv.getFamily()) + ":");
System.out.print(new String(kv.getQualifier()) + " ");
System.out.print(kv.getTimestamp() + " ");
System.out.println(new String(kv.getValue()));
System.out.println(kv.toString());
}
}
System.out.println("===========del one record========");
hbaseHelper.delRecord(tablename, "baoniu");
kvList = hbaseHelper.getAllRecord(tablename);
if (kvList != null) {
for (KeyValue kv : kvList) {
System.out.print(new String(kv.getRow()) + " ");
System.out.print(new String(kv.getFamily()) + ":");
System.out.print(new String(kv.getQualifier()) + " ");
System.out.print(kv.getTimestamp() + " ");
System.out.println(new String(kv.getValue()));
}
}

System.out.println("===========show all record========");
kvList = hbaseHelper.getAllRecord(tablename);
if (kvList != null) {
for (KeyValue kv : kvList) {
System.out.print(new String(kv.getRow()) + " ");
System.out.print(new String(kv.getFamily()) + ":");
System.out.print(new String(kv.getQualifier()) + " ");
System.out.print(kv.getTimestamp() + " ");
System.out.println(new String(kv.getValue()));
}
}
} catch (Exception e) {
e.printStackTrace();
}
}

}