
A Summary of HBase 0.9x Problems



1. Recently one of the HBase region servers kept going down. Checking that node's log revealed the following error:

2014-02-22 01:52:02,194 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Close and delete failed

org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)

I spent a long time investigating without finding anything wrong on the HBase side. Later, following hints found online, I looked at the Hadoop log, which showed the following:

2014-02-22 01:52:00,935 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

2014-02-22 01:52:00,936 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411, DFSClient_hb_rs_testhd3,60020,1392948100268, null) from 172.72.101.213:59979: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)
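Since both stack traces name the same WAL file, a quick way to confirm the correlation is to grep both logs for that file name; a minimal sketch, assuming default daemon log names under $HBASE_HOME/logs and $HADOOP_HOME/logs and that both daemons run as the hadoop user (both assumptions):

# Cross-check the regionserver and namenode logs for the failed WAL file.
# Log file names below follow the default <daemon>-<user>-<host>.log pattern
# and are assumptions about this cluster's layout.
WAL=testhd3%2C60020%2C1392948100268.1393004989411
grep "$WAL" $HBASE_HOME/logs/hbase-hadoop-regionserver-testhd3.log
grep "$WAL" $HADOOP_HOME/logs/hadoop-hadoop-namenode-*.log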

The two logs turned out to contain almost identical entries, which confirms that the HBase failure was caused by Hadoop (HDFS). The change made is as follows:

Solution: increase the xcievers parameter.

The default is 4096; change it to 8192:

vi /home/dwhftp/opt/hadoop/conf/hdfs-site.xml

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>
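The setting must be present in hdfs-site.xml on every DataNode and only takes effect after the DataNode process is restarted. A minimal rollout sketch, assuming a Hadoop 1.x layout where conf/slaves lists the datanode hosts and $HADOOP_HOME is the same path on every node (both assumptions):

# Push the edited hdfs-site.xml to each datanode and restart its DataNode daemon.
# Assumes passwordless ssh and an identical $HADOOP_HOME on all hosts.
for host in $(cat $HADOOP_HOME/conf/slaves); do
  scp $HADOOP_HOME/conf/hdfs-site.xml $host:$HADOOP_HOME/conf/
  ssh $host "$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode; $HADOOP_HOME/bin/hadoop-daemon.sh start datanode"
done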

About the dfs.datanode.max.xcievers parameter

A Hadoop HDFS DataNode has an upper limit on the number of files it will serve at the same time. That limit is the xcievers parameter (the Hadoop authors misspelled the word). Before putting load on the cluster, make sure you have set the xceivers parameter in conf/hdfs-site.xml to at least 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
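To judge whether a DataNode is actually running into this ceiling, one rough check is to count the DataXceiver threads in the DataNode JVM via a thread dump; a minimal sketch, assuming jps and jstack from the JDK are available to the user running the DataNode:

# Count active DataXceiver threads on this datanode; if the number keeps
# approaching dfs.datanode.max.xcievers, the limit should be raised.
DN_PID=$(jps | awk '/DataNode/ {print $1}')
jstack "$DN_PID" | grep -c DataXceiver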



