
HBase bulk writes fail with java.util.concurrent.RejectedExecutionException

linjikai8888 posted on 2015-12-7 10:36:48
The error log is below. I checked the cluster and it is healthy, and MapReduce jobs run fine, so I do not understand why HBase suddenly started throwing this error. The table currently has 34 regions.

If anyone with experience can take a look, I would be very grateful!

2015-12-07 09:00:07.974 [org.apache.hadoop.hbase.client.AsyncProcess] [Thread-1] [WARN] - #2, the task was rejected by the pool. This is unexpected. Server is dn1,60020,1448846406902
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@46253fdd rejected from java.util.concurrent.ThreadPoolExecutor@4e9baba9[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 91983]
        at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
        at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
        at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
        at org.apache.hadoop.hbase.client.AsyncProcess.sendMultiAction(AsyncProcess.java:565)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:538)
        at org.apache.hadoop.hbase.client.AsyncProcess.logAndResubmit(AsyncProcess.java:817)
        at org.apache.hadoop.hbase.client.AsyncProcess.receiveGlobalFailure(AsyncProcess.java:764)
        at org.apache.hadoop.hbase.client.AsyncProcess.sendMultiAction(AsyncProcess.java:574)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:538)
        ... (these four retry frames repeat roughly 20 more times as the client keeps resubmitting) ...
        at org.apache.hadoop.hbase.client.AsyncProcess.logAndResubmit(AsyncProcess.java:817)
        at org.apache.hadoop.hbase.client.AsyncProcess.receiveGlobalFailure(AsyncProcess.java:764)
        at org.apache.hadoop.hbase.client.AsyncProcess.sendMultiAction(AsyncProcess.java:574)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:349)
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:286)
        at org.apache.hadoop.hbase.client.HTable.backgroundFlushCommits(HTable.java:1001)
        at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1334)
        at org.apache.hadoop.hbase.client.HTable.put(HTable.java:961)
        at com.xmgps.xmjj.hdstorage.hbase.opt.HBaseOperator.putRows(HBaseOperator.java:260)
        at com.xmgps.xmjj.bayonetimport.hbase.HbaseImportService.importHbase(HbaseImportService.java:209)
        at com.xmgps.xmjj.bayonetimport.hbase.HbaseImportService.handleMsg(HbaseImportService.java:198)
        at com.xmgps.xmjj.bayonetimport.ftp.worker.BayonetWorkerRunner.running(BayonetWorkerRunner.java:155)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at com.xmgps.xmjj.bayonetimport.ftp.util.EasyThread.run(EasyThread.java:21)
2015-12-07 09:00:07.977 [org.apache.hadoop.hbase.client.AsyncProcess] [Thread-1] [INFO] - #2, table=BAYONETDATA, attempt=22/35 failed 31 ops, last exception: java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@46253fdd rejected from java.util.concurrent.ThreadPoolExecutor@4e9baba9[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 91983] on dn1,60020,1448846406902, tracking started Mon Dec 07 08:55:58 CST 2015, retrying after 20129 ms, replay 31 ops.




7 replies


诗景尘 replied on 2015-12-7 11:51:54
Judging from the error message, the thread pool executor has already been shut down, and submitting another task to a shut-down executor raises exactly this exception. Check whether anything in your code closes the pool (or the table/connection that owns it) too early. Hope this helps.
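The executor state in the log supports this diagnosis: the pool is dumped as [Terminated, pool size = 0, ...], and Terminated means shutdown has already completed. A minimal JDK-only sketch (plain java.util.concurrent, not HBase code) reproduces the same exception:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class RejectionDemo {
    // Returns true when a submit() after shutdown() is rejected,
    // mirroring what the HBase client hit against its terminated pool.
    static boolean submitAfterShutdown() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* completes normally */ });
        pool.shutdown(); // pool drains its queue and moves toward Terminated
        try {
            pool.submit(() -> { /* never accepted */ });
            return false;
        } catch (RejectedExecutionException e) {
            return true; // the default AbortPolicy rejects tasks after shutdown
        }
    }

    public static void main(String[] args) {
        System.out.println("rejected after shutdown: " + submitAfterShutdown());
    }
}
```

So the question becomes: what shut down the client's pool while writes were still in flight.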

cranberries8 replied on 2015-12-8 09:55:09
What is your hbase.htable.threads.max set to? Could the client thread pool be exhausted?
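For reference, hbase.htable.threads.max bounds the client-side pool that HTable uses for batch operations (the default varies by HBase version). A client-side hbase-site.xml override would look like the following; the value 32 is purely illustrative, not a recommendation:

```
<property>
  <name>hbase.htable.threads.max</name>
  <!-- illustrative value; tune to your workload -->
  <value>32</value>
</property>
```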

linjikai8888 replied on 2015-12-10 17:37:26
Thanks for the help, everyone. The problem has been found.

A module in the same program writes to MongoDB; those writes were so slow that the process ran out of memory, which in turn broke the HBase writes.

Everything has recovered now.

xiaobaiyang replied on 2017-12-21 09:08:19
Hi, are you writing from multiple threads? When this problem appears, you are most likely obtaining the HBase table through the legacy HTable class.
The HTable source code describes the class as follows:
/**
* An implementation of {@link Table}. Used to communicate with a single HBase table.
* Lightweight. Get as needed and just close when done.
* Instances of this class SHOULD NOT be constructed directly.
* Obtain an instance via {@link Connection}. See {@link ConnectionFactory}
* class comment for an example of how.
*
* <p>This class is NOT thread safe for reads nor writes.
* In the case of writes (Put, Delete), the underlying write buffer can
* be corrupted if multiple threads contend over a single HTable instance.
* In the case of reads, some fields used by a Scan are shared among all threads.
*
* <p>HTable is no longer a client API. Use {@link Table} instead. It is marked
* InterfaceAudience.Private as of hbase-1.0.0 indicating that this is an
* HBase-internal class as defined in
* <a href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html">Hadoop
* Interface Classification</a>. There are no guarantees for backwards
* source / binary compatibility and methods or the class can
* change or go away without deprecation.
* <p>Near all methods of this * class made it out to the new {@link Table}
* Interface or were * instantiations of methods defined in {@link HTableInterface}.
* A few did not. Namely, the {@link #getStartEndKeys}, {@link #getEndKeys},
* and {@link #getStartKeys} methods. These three methods are available
* in {@link RegionLocator} as of 1.0.0 but were NOT marked as
* deprecated when we released 1.0.0. In spite of this oversight on our
* part, these methods will be removed in 2.0.0.
*
* @see Table
* @see Admin
* @see Connection
* @see ConnectionFactory
*/

So you should obtain the Table like this instead:
Connection connection = ConnectionFactory.createConnection(configuration);
Table table = connection.getTable(TableName.valueOf(tableName));
Obtaining the Table this way avoids the problem.
If you are using multiple threads, each thread should obtain its own Table instance inside the thread.

xiaobaiyang replied on 2017-12-21 09:12:30
You are presumably obtaining the table with
HTable hTable = new HTable(configuration, tableName);
and that approach has thread-safety problems.
(See the HTable class javadoc quoted in my previous reply; the class is explicitly documented as NOT thread safe for reads or writes.)

So you should obtain the table as follows instead, which avoids the problem:
Connection connection = ConnectionFactory.createConnection(configuration);
Table table = connection.getTable(TableName.valueOf(tableName));

If you use multiple threads, do this inside each thread, or you will still run into trouble.
