
After starting HBase, one datanode goes down while the other two keep running

linbowei posted on 2015-7-25 11:33:20
  I finished installing HBase a few days ago. I can run commands in bin/hbase shell, and operating on HBase from Eclipse also works fine. Today I suddenly noticed that whenever I start HBase or perform any operation on it, one of the datanodes goes down while the other two keep running normally. The datanode's error log is as follows:

2015-07-25 11:19:03,285 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.141:59480, dest: /192.168.1.141:50010, bytes: 124, op: HDFS_WRITE, cliID: DFSClient_2086078551, offset: 0, srvID: DS-71136963-192.168.1.141-50010-1434034598345, blockid: blk_6706534820442431628_1285, duration: 132225000
2015-07-25 11:19:03,286 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block blk_6706534820442431628_1285 terminating
2015-07-25 11:19:03,291 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block scan with filesystem. 0 blocks concurrently deleted during scan, 1 blocks concurrently added during scan, 0 ongoing creations ignored
2015-07-25 11:19:03,291 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous block report against current state in 1 ms
2015-07-25 11:19:03,312 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 54 blocks took 1 msec to generate and 21 msecs for RPC and NN processing
2015-07-25 11:19:04,602 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_4289957336059980021_1286 src: /192.168.1.141:59482 dest: /192.168.1.141:50010
2015-07-25 11:19:04,834 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.1.141:59482, dest: /192.168.1.141:50010, bytes: 124, op: HDFS_WRITE, cliID: DFSClient_2086078551, offset: 0, srvID: DS-71136963-192.168.1.141-50010-1434034598345, blockid: blk_4289957336059980021_1286, duration: 218966000
2015-07-25 11:19:04,835 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 1 for block blk_4289957336059980021_1286 terminating
2015-07-25 11:19:04,855 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is shutting down: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node 192.168.1.141:50010 is attempting to report storage ID DS-71136963-192.168.1.141-50010-1434034598345. Node 192.168.1.142:50010 is expected to serve this storage.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:4608)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.blockReceived(FSNamesystem.java:4016)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.blockReceived(NameNode.java:1029)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.$Proxy5.blockReceived(Unknown Source)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:938)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1458)
    at java.lang.Thread.run(Thread.java:744)

2015-07-25 11:19:04,936 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50075
2015-07-25 11:19:05,052 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2015-07-25 11:19:05,054 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2015-07-25 11:19:05,055 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 50020
2015-07-25 11:19:05,055 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
2015-07-25 11:19:05,058 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: exiting
2015-07-25 11:19:05,058 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: exiting
2015-07-25 11:19:05,059 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: exiting
2015-07-25 11:19:05,062 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.141:50010, storageID=DS-71136963-192.168.1.141-50010-1434034598345, infoPort=50075, ipcPort=50020):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:205)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:248)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:100)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:744)

2015-07-25 11:19:05,062 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting DataXceiveServer
2015-07-25 11:19:05,062 INFO org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2015-07-25 11:19:05,182 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Exiting DataBlockScanner thread.
2015-07-25 11:19:06,060 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2015-07-25 11:19:06,167 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
2015-07-25 11:19:06,168 INFO org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
2015-07-25 11:19:06,168 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.141:50010, storageID=DS-71136963-192.168.1.141-50010-1434034598345, infoPort=50075, ipcPort=50020):Finishing DataNode in: FSDataset{dirpath='/opt/data/hadoop/hdfs/data/current'}
2015-07-25 11:19:06,176 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:522)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:737)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1471)
    at java.lang.Thread.run(Thread.java:744)
2015-07-25 11:19:06,178 INFO org.apache.hadoop.ipc.Server: Stopping server on 50020
2015-07-25 11:19:06,179 INFO org.apache.hadoop.ipc.metrics.RpcInstrumentation: shut down
2015-07-25 11:19:06,180 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
2015-07-25 11:19:06,181 WARN org.apache.hadoop.metrics2.util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-DS-71136963-192.168.1.141-50010-1434034598345
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-DS-71136963-192.168.1.141-50010-1434034598345
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1095)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:427)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:2067)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:799)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1471)
    at java.lang.Thread.run(Thread.java:744)
2015-07-25 11:19:06,182 WARN org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.
2015-07-25 11:19:06,182 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-07-25 11:19:06,185 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at node1/192.168.1.141
************************************************************/

Replies (2)

NEOGX posted on 2015-7-25 11:52:07
Could it be that the storageID DS-71136963-192.168.1.141-50010-1434034598345 is conflicting, i.e. two datanodes are reporting the same storage ID?

Reference:
Hadoop startup exception: UnregisteredDatanodeException
http://www.aboutyun.com/thread-10599-1-1.html




linbowei posted on 2015-7-25 13:47:24
NEOGX posted on 2015-7-25 11:52
Could it be that the storageID DS-71136963-192.168.1.141-50010-1434034598345 is conflicting?

Reference:

It was indeed a storageID conflict. The problem has been solved, thanks.
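
For anyone who runs into the same UnregisteredDatanodeException: in Hadoop 1.x every datanode records its storageID in ${dfs.data.dir}/current/VERSION, and the error above means two datanodes reported the same ID to the namenode, typically because a data directory was copied between machines or a node was cloned from another VM. Below is a minimal sketch (not from this thread) of how one might check the cluster for such a duplicate. It assumes passwordless SSH to the datanodes and the data directory visible in the shutdown log (/opt/data/hadoop/hdfs/data); the hostnames in DATANODES are placeholders for your own nodes.

#!/usr/bin/env python3
"""Hypothetical helper: report datanodes that share the same storageID.

Assumptions (adjust for your cluster): passwordless SSH to every datanode,
and dfs.data.dir = /opt/data/hadoop/hdfs/data as shown in the log above.
"""
import subprocess
from collections import defaultdict

DATANODES = ["node1", "node2", "node3"]      # placeholder hostnames
DATA_DIR = "/opt/data/hadoop/hdfs/data"      # dfs.data.dir from the log

def read_storage_id(host):
    """Return the storageID recorded in DATA_DIR/current/VERSION on host."""
    version = subprocess.check_output(
        ["ssh", host, "cat", DATA_DIR + "/current/VERSION"]).decode()
    for line in version.splitlines():
        if line.startswith("storageID="):
            return line.split("=", 1)[1].strip()
    return None

def main():
    owners = defaultdict(list)               # storageID -> hosts reporting it
    for host in DATANODES:
        sid = read_storage_id(host)
        print("%-10s %s" % (host, sid))
        owners[sid].append(host)
    for sid, hosts in owners.items():
        if sid is not None and len(hosts) > 1:
            print("DUPLICATE storageID %s on: %s" % (sid, ", ".join(hosts)))

if __name__ == "__main__":
    main()

If two hosts do print the same storageID, the usual remedy is to stop the duplicated datanode, remove or re-initialize its dfs.data.dir (or restore that node's original VERSION file), and restart it so it registers with its own storageID.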