
DataNode fails to start in Hadoop 2.2

guofeng, 2013-12-17 23:01:20, posted in the Q&A board
Tailing the DataNode log:

  tail -f hadoop-hadoop-datanode-localhost.log
  org.apache.hadoop.hdfs.server.common.Storage: Lock on /hadoop/dfs/data/in_use.lock acquired by nodename 3176@localhost.localdomain
  2013-12-17 09:37:37,849 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-118139405-127.0.0.1-1387244216848 (storage id DS-1555732297-127.0.0.1-50010-1387210909439) service to /192.168.101.205:9000
  java.io.IOException: Incompatible clusterIDs in /hadoop/dfs/data: namenode clusterID = CID-5cc652c0-b894-4a4a-865e-f429bb7ab426; datanode clusterID = CID-6f9592c7-3813-4723-86d8-923b0ddc71df
          at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
          at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
          at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
          at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:837)
          at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:808)
          at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
          at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
          at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
          at java.lang.Thread.run(Thread.java:662)
  2013-12-17 09:37:37,851 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-118139405-127.0.0.1-1387244216848 (storage id DS-1555732297-127.0.0.1-50010-1387210909439) service to /192.168.101.205:9000
The "Incompatible clusterIDs" exception gives us the NameNode's clusterID:

  namenode clusterID = CID-5cc652c0-b894-4a4a-865e-f429bb7ab426

The DataNode refuses to start because its stored clusterID does not match the NameNode's clusterID (this typically happens after the NameNode is reformatted while old DataNode data is left in place).
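To confirm the mismatch, you can read the clusterID out of each side's VERSION file. A minimal sketch, assuming the usual layout of a storage directory containing current/VERSION (the file is a Java-properties file, as shown further below); the function name is my own:

```shell
#!/bin/sh
# Print the clusterID stored in a VERSION file.
# Usage: get_cluster_id /hadoop/dfs/data/current/VERSION
#        get_cluster_id /hadoop/dfs/name/current/VERSION   # NameNode side
get_cluster_id() {
  # VERSION is key=value lines; print the value after "clusterID="
  sed -n 's/^clusterID=//p' "$1"
}
```

If the two commands print different IDs, you have exactly the failure described in the log above. The actual paths come from dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml, so adjust them to your setup.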

Solution

Edit the DataNode's VERSION file:

  [hadoop@localhost ~]$ vi dfs/data/current/VERSION
  #Tue Dec 17 22:22:14 CST 2013
  storageID=DS-871617491-127.0.0.1-50010-1387268095088
  clusterID=CID-630bf54f-7043-46f7-80dc-4013fa91b7b4
  cTime=0
  storageType=DATA_NODE
  layoutVersion=-47
Change the clusterID line to the NameNode's clusterID, so the file reads:
  #Tue Dec 17 22:22:14 CST 2013
  storageID=DS-871617491-127.0.0.1-50010-1387268095088
  clusterID=CID-5cc652c0-b894-4a4a-865e-f429bb7ab426
  cTime=0
  storageType=DATA_NODE
  layoutVersion=-47
Restart Hadoop and the DataNode should start normally.
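The manual edit above can be sketched as a small script: read the clusterID from the NameNode's storage directory and rewrite the matching line in the DataNode's VERSION file. This is a sketch under assumptions, not part of the original post: the function name and the two directory arguments are mine, and it assumes GNU sed (sed -i) and the current/VERSION layout shown above.

```shell
#!/bin/sh
# Copy the NameNode's clusterID into the DataNode's VERSION file.
# Usage: sync_cluster_id <namenode storage dir> <datanode storage dir>
# e.g.   sync_cluster_id /hadoop/dfs/name /hadoop/dfs/data
sync_cluster_id() {
  nn_version="$1/current/VERSION"   # NameNode side (dfs.namenode.name.dir)
  dn_version="$2/current/VERSION"   # DataNode side (dfs.datanode.data.dir)
  # Extract the NameNode's clusterID value.
  cid=$(sed -n 's/^clusterID=//p' "$nn_version")
  # Rewrite only the clusterID line; storageID, cTime, etc. stay intact.
  sed -i "s/^clusterID=.*/clusterID=$cid/" "$dn_version"
}
```

After running it, restart the DataNode; since only the clusterID line changes, the node keeps its storageID and existing block data.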



