
Help: lease error when appending to a file on HDFS

xulijian posted on 2016-3-10 10:24:37
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): Failed to APPEND_FILE /topics/2016-03-10/1/DealOgg2DB for DFSClient_NONMAPREDUCE_1822926576_63 on 10.161.32.89 because this file lease is currently owned by DFSClient_NONMAPREDUCE_114948786_63 on 10.161.32.91
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2933)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2685)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:2985)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2952)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:653)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:421)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy15.append(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.append(ClientNamenodeProtocolTranslatorPB.java:313)
        at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy16.append(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1767)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1803)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1796)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:323)
        at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:319)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:319)
        at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1163)
        at com.asiainfo.cbss.stat.util.HdfsUtil.save(HdfsUtil.java:113)
        at com.asiainfo.cbss.stat.kafka.MessageUtil$.saveHdfsByPartition(DealMessage.scala:1142)
        at com.asiainfo.cbss.stat.streaming.DealOgg2DB$$anonfun$3$$anonfun$apply$3.apply(DealOgg2DB.scala:113)
        at com.asiainfo.cbss.stat.streaming.DealOgg2DB$$anonfun$3$$anonfun$apply$3.apply(DealOgg2DB.scala:111)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$18.apply(RDD.scala:727)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:300)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


From the analyses I've seen online, the cause is that the DataNode failed to close the lease, so when the next request comes in the lease is found to still exist and this exception is thrown. How can this problem be solved? Online advice says to wait 60s for the lease to expire automatically; I'd like to know how to release the lease manually so the program can keep running normally.
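
(For reference: the message above shows the lease is held by DFSClient_NONMAPREDUCE_114948786_63 on 10.161.32.91 while the append comes from a client on 10.161.32.89, i.e. two clients are contending for the same file. If the previous holder is dead, the lease can be recovered manually instead of waiting out the 60s soft limit: DistributedFileSystem exposes recoverLease(Path) for exactly this. A minimal sketch, assuming the FileSystem really is a DistributedFileSystem; the helper name and retry count are illustrative, not from the original code:

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryUtil {

    // Ask the NameNode to start lease recovery on a file and poll until it
    // is released. recoverLease() returns true once the file has been
    // closed, so a short retry loop usually beats waiting out the full
    // 60s soft limit.
    public static boolean recoverLease(DistributedFileSystem dfs, Path file,
                                       int maxRetries) throws IOException {
        for (int i = 0; i < maxRetries; i++) {
            if (dfs.recoverLease(file)) {
                return true; // lease released; safe to re-open for append
            }
            try {
                Thread.sleep(1000L); // give block recovery a moment to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;
    }
}

Note that if the other client is still alive and appending, recoverLease will simply take the lease away from it; in that case the real fix is to make sure only one writer appends to a given file at a time, e.g. one output file per task or partition.)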

2 replies so far

Alkaloid0515 posted on 2016-3-10 12:06:09
Relying on the lease closing automatically isn't a real solution. OP, check whether something else is causing this.

xuanxufeng posted on 2016-3-10 12:18:02
Under what circumstances are you appending?
Generally speaking, Hadoop is meant for uploading whole files; append is usually only enabled in test scenarios.
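
(Side note: on older Hadoop 1.x/0.20 clusters, append also had to be switched on explicitly; Hadoop 2.x supports it by default, so this key is mostly historical. A minimal hdfs-site.xml sketch for such older clusters, for reference only:

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
)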


The following links are for reference only:

Appending data to HDFS (writing data in Hadoop)
http://www.aboutyun.com/forum.php?mod=viewthread&tid=10305
HDFS back to the source: leases, fault handling in the read/write path, and the NameNode's main data structures
http://www.aboutyun.com/forum.php?mod=viewthread&tid=17620

