
Flume: collecting Alibaba Cloud application server logs into a company-intranet HDFS

mjjian0 posted on 2017-3-23 10:54:46
Here's the situation:
Our application servers run on Alibaba Cloud, while the Hadoop cluster sits on our company intranet.
We need to collect the application server logs into the intranet HDFS via Flume.
Flume configuration:
producer.sources = s
producer.channels = c
producer.sinks = r

#source section
producer.sources.s.type = exec
producer.sources.s.command = tail -n +1 -F /usr/local/tomcat/logs/xnCreditLog/xnCreditLog-visit.log

producer.sinks.r.type = hdfs
producer.sinks.r.hdfs.path = hdfs://<public ip>:9000/flume/accesslog/%y-%m-%d
producer.sinks.r.hdfs.filePrefix = accesslog
producer.sinks.r.hdfs.fileSuffix = .log
producer.sinks.r.hdfs.fileType = DataStream
producer.sinks.r.hdfs.writeFormat = Text
producer.sinks.r.hdfs.useLocalTimeStamp = true
producer.sinks.r.hdfs.rollInterval = 0
producer.sinks.r.hdfs.rollSize = 20971520
producer.sinks.r.hdfs.rollCount = 0
producer.sinks.r.hdfs.callTimeout = 60000

#define channel
producer.channels.c.type = memory
producer.channels.c.capacity = 10000

producer.sources.s.channels = c
producer.sinks.r.channel = c
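
For reference, an agent defined with the producer prefix above would typically be started along these lines (the config file name producer.conf is just an example, not from the original post):

bin/flume-ng agent -n producer -c conf -f conf/producer.conf -Dflume.root.logger=INFO,console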


The problem: files are being created on HDFS, but they are empty.
[screenshot]

Errors after the Flume agent starts:

2017-03-22 16:36:24,041 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:95)] Component type: SINK, name: r started
2017-03-22 16:36:24,418 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.HDFSDataStream.configure(HDFSDataStream.java:57)] Serializer = TEXT, UseRawLocalFileSystem = false
2017-03-22 16:36:24,753 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:231)] Creating hdfs://61.130.181.123:9000/flume/accesslog/17-03-22/accesslog.1490171784418.log.tmp
2017-03-22 16:36:24,970 (hdfs-r-call-runner-0) [WARN - org.apache.hadoop.util.NativeCodeLoader.<clinit>(NativeCodeLoader.java:62)] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-03-22 16:36:29,271 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1368)] Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
2017-03-22 16:36:29,273 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1265)] Abandoning BP-1251744720-192.168.0.200-1488999774966:blk_1073747272_6481
2017-03-22 16:36:29,286 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1269)] Excluding datanode DatanodeInfoWithStorage[192.168.0.203:50010,DS-8e5b1f98-b691-4919-b376-d2cb22c566a6,DISK]
2017-03-22 16:36:32,301 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1368)] Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
2017-03-22 16:36:32,302 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1265)] Abandoning BP-1251744720-192.168.0.200-1488999774966:blk_1073747273_6482
2017-03-22 16:36:32,312 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1269)] Excluding datanode DatanodeInfoWithStorage[192.168.0.202:50010,DS-be868a99-6a04-4d5a-a595-5e2e79ed8491,DISK]
2017-03-22 16:36:35,329 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1368)] Exception in createBlockOutputStream
java.net.ConnectException: Connection timed out
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:744)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1533)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1309)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1262)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
2017-03-22 16:36:35,330 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1265)] Abandoning BP-1251744720-192.168.0.200-1488999774966:blk_1073747274_6483
2017-03-22 16:36:35,337 (Thread-7) [INFO - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1269)] Excluding datanode DatanodeInfoWithStorage[192.168.0.201:50010,DS-a59ec2d7-0113-4ed6-8306-715ca1ee875d,DISK]
2017-03-22 16:36:35,353 (Thread-7) [WARN - org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:557)] DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /flume/accesslog/17-03-22/accesslog.1490171784418.log.tmp could only be replicated to 0 nodes instead of minReplication (=1).  There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1571)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3107)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3031)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.ipc.Client.call(Client.java:1475)
        at org.apache.hadoop.ipc.Client.call(Client.java:1412)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
2017-03-22 16:36:35,354 (hdfs-r-call-runner-3) [WARN - org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2072)] Error while syncing

Replies (5)

easthome001 replied on 2017-3-23 11:27:51

A few things to check:
1. Is the firewall disabled?
2. Your roll settings:
producer.sinks.r.hdfs.rollInterval = 0
producer.sinks.r.hdfs.rollSize = 20971520
producer.sinks.r.hdfs.rollCount = 0
The roll size you set is rather large; try something smaller first. Writing that much per file may time out.
3. What is the HDFS replication factor, and how many datanodes do you have?
4. Run jps to check whether the processes are still alive.
5. Do the datanodes have enough free space?
(A command sketch for these checks follows below.)
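
One way to run those checks from the shell (a minimal sketch; service names depend on your distro and Hadoop layout):

# 1. firewall status on the HDFS nodes (iptables on CentOS 6; use firewalld/ufw elsewhere)
service iptables status
# 3. replication factor, plus datanode count and free space (also covers point 5)
hdfs getconf -confKey dfs.replication
bin/hadoop dfsadmin -report
# 4. are the NameNode/DataNode/Flume JVMs still running?
jps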




mjjian0 replied on 2017-3-23 11:38:54
Quoting easthome001 (2017-3-23 11:27):
    A few things to check:
    1. Is the firewall disabled?
    2. producer.sinks.r.hdfs.rollInterval = 0

I tried that; the same problem remains.

Comment (2017-3-23 11:40):
Please post the screenshot above so everyone can see it and help diagnose.

nextuser replied on 2017-3-23 19:44:59
The data isn't getting uploaded. First check whether there is enough space:
bin/hadoop dfsadmin -report

Second, if it isn't a space problem, then it's a network problem, whether that's the firewall, the network path, or a datanode being down.
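
A quick way to tell those causes apart from the cloud host (the datanode addresses and port 50010 come straight from the log above):

# the namenode RPC port is clearly reachable (the .tmp file gets created), so test a datanode transfer port:
telnet 192.168.0.201 50010
# or, if telnet is not installed:
nc -zv 192.168.0.201 50010

If these hang and time out, the datanodes' private 192.168.0.x addresses are simply not routable from the public network, which matches the repeated java.net.ConnectException entries in the log.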


韩利鹏 replied on 2017-9-30 17:51:29
Because only your namenode is exposed on the public network: when a client uploads a file it must also talk to the datanodes directly, and it can't reach them. That is exactly why you see the behavior above: the directory and the file get created (those operations only involve the namenode), but no data can be written into them.

You can relay instead: run a second Flume agent inside your intranet and connect the two agents over avro. That solves it.

This question is old, so you've surely fixed it long ago; my answer is for anyone who hits the same problem later. I ran into it myself and spent a whole day tracking it down. If you still have questions, ping me in the QQ group: 246068961
