
How can the Java API tell whether HDFS is connected, or can be connected to?

Last edited by czs208112 on 2017-10-29 19:15

I wrote a GUI tool for managing HDFS and ran into an annoying problem: when the tool connects to an HDFS that doesn't exist, it hangs, and the log shows a thread retrying the connection at regular intervals.

[mw_shl_code=text,true][QC] INFO [main] org.apache.hadoop.ipc.Client$Connection.handleConnectionTimeout(838) | Retrying connect to server: node1/192.168.128.140:9000. Already tried 0 time(s); maxRetries=45
[QC] INFO [main] org.apache.hadoop.ipc.Client$Connection.handleConnectionTimeout(838) | Retrying connect to server: node1/192.168.128.140:9000. Already tried 1 time(s); maxRetries=45
... (same line repeated for attempts 2 through 28) ...
[QC] INFO [main] org.apache.hadoop.ipc.Client$Connection.handleConnectionTimeout(838) | Retrying connect to server: node1/192.168.128.140:9000. Already tried 29 time(s); maxRetries=45
[/mw_shl_code]

My rough analysis: even for a nonexistent HDFS cluster,
fileSystem = FileSystem.get(URI.create(url), config);
returns a non-null fileSystem object, because the call is lazy and does not actually contact the NameNode. FileSystem has no method or field that indicates whether the connection succeeded, so later calls on fileSystem hit the connection timeout and keep retrying.
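Since `FileSystem.get` doesn't touch the network, one common workaround (not from this thread, just a cheap pre-check) is to probe the NameNode's RPC port with a raw TCP connect and a short timeout before creating the FileSystem at all. A minimal sketch; the host and port are taken from the log above:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class HdfsPreCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    // Note: this only proves the port is open, not that a healthy NameNode
    // is listening behind it.
    static boolean portReachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false; // refused, unreachable, or timed out
        }
    }

    public static void main(String[] args) {
        // e.g. the NameNode RPC endpoint from the log above
        System.out.println(portReachable("192.168.128.140", 9000, 2000));
    }
}
```

If the probe fails you can show an error immediately instead of ever entering the Hadoop IPC retry loop.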

I tried running the fileSystem calls in a separate thread, with a timer that shuts the FileSystem down after 5 s, but that doesn't seem to work well.

Doesn't the official API have a method to test the connection, or one that returns the connection state? I also looked at the eclipse-hadoop plugin, and it seems to behave the same way: connecting to a nonexistent HDFS just keeps retrying, except it doesn't block the UI.
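The separate-thread-plus-timer idea can be made more robust with a `Future`: submit the potentially blocking call to an executor and give up after a deadline. A minimal generic sketch; the sleeping callable is a stand-in for a real HDFS call such as `fs.exists(new Path("/"))` (hypothetical here, so the sketch doesn't depend on the Hadoop jars):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ConnectivityProbe {
    // Runs a potentially blocking probe; a timeout or failure means "unreachable".
    static boolean probeWithTimeout(Callable<Boolean> probe, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> f = pool.submit(probe);
            return f.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | InterruptedException | ExecutionException e) {
            return false;
        } finally {
            // Interrupt the worker. Caveat: a stuck Hadoop RPC may ignore the
            // interrupt, so the thread can linger even after we've given up.
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Simulated hung RPC: never completes within the 500 ms budget.
        boolean reachable = probeWithTimeout(() -> {
            Thread.sleep(60_000);
            return true;
        }, 500);
        System.out.println("reachable=" + reachable);
    }
}
```

This bounds how long the UI waits, but as the caveat in the code notes, it doesn't stop the underlying IPC retries; for that, see the configuration answer below in the thread.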


4 replies

czs208112 posted on 2017-10-29 22:00:38
Solved. With the default configuration the IPC connect times out after 10 s and retries 45 times, so I would have been blocked for about 460 s.
I changed it to a 3 s timeout with 1 retry, so it blocks only about 6 s:
[mw_shl_code=java,true]config.set("ipc.client.connect.timeout", "3000");
config.set("ipc.client.connect.max.retries.on.timeouts", "1");[/mw_shl_code]

desehawk posted on 2017-10-29 19:51:20

Try the method below.

[image: hadoop path.jpg]

Link:
https://hadoop.apache.org/docs/s ... /fs/FileSystem.html

You can pass the root path.

czs208112 posted on 2017-10-29 20:44:35

Thanks, but the exists method still doesn't work: it seems any call that tries to contact a nonexistent HDFS address blocks there. Really frustrating.

czs208112 posted on 2017-10-29 20:48:43
I guessed checkPath(Path path) might work, but oddly, the method is in the API docs yet not in the jar. Environment: Hadoop 2.7.4.
