
Spark job fails with java.net.ConnectException

haorengoodman posted on 2015-1-9 19:22:09
Whenever I run a Spark job, it throws a connection-refused exception. The details are as follows:
15/01/09 03:10:23 INFO storage.MemoryStore: ensureFreeSpace(163705) called with curMem=0, maxMem=280248975
15/01/09 03:10:23 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 159.9 KB, free 267.1 MB)
15/01/09 03:10:23 INFO storage.MemoryStore: ensureFreeSpace(23010) called with curMem=163705, maxMem=280248975
15/01/09 03:10:23 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.5 KB, free 267.1 MB)
15/01/09 03:10:23 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on hadoop-001:52181 (size: 22.5 KB, free: 267.2 MB)
15/01/09 03:10:23 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
15/01/09 03:10:23 INFO spark.SparkContext: Created broadcast 0 from textFile at FirstSparkDemo.java:29
Exception in thread "main" java.net.ConnectException: Call From hadoop-001/192.168.12.211 to hadoop-002:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
        at org.apache.hadoop.ipc.Client.call(Client.java:1414)
        at org.apache.hadoop.ipc.Client.call(Client.java:1363)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
        at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
        at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
        at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
        at org.apache.hadoop.fs.Globber.glob(Globber.java:248)
        at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1642)
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:257)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:205)
        at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:203)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1351)
        at org.apache.spark.rdd.RDD.reduce(RDD.scala:867)
        at org.apache.spark.api.java.JavaRDDLike$class.reduce(JavaRDDLike.scala:346)
        at org.apache.spark.api.java.JavaRDD.reduce(JavaRDD.scala:32)
        at com.spark.gt.myspark.FirstSparkDemo.main(FirstSparkDemo.java:49)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:358)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
        at org.apache.hadoop.ipc.Client.call(Client.java:1381)
        ... 49 more

I already modified hdfs-site.xml following the post http://www.aboutyun.com/thread-8321-1-1.html, but after restarting the cluster and rerunning the Spark job, the same exception still occurs.
Looking for a solution!
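
Since the exception says the call from hadoop-001 to hadoop-002:8020 (the NameNode RPC port) was refused, a first check is whether the NameNode is actually up on hadoop-002 and listening on that port. A minimal diagnostic sketch, assuming a standard Hadoop 2.x deployment (hostnames taken from the log above):

  # On hadoop-002: is the NameNode process running at all?
  jps | grep NameNode

  # On hadoop-002: is anything listening on RPC port 8020?
  netstat -tlnp | grep 8020

  # From hadoop-001: is hadoop-002:8020 reachable over the network?
  telnet hadoop-002 8020

If jps shows no NameNode, the service is down or running on a different host; if netstat shows 8020 bound only to 127.0.0.1, remote clients will be refused even though the process is up.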



Replies (5)

tntzbzc replied on 2015-1-9 19:38:52
Which node did you modify it on?

Make sure hdfs-site.xml is identical on all nodes. For example, the corrected files can be pushed from one node to the others as shown below.
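
A sketch of one way to do that; the host names and the $HADOOP_HOME configuration path are assumptions to adjust for your cluster:

  # Run on the node whose configuration is known to be correct.
  # Host names and $HADOOP_HOME are assumed; adjust as needed.
  for host in hadoop-001 hadoop-002; do
      scp $HADOOP_HOME/etc/hadoop/hdfs-site.xml \
          $HADOOP_HOME/etc/hadoop/core-site.xml \
          $host:$HADOOP_HOME/etc/hadoop/
  done
  # Restart HDFS so the changes take effect.
  $HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/start-dfs.sh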

jdayco replied on 2015-1-9 21:20:18
Running it directly from the development environment can cause problems. Please package it as a jar and try spark-submit.

haorengoodman replied on 2015-1-9 21:56:28
Quoting jdayco (2015-1-9 21:20):
Running it directly from the development environment can cause problems. Please package it as a jar and try spark-submit.

./spark-submit --class com.spark.gt.myspark.FirstSparkDemo --master spark://hadoop-002:7077 /root/myspark-0.0.1-SNAPSHOT.jar
Already tried that; same problem as above.


liyo replied on 2015-1-9 23:17:30
Are you sure the datanodes are all fine? One quick check is sketched below.
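
The dfsadmin report lists live and dead datanodes along with their capacity:

  # Prints cluster status as seen by the NameNode, including live/dead datanodes
  hdfs dfsadmin -report

Note that this command itself goes through the NameNode, so if it also fails with connection refused, the problem is the NameNode endpoint rather than the datanodes.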

easthome001 replied on 2015-1-9 23:59:59
Please provide more information; this error has too many possible causes: firewall, ports, configuration, and so on. I suggest checking each of these, as sketched below.
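
A rough checklist along those lines; the firewall commands assume a CentOS 6-era system, so adjust for your distro:

  # 1. Firewall: is iptables blocking 8020? Stop it temporarily to rule it out.
  service iptables status
  service iptables stop

  # 2. Port: which address is the NameNode RPC actually bound to?
  netstat -tlnp | grep 8020

  # 3. Config: which filesystem URI is the cluster really configured with?
  hdfs getconf -confKey fs.defaultFS

If fs.defaultFS points at a host other than hadoop-002, the hdfs:// URI used in FirstSparkDemo.java may simply be aimed at the wrong NameNode.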
