
[Help] Cluster unusable after configuring Spark HA in ZooKeeper mode

唐运 posted on 2014-12-25 13:46:24:
[root@nn1 ~]# spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/12/25 13:40:47 INFO spark.SecurityManager: Changing view acls to: root
14/12/25 13:40:47 INFO spark.SecurityManager: Changing modify acls to: root
14/12/25 13:40:47 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
14/12/25 13:40:47 INFO spark.HttpServer: Starting HTTP Server
14/12/25 13:40:47 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/12/25 13:40:47 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:57127
14/12/25 13:40:47 INFO util.Utils: Successfully started service 'HTTP class server' on port 57127.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.1.1
      /_/

Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.6.0_33)
Type in expressions to have them evaluated.
Type :help for more information.
14/12/25 13:40:52 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to '/usr/lib/hbase/hbase-0.94.2-cdh4.2.0-security.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.0.jar').
This is deprecated in Spark 1.0+.

Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
        
14/12/25 13:40:52 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to '/usr/lib/hbase/hbase-0.94.2-cdh4.2.0-security.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.0.jar' as a work-around.
14/12/25 13:40:52 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to '/usr/lib/hbase/hbase-0.94.2-cdh4.2.0-security.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.0.jar' as a work-around.
14/12/25 13:40:52 INFO spark.SecurityManager: Changing view acls to: root
14/12/25 13:40:52 INFO spark.SecurityManager: Changing modify acls to: root
14/12/25 13:40:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
14/12/25 13:40:52 INFO slf4j.Slf4jLogger: Slf4jLogger started
14/12/25 13:40:53 INFO Remoting: Starting remoting
14/12/25 13:40:53 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@nn1.9961.bj:27967]
14/12/25 13:40:53 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@nn1.9961.bj:27967]
14/12/25 13:40:53 INFO util.Utils: Successfully started service 'sparkDriver' on port 27967.
14/12/25 13:40:53 INFO spark.SparkEnv: Registering MapOutputTracker
14/12/25 13:40:53 INFO spark.SparkEnv: Registering BlockManagerMaster
14/12/25 13:40:53 INFO storage.DiskBlockManager: Created local directory at /data/tmp1/spark-local-20141225134053-fe24
14/12/25 13:40:53 INFO storage.DiskBlockManager: Created local directory at /data/tmp2/spark-local-20141225134053-fc5f
14/12/25 13:40:53 INFO util.Utils: Successfully started service 'Connection manager for block manager' on port 43596.
14/12/25 13:40:53 INFO network.ConnectionManager: Bound socket to port 43596 with id = ConnectionManagerId(nn1.9961.bj,43596)
14/12/25 13:40:53 INFO storage.MemoryStore: MemoryStore started with capacity 517.5 MB
14/12/25 13:40:53 INFO storage.BlockManagerMaster: Trying to register BlockManager
14/12/25 13:40:53 INFO storage.BlockManagerMasterActor: Registering block manager nn1.9961.bj:43596 with 517.5 MB RAM, BlockManagerId(<driver>, nn1.9961.bj, 43596, 0)
14/12/25 13:40:53 INFO storage.BlockManagerMaster: Registered BlockManager
14/12/25 13:40:53 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-6391a70b-65f6-4ee4-88db-1e7607e02863
14/12/25 13:40:53 INFO spark.HttpServer: Starting HTTP Server
14/12/25 13:40:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/12/25 13:40:53 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:36766
14/12/25 13:40:53 INFO util.Utils: Successfully started service 'HTTP file server' on port 36766.
14/12/25 13:40:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
14/12/25 13:40:53 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
14/12/25 13:40:53 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
14/12/25 13:40:53 INFO ui.SparkUI: Started SparkUI at http://nn1.9961.bj:4040
14/12/25 13:40:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/25 13:40:54 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.101:7077...
14/12/25 13:40:54 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
14/12/25 13:40:54 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.102:7077...
14/12/25 13:40:54 INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

scala> 14/12/25 13:41:14 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.101:7077...
14/12/25 13:41:14 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.102:7077...
14/12/25 13:41:34 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.101:7077...
14/12/25 13:41:34 INFO client.AppClient$ClientActor: Connecting to master spark://192.168.154.102:7077...
14/12/25 13:41:54 ERROR cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
14/12/25 13:41:54 ERROR scheduler.TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
You have new mail in /var/spool/mail/root
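
Incidentally, the SPARK_CLASSPATH deprecation warning in this log appears unrelated to the HA failure, but it names its own fix. A minimal sketch of the Spark 1.x replacements, reusing the exact jar paths from the warning (the executor setting goes in the standard conf/spark-defaults.conf):

# conf/spark-defaults.conf -- executor classpath, replacing SPARK_CLASSPATH:
spark.executor.extraClassPath /usr/lib/hbase/hbase-0.94.2-cdh4.2.0-security.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.0.jar

# Driver classpath, passed on the command line as the warning suggests:
spark-shell --driver-class-path /usr/lib/hbase/hbase-0.94.2-cdh4.2.0-security.jar:/usr/lib/zookeeper/zookeeper-3.4.5-cdh4.2.0.jar:/usr/lib/hadoop-0.20-mapreduce/hadoop-core-2.0.0-mr1-cdh4.2.0.jar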


Starting spark-shell (or any Spark application) fails with the error above, and I can't figure out why. Manually failing over between the active and standby Spark masters works fine.
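
For context, a working ZooKeeper-based standalone HA setup has two pieces: every master is started with the same ZooKeeper recovery settings, and clients list all masters in a single comma-separated URL so the driver can reach whichever one is currently active. A minimal sketch, where the ZooKeeper hosts zk1/zk2/zk3 and the /spark znode are placeholders rather than values from this cluster:

# conf/spark-env.sh, identical on both master nodes (ZooKeeper quorum is hypothetical):
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181,zk3:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"

# Clients name every master once, comma-separated after a single spark:// prefix:
spark-shell --master spark://192.168.154.101:7077,192.168.154.102:7077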

Replies (2)

bioger_hit replied on 2014-12-25 15:54:35:
First make sure this address is actually reachable:
spark://192.168.154.101:7077

Also check that the configuration files are identical across all nodes.
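
A quick way to check both points from the client machine; a sketch assuming nc and curl are installed and the master web UI is on its default port 8080:

# Probe the master RPC ports from the host running spark-shell:
nc -zv 192.168.154.101 7077
nc -zv 192.168.154.102 7077

# The master web UI reports whether each master is ALIVE or STANDBY:
curl -s http://192.168.154.101:8080/ | grep -oE 'ALIVE|STANDBY'
curl -s http://192.168.154.102:8080/ | grep -oE 'ALIVE|STANDBY'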


desehawk replied on 2014-12-25 15:58:02:
Check your hosts configuration, and make sure that

spark://192.168.154.101:7077

is replaced with

spark://hostname:7077
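
This is a likely culprit: a standalone master only accepts messages addressed with the exact host string it bound to, and the log shows the masters registered under hostnames (nn1.9961.bj), so a driver dialing them by raw IP can be silently ignored, which matches the "All masters are unresponsive" error above. A minimal sketch, assuming nn1.9961.bj is 192.168.154.101, that the second master's hostname is nn2.9961.bj (a guess), and that /etc/hosts is kept identical on every node and client:

# /etc/hosts entries on every machine (second hostname is hypothetical):
192.168.154.101  nn1.9961.bj
192.168.154.102  nn2.9961.bj

# Connect by hostname, matching what the masters actually bound to:
spark-shell --master spark://nn1.9961.bj:7077,nn2.9961.bj:7077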
