
Spark HA configuration with ZooKeeper

linbowei posted on 2016-5-17 14:57:53
langke93 posted on 2016-5-18 16:23:25

OP, please double-check the content below; did something get left out?

Note:

A Master failure only affects the scheduling of new applications; applications that were already running during the outage are not affected.
Because multiple Masters are involved, submitting an application changes slightly, since the application needs to know the IP address and port of the current Master. This HA scheme handles that very simply: just point the SparkContext at a list of Masters, e.g. spark://host1:port1,host2:port2,host3:port3, and the application will try the Masters in the list in turn (see the submission sketch below).
The HA scheme itself is simple to use: first start a ZooKeeper cluster, then start a Master on each of several nodes. These nodes must share the same ZooKeeper configuration (ZooKeeper URL and directory); the concrete settings are sketched after the property table below.
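As a minimal submission sketch, assume two Masters on hypothetical hosts host1 and host2 listening on the default port 7077; the comma-separated Master list goes straight into the --master URL:

    # Hypothetical hosts and placeholder class/jar; replace with your own.
    # The driver walks the list and registers with whichever Master
    # ZooKeeper has currently elected as leader.
    ./bin/spark-submit \
        --master spark://host1:7077,host2:7077 \
        --class com.example.YourApp \
        /path/to/your-app.jar

The same spark://host1:7077,host2:7077 string can equally be passed to spark-shell or set programmatically via SparkConf.setMaster.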

System property               Meaning
spark.deploy.recoveryMode     Set to ZOOKEEPER to enable standby Master recovery mode (default: NONE).
spark.deploy.zookeeper.url    The ZooKeeper cluster url (e.g., 192.168.1.100:2181,192.168.1.101:2181).
spark.deploy.zookeeper.dir    The directory in ZooKeeper to store recovery state (default: /spark).
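These properties are normally handed to each Master through SPARK_DAEMON_JAVA_OPTS in conf/spark-env.sh. A minimal sketch, assuming a hypothetical three-node ZooKeeper ensemble on 192.168.1.100-102 and the default recovery directory:

    # conf/spark-env.sh -- must be identical on every Master node
    # (ZooKeeper addresses below are examples; use your own ensemble)
    SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER"
    SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.deploy.zookeeper.url=192.168.1.100:2181,192.168.1.101:2181,192.168.1.102:2181"
    SPARK_DAEMON_JAVA_OPTS="$SPARK_DAEMON_JAVA_OPTS -Dspark.deploy.zookeeper.dir=/spark"
    export SPARK_DAEMON_JAVA_OPTS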

Masters can be added or removed at any time. If a failover occurs, the new Master contacts all previously registered applications and Workers to inform them of the change of Master.

Note: do not define the Master in conf/spark-env.sh any more; define it directly in the application instead. The setting involved is export SPARK_MASTER_IP=bigdata001: leave it unset or empty, otherwise multiple Masters cannot be started (see the sketch below).
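A sketch of what that means on each Master node, assuming the bigdata001 host name used above; the point is only that SPARK_MASTER_IP stays unset in spark-env.sh while each candidate node still runs its own Master:

    # conf/spark-env.sh on every node: do NOT pin the Master host
    # export SPARK_MASTER_IP=bigdata001    <- remove or comment out this line

    # then start a Master on each candidate node; ZooKeeper elects one leader
    ./sbin/start-master.sh

Each Master's web UI (port 8080 by default) should then report its status as either ALIVE (the elected leader) or STANDBY.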



linbowei posted on 2016-5-18 16:28:27
langke93 posted on 2016-5-18 16:23
OP, please double-check the content below; did something get left out?

I followed the official docs' configuration; nothing is missing! You can take a look at the configuration.


linbowei posted on 2016-5-18 18:13:34
langke93 posted on 2016-5-18 16:23
OP, please double-check the content below; did something get left out?

16/05/18 18:07:01 INFO Worker: hadoopspark01:7077 Disassociated !
16/05/18 18:07:01 ERROR Worker: Connection to master failed! Waiting for master to reconnect...
16/05/18 18:07:01 INFO Worker: Not spawning another attempt to register with the master, since there is an attempt scheduled already.
16/05/18 18:07:01 WARN Worker: Failed to connect to master hadoopspark01:7077
akka.actor.ActorNotFound: Actor not found for: ActorSelection[Anchor(akka.tcp://sparkMaster@hadoopspark01:7077/), Path(/user/Master)]
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:65)
    at akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:63)
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
    at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:74)
    at akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
    at akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:73)
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
    at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:266)
    at akka.actor.EmptyLocalActorRef.specialHandle(ActorRef.scala:533)
    at akka.actor.DeadLetterActorRef.specialHandle(ActorRef.scala:569)
    at akka.actor.DeadLetterActorRef.$bang(ActorRef.scala:559)
    at akka.remote.RemoteActorRefProvider$RemoteDeadLetterActorRef.$bang(RemoteActorRefProvider.scala:91)
    at akka.actor.ActorRef.tell(ActorRef.scala:123)
    at akka.dispatch.Mailboxes$$anon$1$$anon$2.enqueue(Mailboxes.scala:44)
    at akka.dispatch.QueueBasedMessageQueue$class.cleanUp(Mailbox.scala:439)
    at akka.dispatch.UnboundedMailbox$MessageQueue.cleanUp(Mailbox.scala:559)
    at akka.dispatch.Mailbox.cleanUp(Mailbox.scala:310)
    at akka.dispatch.MessageDispatcher.unregister(AbstractDispatcher.scala:202)
    at akka.dispatch.MessageDispatcher.detach(AbstractDispatcher.scala:138)
    at akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$finishTerminate(FaultHandling.scala:212)
    at akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
    at akka.actor.ActorCell.terminate(ActorCell.scala:369)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
16/05/18 18:07:13 INFO Worker: Retrying connection to master (attempt # 1)
16/05/18 18:07:13 INFO Worker: Connecting to master hadoopspark01:7077...
16/05/18 18:07:25 INFO Worker: Retrying connection to master (attempt # 2)
16/05/18 18:07:25 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[sparkWorker-akka.actor.default-dispatcher-2,5,main]
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@338b180b rejected from java.util.concurrent.ThreadPoolExecutor@70d7949c[Running, pool size = 1, active threads = 0, queued tasks = 0, completed tasks = 2]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110)
    at org.apache.spark.deploy.worker.Worker$$anonfun$org$apache$spark$deploy$worker$Worker$$reregisterWithMaster$1.apply$mcV$sp(Worker.scala:269)
    at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:1119)
    at org.apache.spark.deploy.worker.Worker.org$apache$spark$deploy$worker$Worker$$reregisterWithMaster(Worker.scala:234)
    at org.apache.spark.deploy.worker.Worker$$anonfun$receive$1.applyOrElse(Worker.scala:521)
    at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$processMessage(AkkaRpcEnv.scala:177)
    at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1$$anonfun$applyOrElse$4.apply$mcV$sp(AkkaRpcEnv.scala:126)
    at org.apache.spark.rpc.akka.AkkaRpcEnv.org$apache$spark$rpc$akka$AkkaRpcEnv$$safelyCall(AkkaRpcEnv.scala:197)
    at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1$$anonfun$receiveWithLogging$1.applyOrElse(AkkaRpcEnv.scala:125)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:59)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
    at org.apache.spark.rpc.akka.AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1.aroundReceive(AkkaRpcEnv.scala:92)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
16/05/18 18:07:25 INFO ShutdownHookManager: Shutdown hook called
Here is the log; please take a look and help me figure out where the problem is.


xw2016 posted on 2016-6-1 12:55:13
I also set up a ZooKeeper-based standalone HA cluster; I'll go back and check whether I hit this problem too.

linbowei posted on 2016-6-1 15:34:50
xw2016 posted on 2016-6-1 12:55
I also set up a ZooKeeper-based standalone HA cluster; I'll go back and check whether I hit this problem too.

http://blog.csdn.net/java_0605/article/details/51498208
Take a look at this: it worked for me on VMware, but it failed on OpenStack virtual machines.


xw2016 posted on 2016-6-1 16:25:21
linbowei posted on 2016-6-1 15:34
http://blog.csdn.net/java_0605/article/details/51498208
Take a look at this: it worked for me on VMware, but on ope ...

I read your article: Possible gotcha: If you have multiple Masters in your cluster but fail to correctly configure the Masters to use ZooKeeper, the Masters will fail to discover each other and think they’re all leaders. This will not lead to a healthy cluster state (as all Masters will schedule independently).

It says here that the Masters cannot discover each other, which sounds like a communication problem; check the IP addresses and similar settings. A quick ZooKeeper check is sketched below.
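For reference, one way to check whether both Masters actually registered with ZooKeeper, assuming the default recovery directory /spark and one of the ZooKeeper addresses from the configuration above (the znode names are how Spark's ZooKeeper recovery mode lays out its state in the versions I have seen, so treat them as an assumption):

    # Connect to any node of the ZooKeeper ensemble (address is an example),
    # then run the ls commands from the interactive zkCli prompt.
    ./bin/zkCli.sh -server 192.168.1.100:2181
    ls /spark
    ls /spark/leader_election

/spark is expected to contain leader_election and master_status, with roughly one ephemeral znode per live Master under leader_election. If a running Master never shows up there, it is most likely not picking up the ZooKeeper settings (e.g. its spark-env.sh differs), which matches the gotcha quoted above.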