
小小布衣's personal space — https://www.aboutyun.com/?8159


Spark job runs in spark-shell, but fails with spark-submit

2038 views | 2015-1-8 17:45 | spark, cdh, wordcount, spark-submit

Could someone please help me with this problem?
1. My own WordCount runs fine locally and in spark-shell, but fails under spark-submit.
2. A Scala version mismatch is ruled out: both my local machine and the cluster use Scala 2.10.4.
3. Running the official example with spark-submit fails with exactly the same error as my own WordCount.
4. My submit command is: ./spark-submit --master spark://10.1.0.141:7077  --class org.apache.spark.examples.SparkPi  --name Spark-Pi --executor-memory 500M  --driver-memory 512M  lib/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar 1000 (since this uses the bundled examples jar, there should be no jar conflict either, yet the error is the same.)
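One way to start narrowing this down (a sketch, not a confirmed fix): with the standalone master, the spark:// URL passed to --master generally has to match the URL shown at the top of the master's web UI character for character, and a hostname-vs-IP mismatch, or a Spark version mismatch between client and master, is a common cause of "All masters are unresponsive". The snippet below just extracts the URL the driver actually tried from a copied log line, so it can be compared against the web UI; the log line is taken from the output further down, and the grep pattern is my own.

```shell
# Sketch: pull the master URL the driver tried to reach out of the
# submit output, to compare character-for-character with the spark://
# URL shown on the master's web UI (default http://<master>:8080).
log_line='15/01/08 15:19:59 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...'
echo "$log_line" | grep -o 'spark://[0-9.]*:[0-9]*'
# -> spark://10.1.0.141:7077
```

If the two URLs differ only in hostname vs. IP, re-submitting with the exact URL from the web UI is worth trying before anything else.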

15/01/08 15:19:58 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@n5.china.com:20545]
15/01/08 15:19:58 INFO Utils: Successfully started service 'sparkDriver' on port 20545.
15/01/08 15:19:58 INFO SparkEnv: Registering MapOutputTracker
15/01/08 15:19:58 INFO SparkEnv: Registering BlockManagerMaster
15/01/08 15:19:58 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20150108151958-cadb
15/01/08 15:19:58 INFO Utils: Successfully started service 'Connection manager for block manager' on port 63094.
15/01/08 15:19:58 INFO ConnectionManager: Bound socket to port 63094 with id = ConnectionManagerId(n5.china.com,63094)
15/01/08 15:19:58 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
15/01/08 15:19:58 INFO BlockManagerMaster: Trying to register BlockManager
15/01/08 15:19:58 INFO BlockManagerMasterActor: Registering block manager n5.china.com:63094 with 265.4 MB RAM
15/01/08 15:19:58 INFO BlockManagerMaster: Registered BlockManager
15/01/08 15:19:58 INFO HttpFileServer: HTTP File server directory is /tmp/spark-3b796d21-db73-4aad-84bc-23cd0ce3fab7
15/01/08 15:19:58 INFO HttpServer: Starting HTTP Server
15/01/08 15:19:58 INFO Utils: Successfully started service 'HTTP file server' on port 38860.
15/01/08 15:19:58 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/01/08 15:19:58 INFO SparkUI: Started SparkUI at http://n5.china.com:4040
15/01/08 15:19:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/01/08 15:19:59 INFO SparkContext: Added JAR file:/opt/cloudera/parcels/CDH-5.2.0-1.cdh5.2.0.p0.36/jars/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar at http://10.1.0.182:38860/jars/spark-examples-1.1.0-cdh5.2.0-hadoop2.5.0-cdh5.2.0.jar with timestamp 1420701599601
15/01/08 15:19:59 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...
15/01/08 15:19:59 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/01/08 15:19:59 INFO SparkContext: Starting job: reduce at SparkPi.scala:35
15/01/08 15:19:59 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 1000 output partitions (allowLocal=false)
15/01/08 15:19:59 INFO DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/01/08 15:19:59 INFO DAGScheduler: Parents of final stage: List()
15/01/08 15:19:59 INFO DAGScheduler: Missing parents: List()
15/01/08 15:19:59 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/01/08 15:20:00 INFO MemoryStore: ensureFreeSpace(1728) called with curMem=0, maxMem=278302556
15/01/08 15:20:00 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1728.0 B, free 265.4 MB)
15/01/08 15:20:00 INFO MemoryStore: ensureFreeSpace(1125) called with curMem=1728, maxMem=278302556
15/01/08 15:20:00 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1125.0 B, free 265.4 MB)
15/01/08 15:20:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on n5.china.com:63094 (size: 1125.0 B, free: 265.4 MB)
15/01/08 15:20:00 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/01/08 15:20:00 INFO DAGScheduler: Submitting 1000 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:31)
15/01/08 15:20:00 INFO TaskSchedulerImpl: Adding task set 0.0 with 1000 tasks
15/01/08 15:20:15 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:20:19 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...
15/01/08 15:20:30 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:20:39 INFO AppClient$ClientActor: Connecting to master spark://10.1.0.141:7077...
15/01/08 15:20:45 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/01/08 15:20:59 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/01/08 15:20:59 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
15/01/08 15:20:59 INFO TaskSchedulerImpl: Cancelling stage 0
15/01/08 15:20:59 INFO DAGScheduler: Failed to run reduce at SparkPi.scala:35
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[root@n5 bin]# 

