
Spark error: Error while invoking RpcHandler#receive() on RPC

grinsky posted on 2017-3-7 10:23:16
The command I ran is as follows:
[root@master ~]# spark2-submit  --master yarn-client  --class org.apache.spark.examples.SparkPi   /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_2.11-2.0.0.cloudera1.jar 100
It sometimes succeeds, but most of the time it fails with the following error:

17/03/07 10:14:26 ERROR cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Sending RequestExecutors(0,0,Map(),Set()) to AM was unsuccessful
java.lang.RuntimeException: java.io.InvalidClassException: org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages$RequestExecutors; local class incompatible: stream classdesc serialVersionUID = 8358056598889300671, local class serialVersionUID = 7786662598091731105

System environment:
CentOS 6.5 x64, JDK 1.8, Spark 2.0, CDH 5.9, CM 5.9
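Both the InvalidClassException above (mismatched serialVersionUID for RequestExecutors) and the ClassNotFoundException for CoarseGrainedClusterMessages$RetrieveSparkProps$ further down in the full log typically mean that the two ends of the RPC connection are resolving Spark's message classes from different builds, for example because the driver and the executors see different Spark jars, or because an application jar on the classpath shadows them. A quick sanity check is to compare the Spark jars actually present on the driver host and on a worker. A minimal sketch (the jars/ directory under the SPARK2 parcel is an assumption based on the standard Spark 2 layout, and datanode1 is the worker hostname that appears in the log):

[mw_shl_code=bash,true]# Compare a core Spark jar between the driver host and a worker node.
# The parcel path matches the submit command above; the jars/ subdirectory
# is an assumption, adjust it to your actual layout.
md5sum /opt/cloudera/parcels/SPARK2/lib/spark2/jars/spark-core_2.11*.jar
ssh datanode1 'md5sum /opt/cloudera/parcels/SPARK2/lib/spark2/jars/spark-core_2.11*.jar'
[/mw_shl_code]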

Full error output:
[mw_shl_code=bash,true]Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
17/03/07 10:23:50 INFO spark.SparkContext: Running Spark version 2.0.0.cloudera1
17/03/07 10:23:51 INFO spark.SecurityManager: Changing view acls to: root
17/03/07 10:23:51 INFO spark.SecurityManager: Changing modify acls to: root
17/03/07 10:23:51 INFO spark.SecurityManager: Changing view acls groups to:
17/03/07 10:23:51 INFO spark.SecurityManager: Changing modify acls groups to:
17/03/07 10:23:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/03/07 10:23:51 INFO util.Utils: Successfully started service 'sparkDriver' on port 50523.
17/03/07 10:23:51 INFO spark.SparkEnv: Registering MapOutputTracker
17/03/07 10:23:51 INFO spark.SparkEnv: Registering BlockManagerMaster
17/03/07 10:23:51 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-c7b2c027-3e1f-42ec-9b67-7f6763892140
17/03/07 10:23:51 INFO memory.MemoryStore: MemoryStore started with capacity 408.9 MB
17/03/07 10:23:51 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/03/07 10:23:51 INFO util.log: Logging initialized @1628ms
17/03/07 10:23:51 INFO server.Server: jetty-9.2.z-SNAPSHOT
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2f4854d6{/jobs,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@61d9efe0{/jobs/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7e70bd39{/jobs/job,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@e6516e{/jobs/job/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6de54b40{/stages,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@43ed0ff3{/stages/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@388ffbc2{/stages/stage,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@a50b09c{/stages/stage/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4da855dd{/stages/pool,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6691490c{/stages/pool/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2187fff7{/storage,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e5c7f0b{/storage/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@21d5c1a0{/storage/rdd,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4de025bf{/storage/rdd/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@538613b3{/environment,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1eef9aef{/environment/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11389053{/executors,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5db99216{/executors/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3ec11999{/executors/threadDump,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c1bd44c{/executors/threadDump/json,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@9f46d94{/static,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18cc679e{/,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e77b8cf{/api,null,AVAILABLE}
17/03/07 10:23:51 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2c4ca0f9{/stages/stage/kill,null,AVAILABLE}
17/03/07 10:23:51 INFO server.ServerConnector: Started ServerConnector@3a4b0e5d{HTTP/1.1}{0.0.0.0:4040}
17/03/07 10:23:51 INFO server.Server: Started @1723ms
17/03/07 10:23:51 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/03/07 10:23:51 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://xx.xx.xx.xx:4040
17/03/07 10:23:51 INFO spark.SparkContext: Added JAR file:/opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_2.11-2.0.0.cloudera1.jar at spark://xx.xx.xx.xx:50523/jars/spark-examples_2.11-2.0.0.cloudera1.jar with timestamp 1488853431744
17/03/07 10:23:51 INFO util.Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/03/07 10:23:52 INFO client.RMProxy: Connecting to ResourceManager at master/xx.xx.xx.xx:8032
17/03/07 10:23:52 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
17/03/07 10:23:52 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (55075 MB per container)
17/03/07 10:23:52 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
17/03/07 10:23:52 INFO yarn.Client: Setting up container launch context for our AM
17/03/07 10:23:52 INFO yarn.Client: Setting up the launch environment for our AM container
17/03/07 10:23:52 INFO yarn.Client: Preparing resources for our AM container
17/03/07 10:23:52 INFO yarn.Client: Uploading resource file:/tmp/spark-d9036de4-8bd3-47b3-91f2-25f9d0e1e1c0/__spark_conf__4592355155148431407.zip -> hdfs://master:8020/user/root/.sparkStaging/application_1488794978246_0014/__spark_conf__.zip
17/03/07 10:23:52 INFO spark.SecurityManager: Changing view acls to: root
17/03/07 10:23:52 INFO spark.SecurityManager: Changing modify acls to: root
17/03/07 10:23:52 INFO spark.SecurityManager: Changing view acls groups to:
17/03/07 10:23:52 INFO spark.SecurityManager: Changing modify acls groups to:
17/03/07 10:23:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
17/03/07 10:23:52 INFO yarn.Client: Submitting application application_1488794978246_0014 to ResourceManager
17/03/07 10:23:52 INFO impl.YarnClientImpl: Submitted application application_1488794978246_0014
17/03/07 10:23:52 INFO cluster.SchedulerExtensionServices: Starting Yarn extension services with app application_1488794978246_0014 and attemptId None
17/03/07 10:23:53 INFO yarn.Client: Application report for application_1488794978246_0014 (state: ACCEPTED)
17/03/07 10:23:53 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.users.root
         start time: 1488853432955
         final status: UNDEFINED
         tracking URL: http://master:8088/proxy/application_1488794978246_0014/
         user: root
17/03/07 10:23:54 INFO yarn.Client: Application report for application_1488794978246_0014 (state: ACCEPTED)
17/03/07 10:23:55 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
17/03/07 10:23:55 INFO cluster.YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> master, PROXY_URI_BASES -> http://master:8088/proxy/application_1488794978246_0014), /proxy/application_1488794978246_0014
17/03/07 10:23:55 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/03/07 10:23:55 INFO yarn.Client: Application report for application_1488794978246_0014 (state: RUNNING)
17/03/07 10:23:55 INFO yarn.Client:
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: xx.xx.xx.xx
         ApplicationMaster RPC port: 0
         queue: root.users.root
         start time: 1488853432955
         final status: UNDEFINED
         tracking URL: http://master:8088/proxy/application_1488794978246_0014/
         user: root
17/03/07 10:23:55 INFO cluster.YarnClientSchedulerBackend: Application application_1488794978246_0014 has started running.
17/03/07 10:23:55 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53530.
17/03/07 10:23:55 INFO netty.NettyBlockTransferService: Server created on xx.xx.xx.xx:53530
17/03/07 10:23:55 INFO storage.BlockManager: external shuffle service port = 7337
17/03/07 10:23:55 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, xx.xx.xx.xx, 53530)
17/03/07 10:23:55 INFO storage.BlockManagerMasterEndpoint: Registering block manager xx.xx.xx.xx:53530 with 408.9 MB RAM, BlockManagerId(driver, xx.xx.xx.xx, 53530)
17/03/07 10:23:56 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, xx.xx.xx.xx, 53530)
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20ab3e3a{/metrics/json,null,AVAILABLE}
17/03/07 10:23:56 INFO scheduler.EventLoggingListener: Logging events to hdfs://master:8020/user/spark/spark2ApplicationHistory/application_1488794978246_0014
17/03/07 10:23:56 INFO util.Utils: Using initial executors = 0, max of spark.dynamicAllocation.initialExecutors, spark.dynamicAllocation.minExecutors and spark.executor.instances
17/03/07 10:23:56 INFO cluster.YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
17/03/07 10:23:56 WARN spark.SparkContext: Use an existing SparkContext, some configuration may not take effect.
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@24534cb0{/SQL,null,AVAILABLE}
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@106d77da{/SQL/json,null,AVAILABLE}
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7b6c6e70{/SQL/execution,null,AVAILABLE}
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3a894088{/SQL/execution/json,null,AVAILABLE}
17/03/07 10:23:56 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@65e0b505{/static/sql,null,AVAILABLE}
17/03/07 10:23:56 INFO internal.SharedState: Warehouse path is 'file:/root/spark-warehouse'.
17/03/07 10:23:56 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:38
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:38) with 100 output partitions
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:38)
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Missing parents: List()
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34), which has no missing parents
17/03/07 10:23:56 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1832.0 B, free 408.9 MB)
17/03/07 10:23:56 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1146.0 B, free 408.9 MB)
17/03/07 10:23:56 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on xx.xx.xx.xx:53530 (size: 1146.0 B, free: 408.9 MB)
17/03/07 10:23:56 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1012
17/03/07 10:23:56 INFO scheduler.DAGScheduler: Submitting 100 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:34)
17/03/07 10:23:56 INFO cluster.YarnScheduler: Adding task set 0.0 with 100 tasks
17/03/07 10:23:57 INFO spark.ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
17/03/07 10:23:58 INFO spark.ExecutorAllocationManager: Requesting 2 new executors because tasks are backlogged (new desired total will be 3)
17/03/07 10:23:59 INFO spark.ExecutorAllocationManager: Requesting 4 new executors because tasks are backlogged (new desired total will be 7)
17/03/07 10:24:00 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Registered executor NettyRpcEndpointRef(null) (xx.xx.xx.61:63549) with ID 1
17/03/07 10:24:00 INFO spark.ExecutorAllocationManager: New executor 1 has registered (new total is 1)
17/03/07 10:24:00 INFO storage.BlockManagerMasterEndpoint: Registering block manager datanode1:7097 with 408.9 MB RAM, BlockManagerId(1, datanode1, 7097)
17/03/07 10:24:00 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, datanode1, executor 1, partition 0, PROCESS_LOCAL, 5349 bytes)
17/03/07 10:24:00 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 0 on executor id: 1 hostname: datanode1.
17/03/07 10:24:00 INFO spark.ExecutorAllocationManager: Requesting 8 new executors because tasks are backlogged (new desired total will be 15)
17/03/07 10:24:00 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on datanode1:7097 (size: 1146.0 B, free: 408.9 MB)
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, datanode1, executor 1, partition 1, PROCESS_LOCAL, 5351 bytes)
17/03/07 10:24:01 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 1 on executor id: 1 hostname: datanode1.
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 1001 ms on datanode1 (executor 1) (1/100)
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, datanode1, executor 1, partition 2, PROCESS_LOCAL, 5351 bytes)
17/03/07 10:24:01 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 2 on executor id: 1 hostname: datanode1.
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 108 ms on datanode1 (executor 1) (2/100)
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, datanode1, executor 1, partition 3, PROCESS_LOCAL, 5351 bytes)
17/03/07 10:24:01 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 3 on executor id: 1 hostname: datanode1.
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 80 ms on datanode1 (executor 1) (3/100)
17/03/07 10:24:01 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, datanode1, executor 1, partition 4, PROCESS_LOCAL, 5351 bytes)
17/03/07 10:24:01 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 4 on executor id: 1 hostname: datanode1.
……
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Starting task 57.0 in stage 0.0 (TID 57, datanode1, executor 2, partition 57, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 57 on executor id: 2 hostname: datanode1.
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Finished task 55.0 in stage 0.0 (TID 55) in 73 ms on datanode1 (executor 2) (55/100)
17/03/07 10:24:03 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() on RPC id 8852199698812112711
java.lang.ClassNotFoundException: org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages$RetrieveSparkProps$
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
        at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1620)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1521)
……
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Starting task 58.0 in stage 0.0 (TID 58, datanode1, executor 3, partition 58, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 58 on executor id: 3 hostname: datanode1.
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Finished task 54.0 in stage 0.0 (TID 54) in 103 ms on datanode1 (executor 3) (56/100)
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Starting task 59.0 in stage 0.0 (TID 59, datanode1, executor 1, partition 59, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 59 on executor id: 1 hostname: datanode1.
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Finished task 56.0 in stage 0.0 (TID 56) in 87 ms on datanode1 (executor 1) (57/100)
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Starting task 60.0 in stage 0.0 (TID 60, datanode1, executor 2, partition 60, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 60 on executor id: 2 hostname: datanode1.
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Finished task 57.0 in stage 0.0 (TID 57) in 56 ms on datanode1 (executor 2) (58/100)
17/03/07 10:24:03 ERROR server.TransportRequestHandler: Error while invoking RpcHandler#receive() on RPC id 6815447345450222473
java.lang.ClassNotFoundException: org.apache.spark.scheduler.cluster.CoarseGrainedClusterMessages$RetrieveSparkProps$
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
……
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        at java.lang.Thread.run(Thread.java:745)
17/03/07 10:24:03 INFO scheduler.TaskSetManager: Starting task 61.0 in stage 0.0 (TID 61, datanode1, executor 1, partition 61, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:03 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 61 on executor id: 1 hostname: datanode1.
……
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Starting task 98.0 in stage 0.0 (TID 98, datanode1, executor 3, partition 98, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:04 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 98 on executor id: 3 hostname: datanode1.
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Finished task 94.0 in stage 0.0 (TID 94) in 63 ms on datanode1 (executor 3) (96/100)
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Starting task 99.0 in stage 0.0 (TID 99, datanode1, executor 2, partition 99, PROCESS_LOCAL, 5355 bytes)
17/03/07 10:24:04 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Launching task 99 on executor id: 2 hostname: datanode1.
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Finished task 96.0 in stage 0.0 (TID 96) in 54 ms on datanode1 (executor 2) (97/100)
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Finished task 97.0 in stage 0.0 (TID 97) in 46 ms on datanode1 (executor 1) (98/100)
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Finished task 98.0 in stage 0.0 (TID 98) in 44 ms on datanode1 (executor 3) (99/100)
17/03/07 10:24:04 INFO scheduler.TaskSetManager: Finished task 99.0 in stage 0.0 (TID 99) in 49 ms on datanode1 (executor 2) (100/100)
17/03/07 10:24:04 INFO scheduler.DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:38) finished in 7.704 s
17/03/07 10:24:04 INFO cluster.YarnScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/07 10:24:04 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 7.970202 s
Pi is roughly 3.1416919141691912
17/03/07 10:24:04 INFO server.ServerConnector: Stopped ServerConnector@3a4b0e5d{HTTP/1.1}{0.0.0.0:4040}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2c4ca0f9{/stages/stage/kill,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2e77b8cf{/api,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@18cc679e{/,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@9f46d94{/static,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5c1bd44c{/executors/threadDump/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3ec11999{/executors/threadDump,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5db99216{/executors/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@11389053{/executors,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@1eef9aef{/environment/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@538613b3{/environment,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4de025bf{/storage/rdd/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@21d5c1a0{/storage/rdd,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2e5c7f0b{/storage/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2187fff7{/storage,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6691490c{/stages/pool/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4da855dd{/stages/pool,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@a50b09c{/stages/stage/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@388ffbc2{/stages/stage,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@43ed0ff3{/stages/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6de54b40{/stages,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@e6516e{/jobs/job/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7e70bd39{/jobs/job,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@61d9efe0{/jobs/json,null,UNAVAILABLE}
17/03/07 10:24:04 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@2f4854d6{/jobs,null,UNAVAILABLE}
17/03/07 10:24:04 INFO ui.SparkUI: Stopped Spark web UI at http://xx.xx.xx.xx:4040
17/03/07 10:24:04 INFO cluster.YarnClientSchedulerBackend: Interrupting monitor thread
17/03/07 10:24:04 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/03/07 10:24:04 INFO cluster.YarnSchedulerBackend$YarnDriverEndpoint: Asking each executor to shut down
17/03/07 10:24:04 INFO cluster.SchedulerExtensionServices: Stopping SchedulerExtensionServices
(serviceOption=None,
services=List(),
started=false)
17/03/07 10:24:04 INFO cluster.YarnClientSchedulerBackend: Stopped
17/03/07 10:24:04 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/03/07 10:24:04 INFO memory.MemoryStore: MemoryStore cleared
17/03/07 10:24:04 INFO storage.BlockManager: BlockManager stopped
17/03/07 10:24:04 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/03/07 10:24:04 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/03/07 10:24:04 INFO spark.SparkContext: Successfully stopped SparkContext
17/03/07 10:24:04 INFO util.ShutdownHookManager: Shutdown hook called
17/03/07 10:24:04 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-d9036de4-8bd3-47b3-91f2-25f9d0e1e1c0
[/mw_shl_code]
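As an aside, the first line of the log warns that the yarn-client master is deprecated since Spark 2.0. The equivalent, non-deprecated form of the same submission (same class and jar path as above) would be:

[mw_shl_code=bash,true]spark2-submit \
  --master yarn \
  --deploy-mode client \
  --class org.apache.spark.examples.SparkPi \
  /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-examples_2.11-2.0.0.cloudera1.jar 100
[/mw_shl_code]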


Replies (3)

grinsky posted on 2017-3-7 12:09:52
Oops, it was my own mistake: I had not distributed my own jar to the other nodes.
I had always run it this way before without any problem; I did not expect that the jar now has to be distributed to the other nodes for it to work.
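For a custom application jar there are two common ways to make sure every node sees consistent classes: let spark-submit ship any extra dependency jars to the executors, or copy the jar to the same path on every worker node up front. A minimal sketch of both, where the class name, jar paths, and worker hostnames are placeholders rather than values from this thread:

[mw_shl_code=bash,true]# Option 1: ship extra dependency jars to the executors with --jars
# (com.example.MyApp and the jar paths are hypothetical placeholders).
spark2-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  --jars /path/to/my-dependency.jar \
  /path/to/my-app.jar

# Option 2: pre-copy the application jar to the same location on every
# worker node (the hostnames below are placeholders).
for host in datanode1 datanode2 datanode3 datanode4; do
  scp /path/to/my-app.jar "${host}:/path/to/my-app.jar"
done
[/mw_shl_code]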

nextuser posted on 2017-3-7 14:21:41
grinsky posted on 2017-3-7 12:09:
Oops, it was my own mistake: I had not distributed my own jar to the other nodes.
I had always run it this way before without any problem; I did not expect that the jar now has to be distributed to the other ...

Do you mean the spark-examples_2.11-2.0.0.cloudera1.jar package, or a custom jar of your own?

grinsky posted on 2017-3-9 10:51:05
nextuser posted on 2017-3-7 14:21:
Do you mean the spark-examples_2.11-2.0.0.cloudera1.jar package, or a custom jar of your own?

A custom one; the Cloudera examples jar is already present on every node once the parcel is installed.
