
Installing and Deploying a Spark/Shark Cluster, and Solving the Problems Encountered



Questions this post addresses:
1. How do you configure the environment for a Spark/Shark cluster?
2. How do you write the various scripts the cluster needs?











1. Deployment environment

  1. OS: Red Hat Enterprise Linux Server release 6.4 (Santiago)
  2. Hadoop: 2.4.1
  3. Hive: 0.11.0
  4. JDK: 1.7.0_60
  5. Python: 2.6.6 (the Spark cluster needs Python 2.6 or later, otherwise Python jobs cannot run on it)
  6. Spark: 0.9.1 (the latest release at the time of writing is 1.1.0)
  7. Shark: 0.9.1 (the latest version, but it is only compatible up to Spark 0.9.1; see the Shark 0.9.1 release notes)
  8. ZooKeeper: 2.3.5 (used for HA; for the Spark HA setup see my post "Spark: Master High Availability (HA), two ways to configure it")
  9. Scala: 2.11.2



2. Spark cluster layout

  1. Account: ebupt
  2. Master: eb174
  3. Slaves: eb174, eb175, eb176
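All three machines need to resolve one another's hostnames. If DNS does not already handle this, /etc/hosts entries along the following lines on every node will do (the IP addresses below are placeholders for the real ones):

192.168.1.174   eb174
192.168.1.175   eb175
192.168.1.176   eb176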



3. Set up passwordless SSH
cd ~
# generate the public/private key pair
ssh-keygen -q -t rsa -N "" -f /home/ebupt/.ssh/id_rsa
cd .ssh
cat id_rsa.pub > authorized_keys
chmod go-wx authorized_keys
# copy authorized_keys into /home/ebupt/.ssh on every slave node
scp ~/.ssh/authorized_keys ebupt@eb175:~/.ssh/
scp ~/.ssh/authorized_keys ebupt@eb176:~/.ssh/




An alternative, simpler approach:
Since the lab node eb170 can already ssh into all of the machines, simply copy everything in eb170's ~/.ssh/ to eb174's ~/.ssh/. The advantage is that eb170's existing passwordless login stays intact.
[ebupt@eb174 ~]$ rm ~/.ssh/*
[ebupt@eb170 ~]$ scp -r ~/.ssh/ ebupt@eb174:~/.ssh/
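Either way, it is worth confirming from eb174 that key-based login to every node, including itself, works without a password prompt:

# each hostname should be printed without any password prompt
for h in eb174 eb175 eb176; do ssh $h hostname; done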



4. Deploy Scala (identical copy on every node)

tar zxvf scala-2.11.2.tgz
ln -s /home/ebupt/eb/scala-2.11.2 ~/scala
vi ~/.bash_profile
# add the environment variables
export SCALA_HOME=$HOME/scala
export PATH=$PATH:$SCALA_HOME/bin


After sourcing ~/.bash_profile, running scala -version prints the current Scala version, confirming that the installation succeeded.

[ebupt@eb174 ~]$ scala -version
Scala code runner version 2.11.2 -- Copyright 2002-2013, LAMP/EPFL



5. Install Spark (identical copy on every node)
Unpack the tarball, create the symlink, and set the environment variables; a minimal sketch of these steps is given below. Then edit the configuration files:
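A sketch of the unpack/symlink/environment steps, assuming the Spark 0.9.1 tarball has been downloaded to /home/ebupt/eb/ (the directory and symlink names match those used by ~/.bash_profile and build.sh later in this post):

tar zxvf spark-0.9.1-bin-hadoop2.tgz
ln -s /home/ebupt/eb/spark-0.9.1-bin-hadoop2 ~/spark
# append to ~/.bash_profile
export SPARK_HOME=$HOME/spark
export PATH=$PATH:$SPARK_HOME/bin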

[ebupt@eb174 ~]$ vi spark/conf/slaves
#add the slaves
eb174
eb175
eb176
[ebupt@eb174 ~]$ vi spark/conf/spark-env.sh
export SCALA_HOME=/home/ebupt/scala
export JAVA_HOME=/home/ebupt/eb/jdk1.7.0_60
export SPARK_MASTER_IP=eb174
export SPARK_WORKER_MEMORY=4000m



6. Install Shark (identical copy on every node)
Unpack, create the symlink, and set the environment variables exactly as for Spark above (omitted here), then edit shark-env.sh:

[ebupt@eb174 ~]$ vi shark/conf/shark-env.sh

export SPARK_MEM=1g
# (Required) Set the master program's memory
export SHARK_MASTER_MEM=1g
# (Optional) Specify the location of Hive's configuration directory. By default,
# Shark run scripts will point it to $SHARK_HOME/conf
export HIVE_HOME=/home/ebupt/hive
export HIVE_CONF_DIR="$HIVE_HOME/conf"
# For running Shark in distributed mode, set the following:
export HADOOP_HOME=/home/ebupt/hadoop
export SPARK_HOME=/home/ebupt/spark
export MASTER=spark://eb174:7077
# Only required if using Mesos:
#export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
source $SPARK_HOME/conf/spark-env.sh
#LZO compression native lib
export LD_LIBRARY_PATH=/home/ebupt/hadoop/share/hadoop/common
# (Optional) Extra classpath
export SPARK_LIBRARY_PATH=/home/ebupt/hadoop/lib/native
# Java options
# On EC2, change the local.dir to /mnt/tmp
SPARK_JAVA_OPTS=" -Dspark.local.dir=/tmp "
SPARK_JAVA_OPTS+="-Dspark.kryoserializer.buffer.mb=10 "
SPARK_JAVA_OPTS+="-verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps "
SPARK_JAVA_OPTS+="-XX:MaxPermSize=256m "
SPARK_JAVA_OPTS+="-Dspark.cores.max=12 "
export SPARK_JAVA_OPTS
# (Optional) Tachyon Related Configuration
#export TACHYON_MASTER="" # e.g. "localhost:19998"
#export TACHYON_WAREHOUSE_PATH=/sharktables # Could be any valid path name
export SCALA_HOME=/home/ebupt/scala
export JAVA_HOME=/home/ebupt/eb/jdk1.7.0_60
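Once the Spark master from section 9.1 is running, a quick sanity check, assuming the Hive 0.11.0 metastore configured under $HIVE_CONF_DIR is reachable, is to start the Shark CLI and run a trivial statement:

[ebupt@eb174 ~]$ ~/shark/bin/shark
shark> show tables;
shark> exit;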




7. Scripts for syncing to the slaves

7.1 ~/.bash_profile on the master (eb174)
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export JAVA_HOME=/home/ebupt/eb/jdk1.7.0_60
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME=$HOME/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export ZOOKEEPER_HOME=$HOME/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
export HIVE_HOME=$HOME/hive
export PATH=$HIVE_HOME/bin:$PATH
export HBASE_HOME=$HOME/hbase
export PATH=$PATH:$HBASE_HOME/bin
export MAVEN_HOME=$HOME/eb/apache-maven-3.0.5
export PATH=$PATH:$MAVEN_HOME/bin
export STORM_HOME=$HOME/storm
export PATH=$PATH:$STORM_HOME/storm-yarn-master/bin:$STORM_HOME/storm-0.9.0-wip21/bin
export SCALA_HOME=$HOME/scala
export PATH=$PATH:$SCALA_HOME/bin
export SPARK_HOME=$HOME/spark
export PATH=$PATH:$SPARK_HOME/bin
export SHARK_HOME=$HOME/shark
export PATH=$PATH:$SHARK_HOME/bin




7.2 Sync script: syncInstall.sh
scp -r /home/ebupt/eb/scala-2.11.2 ebupt@eb175:/home/ebupt/eb/
scp -r /home/ebupt/eb/scala-2.11.2 ebupt@eb176:/home/ebupt/eb/
scp -r /home/ebupt/eb/spark-1.0.2-bin-hadoop2 ebupt@eb175:/home/ebupt/eb/
scp -r /home/ebupt/eb/spark-1.0.2-bin-hadoop2 ebupt@eb176:/home/ebupt/eb/
scp -r /home/ebupt/eb/spark-0.9.1-bin-hadoop2 ebupt@eb175:/home/ebupt/eb/
scp -r /home/ebupt/eb/spark-0.9.1-bin-hadoop2 ebupt@eb176:/home/ebupt/eb/
scp ~/.bash_profile ebupt@eb175:~/
scp ~/.bash_profile ebupt@eb176:~/
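Note that syncInstall.sh does not copy the Shark directory, even though build.sh below creates a shark symlink on each slave. If Shark has not been distributed some other way, two analogous lines can be added (the directory name is the one referenced in build.sh):

scp -r /home/ebupt/eb/shark-0.9.1-bin-hadoop2 ebupt@eb175:/home/ebupt/eb/
scp -r /home/ebupt/eb/shark-0.9.1-bin-hadoop2 ebupt@eb176:/home/ebupt/eb/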




7.3 Setup script: build.sh
#!/bin/bash
source ~/.bash_profile
ssh eb175 > /dev/null 2>&1 << eeooff
ln -s /home/ebupt/eb/scala-2.11.2/ /home/ebupt/scala
ln -s /home/ebupt/eb/spark-0.9.1-bin-hadoop2/ /home/ebupt/spark
ln -s /home/ebupt/eb/shark-0.9.1-bin-hadoop2/ /home/ebupt/shark
source ~/.bash_profile
exit
eeooff
echo eb175 done!
ssh eb176 > /dev/null 2>&1 << eeooffxx
ln -s /home/ebupt/eb/scala-2.11.2/ /home/ebupt/scala
ln -s /home/ebupt/eb/spark-0.9.1-bin-hadoop2/ /home/ebupt/spark
ln -s /home/ebupt/eb/shark-0.9.1-bin-hadoop2/ /home/ebupt/shark
source ~/.bash_profile
exit
eeooffxx
echo eb176 done!
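Both scripts are run once from the master, assuming they were saved in ebupt's home directory on eb174:

[ebupt@eb174 ~]$ sh syncInstall.sh
[ebupt@eb174 ~]$ sh build.sh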




8. Problems encountered and how they were solved

8.1 With Shark 0.9.1 and Spark 1.0.2 installed, running SQL from the Shark shell throws an error:
shark> select * from test;
17.096: [Full GC 71198K->24382K(506816K), 0.3150970 secs]
Exception in thread "main" java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$SetOwnerRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2531)
        at java.lang.Class.privateGetPublicMethods(Class.java:2651)
        at java.lang.Class.privateGetPublicMethods(Class.java:2661)
        at java.lang.Class.getMethods(Class.java:1467)
        at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
        at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
        at java.lang.reflect.Proxy.getProxyClass0(Proxy.java:636)
        at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:722)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:334)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:241)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:141)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:576)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:521)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:146)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2397)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2431)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2413)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:180)
        at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:231)
        at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:288)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1274)
        at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1059)
        at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:137)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:279)
        at shark.SharkDriver.compile(SharkDriver.scala:215)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
        at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:338)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at shark.SharkCliDriver$.main(SharkCliDriver.scala:235)
        at shark.SharkCliDriver.main(SharkCliDriver.scala)




Cause: a java.lang.VerifyError on getUnknownFields() like this one typically points to a protobuf version clash: Hadoop 2.4.1 is built against protobuf 2.5, while the hive-exec jar shipped with Shark bundles older protobuf classes that get picked up first.
Fix: locate hive-exec-0.11.0-shark-0.9.1.jar under $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-exec, strip the bundled protobuf classes out of it, and repack the jar. The error then goes away. The commands are shown below.
cd $SHARK_HOME/lib_managed/jars/edu.berkeley.cs.shark/hive-exec
unzip hive-exec-0.11.0-shark-0.9.1.jar
rm -f com/google/protobuf/*
rm hive-exec-0.11.0-shark-0.9.1.jar
zip -r hive-exec-0.11.0-shark-0.9.1.jar *
rm -rf com hive-exec-log4j.properties javaewah/ javax/ javolution/ META-INF/ org/



8.2 With Shark 0.9.1 and Spark 1.0.2 installed, the Spark cluster itself runs fine and simple Spark jobs succeed, but every Shark job fails with "Spark cluster looks dead, giving up." When running shark-shell (or shark-withinfo), Shark also fails to connect to the Spark master. The error looks like this:
shark> select * from t1;
16.452: [GC 282770K->32068K(1005568K), 0.0388780 secs]
org.apache.spark.SparkException: Job aborted: Spark cluster looks down
        at org.apache.spark.scheduler.DAGScheduler$anonfun$org$apache$spark$scheduler$DAGScheduler$abortStage$1.apply(DAGScheduler.scala:1028)
        at org.apache.spark.scheduler.DAGScheduler$anonfun$org$apache$spark$scheduler$DAGScheduler$abortStage$1.apply(DAGScheduler.scala:1026)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$abortStage(DAGScheduler.scala:1026)
        at org.apache.spark.scheduler.DAGScheduler$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
        at org.apache.spark.scheduler.DAGScheduler$anonfun$processEvent$10.apply(DAGScheduler.scala:619)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:619)
        at org.apache.spark.scheduler.DAGScheduler$anonfun$start$1$anon$2$anonfun$receive$1.applyOrElse(DAGScheduler.scala:207)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
FAILED: Execution Error, return code -101 from shark.execution.SparkTask




Cause: many people have reported the same symptom online: the Spark cluster is healthy, yet Shark will not run against it. The Shark 0.9.1 release notes explain why:
Release date: April 10, 2014
Shark 0.9.1 is a maintenance release that stabilizes 0.9.0, which bumps up Scala compatibility to 2.10.3 and Hive compliance to 0.11. The core dependencies for this version are:
Scala 2.10.3
Spark 0.9.1
AMPLab’s Hive 0.9.0
(Optional) Tachyon 0.4.1



Shark is only compatible up to Spark 0.9.1; the version mismatch is why it cannot find the Spark cluster's master service.
Fix: roll Spark back to 0.9.1 (Scala does not need to be rolled back). After the rollback everything runs normally.
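Because everything is referenced through symlinks, the rollback on each node amounts to re-pointing ~/spark at the 0.9.1 distribution that syncInstall.sh already copied over, roughly:

# run on eb174, eb175 and eb176
rm ~/spark
ln -s /home/ebupt/eb/spark-0.9.1-bin-hadoop2 ~/spark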


9. Running the cluster successfully

9.1 Start the Spark cluster in standalone mode

[ebupt@eb174 ~]$ ./spark/sbin/start-all.sh
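If startup succeeds, jps on eb174 should list both a Master and a Worker process, while eb175 and eb176 should each show a Worker:

[ebupt@eb174 ~]$ jps | grep -E 'Master|Worker'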


9.2 Test the Spark cluster

[ebupt@eb174 ~]$ ./spark/bin/run-example org.apache.spark.examples.SparkPi 10 spark://eb174:7077
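If the job runs against the cluster correctly, the driver output ends with a line of the form "Pi is roughly 3.14...", and the finished application also shows up on the master web UI described next.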



9.3 Spark Master UI: http://eb174:8080/



10. References






Source: http://www.cnblogs.com/byrhuangqiang/p/3955564.html