
Calling Hadoop 2.6 from Java and Web Programs

Guiding questions
1. What problems come up during configuration, and how are they solved?
2. What needs to be configured to call Hadoop 2.6 from Java and run an MR job?
3. How can Hadoop be called from a web program?




1. Hadoop cluster:

1.1 System and hardware configuration:

     Hadoop version: 2.6; three virtual machines: node101 (192.168.0.101), node102 (192.168.0.102), node103 (192.168.0.103); each machine has 2 GB of RAM and 1 CPU core.

     node101: NodeManager, NameNode, ResourceManager, DataNode
     node102: NodeManager, DataNode, SecondaryNameNode, JobHistoryServer
     node103: NodeManager, DataNode

1.2 Problems encountered during configuration:

    1) NodeManager fails to start

          The virtual machines were originally given 512 MB of RAM, so "yarn.nodemanager.resource.memory-mb" in yarn-site.xml was set to 512 (the default is 1024). The NodeManager log showed:

  org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager from  node101 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.

         Raising it to 1024 or above lets the NodeManager start normally; I set it to 2048.
   2) Jobs can be submitted but do not run

        a. Each VM has only one core, but "yarn.nodemanager.resource.cpu-vcores" in yarn-site.xml defaults to 8, which breaks resource allocation, so set this parameter to 1.

        b. The following error appears:

  is running beyond virtual memory limits. Current usage: 96.6 MB of 1.5 GB physical memory used; 1.6 GB of 1.5 GB virtual memory used. Killing container.

This should be a mismatch among the map, reduce, and NodeManager memory settings. I tweaked them for a long time and they looked right, yet the error persisted, so in the end I removed the check itself, setting "yarn.nodemanager.vmem-check-enabled" in yarn-site.xml to false; after that, jobs could be submitted and run. (See the note below for an alternative.)
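A plausible reading of the numbers, for anyone hitting the same wall: the yarn-site.xml below sets yarn.nodemanager.vmem-pmem-ratio to 1.0, so a container's virtual memory limit equals its physical allocation, and the 1.5 GB limit in the error matches the 1536 MB default of yarn.app.mapreduce.am.resource.mb; a JVM normally maps more virtual address space than it actually touches. An alternative to disabling the check (untested here) is simply to raise the ratio:

<!-- yarn-site.xml: allow containers to map more virtual than physical memory.
     2.1 is Hadoop's default; this is an alternative sketch, not the setting
     used in the configuration below. -->
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>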
1.3 Configuration files (I hope someone more experienced can suggest a resource configuration that avoids error b above, instead of my workaround of disabling the check):

1) In hadoop-env.sh and yarn-env.sh, configure the JDK, and set HADOOP_HEAPSIZE and YARN_HEAPSIZE to 512, as sketched below.
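A minimal sketch of the relevant lines (the JAVA_HOME path is a placeholder; point it at the JDK actually installed on the nodes):

# hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_75   # placeholder JDK path
export HADOOP_HEAPSIZE=512               # daemon heap size, in MB

# yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_75
export YARN_HEAPSIZE=512                 # YARN daemon heap size, in MB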

2) hdfs-site.xml configures the data storage paths and the node hosting the SecondaryNameNode:

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
    <description>Determines where on the local filesystem the DFS name node
        should store the name table(fsimage).  If this is a comma-delimited list
        of directories then the name table is replicated in all of the
        directories, for redundancy. </description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
    <description>Determines where on the local filesystem an DFS data node
    should store its blocks.  If this is a comma-delimited
    list of directories, then data will be stored in all named
    directories, typically on different devices.
    Directories that do not exist are ignored.
    </description>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node102:50090</value>
  </property>
</configuration>



3) core-site.xml configures the NameNode address:

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node101:8020</value>
  </property>
</configuration>


4) mapred-site.xml configures the MapReduce framework, the JobHistory server, and map/reduce resources:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs.
    Can be one of local, classic or yarn.
    </description>
  </property>

  <!-- jobhistory properties -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node102:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>


5) yarn-site.xml configures the ResourceManager and container resources:

<configuration>

  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>node101</value>
  </property>

  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>

  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>

  <property>
    <description>The https address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>

  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>

  <property>
    <description>List of directories to store localized files in. An
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
    </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>

  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>

  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>

  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>1.0</value>
  </property>

  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>

  <!--
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>

  <property>
    <description>fair-scheduler conf location</description>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  </property>
  -->

  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>

  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>

  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>

</configuration>


2. Calling Hadoop 2.6 from Java and running an MR job:
Two changes are needed:

1) The Configuration in the driver program must be set up as follows:

Configuration conf = new Configuration();

conf.setBoolean("mapreduce.app-submission.cross-platform", true);   // enable cross-platform job submission
conf.set("fs.defaultFS", "hdfs://node101:8020");                    // the NameNode
conf.set("mapreduce.framework.name", "yarn");                       // use the YARN framework
conf.set("yarn.resourcemanager.address", "node101:8032");           // the ResourceManager
conf.set("yarn.resourcemanager.scheduler.address", "node101:8030"); // the scheduler


2) Add the required Hadoop jars to the classpath (the original screenshots listed them):
[Three screenshots of the classpath entries appeared here; their contents are not recoverable.]
Nothing else needs to be modified; the program then runs. A full driver sketch follows.
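To make the fragment above concrete, here is a minimal self-contained word-count driver using the same settings (a sketch, not the exact program from the download in section 3; the jar path and the /input and /output HDFS paths are placeholders):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteWordCount {

    // Emit (word, 1) for every whitespace-separated token in a line.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            for (String tok : value.toString().split("\\s+")) {
                if (!tok.isEmpty()) { word.set(tok); ctx.write(word, ONE); }
            }
        }
    }

    // Sum the counts for each word.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : vals) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        // Client-side settings from section 2 above.
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.app-submission.cross-platform", true);
        conf.set("fs.defaultFS", "hdfs://node101:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "node101:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");

        Job job = Job.getInstance(conf, "remote-wordcount");
        // Ship the job's classes to the cluster; without this you get the
        // "No job jar file set" warning seen in the comments below.
        job.setJar("wordcount.jar");                  // placeholder: jar containing these classes
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/input"));    // placeholder HDFS path
        FileOutputFormat.setOutputPath(job, new Path("/output")); // placeholder HDFS path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}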


3. Calling Hadoop 2.6 from a web program to run an MR job:

The program can be downloaded from Baidu Netdisk:
Link: http://pan.baidu.com/s/1cv7WU  Password: x7c9

(Download: Java web program calling Hadoop 2.6)

The part of this web program that calls Hadoop is the same as the Java code above, essentially unmodified, and all the jar files it uses are under lib; a sketch of how the call might sit inside a servlet follows.
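A hedged sketch, under stated assumptions: the servlet class name, URL handling, and paths are illustrative placeholders, not the contents of the linked download, and the mapper/reducer are reused from the driver sketch in section 2:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitJobServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Same client-side settings as in section 2.
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.app-submission.cross-platform", true);
        conf.set("fs.defaultFS", "hdfs://node101:8020");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "node101:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");
        try {
            Job job = Job.getInstance(conf, "web-submitted-wordcount");
            // Placeholder: the jar under WEB-INF/lib that contains the MR classes.
            job.setJar(getServletContext().getRealPath("/WEB-INF/lib/wordcount.jar"));
            job.setMapperClass(RemoteWordCount.TokenMapper.class);  // from the sketch above
            job.setReducerClass(RemoteWordCount.SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/input"));    // placeholder HDFS paths
            FileOutputFormat.setOutputPath(job, new Path("/output"));
            job.submit();  // return immediately rather than blocking the request thread
            resp.getWriter().println("submitted as " + job.getJobID());
        } catch (InterruptedException | ClassNotFoundException e) {
            throw new ServletException(e);
        }
    }
}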

One last observation: I ran three maps, but they were not evenly distributed across the nodes:

[Screenshot of the map distribution omitted; the numbers are summarized below.]

node103 was assigned two maps and node101 one; on another run node101 got two maps and node103 one. node102 got no map task either time, which suggests the resource management and task assignment still have some quirks.



Comments (13)
arBen, 2015-1-21 08:45:47:
Thanks for sharing, this is great!

韩克拉玛寒, 2015-1-21 09:12:13:
Well written; thanks for sharing, I learned a lot.


hovi_820, 2015-1-21 11:40:08:
2015-01-21 11:38:07,604  INFO [main] (RMProxy.java:98) - Connecting to ResourceManager at /192.168.6.133:8032
2015-01-21 11:38:07,994  WARN [main] (JobSubmitter.java:261) - No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
2015-01-21 11:38:08,002  INFO [main] (FileInputFormat.java:281) - Total input paths to process : 1
2015-01-21 11:38:08,154  WARN [ResponseProcessor for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1073] (DFSOutputStream.java:954) - DFSOutputStream ResponseProcessor exception  for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1073
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2203)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:867)
2015-01-21 11:38:08,157  WARN [DataStreamer for file /tmp/hadoop-yarn/staging/dell/.staging/job_1421800170028_0029/job.split block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1073] (DFSOutputStream.java:1211) - Error Recovery for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1073 in pipeline 192.168.6.132:50010, 192.168.6.131:50010: bad datanode 192.168.6.132:50010
2015-01-21 11:38:08,181  WARN [ResponseProcessor for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1074] (DFSOutputStream.java:954) - DFSOutputStream ResponseProcessor exception  for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1074
java.io.IOException: Bad response ERROR_CHECKSUM for block BP-698856896-192.168.6.133-1421746766729:blk_1073741890_1074 from datanode 192.168.6.131:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:897)
2015-01-21 11:38:08,182  INFO [main] (JobSubmitter.java:545) - Cleaning up the staging area /tmp/hadoop-yarn/staging/dell/.staging/job_1421800170028_0029
Exception in thread "main" java.io.IOException: All datanodes 192.168.6.131:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1206)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
2015-01-21 11:38:08,189 ERROR [Thread-2] (DFSClient.java:941) - Failed to close inode 16531
java.io.IOException: All datanodes 192.168.6.131:50010 are bad. Aborting...
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1206)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
I'm getting this error.

langke93, 2015-1-21 12:09:02, replying to hovi_820:
Check the state of the cluster and its processes; restarting is probably the best first step.

hb1984, 2015-1-21 14:46:39:
Thanks for sharing, OP.

stark_summer, 2015-1-26 14:37:13:
A very detailed summary.

