
Hands-On: Installing Hadoop 2.2 in Pseudo-Distributed Mode

Last edited by guofeng on 2013-12-17 19:10

Create a dedicated user

[root@ruo91 ~]# useradd hadoop
Passwordless SSH to localhost

[root@ruo91 ~]# su - hadoop
[hadoop@ruo91 ~]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
[hadoop@ruo91 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[hadoop@ruo91 ~]$ chmod 644 ~/.ssh/authorized_keys
Test the local SSH connection

[hadoop@ruo91 ~]$ ssh localhost
Last login: Thu Oct 17 11:49:09 2013 from localhost.localdomain
Hello?
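If the login test still prompts for a password, the key files are the usual culprit: sshd silently ignores an authorized_keys file with overly loose permissions. A minimal sanity check, written as a standalone function so it can be pointed at any directory (the directory argument is for illustration; normally it is `$HOME/.ssh`):

```shell
# Sanity-check the key files created above. sshd ignores an
# authorized_keys file that is group- or world-writable.
check_ssh_setup() {
  dir="$1"    # normally "$HOME/.ssh"
  [ -f "$dir/id_dsa" ]          || { echo "missing private key"; return 1; }
  [ -f "$dir/authorized_keys" ] || { echo "missing authorized_keys"; return 1; }
  # GNU stat first, BSD stat as a fallback
  perms=$(stat -c %a "$dir/authorized_keys" 2>/dev/null \
          || stat -f %Lp "$dir/authorized_keys")
  case "$perms" in
    6[04][04]) echo "ok" ;;                        # 600, 604, 640, 644
    *) echo "bad permissions: $perms"; return 1 ;;
  esac
}
```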
Install the JDK

[hadoop@ruo91 ~]$ tar xzvf jdk-7u45-linux-x64.tar.gz
[hadoop@ruo91 ~]$ mv jdk1.7.0_45 jdk

(The archive is 7u45, so the extracted directory is jdk1.7.0_45 — the format log further down confirms java = 1.7.0_45.)
Configure the JDK environment variables

[hadoop@ruo91 ~]$ nano ~/.bash_profile
# JAVA
export JAVA_HOME=$HOME/jdk
export PATH=$PATH:$JAVA_HOME/bin
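It is worth confirming the variable points somewhere sensible before moving on. A quick check, sketched as a function (the function name is ours, not part of the JDK):

```shell
# Check that a JAVA_HOME candidate actually contains a java binary.
check_java_home() {
  jh="$1"
  if [ -x "$jh/bin/java" ]; then
    echo "ok: $jh"
  else
    echo "no executable java under $jh/bin"
    return 1
  fi
}
# e.g. check_java_home "$JAVA_HOME"
```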
Download Hadoop 2.2

[hadoop@ruo91 ~]$ wget http://mirrors.hust.edu.cn/apache/hadoop/core/hadoop-2.2.0/hadoop-2.2.0.tar.gz
[hadoop@ruo91 ~]$ tar xzvf hadoop-2.2.0.tar.gz
[hadoop@ruo91 ~]$ mv hadoop-2.2.0 2.2.0
Configure the Hadoop environment variables

[hadoop@ruo91 ~]$ vi ~/.bash_profile
# Hadoop
export HADOOP_PREFIX="$HOME/2.2.0"
export PATH=$PATH:$HADOOP_PREFIX/bin
export PATH=$PATH:$HADOOP_PREFIX/sbin
export HADOOP_MAPRED_HOME=${HADOOP_PREFIX}
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_HDFS_HOME=${HADOOP_PREFIX}
export YARN_HOME=${HADOOP_PREFIX}
# Native Path
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"

(Note the closing double quote on the last line — it was missing in the original post and would break the profile.)
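Since this tutorial appends to ~/.bash_profile twice (once for the JDK, once for Hadoop), re-running the steps will duplicate PATH entries. A small guard against that, sketched with the target file as a parameter:

```shell
# Append a line to a profile file only if it is not already present,
# so repeated runs of the setup stay idempotent.
append_once() {
  line="$1"; file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}
# e.g. append_once 'export HADOOP_PREFIX="$HOME/2.2.0"' ~/.bash_profile
```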
hadoop-env.sh

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/hadoop-env.sh
export JAVA_HOME=$HOME/jdk
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
yarn-env.sh

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/yarn-env.sh
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
Apply the environment settings

[hadoop@ruo91 ~]$ source ~/.bash_profile
core-site.xml

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
     <name>fs.default.name</name>
     <value>hdfs://localhost:9000</value>
     <final>true</final>
  </property>
</configuration>

(fs.default.name still works in 2.2 but is deprecated; fs.defaultFS is the current key for the same setting.)
hdfs-site.xml

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
mapred-site.xml

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>file:/home/hadoop/mapred/system</value>
    <final>true</final>
  </property>
  <property>
     <name>mapred.local.dir</name>
     <value>file:/home/hadoop/mapred/local</value>
     <final>true</final>
  </property>
</configuration>

(The two mapred.*.dir keys are legacy MRv1 names; with the framework set to yarn they are harmless but not required.)
yarn-site.xml

[hadoop@ruo91 ~]$ vi $HADOOP_PREFIX/etc/hadoop/yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

(In 2.2 the aux-service was renamed from mapreduce.shuffle to mapreduce_shuffle, and the class property key must use the same name; the original post mixed the two forms.)
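A malformed *-site.xml is a common reason the daemons refuse to start, and the startup error rarely names the offending file. Before formatting, a quick well-formedness pass over the four files edited above catches this early. A sketch using Python's stdlib parser so it does not depend on xmllint being installed:

```shell
# Report whether each given XML config file parses cleanly.
check_xml() {
  for f in "$@"; do
    if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$f" 2>/dev/null; then
      echo "well-formed: $f"
    else
      echo "broken: $f"
    fi
  done
}
# e.g. check_xml $HADOOP_PREFIX/etc/hadoop/*-site.xml
```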
Starting Hadoop

Format the NameNode
[hadoop@ruo91 ~]$ hdfs namenode -format
13/10/17 23:18:29 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ruo91.yongbok.net/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = /home/hadoop/2.2.0/etc/hadoop:/home/hadoop/2.2.0/share/hadoop/common/lib/jettison-
... (long classpath output truncated) ...
2.2.0.jar:/home/hadoop/2.2.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
13/10/17 23:18:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'.
13/10/17 23:18:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-060a7239-4bac-4801-a720-9702eb60c341
13/10/17 23:18:31 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/10/17 23:18:31 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/10/17 23:18:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/10/17 23:18:31 INFO util.GSet: Computing capacity for map BlocksMap
13/10/17 23:18:31 INFO util.GSet: VM type       = 64-bit
13/10/17 23:18:31 INFO util.GSet: 2.0% max memory = 966.7 MB
13/10/17 23:18:31 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/10/17 23:18:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
13/10/17 23:18:31 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/10/17 23:18:31 INFO blockmanagement.BlockManager: maxReplication             = 512
13/10/17 23:18:31 INFO blockmanagement.BlockManager: minReplication             = 1
13/10/17 23:18:31 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/10/17 23:18:31 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/10/17 23:18:31 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/10/17 23:18:31 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/10/17 23:18:31 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
13/10/17 23:18:31 INFO namenode.FSNamesystem: supergroup          = supergroup
13/10/17 23:18:31 INFO namenode.FSNamesystem: isPermissionEnabled = false
13/10/17 23:18:31 INFO namenode.FSNamesystem: HA Enabled: false
13/10/17 23:18:31 INFO namenode.FSNamesystem: Append Enabled: true
13/10/17 23:18:32 INFO util.GSet: Computing capacity for map INodeMap
13/10/17 23:18:32 INFO util.GSet: VM type       = 64-bit
13/10/17 23:18:32 INFO util.GSet: 1.0% max memory = 966.7 MB
13/10/17 23:18:32 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/10/17 23:18:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/10/17 23:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/10/17 23:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/10/17 23:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/10/17 23:18:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/10/17 23:18:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/10/17 23:18:32 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/10/17 23:18:32 INFO util.GSet: VM type       = 64-bit
13/10/17 23:18:32 INFO util.GSet: 0.029999999329447746% max memory = 966.7 MB
13/10/17 23:18:32 INFO util.GSet: capacity      = 2^15 = 32768 entries
13/10/17 23:18:32 INFO common.Storage: Storage directory /home/hadoop/dfs/name has been successfully formatted.
13/10/17 23:18:32 INFO namenode.FSImage: Saving image file /home/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
13/10/17 23:18:32 INFO namenode.FSImage: Image file /home/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 198 bytes saved in 0 seconds.
13/10/17 23:18:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/10/17 23:18:32 INFO util.ExitUtil: Exiting with status 0
13/10/17 23:18:32 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ruo91.yongbok.net/127.0.0.1
************************************************************/
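When scripting this step, the "has been successfully formatted" line in the log above is the marker worth checking rather than eyeballing the full output. A sketch that tests a captured log file:

```shell
# Check a captured namenode-format log for the success marker.
format_ok() {
  if grep -q "successfully formatted" "$1"; then
    echo "format ok"
  else
    echo "format failed"
    return 1
  fi
}
# e.g. hdfs namenode -format > format.log 2>&1; format_ok format.log
```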
Start Hadoop (NameNode, DataNode, YARN)

[hadoop@ruo91 ~]$ start-all.sh

(start-all.sh still works in 2.2 but is deprecated; running start-dfs.sh followed by start-yarn.sh does the same thing.)

Verify the daemons with jps:

[hadoop@localhost ~]$ jps

For this pseudo-distributed setup you should see NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and Jps itself.
Hadoop web UIs

YARN ResourceManager: http://localhost:8088
HDFS NameNode: http://localhost:50070






3 comments

lzw (2013-12-17 20:03:04):
Very nice; the steps are detailed.

浵琂潕誋 (2014-8-7 18:57:27):
I tried many people's blog posts without success, but with this one it finally worked. A thumbs-up for the OP.

chenny (2015-2-25 15:10:58):
A useful reference; learned from it.
