
Hadoop installed successfully but jps shows no processes: a summary of the various causes

During the Hadoop installation we have already fixed the common errors; the tricky case is when nothing reports any error at all, which leaves us scratching our heads. Here is a summary of the causes:
1. The hostname is inconsistent with the configuration files
2. The hostname contains special characters

Think about it:
What causes this phenomenon?






Replies (5)

pig2 replied on 2014-3-13 19:23:48 (last edited by pig2 on 2014-5-22 01:59)
1. The hostname is inconsistent with the configuration files

Startup appears to succeed, but jps does not show the 5 processes:
hadoop@node1:~/hadoop$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-node1.out
node3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node3.out
node2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-node2.out
node1: starting secondarynamenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-node1.out
starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-node1.out
node3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node3.out
node2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-node2.out
hadoop@node1:~/hadoop$ jps
16993 SecondaryNameNode
17210 Jps
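For contrast, a sketch of what a healthy jps would show on the master in this layout (the PIDs here are made up; DataNode and TaskTracker run on node2/node3, not on node1):

hadoop@node1:~/hadoop$ jps
16993 SecondaryNameNode
17034 NameNode
17102 JobTracker
17210 Jps

Across the whole cluster, the 5 daemons are NameNode, SecondaryNameNode, JobTracker, DataNode, and TaskTracker.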
The configuration and the logs are as follows:
hadoop@node1:~/hadoop/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/tmp</value>
<description></description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://masternode:54310</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
<description></description>
</property>
</configuration>
hadoop@node1:~/hadoop/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>masternode:54311</value>
<description></description>
</property>
</configuration>

The jobtracker log is as follows:

2006-03-11 23:54:44,348 FATAL org.apache.hadoop.mapred.JobTracker: java.net.BindException: Problem binding to masternode/122.72.28.136:54311 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1450)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:258)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:250)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:245)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4164)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:44,353 INFO org.apache.hadoop.mapred.JobTracker: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down JobTracker at node1/192.168.10.237
************************************************************/

The namenode log is as follows:

2006-03-11 23:54:37,009 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to masternode/122.72.28.136:54310 : Cannot assign requested address
at org.apache.hadoop.ipc.Server.bind(Server.java:218)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:289)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1443)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:343)
at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:324)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:284)
at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:45)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:331)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:305)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Caused by: java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:216)
... 12 more
2006-03-11 23:54:37,010 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.10.237
************************************************************/
The hosts setup is as follows:
hadoop@node1:~/hadoop/conf$ cat masters
node1
hadoop@node1:~/hadoop/conf$ cat slaves
node2
node3
hadoop@node1:~/hadoop/conf$ cat /etc/hosts
127.0.0.1       localhost
192.168.10.237  node1.node1     node1
192.168.10.238  node2
192.168.10.239  node3
Cause: the hostname is inconsistent with the configuration files. fs.default.name (hdfs://masternode:54310) and mapred.job.tracker (masternode:54311) both point at masternode, but the machine's actual hostname is node1 and masternode appears nowhere in /etc/hosts. The name therefore resolved to an outside address (122.72.28.136) that belongs to no local interface, so the NameNode and JobTracker could not bind to it.

Solution:

1. Change the hostname to masternode. For the details, see the post "ubuntu修改hostname" (changing the hostname on Ubuntu). After the change the result looks like this:

hadoop@node1:~/hadoop/conf$ cat masters
masternode
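A minimal sketch of the change itself (the commands are assumptions about your environment; on older Ubuntu releases without hostnamectl, edit /etc/hostname and reboot instead):

# run as root on the master node
hostnamectl set-hostname masternode      # newer Ubuntu (15.04+)
# or: echo masternode > /etc/hostname && reboot

# also make sure /etc/hosts maps the new name to the node's real LAN address,
# so masternode no longer resolves to an outside address:
# 192.168.10.237  masternode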

---------------------------------------------------------------------------------------------------------------------------

2. Hostname contains special characters


(This summary comes from the official About云 QQ group 39327136.)
Worked out jointly by 【云将】hadoop jackysparrow(61214484) and 【云神】Karmic Koala(2448096355).

The error is as follows:

start-all.sh reports no errors, but neither the master nor the slaves actually start any services.

[screenshot: jps.jpg — the jps output]

[screenshot: start.jpg — the start-all.sh output]

Configuration file core-site.xml:

[screenshot: core.jpg — the core-site.xml contents]

Cause of the error:
The hostname contains a dot (".").

Solution:

Change the hostname here as well; try not to include special characters in it. After the change the result looks like this (you can of course choose a different name):

hadoop@node1:~/hadoop/conf$ cat masters
centOS64
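A quick sanity check you can run on each node — a sketch whose regex allows only letters, digits, and interior hyphens (the safe character set for hostnames):

h=$(hostname)
if echo "$h" | grep -Eq '^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'; then
    echo "hostname '$h' looks safe"
else
    echo "hostname '$h' contains dots or other special characters; rename it"
fi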
---------------------------------------------------------------------------------------------------------------------------------------------------

3. Errors in the XML configuration files


jps does not show the 5 processes, even though the namenode has been formatted; only one process shows up and everything else just reports errors.
The most likely cause is a malformed XML file: either the tags are not matched, or the tags match but stray characters crept in. Let's reproduce a few of these errors:
starting namenode, logging to /usr/hadoop/libexec/../logs/hadoop-root-namenode-aboutyun.out
[Fatal Error] hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: [Fatal Error] hdfs-site.xml:14:16: The end-tag for element type "configuration" must end with a '>' delimiter.
localhost: secondarynamenode running as process 3170. Stop it first.
jobtracker running as process 3253. Stop it first.
localhost: starting tasktracker, logging to /usr/hadoop/libexec/../logs/hadoop-root-tasktracker-aboutyun.out

The errors above are caused by unmatched tags.

Error 2:


[Fatal Error] hdfs-site.xml:14:17: Content is not allowed in trailing section.
localhost: starting datanode, logging to /usr/hadoop/libexec/../logs/hadoop-root-datanode-aboutyun.out
localhost: [Fatal Error] hdfs-site.xml:14:17: Content is not allowed in trailing section.

This error is not caused by unmatched tags but by extra characters outside the XML proper: in this case a tiny dash that had slipped in between the two angle brackets, i.e. trailing content after the closing tag.
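Rather than hunting for these mistakes by eye, you can validate every config file before starting the cluster. A sketch, assuming xmllint (from libxml2) is installed and the configs live under $HADOOP_HOME/conf:

# xmllint reports the exact line/column of unmatched tags or trailing content
for f in $HADOOP_HOME/conf/*-site.xml; do
    xmllint --noout "$f" && echo "$f: OK"
done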

4. Pay attention to permissions:

If you created a dedicated user for the Hadoop cluster, make sure the permissions on Hadoop's temp directory are consistent with that user; you can inspect file permissions with the 'll' (ls -l) command, as in the sketch below.
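A minimal sketch of the check and the fix, assuming the cluster user is hadoop and hadoop.tmp.dir is /home/hadoop/hadoop/tmp as in the configuration above:

ls -l /home/hadoop/hadoop                            # 'll' is usually an alias for ls -l
sudo chown -R hadoop:hadoop /home/hadoop/hadoop/tmp  # give the cluster user ownership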

---------------------------------------------------------------------------------------------------------------------------------------------------

5. JDK not installed

This error is not very common.

[screenshot: jdk.jpg — the java -version error]

If you don't know how to install it, see the post "linux(ubuntu)安装Java jdk环境变量设置及小程序测试" (installing the Java JDK on Linux/Ubuntu, setting the environment variables, and testing with a small program).

The steps in brief:
1. Download a JDK that matches your system (JDKs come in Linux and Windows flavors, 32-bit and 64-bit).
2. Configure the environment variables.
3. Run java -version to check the version; if you get the error in the screenshot above, the environment variables are wrong.


Things to note:
1. You do not need to uninstall the previous version first.
2. What you add to PATH is the bin directory, not the JDK root directory; see the sketch below.
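A minimal sketch of step 2, appended to ~/.bashrc (the install path is an assumption; use wherever you unpacked the JDK):

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0   # assumed install location
export PATH=$JAVA_HOME/bin:$PATH         # note: the bin directory, not the JDK root

# then reload and verify
source ~/.bashrc
java -version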

If you repost this, please credit the source: http://www.aboutyun.com/thread-7118-1-1.html



Elaine_Zhang replied on 2014-10-24 12:02:43
There really are a lot of problems with installing and starting Hadoop; I have no idea how to deal with them all.

mac replied on 2015-11-30 13:33:57
On my Hadoop cluster the jps command is very slow before it shows anything. What could be the reason?

神雕爱大侠 replied on 2017-1-15 14:12:30
Hadoop on Ubuntu runs with localhost, but fails with the IP or the hostname.
I am using an Aliyun server: Ubuntu 16.04.1 (64-bit), Hadoop 1.2.1, JDK 1.8. When core-site.xml and mapred-site.xml are configured with localhost, everything runs fine.

But as soon as I change it to the hostname, startup fails.

The log reports that the address cannot be bound. I checked around, and port 9000 is not occupied either.
I have also changed the hostname; the hosts file is configured as below (screenshot missing), and passwordless ssh login to zxy works.
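(For reference, the checks described above can be reproduced roughly like this, a sketch assuming the hostname is zxy:)

getent hosts zxy           # which address does the hostname resolve to?
netstat -tlnp | grep 9000  # is anything already listening on port 9000?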
Could one of the experts offer some guidance?



fengfengda replied on 2017-8-21 18:21:53
After Hadoop starts, jps succeeds the first time and the web page on port 50070 can also be reached, but after one refresh it can no longer be accessed, and then jps reports an error.

[screenshot: QQ截图20170821170933.png]