2013-08-31 10:29:31,241 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.FileNotFoundException: /opt/data/hadoop/hdfs/name/current/VERSION (Permission denied)
	at java.io.RandomAccessFile.open(Native Method)
	at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:237)
	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.read(Storage.java:233)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:418)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:110)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:291)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:270)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:271)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:303)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:433)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:421)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1359)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
Resolution: the NameNode metadata files under /opt/data/hadoop/hdfs/name were created as root, so the hadoop user that runs the daemons cannot read VERSION. Fix the ownership:
[root@master bin]# cd /opt/data/hadoop/hdfs/name/current/
[root@master current]# ls
edits fsimage fstime VERSION
[root@master current]# ll
total 32
-rw-r--r-- 1 root root 4 Aug 31 10:07 edits
-rw-r--r-- 1 root root 1753 Aug 31 10:07 fsimage
-rw-r--r-- 1 root root 8 Aug 31 10:07 fstime
-rw-r--r-- 1 root root 101 Aug 31 10:07 VERSION
[root@master current]# pwd
/opt/data/hadoop/hdfs/name/current
[root@master current]# chown hadoop:hadoop -R /opt/data/hadoop/hdfs/name/current/
Switch to the hadoop user and run start-all.sh; the problem is resolved.
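The same fix can be generalized: list anything under the metadata directory that is not owned by the daemon user, then reown the tree. A minimal sketch; `fix_ownership` is a hypothetical helper, and the path and user in the usage line come from the session above (adjust to your dfs.name.dir and daemon account).

```shell
# Hypothetical helper generalizing the chown fix above.
# $1 = directory to check, $2 = owner spec (e.g. "hadoop" or "hadoop:hadoop").
fix_ownership() {
  dir=$1
  owner=$2
  # Any output here names a file the daemon user could not read.
  find "$dir" ! -user "${owner%%:*}" -print
  # Reown the whole tree; changing the owner requires root.
  chown -R "$owner" "$dir"
}

# Usage (as root):
#   fix_ownership /opt/data/hadoop/hdfs/name hadoop:hadoop
```

Running the `find` again afterwards should print nothing; an empty listing means every file is readable by the daemons.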
[hadoop@master bin]$ cd /opt/modules/hadoop/hadoop-0.21.0/bin
[hadoop@master bin]$ ls
hadoop            hadoop-daemons.sh  mapred            slaves.sh          start-dfs.sh     stop-balancer.sh
hadoop-config.sh  hdfs               mapred-config.sh  start-all.sh       start-mapred.sh  stop-dfs.sh
hadoop-daemon.sh  hdfs-config.sh     rcc               start-balancer.sh  stop-all.sh      stop-mapred.sh
[hadoop@master bin]$ sh start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /opt/modules/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-namenode-master.out
datanode2: datanode running as process 5909. Stop it first.
master: datanode running as process 10204. Stop it first.
datanode1: datanode running as process 5805. Stop it first.
master: starting secondarynamenode, logging to /opt/modules/hadoop/hadoop-0.21.0/bin/../logs/hadoop-hadoop-secondarynamenode-master.out
jobtracker running as process 10450. Stop it first.
datanode2: tasktracker running as process 6007. Stop it first.
master: tasktracker running as process 10602. Stop it first.
datanode1: tasktracker running as process 5903. Stop it first.
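The "Stop it first" lines mean daemons from the earlier run are still alive, so the restart only started what was missing. One hedged way to script a clean restart check; `hadoop_stopped` is a hypothetical helper, and the daemon class names it greps for are the ones `jps` prints for the processes seen in this walkthrough.

```shell
# Stop everything first, then verify nothing is left before starting again:
#
#   stop-all.sh     # deprecated wrapper; stop-dfs.sh && stop-mapred.sh also work
#   jps             # should list only "Jps" itself
#   start-all.sh
#
# Hypothetical helper: succeeds only when the given `jps` output contains
# none of the Hadoop daemon class names from the session above.
hadoop_stopped() {
  ! printf '%s\n' "$1" \
    | grep -qE 'NameNode|DataNode|SecondaryNameNode|JobTracker|TaskTracker'
}
```

For example, `hadoop_stopped "$(jps)" && start-all.sh` only restarts the cluster once every old daemon is really gone.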