With blocks in place, the replication mechanism that provides redundancy for fault tolerance and availability can work effectively. In HDFS, to guard against block corruption or the failure of a disk or machine, every block is replicated on several machines (3 copies by default). If a block becomes unavailable, HDFS transparently creates a new replica from a surviving copy, restoring the cluster's data-safety level to what it was before (you can also raise the replication factor to increase safety further).
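As a sketch of how the replication factor can be inspected and changed (this assumes a running Hadoop 1.x cluster and reuses the file path from the listings later in this article; the `command -v` guard is added so the sketch is harmless elsewhere):

```shell
# Default replication factor in HDFS:
REPLICATION=3
# -setrep changes a file's replication; -w waits until the new factor
# is actually reached on the DataNodes. The factor also appears as the
# second column of `hadoop fs -ls` output.
if command -v hadoop >/dev/null 2>&1; then
  hadoop fs -setrep -w "$REPLICATION" /user/zhouhh/README.txt
fi
echo "replication=$REPLICATION"
```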
Hadoop48: NameNode
Hadoop47: SecondaryNameNode, DataNode
Hadoop46: DataNode
<property>
  <name>hadoop.mydata.dir</name>
  <value>/data/zhouhh/myhadoop</value>
  <description>A base for other directories.</description>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.checkpoint.dir</name>
  <value>${hadoop.mydata.dir}/dfs/namesecondary</value>
  <description>Determines where on the local filesystem the DFS secondary
  name node should store the temporary images to merge. If this is a
  comma-delimited list of directories then the image is replicated in all
  of the directories for redundancy.</description>
</property>
<property>
  <name>fs.checkpoint.edits.dir</name>
  <value>${fs.checkpoint.dir}</value>
  <description>Determines where on the local filesystem the DFS secondary
  name node should store the temporary edits to merge. If this is a
  comma-delimited list of directories then the edits are replicated in all
  of the directories for redundancy. Default value is the same as
  fs.checkpoint.dir.</description>
</property>
<property>
  <name>fs.checkpoint.period</name>
  <value>20</value>
  <description>The number of seconds between two periodic checkpoints.
  Default is 3600 seconds.</description>
</property>
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>
  <description>The size of the current edit log (in bytes) that triggers a
  periodic checkpoint even if fs.checkpoint.period hasn't expired.</description>
</property>
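The two trigger settings interact: a checkpoint fires when either fs.checkpoint.period seconds have elapsed or the edit log reaches fs.checkpoint.size bytes, whichever comes first. A quick sanity check of the values above (note the period is shortened to 20 s here for testing; the default is 3600 s):

```shell
# Values from the config above:
CHECKPOINT_PERIOD=20        # seconds (default 3600)
CHECKPOINT_SIZE=67108864    # bytes
# 67108864 bytes is exactly 64 MB:
CHECKPOINT_MB=$((CHECKPOINT_SIZE / 1024 / 1024))
echo "checkpoint every ${CHECKPOINT_PERIOD}s or ${CHECKPOINT_MB}MB of edits"
# prints: checkpoint every 20s or 64MB of edits
```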
[zhouhh@Hadoop48 conf]$ cat masters
Hadoop47
<property>
  <name>dfs.name.dir</name>
  <value>${hadoop.mydata.dir}/dfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
  should store the name table (fsimage). If this is a comma-delimited list
  of directories then the name table is replicated in all of the
  directories, for redundancy. Default value is ${hadoop.tmp.dir}/dfs/name.</description>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>Hadoop47:55090</value>
  <description>The secondary namenode http server address and port.
  If the port is 0 then the server will start on a free port.</description>
</property>
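With dfs.secondary.http.address set, one quick way to confirm the secondary is up is to probe its HTTP port (hostname and port from the config above; the use of `curl` and the guard are assumptions of this sketch, not part of the original walkthrough):

```shell
# Address from dfs.secondary.http.address above:
SECONDARY_URL="http://Hadoop47:55090"
# Guarded probe: stays silent and succeeds whether or not the host is
# reachable, so the sketch can run outside the cluster.
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null "$SECONDARY_URL" || true
fi
echo "$SECONDARY_URL"
```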
[zhouhh@Hadoop48 conf]$ start-all.sh
[zhouhh@Hadoop48 conf]$ jps
9633 Bootstrap
10746 JobTracker
10572 NameNode
10840 Jps
[zhouhh@Hadoop47 ~]$ jps
23157 DataNode
23362 TaskTracker
23460 Jps
23250 SecondaryNameNode
2012-09-25 19:27:54,816 ERROR security.UserGroupInformation - PriviledgedActionException as:zhouhh cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /data/zhouhh/myhadoop/mapred/system. Name node is in safe mode.
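This SafeModeException means the NameNode has not yet received enough block reports and is rejecting writes and deletes. It normally leaves safe mode on its own; the state can be inspected, and waited on, with dfsadmin (Hadoop 1.x commands; guarded so the sketch runs where no cluster is installed):

```shell
SAFEMODE_CMD="hadoop dfsadmin -safemode get"
if command -v hadoop >/dev/null 2>&1; then
  # Report whether the NameNode is currently in safe mode:
  hadoop dfsadmin -safemode get
  # Block until it leaves safe mode on its own:
  hadoop dfsadmin -safemode wait
  # Forcing it out with `-safemode leave` is a last resort.
fi
echo "$SAFEMODE_CMD"
```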
[zhouhh@Hadoop48 hadoop-1.0.3]$ hadoop fs -put README.txt /user/zhouhh/README.txt
[zhouhh@Hadoop48 hadoop-1.0.3]$ hadoop fs -ls .
Found 1 items
-rw-r--r--   2 zhouhh supergroup 1381 2012-09-26 14:03 /user/zhouhh/README.txt
[zhouhh@Hadoop48 hadoop-1.0.3]$ cat test中文.txt
这是测试文件 test001 by zhouhh http://abloz.com 2012.9.26
[zhouhh@Hadoop48 hadoop-1.0.3]$ hadoop fs -put test中文.txt .
[zhouhh@Hadoop48 hadoop-1.0.3]$ hadoop fs -ls .
Found 2 items
-rw-r--r--   2 zhouhh supergroup 1381 2012-09-26 14:03 /user/zhouhh/README.txt
-rw-r--r--   2 zhouhh supergroup 65 2012-09-26 14:10 /user/zhouhh/test中文.txt
[zhouhh@Hadoop48 ~]$ hadoop fs -cat test中文.txt
这是测试文件 test001 by zhouhh http://abloz.com 2012.9.26
[zhouhh@Hadoop48 ~]$ jps
9633 Bootstrap
23006 Jps
19691 NameNode
19867 JobTracker
[zhouhh@Hadoop48 ~]$ kill -9 19691
[zhouhh@Hadoop48 ~]$ jps
9633 Bootstrap
23019 Jps
19867 JobTracker
[zhouhh@Hadoop47 hadoop-1.0.3]$ jps
1716 DataNode
3825 Jps
1935 TaskTracker
1824 SecondaryNameNode
[zhouhh@Hadoop48 ~]$ cd /data/zhouhh/myhadoop/dfs/name/
[zhouhh@Hadoop48 name]$ ls
current image in_use.lock previous.checkpoint
[zhouhh@Hadoop48 name]$ cd ..
[zhouhh@Hadoop47 hadoop-1.0.3]$ cd /data/zhouhh/myhadoop/dfs/
[zhouhh@Hadoop47 dfs]$ ls
data namesecondary
[zhouhh@Hadoop47 dfs]$ cd namesecondary/
[zhouhh@Hadoop47 namesecondary]$ ls
current image in_use.lock
[zhouhh@Hadoop47 namesecondary]$ cd ..
[zhouhh@Hadoop47 dfs]$ scp sec.tar.gz Hadoop48:/data/zhouhh/myhadoop/dfs/
sec.tar.gz
[zhouhh@Hadoop48 dfs]$ ls
name1 sec.tar.gz
[zhouhh@Hadoop48 dfs]$ tar zxvf sec.tar.gz
namesecondary/
namesecondary/current/
namesecondary/current/VERSION
namesecondary/current/fsimage
namesecondary/current/edits
namesecondary/current/fstime
namesecondary/image/
namesecondary/image/fsimage
namesecondary/in_use.lock
[zhouhh@Hadoop48 dfs]$ ls
name1 namesecondary sec.tar.gz
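Untarring the secondary's checkpoint into the dfs directory is one recovery path; Hadoop 1.x also provides `hadoop namenode -importCheckpoint`, which loads the image from fs.checkpoint.dir into an empty dfs.name.dir. A guarded sketch, reusing this article's paths (run it on the NameNode host with the cluster's conf in place):

```shell
# dfs.name.dir from this article's configuration:
NAME_DIR=/data/zhouhh/myhadoop/dfs/name
if command -v hadoop >/dev/null 2>&1; then
  # -importCheckpoint requires dfs.name.dir to exist and be empty;
  # it then copies the checkpoint from fs.checkpoint.dir into it.
  mkdir -p "$NAME_DIR"
  hadoop namenode -importCheckpoint
fi
echo "$NAME_DIR"
```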
[zhouhh@Hadoop48 dfs]$ start-all.sh
[zhouhh@Hadoop48 dfs]$ jps
23940 Jps
9633 Bootstrap
19867 JobTracker
23791 NameNode
[zhouhh@Hadoop48 dfs]$ hadoop fs -ls .
Found 2 items
-rw-r--r--   2 zhouhh supergroup 1381 2012-09-26 14:03 /user/zhouhh/README.txt
-rw-r--r--   2 zhouhh supergroup 65 2012-09-26 14:10 /user/zhouhh/test中文.txt
[zhouhh@Hadoop48 dfs]$ hadoop fs -cat test中文.txt
这是测试文件 test001 by zhouhh http://abloz.com 2012.9.26
[zhouhh@Hadoop48 dfs]$ hadoop fsck /user/zhouhh
FSCK started by zhouhh from /192.168.10.48 for path /user/zhouhh at Wed Sep 26 14:42:31 CST 2012
..Status: HEALTHY