
langke93's space: https://www.aboutyun.com/?1415


Hadoop 2.x fsimage and edits Merging: Source Code Analysis and Configuration Notes

Posted 2015-01-05 11:44

To keep the Standby NN's state synchronized with the Active NN, i.e. to keep their metadata consistent, both of them communicate with the JournalNode daemons. Whenever the Active NN makes a change to the namespace, it must persist the change to more than half of the JournalNodes (durably stored as an edit log). The Standby NN watches the edit log for changes, reads the new edits from the JNs, and applies them to its own in-memory namespace. When the Active NN fails, the Standby NN makes sure it has read all of the edits from the JNs before promoting itself to the Active state; reading all of the edits guarantees that its namespace state is fully synchronized with the Active NN's before the failover takes place.
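The important rule here is the quorum: an edit only counts as committed once a majority of the JournalNodes have acknowledged it, which is why a JN ensemble is normally deployed with an odd number of nodes. The following minimal, self-contained sketch (not Hadoop code; the class and method names are invented for illustration) shows the arithmetic behind that majority requirement:

// Illustrative only -- not from the Hadoop source tree.
public class QuorumRule {
  // With n JournalNodes an edit is committed once floor(n/2) + 1 of them
  // have acknowledged the write, so up to floor((n-1)/2) JNs may be down.
  static int requiredAcks(int journalNodeCount) {
    return journalNodeCount / 2 + 1;
  }

  public static void main(String[] args) {
    for (int n : new int[] {3, 5, 7}) {
      System.out.println(n + " JournalNodes -> need " + requiredAcks(n)
          + " acks, tolerate " + (n - requiredAcks(n)) + " failures");
    }
  }
}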

How does this mechanism actually merge fsimage and edits? On the Standby NameNode a thread called CheckpointerThread runs continuously. It invokes the doWork() method of the StandbyCheckpointer class, and doWork() checks every Math.min(checkpointCheckPeriod, checkpointPeriod) seconds whether a merge should be performed. The relevant code is as follows:
try {
  Thread.sleep(1000 * checkpointConf.getCheckPeriod());
} catch (InterruptedException ie) {
}

public long getCheckPeriod() {
  return Math.min(checkpointCheckPeriod, checkpointPeriod);
}

checkpointCheckPeriod = conf.getLong(
    DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_KEY,
    DFS_NAMENODE_CHECKPOINT_CHECK_PERIOD_DEFAULT);

checkpointPeriod = conf.getLong(DFS_NAMENODE_CHECKPOINT_PERIOD_KEY,
    DFS_NAMENODE_CHECKPOINT_PERIOD_DEFAULT);
The checkpointCheckPeriod and checkpointPeriod variables above are obtained from the following two properties in hdfs-site.xml:
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value>
  <description>The number of seconds between two periodic checkpoints.
  </description>
</property>

<property>
  <name>dfs.namenode.checkpoint.check.period</name>
  <value>60</value>
  <description>The SecondaryNameNode and CheckpointNode will poll the NameNode
    every 'dfs.namenode.checkpoint.check.period' seconds to query the number
    of uncheckpointed transactions.
  </description>
</property>
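With the example values above (dfs.namenode.checkpoint.period = 3600 seconds, dfs.namenode.checkpoint.check.period = 60 seconds), getCheckPeriod() returns Math.min(60, 3600) = 60, so doWork() wakes up once a minute to evaluate the checkpoint conditions. A minimal stand-alone sketch of that calculation (the class and variable names here are mine, not Hadoop's):

// Illustrative sketch of the interval computation, assuming the example
// hdfs-site.xml values shown above.
public class CheckPeriodExample {
  public static void main(String[] args) {
    long checkpointPeriod = 3600;     // dfs.namenode.checkpoint.period
    long checkpointCheckPeriod = 60;  // dfs.namenode.checkpoint.check.period

    // Same rule as StandbyCheckpointer's getCheckPeriod(): take the smaller value.
    long sleepSeconds = Math.min(checkpointCheckPeriod, checkpointPeriod);
    System.out.println("doWork() sleeps " + sleepSeconds + " s between checks");  // 60
  }
}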

A checkpoint is performed when either of the following two conditions is met:

boolean needCheckpoint = false;
if (uncheckpointed >= checkpointConf.getTxnCount()) {
  LOG.info("Triggering checkpoint because there have been " +
      uncheckpointed + " txns since the last checkpoint, which " +
      "exceeds the configured threshold " +
      checkpointConf.getTxnCount());
  needCheckpoint = true;
} else if (secsSinceLast >= checkpointConf.getPeriod()) {
  LOG.info("Triggering checkpoint because it has been " +
      secsSinceLast + " seconds since the last checkpoint, which " +
      "exceeds the configured interval " + checkpointConf.getPeriod());
  needCheckpoint = true;
}
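In other words, a checkpoint is triggered either by volume (the number of un-checkpointed transactions reaches checkpointConf.getTxnCount(), i.e. dfs.namenode.checkpoint.txns, which defaults to 1,000,000 in Hadoop 2.x) or by time (checkpointConf.getPeriod() seconds, i.e. dfs.namenode.checkpoint.period, 3,600 by default, have elapsed since the last checkpoint). A simplified stand-alone restatement of that decision, assuming those default thresholds (the class and method names are invented):

// Simplified restatement of the trigger logic above; the thresholds assume
// the Hadoop 2.x defaults (dfs.namenode.checkpoint.txns = 1000000,
// dfs.namenode.checkpoint.period = 3600).
public class CheckpointTrigger {
  static boolean needCheckpoint(long uncheckpointedTxns, long secsSinceLast) {
    long txnThreshold = 1000000L;  // dfs.namenode.checkpoint.txns
    long periodSecs = 3600L;       // dfs.namenode.checkpoint.period
    return uncheckpointedTxns >= txnThreshold || secsSinceLast >= periodSecs;
  }

  public static void main(String[] args) {
    System.out.println(needCheckpoint(1200000, 120));  // true: txn threshold reached
    System.out.println(needCheckpoint(500, 4000));     // true: period elapsed
    System.out.println(needCheckpoint(500, 120));      // false: neither condition met
  }
}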

When needCheckpoint is set to true, the doWork() method of StandbyCheckpointer calls doCheckpoint() to actually perform the checkpoint. Once the fsimage and edits have been merged, the Standby uploads the merged fsimage to the Active NameNode; after the Active NameNode has finished downloading the merged fsimage, it deletes its old fsimage and purges its old edits files. The steps can be summarized as follows:
(1) Once HA is configured, all client update operations are written to the shared edits directory on the JournalNodes, which is configured by the following properties:
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://XXXX/mycluster</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/export1/hadoop2x/dfs/journal</value>
</property>

(2) The Active NameNode and the Standby NameNode synchronize edits from the JournalNodes' shared edits directory into their own edits directories;
(3) The StandbyCheckpointer class on the Standby NameNode periodically checks whether the merge conditions are met, and merges the fsimage and edits files when they are;
(4) After the merge finishes, the StandbyCheckpointer on the Standby NameNode uploads the merged fsimage to the corresponding directory on the Active NameNode;
(5) After the Active NameNode receives the latest fsimage, it cleans up its old fsimage and edits files;
(6) Through the steps above, fsimage and edits are merged; because of the HA mechanism, both the Standby NameNode and the Active NameNode end up with the latest fsimage and edits files (whereas the SecondaryNameNode's fsimage and edits in Hadoop 1.x were not the latest).
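Putting the pieces together, the Standby's checkpoint loop boils down to: sleep for getCheckPeriod() seconds, evaluate the two trigger conditions, and call doCheckpoint() when either one holds. A heavily simplified, self-contained sketch of that loop (the Conf class and the stubbed methods are placeholders standing in for CheckpointConf and the real StandbyCheckpointer internals):

// Heavily simplified sketch of the checkpoint loop described above; the Conf
// class and the stubbed methods are placeholders, not Hadoop APIs.
public class StandbyCheckpointLoopSketch {
  static class Conf {
    long getCheckPeriod() { return 60; }      // min(check.period, period)
    long getTxnCount()    { return 1000000; }
    long getPeriod()      { return 3600; }
  }

  private final Conf checkpointConf = new Conf();
  private volatile boolean shouldRun = true;

  // Placeholders for the real bookkeeping and checkpoint work.
  long countUncheckpointedTxns()    { return 0; }
  long secondsSinceLastCheckpoint() { return 0; }
  void doCheckpoint()               { /* merge fsimage + edits, upload to Active NN */ }

  void doWork() throws InterruptedException {
    while (shouldRun) {
      Thread.sleep(1000 * checkpointConf.getCheckPeriod());
      long uncheckpointed = countUncheckpointedTxns();
      long secsSinceLast = secondsSinceLastCheckpoint();
      if (uncheckpointed >= checkpointConf.getTxnCount()
          || secsSinceLast >= checkpointConf.getPeriod()) {
        doCheckpoint();  // steps (3)-(5) above
      }
    }
  }
}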







