
[Help] Hadoop 2.x distributed deployment: neither NameNode will start

smartleon, posted 2015-7-20 18:50:21
Scenario: Hadoop 2.5.2 distributed deployment, set up following the Day 5 tutorial of the 《传智播客7天视频》 course. After deployment finished, I started HDFS from the master node, but the NameNode service failed to come up on both NameNode hosts of the nameservice (the very first time, one of the two did start; after that, neither would start). The DataNode services all start successfully and I can upload data, and MapReduce works normally. I'd appreciate help from anyone experienced; the error log is as follows:

2015-07-20 17:35:12,528 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopcs01/192.168.1.201
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.5.2
STARTUP_MSG:   classpath = /iwisdom/hadoop-2.5.2/etc/hadoop:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-configuration-1.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hadoop-annotations-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jets3t-0.9.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-math3-3.1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/paranamer-2.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jsch-0.1.42.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hadoop-auth-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-net-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/hamcrest-core-1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/avro-1.7.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-collections-3.2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/activation-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-el-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jsp-api-2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-httpclient-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/junit-4.11.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/xmlenc-0.52.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/stax-api-1.0-2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/mockito-all-1.8.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-digester-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-json-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/httpclient-4.2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/httpcore-4.2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/iwisdom/hadoop-2.5.2/sh
are/hadoop/common/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/zookeeper-3.4.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jettison-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-nfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2-tests.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-el-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-nfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-2.5.2-tests.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jetty-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/servlet-api-2.5.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/activation-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jline-0.9.94.ja
r:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guava-11.0.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-lang-2.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/guice-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-json-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-client-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-codec-1.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/aopalliance-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/commons-cli-1.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jettison-1.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/javax.inject-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-api-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-tests-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/yarn/hadoop-yarn-client-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/hadoop-annotations-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/asm-3.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/xz-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/junit-4.11.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.ja
r:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/guice-3.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/javax.inject-1.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.5.2.jar:/iwisdom/hadoop-2.5.2/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.5.2-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r cc72e9b000545b86b75a61f4835eb86d57bfafc0; compiled by 'jenkins' on 2014-11-14T23:45Z
STARTUP_MSG:   java = 1.7.0_79
************************************************************/
2015-07-20 17:35:12,555 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-07-20 17:35:12,558 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-07-20 17:35:12,986 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-07-20 17:35:13,226 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-20 17:35:13,226 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-07-20 17:35:13,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://ns1
2015-07-20 17:35:13,236 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use ns1 to access this namenode/service.
2015-07-20 17:35:13,422 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-07-20 17:35:13,594 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2015-07-20 17:35:13,594 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://hadoopcs01:50070
2015-07-20 17:35:13,668 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-07-20 17:35:13,673 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-07-20 17:35:13,687 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-07-20 17:35:13,691 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-07-20 17:35:13,691 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-07-20 17:35:13,692 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-07-20 17:35:13,746 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-07-20 17:35:13,748 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-07-20 17:35:13,773 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-07-20 17:35:13,773 INFO org.mortbay.log: jetty-6.1.26
2015-07-20 17:35:14,006 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
2015-07-20 17:35:14,073 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@hadoopcs01:50070
2015-07-20 17:35:14,134 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-20 17:35:14,184 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-07-20 17:35:14,238 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-07-20 17:35:14,238 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-07-20 17:35:14,241 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-07-20 17:35:14,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Jul 20 17:35:14
2015-07-20 17:35:14,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-07-20 17:35:14,246 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,248 INFO org.apache.hadoop.util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
2015-07-20 17:35:14,248 INFO org.apache.hadoop.util.GSet: capacity      = 2^22 = 4194304 entries
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 3
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
2015-07-20 17:35:14,309 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
2015-07-20 17:35:14,310 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2015-07-20 17:35:14,316 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Determined nameservice ID: ns1
2015-07-20 17:35:14,317 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: true
2015-07-20 17:35:14,318 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
2015-07-20 17:35:14,535 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
2015-07-20 17:35:14,587 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-07-20 17:35:14,600 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-07-20 17:35:14,600 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,601 INFO org.apache.hadoop.util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
2015-07-20 17:35:14,601 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-07-20 17:35:14,613 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
2015-07-20 17:35:14,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-07-20 17:35:14,615 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-07-20 17:35:14,619 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-07-20 17:35:14,619 INFO org.apache.hadoop.util.GSet: VM type       = 32-bit
2015-07-20 17:35:14,620 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
2015-07-20 17:35:14,620 INFO org.apache.hadoop.util.GSet: capacity      = 2^16 = 65536 entries
2015-07-20 17:35:14,627 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2015-07-20 17:35:14,628 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2015-07-20 17:35:14,628 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2015-07-20 17:35:14,661 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /iwisdom/hadoop-2.5.2/tmp/dfs/name/in_use.lock acquired by nodename 8525@hadoopcs01
2015-07-20 17:35:14,935 WARN org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory: The property 'ssl.client.truststore.location' has not been set, no TrustStore will be loaded
2015-07-20 17:35:16,310 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:16,311 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:16,311 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,312 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,313 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:17,313 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,315 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:18,330 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,331 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,332 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:19,353 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,333 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,337 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:20,355 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,151 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 6001 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:21,334 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,340 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:21,356 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,153 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 7003 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:22,335 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:22,358 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,519 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 9369 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:24,519 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,520 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:24,521 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,520 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 10370 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:25,520 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,524 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:25,523 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,521 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Waited 11371 ms (timeout=20000 ms) for a response for selectInputStreams. No responses yet.
2015-07-20 17:35:26,598 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,598 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,599 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:26,603 WARN org.apache.hadoop.hdfs.server.namenode.FSEditLog: Unable to determine input streams from QJM to [192.168.1.205:8485, 192.168.1.206:8485, 192.168.1.207:8485]. Skipping.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.1.205:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs05:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.1.206:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs06:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.1.207:8485: Call From hadoopcs01/192.168.1.201 to hadoopcs07:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:260)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1430)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1450)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:636)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:279)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:955)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:700)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1407)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1473)
2015-07-20 17:35:26,605 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: No edit log streams selected.
2015-07-20 17:35:26,655 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 1 INodes.
2015-07-20 17:35:26,835 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2015-07-20 17:35:26,835 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Loaded image for txid 0 from /iwisdom/hadoop-2.5.2/tmp/dfs/name/current/fsimage_0000000000000000000
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries 0 lookups
2015-07-20 17:35:26,842 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading FSImage in 12214 msecs
2015-07-20 17:35:27,207 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: RPC server is binding to hadoopcs01:9000
2015-07-20 17:35:27,232 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2015-07-20 17:35:27,250 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 9000
2015-07-20 17:35:27,295 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemState MBean
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of blocks under construction: 0
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* Leaving safe mode after 13 secs
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2015-07-20 17:35:27,345 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2015-07-20 17:35:27,380 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-07-20 17:35:27,382 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
2015-07-20 17:35:27,384 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: hadoopcs01/192.168.1.201:9000
2015-07-20 17:35:27,384 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for standby state
2015-07-20 17:35:27,389 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on active node at hadoopcs02/192.168.1.202:9000 every 120 seconds.
2015-07-20 17:35:27,395 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
Checkpointing active NN at http://hadoopcs02:50070
Serving checkpoints at http://hadoopcs01:50070
2015-07-20 17:35:28,397 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs07/192.168.1.207:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:28,398 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:28,400 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:29,401 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:29,401 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,403 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs05/192.168.1.205:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,403 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hadoopcs06/192.168.1.206:8485. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-07-20 17:35:30,659 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@14a469d expecting start txid #1
2015-07-20 17:35:30,661 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826
2015-07-20 17:35:30,663 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:30,663 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:31,623 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Edits file http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=1&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826 of size 1048576 edits # 1 loaded in 0 seconds
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@161f335 expecting start txid #2
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826, http://hadoopcs06:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:31,624 INFO org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding stream 'http://hadoopcs07:8480/getJournal?jid=ns1&segmentTxId=36&storageInfo=-57%3A1755095172%3A0%3ACID-e7314263-b26e-4167-a7f6-fd97f27a0826' to transaction ID 1
2015-07-20 17:35:32,519 FATAL org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unknown error encountered while tailing edits. Shutting down standby NN.
java.io.IOException: There appears to be a gap in the edit log.  We expected txid 2, but got txid 36.
        at org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:209)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:137)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:816)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:797)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:230)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:324)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:282)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:299)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:411)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:295)
2015-07-20 17:35:32,534 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-07-20 17:35:32,545 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoopcs01/192.168.1.201
************************************************************/
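
Two distinct failures show up in the log above. First, the NameNode cannot reach any of the three JournalNodes (java.net.ConnectException: Connection refused for hadoopcs05/06/07 on port 8485), so the JournalNode processes were evidently not running when HDFS was started. Second, once edit-log streams do come back, the tailer finds a gap (expected txid 2, got txid 36), which usually indicates that the shared edits on the JournalNodes no longer match this NameNode's fsimage, for example after a re-format that left stale journal segments behind. A quick way to rule out the first problem before starting the NameNodes, as a sketch: it assumes the stock Hadoop 2.5 scripts under $HADOOP_HOME and that the JournalNodes are meant to run on hadoopcs05-07, as the retry lines suggest.

# Run on each of hadoopcs05, hadoopcs06, hadoopcs07:
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode   # start the JournalNode daemon
jps                                                    # a JournalNode process should now be listed
netstat -tlnp | grep 8485                              # the QJM RPC port should be in LISTEN state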


2 replies so far

Alkaloid0515, posted 2015-7-20 22:22:48
OP, did your format actually succeed? Reading the image file is where things went wrong.

smartleon, posted 2015-7-21 09:53:15
Quoting Alkaloid0515, posted 2015-7-20 22:22:
OP, did your format actually succeed? Reading the image file is where things went wrong.

Could you walk me through roughly where in the log you spotted the problem? I'd like to learn from it, thanks for the pointer!
In the end I had no better option: I deleted the configuration on every node, redeployed from scratch following the tutorial, and reformatted. Everything works now.
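
For anyone who lands here later: the edit-log gap in the original log (expected txid 2, got txid 36) is the classic symptom of reformatting a NameNode while old edit segments were still sitting on the JournalNodes, which is consistent with a full wipe fixing it. Below is a rough sketch of the usual clean (re)initialization order for a QJM HA pair; it assumes stock Hadoop 2.x scripts, nn1 on hadoopcs01 and nn2 on hadoopcs02, and note that it destroys all existing HDFS metadata, so only do this on a cluster you can afford to wipe.

# 1. Start the JournalNodes first (here: hadoopcs05, hadoopcs06, hadoopcs07):
$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode

# 2. Format HDFS on ONE NameNode only (nn1) and start it:
$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

# 3. On the other NameNode (nn2), copy the fresh metadata instead of
#    formatting again; formatting both sides independently is what
#    produces mismatched transaction IDs like the gap above:
$HADOOP_HOME/bin/hdfs namenode -bootstrapStandby

# 4. If automatic failover via ZKFC is configured, initialize its znode,
#    then bring up the rest of HDFS:
$HADOOP_HOME/bin/hdfs zkfc -formatZK
$HADOOP_HOME/sbin/start-dfs.sh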
