Hi, while verifying keystone I ran into the following problem:
ERROR: openstack An unexpected error prevented the server from fulfilling your request. (HTTP 500)
Someone asked about this in the comments before, and you said it was a configuration problem and suggested installing the Juno release. It installed fine on an earlier VM; only this VM hits the error, which really puzzles me. Is there anything left to try, or what could be causing it? I'd appreciate your advice.
When starting the Flume + Kafka integration I get:
2015-05-07 17:59:51,612 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)] Unable to start SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@4083633f counterGroup:{ name:null counters:{} } } - Exception follows.
java.lang.NoSuchMethodError: scala.Predef$.augmentString(Ljava/lang/String;)Ljava/lang/String;
at kafka.utils.VerifiableProperties.getShortInRange(VerifiableProperties.scala:85)
at kafka.producer.SyncProducerConfigShared$class.$init$(SyncProducerConfig.scala:53)
at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:52)
at kafka.producer.ProducerConfig.<init>(ProducerConfig.scala:56)
at org.apache.flume.plugins.KafkaSink.start(KafkaSink.java:97)
at org.apache.flume.sink.DefaultSinkProcessor.start(DefaultSinkProcessor.java:46)
at org.apache.flume.SinkRunner.start(SinkRunner.java:79)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2015-05-07 17:59:51,615 (lifecycleSupervisor-1-1) [ERROR - org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:264)] Unsuccessful attempt to shutdown component: {} due to missing dependencies. Please shutdown the agent or disable this component, or the agent will be in an undefined state.
java.lang.NullPointerException
at org.apache.flume.plugins.KafkaSink.stop(KafkaSink.java:170)
at org.apache.flume.sink.DefaultSinkProcessor.stop(DefaultSinkProcessor.java:53)
at org.apache.flume.SinkRunner.stop(SinkRunner.java:115)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:259)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
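A `NoSuchMethodError` on `scala.Predef$.augmentString` is the classic symptom of a Scala binary-version mismatch: the Kafka jar on Flume's classpath was compiled against one Scala release, while the `scala-library` jar actually loaded is a different one, so an erased method signature no longer matches. Without seeing the `lib/` directory I can't say which two versions are mixed, but a quick sanity check is to parse the Scala version out of each jar's filename and confirm there is only one. A minimal sketch (the jar names below are hypothetical, not from the report):

```python
import re

def scala_binary_version(jar_name):
    """Best-effort parse of the Scala binary version from a jar filename,
    e.g. 'kafka_2.8.0-0.8.1.1.jar' -> '2.8', 'scala-library-2.10.4.jar' -> '2.10'.
    Returns None for jars that don't encode a Scala version this way."""
    m = re.search(r"kafka_(\d+\.\d+)", jar_name)
    if m:
        return m.group(1)
    m = re.search(r"scala-library-(\d+\.\d+)", jar_name)
    return m.group(1) if m else None

# Hypothetical lib/ contents: a Kafka build for Scala 2.8 next to a 2.10 runtime.
jars = ["kafka_2.8.0-0.8.1.1.jar", "scala-library-2.10.4.jar", "flume-ng-core-1.5.0.jar"]
versions = {v for v in map(scala_binary_version, jars) if v}
print(sorted(versions))  # ['2.10', '2.8'] -- two versions means a binary-incompatible mix
```

If two versions show up, the fix is to use a Kafka artifact whose `_<scala-version>` suffix matches the single `scala-library` jar you ship with the Flume agent.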
Hi, I have a question I'd like to ask. I set up a local CentOS cluster with Hadoop 2.5.0 + HBase 0.98.5. Whenever I run a MapReduce job jar, it fails with Exception in thread "main" java.io.FileNotFoundException: File does not exist: hdfs://namenode:9000/home/hadoop/hadoop-2.5.0/share/hadoop/common/lib/guava-11.0.2.jar (or another of Hadoop's bundled jars), yet those jars do exist under my local hadoop-2.5.0/share/hadoop/common/lib directory. I also noticed HBase's lib ships guava-12.0.1.jar — is this a version conflict? The strange thing is that the same versions, deployed on physical machines at work, run fine. This has been bugging me for a while; could you suggest how to troubleshoot it? Thanks.
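One plausible cause (a guess, not a diagnosis): the guava 11 vs 12 difference would show up as class conflicts, not `FileNotFoundException`. An HDFS path like `hdfs://namenode:9000/home/hadoop/...` pointing at a *local* install directory usually means a classpath entry was written without a scheme, so Hadoop qualified it against `fs.defaultFS` and then went looking for it on HDFS. A rough model of that resolution (the `fs.defaultFS` value is taken from the error message; the behavior sketch is mine, not Hadoop's actual `Path` code):

```python
from urllib.parse import urlparse

DEFAULT_FS = "hdfs://namenode:9000"  # assumed fs.defaultFS, read off the error message

def qualify(path, default_fs=DEFAULT_FS):
    """Rough model of how a scheme-less path gets qualified against
    fs.defaultFS: a bare local path silently turns into an hdfs:// URI."""
    return path if urlparse(path).scheme else default_fs + path

local_jar = "/home/hadoop/hadoop-2.5.0/share/hadoop/common/lib/guava-11.0.2.jar"
print(qualify(local_jar))              # hdfs://namenode:9000/home/... (the reported path)
print(qualify("file://" + local_jar))  # unchanged: file:// pins it to the local filesystem
```

So it's worth diffing the client-side `*-site.xml` files (and any `mapreduce.application.classpath` / tmpjars entries) between the VM cluster and the working physical cluster, looking for local paths that lack a `file://` scheme.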
Hello! Could you please take a look at this exception for me? The cluster was fine for the past few days, but today everything suddenly died and the NameNode won't start.
2015-03-11 17:05:59,334 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-03-11 17:05:59,339 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-03-11 17:05:59,831 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-03-11 17:05:59,925 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-03-11 17:05:59,926 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-03-11 17:05:59,928 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://10.1.20.241:8020
2015-03-11 17:05:59,929 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use 10.1.20.241:8020 to access this namenode/service.
2015-03-11 17:06:00,523 INFO org.apache.hadoop.hdfs.DFSUtil: Starting web server as: ${dfs.web.authentication.kerberos.principal}
2015-03-11 17:06:00,523 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://namenode:50070
2015-03-11 17:06:00,760 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-03-11 17:06:00,820 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-03-11 17:06:00,831 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-03-11 17:06:00,831 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-03-11 17:06:00,831 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-03-11 17:06:00,913 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-03-11 17:06:00,916 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-03-11 17:06:00,952 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-03-11 17:06:02,034 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2015-03-11 17:06:02,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2015-03-11 17:06:02,035 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2015-03-11 17:06:02,036 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.io.IOException: Problem in starting http server. Server handlers failed
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:839)
at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:695)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:754)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:738)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1427)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1493)
2015-03-11 17:06:02,042 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-03-11 17:06:02,045 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
This is the run log!
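A hint worth checking in the log itself: Jetty bound to port 50070 successfully, so this is not a port clash; the failure comes from the HTTP *handlers*. Just before that, the NameNode logs `Starting web server as: ${dfs.web.authentication.kerberos.principal}` — a literal, unexpanded placeholder — while installing `org.apache.hadoop.hdfs.web.AuthFilter`. A WebHDFS authentication filter configured with a bogus principal failing to initialize would produce exactly "Server handlers failed". A hypothetical hdfs-site.xml fragment to review (assuming a simple non-Kerberos cluster; property names are the standard HDFS ones, the values are illustrative):

```xml
<!-- hdfs-site.xml (sketch): for a non-Kerberos setup, WebHDFS can stay on,
     but auth properties copied from a template must be removed or filled in. -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<!-- If a template left this as the literal string
     ${dfs.web.authentication.kerberos.principal}, delete the property (or set a
     real principal/keytab); an unresolved placeholder here can break the
     NameNode's HTTP handlers at startup. -->
```

Since the cluster "suddenly" broke after days of working, it's also worth asking what changed on disk that day — a re-pushed config file is the usual suspect.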