
Help needed: can anyone tell me what's going wrong here?

xiaohao posted on 2014-10-9 10:22:43
I've just started learning HBase and wrote a simple program by following the documentation, but I get the error below. Can someone point out where it's going wrong?

14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.3-1240972, built on 02/06/2012 10:48 GMT
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:host.name=2013-1016-1614
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.version=1.6.0_45
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc.
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.home=C:\Program Files\Java\jre6
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.class.path=F:\软件\HadoopWorkPlat\workplace\.metadata\.plugins\org.apache.hadoop.eclipse\hadoop-conf-7063833016097855588;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\bin;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\hadoop-core-1.0.0.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\hbase-0.92.1.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\log4j-1.2.15.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\log4j-1.2.16.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\slf4j-api-1.5.8.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\slf4j-log4j12-1.5.8.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\zookeeper-3.4.3.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\commons-codec-1.4.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\commons-configuration-1.6.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\commons-lang-2.5.jar;F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01\lib\commons-logging-1.1.1.jar
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.library.path=C:\Program Files\Java\jre6\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:/Program Files/Java/jre1.8.0_20/bin/client;C:/Program Files/Java/jre1.8.0_20/bin;C:/Program Files/Java/jre1.8.0_20/lib/i386;C:\ProgramData\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\Microsoft SQL Server\100\Tools\Binn\;C:\Program Files\Microsoft SQL Server\100\DTS\Binn\;F:\软件\HadoopWorkPlat\eclipse;;.
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:os.name=Windows 7
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:os.arch=x86
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:os.version=6.1
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:user.name=root
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:user.home=C:\Users\Administrator
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Client environment:user.dir=F:\软件\HadoopWorkPlat\workplace\HBASE_pro_01
14/10/09 10:19:21 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.93.128:2181 sessionTimeout=180000 watcher=hconnection
14/10/09 10:19:22 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.93.128:2181
14/10/09 10:19:22 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 12136@2013-1016-1614
14/10/09 10:19:31 WARN client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: 无法定位登录配置 ("unable to locate a login configuration") occurred when trying to find JAAS configuration.
14/10/09 10:19:31 INFO client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
14/10/09 10:19:31 INFO zookeeper.ClientCnxn: Socket connection established to 192.168.93.128/192.168.93.128:2181, initiating session
14/10/09 10:19:31 INFO zookeeper.ClientCnxn: Session establishment complete on server 192.168.93.128/192.168.93.128:2181, sessionid = 0x148f29bbe720004, negotiated timeout = 180000
Code:
package org.apache.hadoop.examples;
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{
   
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
      
    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }
  
  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("mapred.job.tracker", "192.168.93.128:9001");
    String[] ars = new String[]{"/user/root/testdir", "/user/root/newout"}; // hard-coded HDFS input/output paths
    String[] otherArgs = new GenericOptionsParser(conf, ars).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

4 replies so far
rsgg03 replied on 2014-10-9 11:09:44
Configure your local hosts file with the master and slave entries, matching the IPs in the Linux machines' /etc/hosts.

Then try adding this line:
  conf.set("hbase.zookeeper.quorum", "master"); // required when running from Eclipse; without it the client cannot locate the cluster
("master" can be an IP address, or the hostname if you have configured the hosts file)

For reference:
Setting up an HBase development environment and running a small HBase example (with the new HBase 0.98.3 API)
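To make the hosts-file suggestion concrete: a sketch of the entries the Windows client would need (in C:\Windows\System32\drivers\etc\hosts), assuming the single master/ZooKeeper IP that appears in the log; the slave hostname and IP below are placeholders and should match the cluster's own /etc/hosts:

```
192.168.93.128  master
# one line per slave, matching the Linux cluster's /etc/hosts, e.g.:
# 192.168.93.129  slave1
```

After editing, flush the DNS cache (ipconfig /flushdns) so the change takes effect immediately.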


xiaohao replied on 2014-10-9 17:53:08
Quoting rsgg03 (2014-10-9 11:09): "Configure your local hosts file with master and slave entries matching the IPs in the Linux machines' /etc/hosts... then try adding that line."

I modified the hosts file and it still fails. I'm completely stumped by this problem.

howtodown replied on 2014-10-9 18:51:03
Quoting xiaohao (2014-10-9 17:53): "I modified the hosts file and it still fails. I'm completely stumped by this problem."

First make sure the environment itself is sound. Are you running hadoop1 or hadoop2?

I haven't seen this used on hadoop1:
  conf.set("mapred.job.tracker", "192.168.93.128:9001");

If you are on hadoop2, follow this guide; if the problem persists, look into your environment:
Setting up an HBase development environment and running a small HBase example (with the new HBase 0.98.3 API)
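For context, on Hadoop 1.x those client-side addresses normally mirror the cluster's configuration files rather than being hard-coded in the job. A sketch of the relevant fragments, assuming the NameNode runs on the same host at port 9000 (the HDFS port is an assumption; only the 9001 JobTracker address appears in this thread):

```
<!-- core-site.xml (Hadoop 1.x) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.93.128:9000</value> <!-- assumed HDFS port -->
</property>

<!-- mapred-site.xml (Hadoop 1.x) -->
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.93.128:9001</value>
</property>
```

On Hadoop 2.x the JobTracker no longer exists; jobs run on YARN, so mapred.job.tracker has no effect there.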

xiaohao replied on 2014-10-12 09:53:59
I'm on hadoop1. The VM's automatically-assigned IP was a pain to work with, so I switched the VM's networking to bridged mode, and to my surprise it now runs fine. Thanks for your guidance.
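Since the fix here turned out to be network-level (switching the VM to bridged networking), a quick TCP reachability check against the ZooKeeper endpoint can save time before debugging the HBase client itself. A minimal JDK-only sketch; the host and port are the ones from the log above:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class CheckZk {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Timeout, refused, or unresolvable host all mean "not reachable".
            return false;
        }
    }

    public static void main(String[] args) {
        // The ZooKeeper address used throughout this thread.
        System.out.println(reachable("192.168.93.128", 2181, 2000));
    }
}
```

If this prints false from the Windows client, the problem is networking (NAT, firewall, hosts file), not the HBase code.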
