
Problem running MapReduce: ExitCodeException exitCode=1

舒克 posted on 2017-5-14 20:40:55
     When I run wordcount with MapReduce it works fine in my local test environment, but once I put it on the cluster it fails. I searched around online and tried several changes, but it still doesn't work. Has anyone here run into this before? Please let me know. The console output is below:
2017-05-14 20:34:08,344 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-05-14 20:34:13,034 WARN  [main] mapreduce.JobSubmitter (JobSubmitter.java:copyAndConfigureFiles(150)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2017-05-14 20:34:13,592 INFO  [main] input.FileInputFormat (FileInputFormat.java:listStatus(281)) - Total input paths to process : 1
2017-05-14 20:34:13,861 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(396)) - number of splits:1
2017-05-14 20:34:13,904 INFO  [main] Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1019)) - mapred.jar is deprecated. Instead, use mapreduce.job.jar
2017-05-14 20:34:14,144 INFO  [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(479)) - Submitting tokens for job: job_1494761713465_0003
2017-05-14 20:34:14,511 INFO  [main] impl.YarnClientImpl (YarnClientImpl.java:submitApplication(236)) - Submitted application application_1494761713465_0003
2017-05-14 20:34:14,552 INFO  [main] mapreduce.Job (Job.java:submit(1289)) - The url to track the job: http://node1:8088/proxy/application_1494761713465_0003/
2017-05-14 20:34:14,553 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_1494761713465_0003
2017-05-14 20:34:37,729 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_1494761713465_0003 running in uber mode : false
2017-05-14 20:34:37,731 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) -  map 0% reduce 0%
2017-05-14 20:34:37,751 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1375)) - Job job_1494761713465_0003 failed with state FAILED due to: Application application_1494761713465_0003 failed 2 times due to AM Container for appattempt_1494761713465_0003_000002 exited with  exitCode: 1 due to: Exception from container-launch: ExitCodeException exitCode=1:
ExitCodeException exitCode=1:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:300)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
.Failing this attempt.. Failing the application.
2017-05-14 20:34:37,790 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1380)) - Counters: 0




7 replies

nextuser posted on 2017-5-15 16:33:50

Check whether the current user has permission to operate on the files Hadoop uses, and whether all the YARN processes are running.
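A minimal sketch for verifying the first point from code, assuming the HDFS client libraries are on the classpath; the class name and the path are only examples, substitute whichever directories your job actually touches:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class PermCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The user the client submits as; the job runs on the cluster under this name.
        System.out.println("submitting user: "
                + UserGroupInformation.getCurrentUser().getUserName());
        FileSystem fs = FileSystem.get(conf);
        // Example path; check the job's input, output and staging directories.
        FileStatus st = fs.getFileStatus(new Path("/usr/output"));
        System.out.println(st.getPath() + "  owner=" + st.getOwner()
                + "  group=" + st.getGroup() + "  perms=" + st.getPermission());
    }
}

For the second point, running jps on each node should show the ResourceManager and the NodeManagers.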

舒克 posted on 2017-5-15 20:20:06
      Running wordcount is fine now. I wrote a MapReduce job that reads data from HBase, processes it, and writes the output to HDFS, but after the job is submitted it says it cannot find my custom mapper class, even though I clearly specified it. Origin_MR is the package name and Origin_Mapper is the class name.
2017-05-15 20:12:19,366 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1334)) - Running job: job_1494761713465_0045
2017-05-15 20:13:29,772 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1355)) - Job job_1494761713465_0045 running in uber mode : false
2017-05-15 20:13:29,772 INFO  [main] mapreduce.Job (Job.java:monitorAndPrintJob(1362)) -  map 0% reduce 0%
2017-05-15 20:14:38,654 INFO  [main] mapreduce.Job (Job.java:printTaskEvents(1441)) - Task Id : attempt_1494761713465_0045_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class Origin_MR.Origin_Mapper not found
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1905)
        at org.apache.hadoop.mapreduce.task.JobContextImpl.getMapperClass(JobContextImpl.java:186)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:722)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.ClassNotFoundException: Class Origin_MR.Origin_Mapper not found
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1811)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1903)
        ... 8 more


Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143



The main method is as follows (imports added for completeness):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public static void main(String[] args) throws ClassNotFoundException, InterruptedException {

    long starttime = System.currentTimeMillis();
    String inputtable = "GJJY95020150613";

    // HBase connection plus HDFS HA client settings for nameservice "shuke".
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "node1");
    conf.set("fs.defaultFS", "hdfs://shuke");
    conf.set("dfs.nameservices", "shuke");
    conf.set("dfs.ha.namenodes.shuke", "nn1,nn2");
    conf.set("dfs.namenode.rpc-address.shuke.nn1", "192.168.8.118:8020");
    conf.set("dfs.namenode.rpc-address.shuke.nn2", "192.168.8.121:8020");
    conf.set("dfs.client.failover.proxy.provider.shuke",
            "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
//  conf.set("mapred.jar", "D:\\Origin.jar");

    try {
        Job job = Job.getInstance(conf);
        job.setJobName("Origin");
        job.setJarByClass(Origin_job.class);

        Scan scan = new Scan();
        scan.setCaching(500);
        scan.setCacheBlocks(false);

        // Map side: scan the HBase table with Origin_Mapper, emitting Text/Text pairs.
        TableMapReduceUtil.initTableMapperJob(inputtable, scan,
                Origin_Mapper.class, Text.class, Text.class, job);

        // Reduce side: Origin_Reducer writes Text/Text output to HDFS.
        job.setReducerClass(Origin_Reducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        FileOutputFormat.setOutputPath(job, new Path("/usr/output/sky"));

        boolean f = job.waitForCompletion(true);
        if (f) {
            System.out.println("Job completed successfully!");
        } else {
            System.out.println("Job failed!");
        }
        System.out.println(System.currentTimeMillis() - starttime + " ms");

    } catch (IOException e) {
        System.out.println("Failed to create the job instance!");
        e.printStackTrace();
    }
}
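The ClassNotFoundException above usually means the jar containing Origin_MR.Origin_Mapper never reaches the cluster: job.setJarByClass() can only pick up a jar when the class was itself loaded from one, which is not the case when the driver runs straight from an IDE (the commented-out D:\\Origin.jar line suggests a Windows client). Two hedged sketches of common workarounds, with the jar path taken from that commented-out line and purely illustrative:

// Sketch 1: point the job at the exported jar explicitly, before creating the Job
// (this is the property the deprecation warning in the log refers to)...
conf.set("mapreduce.job.jar", "D:\\Origin.jar");
// ...or, equivalently, after creating the job:
job.setJar("D:\\Origin.jar");

// Sketch 2: use the initTableMapperJob overload that ships dependency jars with the job.
TableMapReduceUtil.initTableMapperJob(inputtable, scan,
        Origin_Mapper.class, Text.class, Text.class, job,
        true /* addDependencyJars */);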






langke93 posted on 2017-5-15 20:56:11
舒克 posted on 2017-5-15 20:20
Running wordcount is fine now. I wrote a MapReduce job that reads data from HBase, processes it, and writes the output to HDFS, but after the job ...

That package (jar) needs to be added on every client machine and configured into the environment variables; otherwise it may not be found.
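An alternative to copying the jar onto every node and editing environment variables is to let the job ship it through the distributed cache; a minimal sketch, assuming the jar has already been uploaded to HDFS (the path below is only an example):

// Make a jar that is already on HDFS visible on every task's classpath.
job.addFileToClassPath(new Path("/user/shuke/Origin.jar"));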

舒克 posted on 2017-5-15 21:31:12
langke93 posted on 2017-5-15 20:56
That package (jar) needs to be added on every client machine and configured into the environment variables; otherwise it may not be found.

But Origin_MR is a package I defined myself; it contains the Origin_Mapper, Origin_Reducer, and Origin_job classes.

langke93 posted on 2017-5-15 22:00:13
舒克 posted on 2017-5-15 21:31
But Origin_MR is a package I defined myself; it contains the Origin_Mapper, Origin_Reducer, and Origin_job classes.

Right, it is precisely because it is custom that it needs to be configured; system classes don't.

banjming posted on 2017-10-17 10:43:06

舒克 posted on 2017-5-14 20:40
When I run wordcount with MapReduce it works fine in my local test environment, but once I put it on the cluster it fails ...
How did you end up solving this problem, OP?

qcbb001 posted on 2017-10-17 13:59:43
banjming posted on 2017-10-17 10:43
When I run wordcount with MapReduce it works fine in my local test environment, but once I put it on the cluster it fails ...

1. Make sure the cluster itself is healthy.
2. How the job was submitted also matters; the submission parameters may be wrong.
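On point 2: the very first warning in the log ("Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner") points at one common submission problem; without ToolRunner, generic options such as -D and -libjars are silently ignored. A minimal sketch of a ToolRunner-style wordcount driver (WordCountDriver and the commented-out mapper/reducer names are placeholders, not classes from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects anything passed on the command line via -D / -conf / -libjars.
        Job job = Job.getInstance(getConf(), "wordcount");
        job.setJarByClass(WordCountDriver.class);
        // Plug in your own classes here, e.g.:
        // job.setMapperClass(YourMapper.class);
        // job.setReducerClass(YourReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips the generic options before handing the remaining args to run().
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

Submitted as, for example, hadoop jar wordcount.jar WordCountDriver -libjars extra.jar /input /output, the driver then sees only /input and /output in args, and the warning from the log goes away.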
