
Stuck for a week on a Hive query / MapReduce error — hoping someone can help

windowsgy posted on 2016-1-29 11:54:15
Last edited by windowsgy on 2016-1-29 16:25

Hive 1.2.1, Hadoop 2.6.3
Hive reports an error when executing select count(*) from table:
hive> select count(*) from logstable limit 10;
Query ID = netuser1_20160129093334_f73a817f-b6f5-43e9-8a68-fa42d99f6a90
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1454031176192_0001, Tracking URL = http://server2-1:8088/proxy/application_1454031176192_0001/
Kill Command = /dbc/hadoop-2.6.3/bin/hadoop job  -kill job_1454031176192_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2016-01-29 09:33:44,118 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1454031176192_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
The YARN log is below. (YARN reported only a single jar error and nothing else; the NodeManager logs contain large numbers of failed jar downloads.)

User: netuser1
Name: select count(*) from logstable limit 10(Stage-1)
Application Type: MAPREDUCE
Application Tags:
YarnApplicationState: FAILED
FinalStatus Reported by AM: FAILED
Started: 29-Jan-2016 09:33:39
Elapsed: 4sec
Tracking URL: History
Diagnostics:
Application application_1454031176192_0001 failed 2 times due to AM Container for appattempt_1454031176192_0001_000002 exited with exitCode: -1000
For more detailed output, check application tracking page: http://server2-1:8088/proxy/application_1454031176192_0001/ Then, click on links to logs of each attempt.
Diagnostics: java.io.IOException: Resource file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar changed on src filesystem (expected 1453953224000, was 1453963797000
Failing this attempt. Failing the application.

The NodeManager log is below. (Every Hive jar fails to download; one entry is quoted for reference.)
2016-01-28 13:41:17,147 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download resource { { file:/dbc/hive-1.2.1/lib/commons-httpclient-3.0.1.jar, 1453953223000, FILE, null },pending,[],879356579278054,DOWNLOADING}
java.io.FileNotFoundException: File file:/dbc/hive-1.2.1/lib/commons-httpclient-3.0.1.jar does not exist
        at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:534)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
        at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
        at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Analysis
The NodeManager fails to download every one of Hive's jars, yet I have confirmed that all the jars exist at the paths named in the errors. I cannot work out the cause. Any help would be appreciated.
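[Editor's note] The "changed on src filesystem" message comes from YARN's localizer: at submit time the client records each `file:` resource's modification time, and FSDownload re-checks the file's mtime on the node before copying, failing localization on any mismatch. So if the Hive jars were copied to the nodes in a way that did not preserve modification times, every node whose mtimes differ from the recorded ones fails exactly like this. A minimal sketch of a timestamp-preserving copy follows; the paths are stand-ins, not the poster's exact layout, and across machines `scp -p` or `rsync -a` preserve mtimes the same way `cp -p` does here:

```shell
# The real source would be /dbc/hive-1.2.1/lib; a fabricated stand-in
# directory keeps this snippet runnable anywhere.
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch -t 201601010000 "$SRC/example.jar"   # pretend jar with a fixed mtime
cp -p "$SRC"/*.jar "$DST"/                 # -p preserves the modification time
stat -c '%Y' "$SRC/example.jar" "$DST/example.jar"   # prints identical mtimes
```

An alternative with the same effect is to `touch` every jar on every node (including the node the job is submitted from) to one identical timestamp, then resubmit the job.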

20 replies

Alkaloid0515 posted on 2016-1-29 15:40:04
Are the jars configured on the local filesystem, or in Hadoop (HDFS)?


windowsgy posted on 2016-1-29 15:51:33
Last edited by windowsgy on 2016-1-29 15:55

On Hadoop. Hive has been copied to every node, and the jars do exist at the paths in the error messages.
On the NodeManager every package download fails; the four log entries for one package's download are:

2016-01-29 12:42:20,105 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar transitioned from INIT to DOWNLOADING



2016-01-29 12:42:20,125 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Downloading public rsrc:{ file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar, 1453953224000, FILE, null }


2016-01-29 12:42:20,328 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download resource { { file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar, 1453953224000, FILE, null },pending,[(container_1454042526961_0001_02_000001)],948980563237758,DOWNLOADING}


2016-01-29 12:42:20,328 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Failed to download resource { { file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar, 1453953224000, FILE, null },pending,[(container_1454042526961_0001_02_000001)],948980563237758,DOWNLOADING}
java.io.IOException: Resource file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar changed on src filesystem (expected 1453953224000, was 1453963797000
        at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
        at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

2016-01-29 12:42:20,339 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource file:/dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar(->/dbc/hadoop-2.6.3/yarn/local/filecache/10/accumulo-core-1.6.0.jar) transitioned from DOWNLOADING to FAILED


when30 posted on 2016-1-29 15:53:39
(replying to windowsgy, 2016-1-29 15:51)

Make absolutely sure hive.aux.jars.path is configured correctly:
the value must contain no line breaks or spaces, line breaks especially.


<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/hive/lib/hive-hbase-handler-0.13.0-SNAPSHOT.jar,file:///usr/hive/lib/protobuf-java-2.5.0.jar,file:///usr/hive/lib/hbase-client-0.96.0-hadoop2.jar,file:///usr/hive/lib/hbase-common-0.96.0-hadoop2.jar,file:///usr/hive/lib/zookeeper-3.4.5.jar,file:///usr/hive/lib/guava-11.0.2.jar</value>
</property>
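[Editor's note] A quick sanity check for a value like this: split it on commas, strip the `file://` prefix, and report any entry that does not exist on disk. The `VAL` below is a made-up example, not the poster's actual configuration:

```shell
# Report any hive.aux.jars.path entry whose file is missing on this machine.
VAL="file:///usr/hive/lib/zookeeper-3.4.5.jar,file:///usr/hive/lib/guava-11.0.2.jar"
echo "$VAL" | tr ',' '\n' | sed 's|^file://||' | while read -r p; do
  [ -f "$p" ] || echo "missing: $p"
done
```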




For more, see this hbase 0.96 / hive 0.12 integration and troubleshooting writeup:
http://www.aboutyun.com/thread-7881-1-1.html




windowsgy posted on 2016-1-29 15:56:28
(replying to when30, 2016-1-29 15:53)

There is no HBase integration here, just Hive and Hadoop. select * from tablename runs fine; it is select count(*) from table that produces the error in the YARN logs.

when30 posted on 2016-1-29 15:58:02
(replying to windowsgy, 2016-1-29 15:56)

It has nothing to do with integration. You can put the paths of all the missing jars into that property.

windowsgy posted on 2016-1-29 16:03:19
(replying to when30, 2016-1-29 15:58)

Where exactly should I put it? I have read a lot of material on Hive and Hadoop and have not seen an example of copying Hive jars into Hadoop, so this approach had not occurred to me.
Every package download fails; the four log entries for one package are the same as those in my earlier reply.




when30 posted on 2016-1-29 16:15:13
Whatever is missing, put it in there. For example, if file:///dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar (or some x.jar) is missing, add it as below, making sure the file really does exist at that path:
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///dbc/hive-1.2.1/lib/accumulo-core-1.6.0.jar,file:///x.jar</value>
</property>

Comments

My Hive has dozens of jars, and the NodeManager log shows every one of them being downloaded; the property value you describe would be enormous, which hardly seems right. Please reconsider. (posted 2016-1-29 16:18)
Should this setting go on the Hive server or the Hive client? (posted 2016-1-29 16:16)

when30 posted on 2016-1-29 16:18:10
(replying to when30, 2016-1-29 16:15)

In the hive-site.xml file.

Comments

Hive's lib has dozens of jars; I cannot set them all. (posted 2016-1-29 21:58)
My Hive has dozens of jars, and the NodeManager log shows every one of them being downloaded; the property value you describe would be enormous, which hardly seems right. Please reconsider. (posted 2016-1-29 16:20)
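[Editor's note] If the value really must cover every jar in lib, it need not be typed by hand; a one-liner can generate it. The lib path is an assumption, so the snippet uses a stand-in directory to stay runnable:

```shell
# LIBDIR would be /dbc/hive-1.2.1/lib on the poster's cluster; a stand-in
# directory with fake jars keeps the snippet runnable anywhere.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/a.jar" "$LIBDIR/b.jar"
# Prefix each jar with file:// and join the list with commas on one line.
ls "$LIBDIR"/*.jar | sed 's|^|file://|' | paste -sd, -
```

The output goes into the <value> element as a single line, with no line breaks or spaces, per when30's warning above.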

wscl1213 posted on 2016-1-29 16:57:44
Last edited by wscl1213 on 2016-1-29 17:10

Could there be two versions of some jars in Hive's lib? If so, delete the old versions. Or perhaps during installation the versions got confused and were copied over each other several times.

Comments

Every MapReduce run reports the error, and the error covers the copy of every jar. Temporary files under the tmp path copy successfully; only the Hive jars fail to download, so permissions are definitely not the problem. I do not know the cause. (posted 2016-1-29 21:45)
Hive has been 1.2.1 all along; it has never changed. (posted 2016-1-29 21:43)
