
Hive SQL execution error

Last edited by SuperDove on 2017-7-25 17:25

[mw_shl_code=applescript,true]2017-07-25 17:06:00        Starting to launch local task to process map join;        maximum memory = 477626368
2017-07-25 17:06:03        Processing rows:        200000        Hashtable size:        199999        Memory usage:        89864328        percentage:        0.188
2017-07-25 17:06:03        Processing rows:        300000        Hashtable size:        299999        Memory usage:        125622720        percentage:        0.263
2017-07-25 17:06:03        Dump the side-table for tag: 0 with group count: 318705 into file: file:/tmp/dove/1172bf95-212b-42d0-8d50-9da1f9fc683c/hive_2017-07-25_17-05-55_944_7187426946402979767-1/-local-10004/HashTable-Stage-2/MapJoin-mapfile00--.hashtable
2017-07-25 17:06:03        Uploaded 1 File to: file:/tmp/dove/1172bf95-212b-42d0-8d50-9da1f9fc683c/hive_2017-07-25_17-05-55_944_7187426946402979767-1/-local-10004/HashTable-Stage-2/MapJoin-mapfile00--.hashtable (11269074 bytes)
2017-07-25 17:06:03        End of local task; Time Taken: 2.787 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 4
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1500973394395_0001, Tracking URL = http://master:8088/proxy/application_1500973394395_0001/
Kill Command = /usr/hadoop-2.6.4/bin/hadoop job  -kill job_1500973394395_0001
Hadoop job information for Stage-2: number of mappers: 3; number of reducers: 4
2017-07-25 17:06:18,426 Stage-2 map = 0%,  reduce = 0%
2017-07-25 17:07:19,500 Stage-2 map = 0%,  reduce = 0%, Cumulative CPU 10.25 sec
2017-07-25 17:08:19,827 Stage-2 map = 0%,  reduce = 0%, Cumulative CPU 6.23 sec
2017-07-25 17:09:20,359 Stage-2 map = 0%,  reduce = 0%, Cumulative CPU 12.97 sec
2017-07-25 17:09:59,721 Stage-2 map = 100%,  reduce = 100%
MapReduce Total cumulative CPU time: 12 seconds 970 msec
Ended Job = job_1500973394395_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://master:8088/proxy/application_1500973394395_0001/
Examining task ID: task_1500973394395_0001_m_000001 (and more) from job job_1500973394395_0001

Task with the most failures(4):
-----
Task ID:
  task_1500973394395_0001_m_000001

URL:
  http://master:8088/taskdetails.jsp?jobid=job_1500973394395_0001&tipid=task_1500973394395_0001_m_000001
-----
Diagnostic Messages for this Task:
Error: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:446)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 17 more
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.hadoop.io.Text.setCapacity(Text.java:268)
        at org.apache.hadoop.io.Text.set(Text.java:224)
        at org.apache.hadoop.io.Text.set(Text.java:214)
        at org.apache.hadoop.io.Text.<init>(Text.java:93)
        at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableStringObjectInspector.copyObject(WritableStringObjectInspector.java:36)
        at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:311)
        at org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.copyToStandardObject(ObjectInspectorUtils.java:346)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinKeyObject.read(MapJoinKeyObject.java:112)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinKeyObject.read(MapJoinKeyObject.java:107)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinKeyObject.read(MapJoinKeyObject.java:103)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:83)
        at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:288)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:173)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:169)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:55)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieveAsync(ObjectCache.java:63)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:166)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:481)
        at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:438)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:131)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
[/mw_shl_code]

The error log is above. The same SQL runs fine when Spark 1.6.1 reads the Hive data and executes it as Spark SQL, but it fails in the Hive 1.2.1 client. Waiting online for help.
hadoop 2.6.4
hive 1.2.1
spark 1.6.1
I already tried the suggested fix of changing mapreduce.admin.map.child.java.opts, but it made no difference. One thing puzzles me: why does this job launch only 3 map tasks, and why can't that number be adjusted? What exactly is going wrong?


The query joins two tables.
One Hive table is six files, roughly ten million rows in total:
144.87 MB, 145.76 MB, 83.66 MB, 144.84 MB, 145.66 MB, 92.91 MB
The other Hive table is a single file:
6.49 MB


The SQL is just a GROUP BY on four columns with one SUM and one COUNT, and even that fails.
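For reference, the stack trace above bottoms out in java.lang.OutOfMemoryError: GC overhead limit exceeded while HashTableLoader loads the map-join hashtable into each map task's heap (MapJoinOperator.loadHashTable). Two session-level workarounds commonly tried for this situation, as a sketch only (Hive 1.2 on MapReduce assumed; settings not verified on this cluster):

```shell
# Sketch only, not a verified fix for this cluster.
# Option 1: stop Hive from converting the join to a map join, so the
# small table is no longer loaded into every mapper's heap:
hive -e "set hive.auto.convert.join=false; -- then re-run the failing query"

# Option 2: keep the map join but raise the map task container and heap:
hive -e "set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1638m;
-- then re-run the failing query"
```

Option 1 trades memory for an extra shuffle; option 2 keeps the map-join speed if the cluster has headroom.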


4 replies

langke93 posted on 2017-7-25 18:07:38
My guess is a memory problem.
It is most likely one of the following:
1. memory / configuration
2. version compatibility
The error above doesn't reveal much by itself. Check the logs below; they should contain the detailed error:
http://master:8088/proxy/application_1500973394395_0001/
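Besides the web UI, the per-container logs can be pulled from the command line. A sketch, assuming YARN log aggregation is enabled on the cluster:

```shell
# Fetch the aggregated container logs for the failed application
# (application id taken from the job output above)
yarn logs -applicationId application_1500973394395_0001 | less
```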

SuperDove posted on 2017-7-25 18:20:01
Last edited by SuperDove on 2017-7-25 18:21
Quoting langke93 (2017-7-25 18:07):
"My guess is a memory problem.
It is most likely one of the following:
1. memory / configuration"
[mw_shl_code=applescript,true]2017-07-25 18:11:50,860 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:446)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 9 more
Caused by: java.lang.RuntimeException: Error in configuring object
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        ... 14 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        ... 17 more
Caused by: java.lang.OutOfMemoryError: Java heap space
        at java.util.HashMap.resize(HashMap.java:703)
        at java.util.HashMap.putVal(HashMap.java:662)
        at java.util.HashMap.put(HashMap.java:611)
        at org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper.put(HashMapWrapper.java:107)
        at org.apache.hadoop.hive.ql.exec.persistence.MapJoinTableContainerSerDe.load(MapJoinTableContainerSerDe.java:86)
        at org.apache.hadoop.hive.ql.exec.mr.HashTableLoader.load(HashTableLoader.java:98)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.loadHashTable(MapJoinOperator.java:288)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:173)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator$1.call(MapJoinOperator.java:169)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieve(ObjectCache.java:55)
        at org.apache.hadoop.hive.ql.exec.mr.ObjectCache.retrieveAsync(ObjectCache.java:63)
        at org.apache.hadoop.hive.ql.exec.MapJoinOperator.initializeOp(MapJoinOperator.java:166)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:362)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:481)
        at org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:438)
        at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:375)
        at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.configure(ExecMapper.java:131)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
        at org.apache.hadoop.mapred.MapRunner.configure(MapRunner.java:38)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)[/mw_shl_code]
(screenshot attached: yarn.png)



qcbb001 posted on 2017-7-25 21:37:21
Quoting SuperDove (2017-7-25 18:20):
"2017-07-25 18:11:50,860 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : ja ..."

Is your Spark SQL running on YARN?
Hive is using YARN here. This looks like a JVM configuration problem; try increasing the relevant settings.
For example:
mapred.child.java.opts

mapred.child.ulimit set to 1.5x or 2x the heap
mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum
As a rule of thumb for the ratio:
0.25*mapred.child.java.opts < io.sort.mb < 0.5*mapred.child.java.opts
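The rule of thumb above can be checked mechanically. A small sketch (plain Python; the function name is mine) that tests whether a given io.sort.mb fits the 25%-50% window of the map-task heap:

```python
def io_sort_mb_ok(xmx_mb, io_sort_mb):
    """True if io.sort.mb sits strictly inside (0.25 * heap, 0.5 * heap)."""
    return 0.25 * xmx_mb < io_sort_mb < 0.5 * xmx_mb

# With -Xmx1024m the window is (256, 512):
print(io_sort_mb_ok(1024, 400))  # True: 400 fits the window
print(io_sort_mb_ok(1024, 100))  # False: a value of 100 is below 256
```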


SuperDove posted on 2017-7-26 09:39:26
Quoting qcbb001 (2017-7-25 21:37):
"Is your Spark SQL running on YARN?
Hive is using YARN here. This looks like a JVM configuration problem; try increasing the relevant settings.
For example: ..."
Thanks! I made the following change: added this to mapred-site.xml:
[mw_shl_code=applescript,true]<property>  
   <name>mapred.child.java.opts</name>  
   <value>-Xmx1024m</value>  
</property>  
[/mw_shl_code]

Restarted Hadoop, then in Hive ran:
set io.sort.mb=400;
(the default value of io.sort.mb is 100)

After that, the Hive SQL completed with no errors.

