hive> create external table cn(x bigint, y bigint, z bigint, k bigint)
> row format delimited fields terminated by ','
> location '/cn';
OK
Time taken: 0.752 seconds
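
The external table above maps comma-delimited files already sitting under /cn in HDFS; each row is expected to carry four bigint fields matching (x, y, z, k), with x serving as a yyyymmdd date key in the query that follows. A hypothetical sample line (not from the original data set):

```
20141228,101,3,7
```
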
hive> create table predict as select y, z, sum(k) as t
> from cn where x>=20141228 and x<=20150110 group by y, z;
Query ID = guo_20160515161032_2e035fc2-6214-402a-90dd-7acda7d638bf
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1463299536204_0001, Tracking URL = http://drguo:8088/proxy/application_1463299536204_0001/
Kill Command = /opt/Hadoop/hadoop-2.7.2/bin/hadoop job -kill job_1463299536204_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-05-15 16:10:52,353 Stage-1 map = 0%, reduce = 0%
2016-05-15 16:11:03,418 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 8.65 sec
2016-05-15 16:11:13,257 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 13.99 sec
MapReduce Total cumulative CPU time: 13 seconds 990 msec
Ended Job = job_1463299536204_0001
Moving data to: hdfs://drguo:9000/user/hive/warehouse/predict
Table default.predict stats: [numFiles=1, numRows=1486, totalSize=15256, rawDataSize=13770]
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 13.99 sec HDFS Read: 16551670 HDFS Write: 15332 SUCCESS
Total MapReduce CPU Time Spent: 13 seconds 990 msec
OK
Time taken: 42.887 seconds
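
The log above shows the reducer count was estimated (one reducer) because it was not set explicitly. The hints printed in the log can be applied in-session before rerunning the CTAS; the values below are illustrative examples, not settings taken from the original run:

```sql
-- Illustrative tuning, not the values used above:
-- lower the bytes-per-reducer threshold and cap the reducer count
set hive.exec.reducers.bytes.per.reducer=268435456;  -- 256 MB of input per reducer
set hive.exec.reducers.max=16;

-- Quick sanity check on the resulting table
select * from predict limit 5;
```

For a job this small (one mapper, ~16 MB read), the single estimated reducer is already reasonable; explicit tuning matters once the aggregated input grows.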