Hive 2.0 startup problem

xw2016 posted on 2016-5-8 00:11:42
Preparation: MySQL 5.7.9 is already installed on 192.168.56.110, with the firewall and SELinux disabled.
[root@client ~]# /usr/local/mysql/bin/mysql -uroot -p123456
create user hive identified by 'hive';
flush privileges;
GRANT USAGE ON *.* TO 'hive'@'%' IDENTIFIED BY 'hive' WITH GRANT OPTION;
Create the database (creating it as the hive user failed, so it was created as root):
create database hive DEFAULT CHARACTER SET latin1;
The users are now set up as follows:
mysql> select user, host from mysql.user;
+-----------+--------------+
| user      | host         |
+-----------+--------------+
| hive      | %            |
| mysql     | %            |
| root      | %            |
| repl      | 192.168.56.% |
| mysql     | localhost    |
| mysql.sys | localhost    |
| root      | localhost    |
+-----------+--------------+
7 rows in set (0.00 sec)
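
Since the metastore will be connecting to this MySQL instance remotely, it is worth verifying access from one of the Hive nodes before going further. A minimal check, assuming the hive/root accounts created above (the extra GRANT is only needed if the hive user is ever used as the metastore account):
# run from a Hive node, e.g. yun01-nn-01
mysql -h 192.168.56.110 -P 3306 -uhive -phive -e "SHOW DATABASES;"
# optional: give the hive user full rights on the hive database (run as root)
mysql -h 192.168.56.110 -uroot -p123456 -e "GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%'; FLUSH PRIVILEGES;"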

Prerequisite environment: Hadoop 2.6.0 + ZooKeeper 3.4.5 + HBase 1.0.3, JDK 1.7.
All of the above is installed and starts successfully. Now install Hive 2.0 as follows:
1. Environment variables
vi /etc/profile.d/java.sh
export JAVA_HOME=/application/hadoop/jdk
export HADOOP_HOME=/application/hadoop/hadoop
export HADOOP_PREFIX=/application/hadoop/hadoop
export ZOOKEEPER_HOME=/application/hadoop/zookeeper
export HIVE_HOME=/application/hadoop/hive
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$HIVE_HOME/bin:$PATH

[root@yun01-nn-01 lib]# source /etc/profile
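
A quick sanity check that the variables resolve after sourcing the profile (a sketch; it only assumes the paths above exist):
echo $JAVA_HOME $HADOOP_HOME $HIVE_HOME
which hive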

2. Sync the HBase jars
cd into hive/lib first, then copy over every jar starting with "hbase" from /application/hadoop/hbase/lib:
find /application/hadoop/hbase/lib -name "hbase*.jar"|xargs -i cp {} ./
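
To confirm what actually landed in hive/lib, a simple listing works:
ls /application/hadoop/hive/lib/hbase*.jar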

3. Check whether the zookeeper and protobuf jars match the ones HBase uses; if they differ,
copy protobuf*.jar and zookeeper-3.4.6.jar into hive/lib.
On inspection, these two jars are identical under hbase and hive: protobuf-java-2.5.0.jar and zookeeper-3.4.6.jar.
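
One way to make that comparison from the shell is simply to list both sides (paths as above):
ls -l /application/hadoop/hbase/lib/protobuf*.jar /application/hadoop/hbase/lib/zookeeper*.jar
ls -l /application/hadoop/hive/lib/protobuf*.jar /application/hadoop/hive/lib/zookeeper*.jar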

4. Configure hive-env.sh
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
HADOOP_HOME=/application/hadoop/hadoop
export HIVE_CONF_DIR=/application/hadoop/hive/conf

5. Copy the MySQL JDBC driver into hive/lib
cp mysql-connector-java-5.1.38-bin.jar /application/hadoop/hive/lib
Set its permissions to 777:
cd /application/hadoop/hive/lib/
chmod 777 mysql-connector-java-5.1.38-bin.jar

Check that the HBase/Hive integration jar exists:
find ./ -name hive-hbase-handler*
./hive-hbase-handler-2.0.0.jar
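
A matching check that the JDBC driver itself is in place and readable (same directory as above):
ls -l /application/hadoop/hive/lib/mysql-connector-java-5.1.38-bin.jar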

6. Create the HDFS directories and grant group write permission:
hadoop fs -mkdir -p /hive/warehouse
hadoop fs -mkdir -p /hive/scratchdir
hadoop fs -chmod g+w /hive/warehouse
hadoop fs -chmod g+w /hive/scratchdir
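
A quick way to confirm the directories and permissions took effect:
hadoop fs -ls /hive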

7. Configure hive-site.xml
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs://yun01-nn-01:9000/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>hdfs://yun01-nn-01:9000/hive/scratchdir</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.56.110:3306/hive?useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>hive.hwi.war.file</name>
    <value>lib/hive-hwi-2.0.0.jar</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.56.110:9083</value>
  </property>
  <property>
    <name>hive.zookeeper.quorum</name>
    <value>yun01-nn-02,yun01-dn-01,yun01-dn-02</value>
  </property>
  <property>
    <name>hive.exec.scratdir</name>
    <value>/data/hadoop/hive/tmp</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/data/hadoop/hive/log</value>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value>file:///application/hadoop/hive/lib/hive-hbase-handler-2.0.0.jar,file:///application/hadoop/hive/lib/protobuf-java-2.5.0.jar,file:///application/hadoop/hive/lib/hbase-client-1.1.1.jar,file:///application/hadoop/hive/lib/hbase-common-1.1.1.jar,file:///application/hadoop/hive/lib/zookeeper-3.4.6.jar,file:///application/hadoop/hive/lib/guava-14.0.1.jar</value>
  </property>
</configuration>
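
One Hive 2.x step that is easy to miss: with a brand-new MySQL metastore database, the metastore schema normally has to be initialized before the metastore service will start cleanly. A sketch using the schematool shipped with Hive 2.0, assuming the JDBC settings above:
$HIVE_HOME/bin/schematool -dbType mysql -initSchema
If the schema already exists, schematool -dbType mysql -info reports its version instead.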


8. Edit hive-env.sh
cd /application/hadoop/hive/conf/
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
export HADOOP_HOME=/application/hadoop/hadoop
export HIVE_CONF_DIR=/application/hadoop/hive/conf
export HIVE_AUX_JARS_PATH=/application/hadoop/hive/lib

9. Edit hive-config.sh under $HIVE_HOME/bin and add the following three lines:
#
# processes --config option from command line
#
export JAVA_HOME=/application/hadoop/jdk
export HIVE_HOME=/application/hadoop/hive
export HADOOP_HOME=/application/hadoop/hadoop

10. Sync the jline versions between Hive and Hadoop
cd /application/hadoop/hive/lib/
cp jline-2.12.jar /application/hadoop/hadoop/share/hadoop/yarn/lib

Delete the lower-version jline-0.9.94.jar.
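
For example (assuming that is the only copy of the old jar under the YARN lib directory):
rm /application/hadoop/hadoop/share/hadoop/yarn/lib/jline-0.9.94.jar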

11. Copy the JDK's tools.jar into hive/lib
cp $JAVA_HOME/lib/tools.jar /application/hadoop/hive/lib

12. Sync Hive to the other servers
scp -r /application/hadoop/hive hadoop@yun01-nn-02:/application/hadoop/
scp -r /application/hadoop/hive hadoop@yun01-dn-01:/application/hadoop/
scp -r /application/hadoop/hive hadoop@yun01-dn-02:/application/hadoop/

13. Start the metastore service
hive  --service metastore -hiveconf hive.root.logger=DEBUG,console  

[hadoop@yun01-nn-01 conf]$ hive  --service metastore -hiveconf hive.root.logger=DEBUG,console  
Starting Hive Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/application/hadoop/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/application/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/application/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

After running the command above, it just sits there like this.
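
Note that hive --service metastore runs in the foreground, so sitting at the banner is not by itself an error. Two checks worth running from a second shell on the same node, plus an optional background start (a sketch; the log path is arbitrary):
netstat -nltp 2>/dev/null | grep 9083    # is the Thrift port actually listening?
jps                                      # the metastore appears as a RunJar process
nohup hive --service metastore > /tmp/metastore.log 2>&1 &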

Open a new connection:
[hadoop@yun01-nn-01 ~]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/application/hadoop/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/application/hadoop/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/application/hadoop/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/application/hadoop/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Exception in thread "main" java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1550)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:521)
        at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:494)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:709)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
        ... 15 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
        at org.apache.thrift.transport.TSocket.open(TSocket.java:226)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:452)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:262)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1548)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3080)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3108)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:521)
        at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:494)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:709)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:645)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
        at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
        at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
        at java.net.Socket.connect(Socket.java:579)
        at org.apache.thrift.transport.TSocket.open(TSocket.java:221)
        ... 23 more
)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:500)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:262)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:67)
        ... 20 more

Typing the hive command directly produces the error above.

Could anyone help? I've been stuck on this problem for a week.


Replies (20)

xw2016 posted on 2016-5-8 00:16:18
To add some detail about the environment:
yun01-nn-01  192.168.56.11
yun01-nn-02  192.168.56.12
yun01-dn-01  192.168.56.13
yun01-dn-02 192.168.56.14
I installed HBase on yun01-nn-01 first and then scp'd it to the other servers.

bioger_hit posted on 2016-5-8 07:35:36


There are three questions here:

1. Can hdfs://yun01-nn-01:9000/hive/warehouse actually be accessed?

2. Where does this value come from?
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.56.110:3306/hive?useSSL=false</value>
</property>
If you are not sure, it is safer to substitute a value like the following:
  <value>jdbc:mysql://localhost/hive_remote?createDatabaseIfNotExist=true</value>


3. This step is only needed when the versions are incompatible; if they are compatible you can skip it:
Sync the HBase jars
cd into hive/lib first, then copy over every jar starting with "hbase" from /application/hadoop/hbase/lib:
find /application/hadoop/hbase/lib -name "hbase*.jar"|xargs -i cp {} ./



xw2016 posted on 2016-5-8 09:30:03
1. How do I test whether hdfs://yun01-nn-01:9000/hive/warehouse can be accessed?
Hadoop's hdfs-site.xml is configured like this:
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>yun01-nn-01:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>yun01-nn-01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>yun01-nn-02:9000</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>yun01-nn-02:50070</value>
  </property>
Running the following command shows the result:
[hadoop@yun01-nn-01 hadoop]$ hadoop fs -ls /hive
Found 2 items
drwxrwxr-x   - hadoop supergroup          0 2016-05-06 02:18 /hive/scratchdir
drwxrwxr-x   - hadoop supergroup          0 2016-05-06 02:18 /hive/warehouse
[hadoop@yun01-nn-01 hadoop]$
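
To test the exact URI from hive-site.xml rather than the default filesystem, one could also run the command below; note it only succeeds while yun01-nn-01 is the active NameNode, which is the crux of the HA point raised further down:
hadoop fs -ls hdfs://yun01-nn-01:9000/hive/warehouse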

xw2016 posted on 2016-5-8 09:36:17
2. "Where does this value come from?"
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.56.110:3306/hive?useSSL=false</value>
</property>
On this point: my MySQL is installed on 192.168.56.110; I logged into MySQL as root and created the hive database:
Last login: Mon Apr 25 06:21:40 2016 from 192.168.56.1
[root@client ~]# /usr/local/mysql/bin/mysql -uroot -p123456
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 11
Server version: 5.7.9-log Source distribution

Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| oa                 |

Port 3306 is correct as well.
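
To rule out network or privilege problems between the Hive nodes and MySQL, a remote connection test from one of the Hive hosts may help (a sketch, using the credentials configured in hive-site.xml):
mysql -h 192.168.56.110 -P 3306 -uroot -p123456 -e "SELECT 1;"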

xw2016 posted on 2016-5-8 09:37:26
3. "This step is only needed when the versions are incompatible; if they are compatible it isn't necessary"

Actually I did not sync them. At the time I compared the jars on both sides and found they differed a lot, and copying them over would not overwrite the existing ones, so I left them alone.

jixianqiuxue posted on 2016-5-8 09:41:58
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://yun01-nn-01:9000/hive/warehouse</value>
</property>

I haven't configured this with HA myself, but
since you have HA configured, writing it this way is wrong. It should be hdfs://ns1/hive/warehouse.
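
A quick way to confirm that the nameservice path resolves on the client (assuming dfs.nameservices=ns1 as in the hdfs-site.xml posted above):
hadoop fs -ls hdfs://ns1/hive/warehouse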


xw2016 posted on 2016-5-8 09:46:48
To add: my Hive is installed on these four servers:
yun01-nn-01  192.168.56.11
yun01-nn-02  192.168.56.12
yun01-dn-01  192.168.56.13
yun01-dn-02 192.168.56.14

MySQL 5.7.9 is installed on 192.168.56.110, so the metastore database is configured as remote.

xw2016 posted on 2016-5-8 10:02:37
I noticed something and am not sure whether it matters:
mysql> select user, host from mysql.user;
+-----------+--------------+
| user      | host         |
+-----------+--------------+
| hive      | %            |
| mysql     | %            |
| root      | %            |
| repl      | 192.168.56.% |
| mysql     | localhost    |
| mysql.sys | localhost    |
| root      | localhost    |
+-----------+--------------+
7 rows in set (0.00 sec)
There are two rows for root here.

xw2016 posted on 2016-5-8 10:03:54
Thanks, I'll change that and try again. @jixianqiuxue
