
Enabling Kerberos in a CDH 6.2 Environment

Questions covered:

1. How do you install Kerberos?
2. How do you enable Kerberos on a CDH cluster?
3. How do you use Kerberos?
4. What are the common errors?



I. Kerberos Overview
Kerberos is a third-party protocol for secure authentication, developed and implemented at MIT. It is not Hadoop-specific; you can use it with other systems as well. Using the traditional shared-secret approach, it secures communication between a client and a server over a network that cannot be assumed to be safe, and it fits the client/server model. With Cloudera Manager, Kerberos integration can be completed fairly easily through the web UI.

The Kerberos protocol:
The Kerberos protocol is mainly used for identity authentication in computer networks. Its distinguishing feature is single sign-on (SSO): a user enters credentials once, and the resulting ticket-granting ticket (TGT) grants access to multiple services. Because a shared key is established between each client and service, the protocol provides a substantial level of security.

II. Installation Steps
Environment:
OS: CentOS 7.5
CDH 6.2

1. Installing and configuring the KDC service
Install the KDC service on the server hosting Cloudera Manager Server (the KDC can be installed on a different server if needed).

(1) Install the KDC service on the CM server
[root@master~]# yum -y install krb5-server krb5-libs krb5-auth-dialog krb5-workstation

(2) Edit the /etc/krb5.conf file
[mw_shl_code=shell,true][root@master ~]# vi /etc/krb5.conf               
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
default_realm = HADOOP.COM
#default_ccache_name = KEYRING:persistent:%{uid}

[realms]
HADOOP.COM = {
  kdc = master
  admin_server = master
}

[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM[/mw_shl_code]
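If you manage many nodes, this file can also be generated by a script before being distributed. A minimal sketch, assuming the same HADOOP.COM realm and `master` KDC host as above; the output path defaults to a safe temporary location for a dry run:

```shell
#!/usr/bin/env bash
# Sketch: generate the krb5.conf shown above from variables, so it can be
# templated before distribution. REALM and KDC_HOST match the example.
set -euo pipefail

REALM="HADOOP.COM"
KDC_HOST="master"
OUT="${1:-/tmp/krb5.conf}"   # a safe default path for a dry run

cat > "$OUT" <<EOF
[libdefaults]
 default_realm = ${REALM}
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false

[realms]
 ${REALM} = {
  kdc = ${KDC_HOST}
  admin_server = ${KDC_HOST}
 }

[domain_realm]
 .hadoop.com = ${REALM}
 hadoop.com = ${REALM}
EOF

# Sanity check before copying the file to /etc/krb5.conf on every node.
grep -q "default_realm = ${REALM}" "$OUT" && echo "krb5.conf written to $OUT"
```

Once verified, the generated file can be pushed out with pscp exactly as shown in step (11) below.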
(3) Edit the /var/kerberos/krb5kdc/kadm5.acl configuration
*/admin@HADOOP.COM      *
*/master@HADOOP.COM      *

(4) Edit the /var/kerberos/krb5kdc/kdc.conf configuration
[mw_shl_code=shell,true](base) [root@master ~]# vim /var/kerberos/krb5kdc/kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88

[realms]
HADOOP.COM = {
  #master_key_type = aes256-cts
  max_renewable_life= 7d 0h 0m 0s
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}[/mw_shl_code]
(5) Create the Kerberos database
[mw_shl_code=shell,true](base) [root@master ~]# kdb5_util create -r HADOOP.COM -s
Loading random data
Initializing database '/var/kerberos/krb5kdc/principal' for realm 'HADOOP.COM',
master key name 'K/M@HADOOP.COM'
You will be prompted for the database Master Password.
It is important that you NOT FORGET this password.
Enter KDC database master key:
Re-enter KDC database master key to verify:
You will be prompted here for the Kerberos database master password.[/mw_shl_code]

(6) Create the Kerberos administrator account
[mw_shl_code=shell,true](base) [root@master ~]# kadmin.local
Authenticating as principal root/admin@HADOOP.COM with password.
kadmin.local:  addprinc admin/admin@HADOOP.COM
WARNING: no policy specified for admin/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "admin/admin@HADOOP.COM":
Re-enter password for principal "admin/admin@HADOOP.COM":
Principal "admin/admin@HADOOP.COM" created.
kadmin.local:  exit

This creates the Kerberos administrator account and password.[/mw_shl_code]
(7) Enable the krb5kdc and kadmin services at boot, then start them
[mw_shl_code=shell,true][root@master ~]# systemctl enable krb5kdc
Created symlink from /etc/systemd/system/multi-user.target.wants/krb5kdc.service to /usr/lib/systemd/system/krb5kdc.service.
[root@master ~]# systemctl enable kadmin
Created symlink from /etc/systemd/system/multi-user.target.wants/kadmin.service to /usr/lib/systemd/system/kadmin.service.
[root@master ~]# systemctl start krb5kdc
[root@master ~]# systemctl start kadmin[/mw_shl_code]
(8) Test the Kerberos administrator account
[mw_shl_code=shell,true][root@master ~]# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
kinit: Password incorrect while getting initial credentials
[root@master ~]# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
[root@master ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HADOOP.COM

Valid starting       Expires              Service principal
06/25/2019 19:11:28  06/26/2019 19:11:28  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 07/02/2019 19:11:28[/mw_shl_code]
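For scheduled jobs and services, typing a password at the kinit prompt is impractical; the usual pattern is to export the principal's keys to a keytab once and authenticate from it. A minimal sketch with an illustrative principal and keytab path (not taken from the article); it defaults to a dry run that only prints the commands:

```shell
#!/usr/bin/env bash
# Sketch: export a keytab and authenticate without a password prompt.
# PRINC and KEYTAB are illustrative values, not from the article.
set -euo pipefail

PRINC="admin/admin@HADOOP.COM"
KEYTAB="/root/admin.keytab"

# DRY_RUN=1 (the default) only prints each command; set DRY_RUN=0 on a real KDC.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Export the principal's keys into a keytab. Note that ktadd re-randomizes
#    the principal's password by default; kadmin.local also accepts
#    "ktadd -norandkey" if you want to keep the existing password.
run kadmin.local -q "ktadd -k ${KEYTAB} ${PRINC}"

# 2. Obtain a TGT from the keytab -- no password prompt.
run kinit -kt "${KEYTAB}" "${PRINC}"

# 3. Verify the ticket cache.
run klist
```

Keep any exported keytab readable only by its owner (chmod 600), since it is equivalent to the principal's password.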
(9) Install the Kerberos client on all cluster nodes, including Cloudera Manager
Use a parallel batch command to install the Kerberos client on every node:
[root@master ~]# pssh -h hostlist.txt -i yum -y install krb5-libs krb5-workstation

(10) Install an additional package on the Cloudera Manager Server host
[root@master ~]# yum -y install openldap-clients

(11) Copy krb5.conf from the KDC server to all Kerberos clients
Use a batch copy to push the KDC server's krb5.conf into /etc on every cluster node:
[root@master ~]# pscp -h hostlist.txt /etc/krb5.conf /etc/

2. Enabling Kerberos on the CDH cluster
(1) In the KDC, add an administrator account for Cloudera Manager
[mw_shl_code=shell,true](base) [root@master ~]# kadmin.local
Authenticating as principal admin/admin@HADOOP.COM with password.
kadmin.local:  addprinc cloudera-scm/admin@HADOOP.COM
WARNING: no policy specified for cloudera-scm/admin@HADOOP.COM; defaulting to no policy
Enter password for principal "cloudera-scm/admin@HADOOP.COM":
Re-enter password for principal "cloudera-scm/admin@HADOOP.COM":
Principal "cloudera-scm/admin@HADOOP.COM" created.
kadmin.local:  exit[/mw_shl_code]
(2) In Cloudera Manager, open "Administration" -> "Security".
(3) Select "Enable Kerberos" to start the setup wizard.

(4) Make sure every prerequisite in the checklist is satisfied, then check all of the boxes.

(5) Click "Continue" and fill in the KDC details: KDC type, KDC server, KDC realm, encryption types, and the renewal lifetime for the service principals to be created (hdfs, yarn, hbase, hive, and so on).
(6) It is not recommended to let Cloudera Manager manage krb5.conf. Click "Continue".

(7) Enter the Cloudera Manager Kerberos administrator account; it must match the account created earlier. Click "Continue".

(8) Click "Continue" to enable Kerberos.

(9) Kerberos is now enabled; click "Continue".

3. Using Kerberos
Create a test user, haley, to run Hive and MapReduce jobs. The haley user must be created on every node in the cluster.

(1) Create a principal for haley with kadmin
[mw_shl_code=shell,true](base) [root@master ~]# kadmin.local
Authenticating as principal admin/admin@HADOOP.COM with password.
kadmin.local:  addprinc haley@HADOOP.COM
WARNING: no policy specified for haley@HADOOP.COM; defaulting to no policy
Enter password for principal "haley@HADOOP.COM":
Re-enter password for principal "haley@HADOOP.COM":
Principal "haley@HADOOP.COM" created.
kadmin.local:  exit[/mw_shl_code]
(2) Log in to Kerberos as the haley user
[mw_shl_code=shell,true][root@master ~]# kdestroy
[root@master ~]# kinit haley
Password for haley@HADOOP.COM:
(base) [root@master ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: haley@HADOOP.COM

Valid starting       Expires              Service principal
06/26/2019 17:29:17  06/27/2019 17:29:17  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 07/03/2019 17:29:17[/mw_shl_code]
(3) Add the haley user on all cluster nodes
Add the haley user:
[mw_shl_code=shell,true][root@master ~]# pssh -h hostlist.txt -i useradd haley
[1] 17:32:15 [SUCCESS] datanode2
[2] 17:32:15 [SUCCESS] master
[3] 17:32:15 [SUCCESS] datanode3
[4] 17:32:15 [SUCCESS] datanode1[/mw_shl_code]
Add the haley user to the hdfs and hadoop groups:
[mw_shl_code=shell,true][root@master ~]# pssh -h hostlist.txt -i usermod -G hdfs,hadoop haley
[1] 17:51:11 [SUCCESS] datanode2
[2] 17:51:11 [SUCCESS] master
[3] 17:51:11 [SUCCESS] datanode3
[4] 17:51:12 [SUCCESS] datanode1
[root@master ~]# pssh -h hostlist.txt -i usermod -G hadoop haley
[1] 17:51:54 [SUCCESS] datanode2
[2] 17:51:54 [SUCCESS] datanode1
[3] 17:51:54 [SUCCESS] master
[4] 17:51:54 [SUCCESS] datanode3[/mw_shl_code]
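One caveat with the transcript above: `usermod -G` replaces the user's supplementary group list, so the second run leaves haley in only the hadoop group; `usermod -aG` appends instead. If pssh is not available, a plain ssh loop works too. A sketch under those assumptions (hostlist.txt as above; the remote command is built as a string so it can be reviewed before it runs anywhere):

```shell
#!/usr/bin/env bash
# Sketch: build the per-node user-setup command so it can be inspected,
# then (optionally) run it over ssh on every host in hostlist.txt.
set -euo pipefail

# Build the remote command for one node. -aG appends supplementary groups
# instead of replacing the existing list (plain -G replaces it).
remote_cmd() {
  local user="$1" groups="$2"
  printf 'id %s >/dev/null 2>&1 || useradd %s; usermod -aG %s %s' \
    "$user" "$user" "$groups" "$user"
}

echo "$(remote_cmd haley hdfs,hadoop)"

# On a real cluster, uncomment to apply it on every node:
# while IFS= read -r host; do
#   [ -n "$host" ] && ssh "root@${host}" "$(remote_cmd haley hdfs,hadoop)"
# done < hostlist.txt
```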
(4) Run a MapReduce job
[root@master ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1

(5) Connect to Hive with beeline to test
[mw_shl_code=shell,true][root@master ~]# beeline
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Beeline version 2.1.1-cdh6.2.0 by Apache Hive
beeline> !connect jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM
Connecting to jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM
Connected to: Apache Hive (version 2.1.1-cdh6.2.0)
Driver: Hive JDBC (version 2.1.1-cdh6.2.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://localhost:10000/> show databases;
INFO  : Compiling command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8): show databases
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8); Time taken: 0.164 seconds
INFO  : Executing command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20190626195802_7194decd-6597-4c72-9a6e-3c2e294031b8); Time taken: 0.007 seconds
INFO  : OK
+----------------+
| database_name  |
+----------------+
| default        |
| dw_ttt         |
| pdd            |
| taotoutiao     |
+----------------+
4 rows selected (0.281 seconds)
0: jdbc:hive2://localhost:10000/>[/mw_shl_code]
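The same connection can be scripted non-interactively with beeline's standard `-u` and `-e` flags, which is convenient for smoke tests after enabling Kerberos. A sketch reusing the JDBC URL from the session above; it assumes a valid TGT (kinit) and the Hive client on the node:

```shell
#!/usr/bin/env bash
# Sketch: one-shot beeline query against a Kerberized HiveServer2.
# Requires a valid Kerberos ticket (kinit) before running.
set -euo pipefail

JDBC_URL="jdbc:hive2://localhost:10000/;principal=hive/master@HADOOP.COM"

# Compose the command as an array so the quoting survives inspection/logging.
CMD=(beeline -u "$JDBC_URL" -e "show databases;")
echo "running: ${CMD[*]}"

# Uncomment on a node with the Hive client installed:
# "${CMD[@]}"
```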

III. Summary

Problem 1: The following error is reported while configuring Kerberos
[mw_shl_code=text,true]/opt/cloudera/cm/bin/gen_credentials.sh failed with exit code 1 and output of <<
+ export PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ PATH=/usr/kerberos/bin:/usr/kerberos/sbin:/usr/lib/mit/sbin:/usr/sbin:/usr/lib/mit/bin:/usr/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+ CMF_REALM=HADOOP.COM
+ KEYTAB_OUT=/var/run/cloudera-scm-server/cmf3121646876397998512.keytab
+ PRINC=kafka_mirror_maker/master@HADOOP.COM
+ MAX_RENEW_LIFE=432000
+ KADMIN='kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/admin@HADOOP.COM -r HADOOP.COM'
+ RENEW_ARG=
+ '[' 432000 -gt 0 ']'
+ RENEW_ARG='-maxrenewlife "432000 sec"'
+ '[' -z /etc/krb5.conf ']'
+ echo 'Using custom config path '\''/etc/krb5.conf'\'', contents below:'
+ cat /etc/krb5.conf
+ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/admin@HADOOP.COM -r HADOOP.COM -q 'addprinc -maxrenewlife "432000 sec" -randkey kafka_mirror_maker/master@HADOOP.COM'
Couldn't open log file /var/log/kadmind.log: Permission denied
WARNING: no policy specified for kafka_mirror_maker/master@HADOOP.COM; defaulting to no policy
add_principal: Operation requires ``add'' privilege while creating "kafka_mirror_maker/master@HADOOP.COM".
+ '[' 432000 -gt 0 ']'
++ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/admin@HADOOP.COM -r HADOOP.COM -q 'getprinc -terse kafka_mirror_maker/master@HADOOP.COM'
++ tail -1
++ cut -f 12
Couldn't open log file /var/log/kadmind.log: Permission denied
get_principal: Operation requires ``get'' privilege while retrieving "kafka_mirror_maker/master@HADOOP.COM".
+ RENEW_LIFETIME='Authenticating as principal cloudera-scm/admin@HADOOP.COM with keytab /var/run/cloudera-scm-server/cmf487375145055296868.keytab.'
+ '[' Authenticating as principal cloudera-scm/admin@HADOOP.COM with keytab /var/run/cloudera-scm-server/cmf487375145055296868.keytab. -eq 0 ']'
/opt/cloudera/cm/bin/gen_credentials.sh: line 35: [: too many arguments
+ kadmin -k -t /var/run/cloudera-scm-server/cmf487375145055296868.keytab -p cloudera-scm/admin@HADOOP.COM -r HADOOP.COM -q 'xst -k /var/run/cloudera-scm-server/cmf3121646876397998512.keytab kafka_mirror_maker/master@HADOOP.COM'
Couldn't open log file /var/log/kadmind.log: Permission denied
kadmin: Operation requires ``change-password'' privilege while changing kafka_mirror_maker/master@HADOOP.COM's key
+ chmod 600 /var/run/cloudera-scm-server/cmf3121646876397998512.keytab
chmod: cannot access ‘/var/run/cloudera-scm-server/cmf3121646876397998512.keytab’: No such file or directory
>>[/mw_shl_code]
Cause: the user privileges in the /var/kerberos/krb5kdc/kadm5.acl file were not configured:
*/admin@HADOOP.COM      *

After fixing the ACL, restart the Kerberos services:
systemctl restart krb5kdc
systemctl restart kadmin

Problem 2: A MapReduce job run by a Kerberos user fails
[mw_shl_code=shell,true][root@master ~]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1
WARNING: Use "yarn jar" to launch YARN applications.
Number of Maps  = 10
Samples per Map = 1
org.apache.hadoop.security.AccessControlException: Permission denied: user=haley, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
       at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
       at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2335)
       at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2309)
       at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1247)
       at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1244)
       at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1261)
       at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1236)
       at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2260)
       at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:283)
       at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:360)
       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:368)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
       at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
       at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.util.RunJar.run(RunJar.java:313)
       at org.apache.hadoop.util.RunJar.main(RunJar.java:227)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=haley, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:400)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:256)
       at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:194)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1855)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1839)
       at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1798)
       at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3101)
       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1123)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:696)
       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
       at java.security.AccessController.doPrivileged(Native Method)
       at javax.security.auth.Subject.doAs(Subject.java:422)
       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
      at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1499)
       at org.apache.hadoop.ipc.Client.call(Client.java:1445)
       at org.apache.hadoop.ipc.Client.call(Client.java:1355)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
       at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:640)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
       at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
       at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
       at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2333)
       ... 24 more[/mw_shl_code]
Fix:
Create a supergroup group and add the new haley user to it:
[root@master ~]# pssh -h hostlist.txt -i groupadd supergroup
[root@master ~]# pssh -h hostlist.txt -i usermod -G supergroup haley
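Note that adding users to supergroup grants them broad HDFS privileges. A narrower fix for this particular error is to create an HDFS home directory owned by the user, which is what the example job actually needs. A sketch using standard hdfs commands (run the printed commands with a ticket for an HDFS superuser, e.g. the hdfs principal):

```shell
#!/usr/bin/env bash
# Sketch: create an HDFS home directory for haley instead of relying on
# supergroup membership alone. Run the printed commands with a ticket
# for an HDFS superuser.
set -euo pipefail

TARGET_USER="haley"

CMDS=(
  "hdfs dfs -mkdir -p /user/${TARGET_USER}"
  "hdfs dfs -chown ${TARGET_USER}:${TARGET_USER} /user/${TARGET_USER}"
)

# Print for review; run each line on a cluster node.
for c in "${CMDS[@]}"; do
  echo "+ $c"
done
```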





Source: CSDN

Author: 常飞梦

Original: "CDH6.2环境中启用Kerberos" (Enabling Kerberos in a CDH 6.2 Environment)

https://blog.csdn.net/lichangzai/article/details/93861348




