
CDH5 Upgrade Notes (Cloudera Manager): Upgrading CDH 5.0.2 to CDH 5.2.0

Posted by 阿飞 on 2014-12-26 23:53:38
Upgrading CDH 5.0.2 to CDH 5.2.0


Questions this post answers

1. What do we gain by upgrading CDH 5.0.2 to CDH 5.2.0?
2. Which two major steps does the upgrade consist of?
3. How did we fix Impala JDBC failing under Kerberos after the upgrade?

Why upgrade
1. To get Kerberos support for Spark.
2. To get Impala's trunc() function.
3. To fix Impala hanging when queries run concurrently with an Impala import.

Upgrade steps
Reference: http://www.cloudera.com/content/ ... lation_upgrade.html

Upgrade Cloudera Manager first, then upgrade CDH.

1. Preparation

Unify the root password across the cluster (ops helped with this).
Turn off automatic agent restarts.
Download the parcel packages in advance.
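
If you pre-stage the packages on the CM server, the upgrade wizard can pick them up locally instead of downloading them. A minimal sketch, assuming CM's default local repository path /opt/cloudera/parcel-repo and an illustrative parcel file name:

    # Pre-stage the CDH 5.2.0 parcel and its checksum on the CM server host
    # (file names are illustrative; use the ones you actually downloaded):
    cp CDH-5.2.0-1.cdh5.2.0.p0.36-el6.parcel \
       CDH-5.2.0-1.cdh5.2.0.p0.36-el6.parcel.sha /opt/cloudera/parcel-repo/
    chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/*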

2. Upgrading Cloudera Manager

Log in to the host running the CM server and check which database it uses:

    cat /etc/cloudera-scm-server/db.properties


Back up the CM database (the embedded PostgreSQL listens on port 7432):

    pg_dump -U scm -p 7432 > scm_server_db_backup.bak

Check whether files show up under /tmp, and make sure nothing under /tmp gets deleted in the meantime.
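
A quick optional sanity check on the dump before moving on:

    ls -lh scm_server_db_backup.bak      # should be non-trivial in size
    head -n 5 scm_server_db_backup.bak   # should show a PostgreSQL dump header
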
Stop the CM server:

    sudo service cloudera-scm-server stop

Stop the database the CM server depends on:

    sudo service cloudera-scm-server-db stop

If an agent is also running on this CM server host, stop it as well:

    sudo service cloudera-scm-agent stop

Edit yum's cloudera-manager.repo file:

    sudo vim /etc/yum.repos.d/cloudera-manager.repo

    [cloudera-manager]
    # Packages for Cloudera Manager, Version 5, on RedHat or CentOS 6 x86_64
    name=Cloudera Manager
    baseurl=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/
    gpgkey=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
    gpgcheck=1


Upgrade the packages:

    sudo yum clean all
    sudo yum upgrade 'cloudera-*'


Check the result:

    rpm -qa 'cloudera-manager-*'


Start the CM server database:

    sudo service cloudera-scm-server-db start

Start the CM server:

    sudo service cloudera-scm-server start

Log in to http://172.20.0.83:7180/ and install the agents (steps omitted).
If the upgrade also upgrades the JDK, the JAVA_HOME path changes and Java-based services stop working; JAVA_HOME has to be reconfigured (see the sketch below).
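
A minimal sketch of re-pointing JAVA_HOME, assuming the new JDK landed under /usr/java (the exact path, and where your services read JAVA_HOME from, vary by setup, so treat this as illustrative):

    # locate the JDK the upgrade installed
    readlink -f "$(which java)"
    # then point JAVA_HOME at it wherever your services pick it up --
    # e.g. the host's Java Home Directory setting in the CM UI, or the
    # shell profile the services use (assumption: adjust to your setup)
    export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera   # illustrative path
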
After upgrading CM, restart CDH.

3. Upgrading CDH

Stop all cluster services.
Back up the NameNode metadata: cd into the NameNode data directory and run:

    tar -cvf /root/nn_backup_data.tar ./*
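
To confirm the archive is usable before going further:

    tar -tvf /root/nn_backup_data.tar | head   # a few entries (fsimage, edits, ...) should list cleanly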


Download the parcels.
Distribute the parcels -> activate them -> choose Close (not Restart).
Start the ZooKeeper service.
Go to the HDFS service -> Upgrade HDFS Metadata. This walks through:
  starting the NameNode with upgraded metadata,
  starting the remaining HDFS roles,
  the NameNode answering RPCs,
  HDFS leaving safe mode.
Back up the Hive metastore database:

    mysqldump -h172.20.0.67 -ucdhhive -p111111 cdhhive > /tmp/database-backup.sql
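
If the schema upgrade goes wrong, the dump restores the same way (same host and credentials as above):

    mysql -h172.20.0.67 -ucdhhive -p111111 cdhhive < /tmp/database-backup.sql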

Go to the Hive service -> Update Hive Metastore Database Schema.

Update the Oozie shared library: Oozie -> Install Oozie ShareLib:
  create the Oozie user ShareLib,
  create the Oozie user directory.
Update Sqoop: go to the Sqoop service -> Update Sqoop:
  this upgrades the Sqoop 2 server.
Update Spark (details omitted; you can uninstall the old version and install the new one directly after the upgrade).
Start all cluster services, in this order:

    zk -> hdfs -> spark -> flume -> hbase -> hive -> impala -> oozie -> sqoop2 -> hue


Deploy the client configuration files:

    deploy client configuration
    deploy hdfs client configuration
    deploy spark client configuration
    deploy hbase client configuration
    deploy yarn client configuration
    deploy hive client configuration


Remove the old-version packages:

    sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client

Restart the agents:

    sudo service cloudera-scm-agent restart

Finalize the HDFS metadata upgrade:
HDFS service -> Instances -> NameNode -> Actions -> Finalize Metadata Upgrade.
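
If you prefer the command line, the standard HDFS equivalent of the Finalize action (run it only once you are sure you will not need to roll back):

    sudo -u hdfs hdfs dfsadmin -finalizeUpgrade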

Main problem encountered during the upgrade:
    com.cloudera.server.cmf.FeatureUnavailableException: The feature Navigator Audit Server is not available.
            at com.cloudera.server.cmf.components.LicensedFeatureManager.check(LicensedFeatureManager.java:49)
            at com.cloudera.server.cmf.components.OperationsManagerImpl.setConfig(OperationsManagerImpl.java:1312)
            at com.cloudera.server.cmf.components.OperationsManagerImpl.setConfigUnsafe(OperationsManagerImpl.java:1352)
            at com.cloudera.api.dao.impl.ManagerDaoBase.updateConfigs(ManagerDaoBase.java:264)
            at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateConfigsHelper(RoleConfigGroupManagerDaoImpl.java:214)
            at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateRoleConfigGroup(RoleConfigGroupManagerDaoImpl.java:97)
            at com.cloudera.api.dao.impl.RoleConfigGroupManagerDaoImpl.updateRoleConfigGroup(RoleConfigGroupManagerDaoImpl.java:79)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:606)
            at com.cloudera.api.dao.impl.ManagerDaoBase.invoke(ManagerDaoBase.java:208)
            at com.sun.proxy.$Proxy82.updateRoleConfigGroup(Unknown Source)
            at com.cloudera.api.v3.impl.RoleConfigGroupsResourceImpl.updateRoleConfigGroup(RoleConfigGroupsResourceImpl.java:69)
            at com.cloudera.api.v3.impl.MgmtServiceResourceV3Impl$RoleConfigGroupsResourceWrapper.updateRoleConfigGroup(MgmtServiceResourceV3Impl.java:54)
            at com.cloudera.cmf.service.upgrade.RemoveBetaFromRCG.upgrade(RemoveBetaFromRCG.java:80)
            at com.cloudera.cmf.service.upgrade.AbstractApiAutoUpgradeHandler.upgrade(AbstractApiAutoUpgradeHandler.java:36)
            at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgradesForOneVersion(AutoUpgradeHandlerRegistry.java:233)
            at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:167)
            at com.cloudera.cmf.service.upgrade.AutoUpgradeHandlerRegistry.performAutoUpgrades(AutoUpgradeHandlerRegistry.java:138)
            at com.cloudera.server.cmf.Main.run(Main.java:587)
            at com.cloudera.server.cmf.Main.main(Main.java:198)
    2014-11-26 03:17:42,891 INFO ParcelUpdateService:com.cloudera.parcel.components.ParcelDownloade


The original deployment had been on the 60-day Enterprise trial, which had expired; during the upgrade the Navigator service could not start, which in turn made the whole Cloudera Manager server fail to start.

Problems after the upgrade

a. The third-party jars we had added for Flume were lost during the upgrade and had to be placed back under /opt/....
b. Sqoop could no longer find the MySQL driver jar; it too had to be placed back under /opt/....
c. The HBase service failed with:
    Unhandled exception. Starting shutdown.
    org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User hbase/ip-10-1-33-20.ec2.internal@YEAHMOBI.COM (auth:KERBEROS) is not authorized for protocol interface org.apache.hadoop.hdfs.protocol.ClientProtocol, expected client Kerberos principal is null
    at org.apache.hadoop.ipc.Client.call(Client.java:1409)
    at org.apache.hadoop.ipc.Client.call(Client.java:1362)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:594)
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2224)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:993)
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:977)
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:432)
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:851)
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:435)
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:146)
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:127)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:789)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:606)
    at java.lang.Thread.run(Thread.java:744)


Removing hbase.rpc.engine = org.apache.hadoop.hbase.ipc.SecureRpcEngine from the safety-valve configuration via CM let HBase restart successfully.
It later turned out to be a CM server problem: a hostname had been changed earlier without restarting the Cloudera Manager server. After restarting CM, HBase restarts fine with that setting back in place.
d. The Service Monitor and ZooKeeper showed warnings, and most other services showed some red health alerts:
    Exception in scheduled runnable.
    java.lang.IllegalStateException
    at com.google.common.base.Preconditions.checkState(Preconditions.java:133)
    at com.cloudera.cmon.firehose.polling.CdhTask.checkClientConfigs(CdhTask.java:712)
    at com.cloudera.cmon.firehose.polling.CdhTask.updateCacheIfNeeded(CdhTask.java:675)
    at com.cloudera.cmon.firehose.polling.FirehoseServicesPoller.getDescriptorAndHandleChanges(FirehoseServicesPoller.java:615)
    at com.cloudera.cmon.firehose.polling.FirehoseServicesPoller.run(FirehoseServicesPoller.java:179)
    at com.cloudera.enterprise.PeriodicEnterpriseService$UnexceptionablePeriodicRunnable.run(PeriodicEnterpriseService.java:67)
    at java.lang.Thread.run(Thread.java:745)


Same root cause as above: the CM server had not been restarted after the hostname change; restarting it cleared these alerts.
e. MapReduce jobs failed to access HBase under Kerberos.
We removed hbase.rpc.protection = privacy from the client hbase-site safety-valve configuration. The old version required this setting, and the new version's documentation also says to add it, but in our tests having it in place produced this exception:
    14/11/27 12:38:26 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-1-33-24.ec2.internal/10.1.33.24:2181, initiating session
    14/11/27 12:38:26 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-1-33-24.ec2.internal/10.1.33.24:2181, sessionid = 0x549ef6088f20309, negotiated timeout = 60000
    14/11/27 12:38:41 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:38:55 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:39:15 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:39:34 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:39:55 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:40:19 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    14/11/27 12:40:36 WARN ipc.RpcClient: Couldn't setup connection for hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-31.ec2.internal@YEAHMOBI.COM
    Caused by: java.io.IOException: Couldn't setup connection for hbase/ip-10-1-33-20.ec2.internal@YEAHMOBI.COM to hbase/ip-10-1-34-32.ec2.internal@YEAHMOBI.COM
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$1.run(RpcClient.java:821)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.handleSaslConnectionFailure(RpcClient.java:796)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:898)
    at org.apache.hadoop.hbase.ipc.RpcClient.getConnection(RpcClient.java:1543)
    at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1442)
    at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
    at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:30014)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1623)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:93)
    at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:90)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
    ... 31 more
    Caused by: javax.security.sasl.SaslException: No common protection layer between client and server
    at com.sun.security.sasl.gsskerb.GssKrb5Client.doFinalHandshake(GssKrb5Client.java:252)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:187)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:210)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:770)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$600(RpcClient.java:357)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:891)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:888)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:888)
    ... 40 more

For reference, the hbase.rpc.engine safety-valve entry (cf. the fix for problem c) stays in place:

    <property>
        <name>hbase.rpc.engine</name>
        <value>org.apache.hadoop.hbase.ipc.SecureRpcEngine</value>
    </property>
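
For contrast, the entry removed from the client hbase-site safety valve, in the same format (a sketch mirroring the removal described above):

    <property>
        <name>hbase.rpc.protection</name>
        <value>privacy</value>
    </property>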


In the MR job we load the HBase dependencies as described in http://www.cloudera.com/content/ ... apreduce_hbase.html, i.e. with TableMapReduceUtil.addDependencyJars(job);
and pass the Kerberos identity in through our user API, for example:

    hbase.master.kerberos.principal=hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM
    hbase.keytab.path=/home/dev/1015q.keytab
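
As a hedged illustration, the submitting user can verify that this principal and keytab actually work before launching the job:

    kinit -kt /home/dev/1015q.keytab hbase/ip-10-1-10-15.ec2.internal@YEAHMOBI.COM
    klist   # confirm a ticket was granted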


f. After the upgrade, Impala JDBC no longer worked under Kerberos:
    java.sql.SQLException: Could not open connection to jdbc:hive2://ip-10-1-33-22.ec2.internal:21050/ym_system;principal=impala/ip-10-1-33-22.ec2.internal@YEAHMOBI.COM: GSS initiate failed
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:187)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:164)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:233)
    at com.cloudera.example.ClouderaImpalaJdbcExample.main(ClouderaImpalaJdbcExample.java:37)
    Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
    at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:221)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:297)
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
    at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:185)
    ... 5 more


Fix: roll back these two jars to their pre-upgrade versions:

    hadoop-auth-2.5.0-cdh5.2.0.jar
    hive-shims-common-secure-0.13.1-cdh5.2.0.jar
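
A minimal sketch of the rollback, assuming the jars sit in the JDBC client's lib directory and that copies of the CDH 5.0.2-era jars were saved before the upgrade (the paths and the old version strings below are assumptions):

    cd /path/to/impala-jdbc-client/lib                       # hypothetical client lib dir
    mv hadoop-auth-2.5.0-cdh5.2.0.jar{,.bak}                 # set the 5.2.0 jars aside
    mv hive-shims-common-secure-0.13.1-cdh5.2.0.jar{,.bak}
    cp /path/to/backup/hadoop-auth-2.3.0-cdh5.0.2.jar .                  # assumed old version
    cp /path/to/backup/hive-shims-common-secure-0.12.0-cdh5.0.2.jar .    # assumed old version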

阿飞 followed up on 2014-12-26 23:58:11 with further related material:

Cloudera Manager & CDH5 installation and upgrade



I. Preparation
Download the various CDH sources:
1. Download the Cloudera Manager installer: http://archive-primary.cloudera.com/cm5/installer
2. Since this run simulates an upgrade install, first download the beta repo: http://archive-primary.cloudera.com/cm5/redhat/5/x86_64/cm/5.0.0-beta-2/
3. Then download the parcel packages; this is also the officially recommended install method: http://archive-primary.cloudera.com/cdh5/parcels/
   Besides the parcels you must also download the manifest.json file; the install cannot succeed without it.
4. After downloading, place the rpm repo under the web server's directory and write the yum repo file:

    [cloudera-manager]
    name = Cloudera Manager, Version 5.0.0
    baseurl = http://IP/yum-package/cm5/redhat/5/x86_64/cm/5.0.0/
    gpgcheck = 0



5. Add the checksum file for the parcel:
Run cat manifest.json, copy the hash section for your parcel out of it, and write it into the parcel's .sha file; then chmod 755 all three files. A hedged automation of this step follows.
(We once edited manifest.json incorrectly and the package kept failing to install; only the manager's log revealed the problem.)
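
A sketch of generating the .sha files from manifest.json, assuming the usual Cloudera manifest layout (a top-level "parcels" list whose entries carry "parcelName" and "hash" fields):

    # write <parcel>.sha next to each parcel listed in manifest.json
    python -c "import json; [open(p['parcelName']+'.sha','w').write(p['hash']+'\n') for p in json.load(open('manifest.json'))['parcels']]"
    chmod 755 *.parcel *.parcel.sha manifest.json
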
That completes the preparation!
II. Installing Cloudera Manager
Run ./cloudera-manager-installer.bin directly; it even supports mouse navigation.
Before the final step, have a second terminal ready to copy the repo file you wrote back into place: the installer overwrites it, so you have to un-overwrite it by hand...
Once the manager host is installed you can configure everything from the web UI (startup is a little slow):
http://ip:7180

III. Installing CDH through the web UI
1. After adding the servers involved, manually point CM at the local parcel and agent sources.
2. When installation completes, choose services; this time only the basic ones: hdfs, yarn, zookeeper.
3. Assign roles and finish the installation.
4. From a slave node, test uploading a file: OK.

IV. Upgrading Cloudera Manager
Back up first:

    # cd /mnt/hadoop/hdfs/name
    # tar -cvf /root/nn_backup_data.tar .




1. Stop everything currently running, including the management services.
2. Stop cloudera-scm-server and cloudera-scm-server-db:

    service cloudera-scm-server stop
    service cloudera-scm-server-db stop




3. Prepare the repo file for the new manager version, then run:

    yum clean all && yum upgrade 'cloudera-*'
    rpm -qa 'cloudera-*'   # check the versions

4. When the upgrade finishes, start the server and the db.
5. Upgrade the agents from the web UI; a prompt appears as soon as you log in. Manually point it at your repo URL and the upgrade goes through.

V. Upgrading CDH
  • First remember to back up the NameNode data:

    # cd /mnt/hadoop/hdfs/name
    # tar -cvf /root/nn_backup_data.tar .

  • Click the small gift-box icon to the left of the search field and add the new parcel version's URL; a new entry then appears below it, and clicking Activate performs the upgrade automatically.



That completes the whole upgrade.

VI. Errors encountered
  • The NameNode failed to start during the upgrade:

    java.io.IOException:
    File system image contains an old layout version -51.
    An upgrade to version -55 is required.
    Please restart NameNode with the "-rollingUpgrade started" option if a rolling upgraded is already started; or restart NameNode with the "UPGRADE" to start a new upgrade.


Starting the NameNode manually on the master node, stopping it, then restarting it from the web page brought it back to normal:

    su - hdfs -c "hdfs --config /var/run/cloudera-scm-agent/process/XXX-hdfs-NAMENODE namenode -upgrade"
2. Another error that may appear:

    Get corrupt file blocks returned error: Cannot run listCorruptFileBlocks because replication queues have not been initialized.

Deleting the previous/ directory on the NameNode fixes it.
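
Concretely, using the NameNode data directory from earlier in this post:

    rm -rf /mnt/hadoop/hdfs/name/previous
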
3. Because the socket file from the previous install (/var/run/hdfs-sockets) had not been cleaned up properly, the DataNode failed to start; manually changing its ownership to root solved it.
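
That fix amounts to something along these lines (the ownership change described above):

    chown root /var/run/hdfs-sockets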


References and documents:
Official PDF: Cloudera-Manager-Administration-Guide.pdf
