
Flume-ng failover and load balance: tests and caveats

Guiding questions:
1. How should Flume failover and load balance be configured when used together?
2. Is it true that a sink cannot be shared between sinkgroups?
Extension:
What are Flume-ng failover and load balance?




Today I'd like to write up the scenarios I used to test Flume failover and load balance (Flume's load balancing), together with some conclusions.
The test environment consists of 5 configuration files, that is, 5 agents.
One is the main configuration file, the one in which the failover and load balance relationships are defined (flume-sink.properties). This file changes from scenario to scenario, so it is not listed here; it is shown in each scenario below.
The other 4 configuration files are all similar:

  #Name the components on this agent
  a2.sources = r1
  a2.sinks = k1
  a2.channels = c1
  #Describe/configure the source
  a2.sources.r1.type = avro
  a2.sources.r1.channels = c1
  a2.sources.r1.bind = 192.168.220.159
  a2.sources.r1.port = 44411
  #Describe the sink
  a2.sinks.k1.type = logger
  a2.sinks.k1.channel = c1
  #Use a channel which buffers events in memory
  a2.channels.c1.type = memory
  a2.channels.c1.capacity = 1000
  a2.channels.c1.transactionCapacity = 100


For the other three, only the agent name and the listening port need to be changed (the sinks in the main agent below connect to ports 44411, 44422, 44433 and 44444 on the same host).
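Each receiving agent is started with the standard flume-ng launcher before the main agent. A minimal sketch (the config file names here are placeholders; the post does not give them):

  # start receiver a2; repeat for a3/a4/a5, each with its own config file and listening port
  flume-ng agent --conf conf --conf-file flume-a2.properties --name a2 -Dflume.root.logger=INFO,console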

Scenario 1: idea (use failover and load balance at the same time, so the setup gets both load sharing and fault tolerance and is more robust)
Configuration file (flume-sink.properties):


  #Name the components on this agent
  a1.sources = r1
  a1.sinks = k1 k2 k3
  a1.channels = c1
  #Describe the sinkgroups
  a1.sinkgroups = g1 g2
  a1.sinkgroups.g1.sinks = k1 k2
  a1.sinkgroups.g1.processor.type = failover
  a1.sinkgroups.g1.processor.priority.k1 = 10
  a1.sinkgroups.g1.processor.priority.k2 = 5
  a1.sinkgroups.g1.processor.maxpenalty = 10000
  a1.sinkgroups.g2.sinks = k1 k3
  a1.sinkgroups.g2.processor.type = load_balance
  a1.sinkgroups.g2.processor.backoff = true
  a1.sinkgroups.g2.processor.selector = round_robin
  #Describe/config the source
  a1.sources.r1.type = syslogtcp
  a1.sources.r1.port = 5140
  a1.sources.r1.host = localhost
  a1.sources.r1.channels = c1
  #Describe the sink
  a1.sinks.k1.type = avro
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hostname = 192.168.220.159
  a1.sinks.k1.port = 44411
  a1.sinks.k2.type = avro
  a1.sinks.k2.channel = c1
  a1.sinks.k2.hostname = 192.168.220.159
  a1.sinks.k2.port = 44422
  a1.sinks.k3.type = avro
  a1.sinks.k3.channel = c1
  a1.sinks.k3.hostname = 192.168.220.159
  a1.sinks.k3.port = 44433
  #Use a channel which buffers events in memory
  a1.channels.c1.type = memory
  a1.channels.c1.capacity = 1000
  a1.channels.c1.transactionCapacity = 100
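With the receivers and the main agent running, test events are pushed into the syslogtcp source on port 5140 and the receivers are watched through their logger sinks. A minimal way to send one event (assuming nc is installed; the post does not show the exact command that was used):

  echo "<13>hello flume" | nc localhost 5140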

Test results:


1.png


K1, K2 and K3 all receive data. But K2 has the lowest priority here and was not supposed to get any data unless K1 went down.

When I first ran this test I had forgotten to declare k3, which produced a pile of errors and made me think that a sink cannot be shared between sinkgroups.


2.png
3.png



I then mailed the community list asking whether sinks really cannot be shared between sinkgroups, and one guy told me that was indeed the case. Now I know we were both wrong, haha.
In fact there was still one thing missing from the configuration above. Look at the config file again: K1 has a failover partner, but K3 does not. So a failover node K4 needs to be added for K3.
@晨色星空 pointed this out to me, and it clicked right away.


  #Describe the sinkgroups
  a1.sinkgroups = g1 g2 g3
  a1.sinkgroups.g1.sinks = k1 k2
  a1.sinkgroups.g1.processor.type = failover
  a1.sinkgroups.g1.processor.priority.k1 = 10
  a1.sinkgroups.g1.processor.priority.k2 = 5
  a1.sinkgroups.g1.processor.maxpenalty = 10000

  a1.sinkgroups.g2.sinks = k1 k3
  a1.sinkgroups.g2.processor.type = load_balance
  a1.sinkgroups.g2.processor.backoff = true
  a1.sinkgroups.g2.processor.selector = round_robin

  a1.sinkgroups.g3.sinks = k3 k4
  a1.sinkgroups.g3.processor.type = failover
  a1.sinkgroups.g3.processor.priority.k3 = 10
  a1.sinkgroups.g3.processor.priority.k4 = 5
  a1.sinkgroups.g3.processor.maxpenalty = 10000
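Note that for g3 to work, k4 also has to be added to a1.sinks and declared as a sink; that part is not shown in the snippet above. A minimal sketch (the port 44444 is an assumption, matching the fourth receiver used in the later scenarios):

  a1.sinks = k1 k2 k3 k4
  a1.sinks.k4.type = avro
  a1.sinks.k4.channel = c1
  a1.sinks.k4.hostname = 192.168.220.159
  a1.sinks.k4.port = 44444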


So let's add g3 and k4 and send some data to see what happens.

4.png

Well then. Surprisingly, all four of them receive data.
Next, let's kill K1 and see what happens.

5.png

This time K2, K3 and K4 all receive data, which is really a bit odd: K4 is K3's failover node and should not be getting data, whereas K2 receiving data does make sense.
Let's bring K1 back up and then kill K3 instead.

6.png

The result is similar to the case above; all I can say is that the test results are a bit strange, haha.

Scenario 2: idea (put failover and load balance on separate sinks)
Configuration file (flume-sink.properties):


  #Name the components on this agent
  a1.sources = r1
  a1.sinks = k1 k2 k3 k4
  a1.channels = c1

  #Describe the sinkgroups
  a1.sinkgroups = g1 g2
  a1.sinkgroups.g1.sinks = k1 k2
  a1.sinkgroups.g1.processor.type = failover
  a1.sinkgroups.g1.processor.priority.k1 = 10
  a1.sinkgroups.g1.processor.priority.k2 = 5
  a1.sinkgroups.g1.processor.maxpenalty = 10000

  a1.sinkgroups.g2.sinks = k3 k4
  a1.sinkgroups.g2.processor.type = load_balance
  a1.sinkgroups.g2.processor.backoff = true
  a1.sinkgroups.g2.processor.selector = round_robin

  #Describe/config the source
  a1.sources.r1.type = syslogtcp
  a1.sources.r1.port = 5140
  a1.sources.r1.host = localhost
  a1.sources.r1.channels = c1

  #Describe the sink
  a1.sinks.k1.type = avro
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hostname = 192.168.220.159
  a1.sinks.k1.port = 44411

  a1.sinks.k2.type = avro
  a1.sinks.k2.channel = c1
  a1.sinks.k2.hostname = 192.168.220.159
  a1.sinks.k2.port = 44422

  a1.sinks.k3.type = avro
  a1.sinks.k3.channel = c1
  a1.sinks.k3.hostname = 192.168.220.159
  a1.sinks.k3.port = 44433

  a1.sinks.k4.type = avro
  a1.sinks.k4.channel = c1
  a1.sinks.k4.hostname = 192.168.220.159
  a1.sinks.k4.port = 44444
  #Use a channel which buffers events in memory
  a1.channels.c1.type = memory
  a1.channels.c1.capacity = 1000
  a1.channels.c1.transactionCapacity = 100

Test results:



7.png


Here K1, K3 and K4 receive data, which is as expected. In effect K2 acts as the failover standby, while K1, K3 and K4 load-balance and share the incoming data.
Next we kill K1 and test again to see what happens.
The main agent now logs errors, because K1 has been taken down.

8.png

Now let's look at what the receivers get:

9.png


As you can see, K2, K3 and K4 now share the load: after K1 goes down, K2 joins K3 and K4 in splitting the data.

10.png

The result is again as expected: K1 and K4 receive the data, and after K3 is taken down (with K1 back up), K1 and K4 share the load.
Scenario 3: idea (use a failover configuration alone, with several high-priority sinks at the same level and one low-priority sink)
Configuration file (flume-sink.properties):


  #Name the components on this agent
  a1.sources = r1
  a1.sinks = k1 k2 k3 k4
  a1.channels = c1

  #Describe the sinkgroups
  a1.sinkgroups = g1
  a1.sinkgroups.g1.sinks = k1 k2 k3 k4
  a1.sinkgroups.g1.processor.type = failover
  a1.sinkgroups.g1.processor.priority.k1 = 10
  a1.sinkgroups.g1.processor.priority.k3 = 10
  a1.sinkgroups.g1.processor.priority.k4 = 10
  a1.sinkgroups.g1.processor.priority.k2 = 5
  a1.sinkgroups.g1.processor.maxpenalty = 10000

  #Describe/config the source
  a1.sources.r1.type = syslogtcp
  a1.sources.r1.port = 5140
  a1.sources.r1.host = localhost
  a1.sources.r1.channels = c1

  #Describe the sink
  a1.sinks.k1.type = avro
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hostname = 192.168.220.159
  a1.sinks.k1.port = 44411

  a1.sinks.k2.type = avro
  a1.sinks.k2.channel = c1
  a1.sinks.k2.hostname = 192.168.220.159
  a1.sinks.k2.port = 44422

  a1.sinks.k3.type = avro
  a1.sinks.k3.channel = c1
  a1.sinks.k3.hostname = 192.168.220.159
  a1.sinks.k3.port = 44433

  a1.sinks.k4.type = avro
  a1.sinks.k4.channel = c1
  a1.sinks.k4.hostname = 192.168.220.159
  a1.sinks.k4.port = 44444
  #Use a channel which buffers events in memory
  a1.channels.c1.type = memory
  a1.channels.c1.capacity = 1000
  a1.channels.c1.transactionCapacity = 100

Let's send some data and see.

11.png



Here we can see that K3 receives the data, and no matter how much I send, it all goes to K3. The original idea was to check whether several sinks with the same priority would balance the load between them; the test shows they do not.
So next, let's kill K3 and see what happens.

12.png

With K3 down, K2 receives the data, even though K2 has the lowest priority. In that case, let's kill K2 as well and see.
The result: neither K1 nor K4 receives any data. Let's look at the log:


  org.apache.flume.EventDeliveryException: All sinks failed to process, nothing left to failover to
          at org.apache.flume.sink.FailoverSinkProcessor.process(FailoverSinkProcessor.java:191)
          at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
          at java.lang.Thread.run(Thread.java:745)
  2014-07-08 12:12:26,574 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:206)] Rpc sink k2: Building RpcClient with hostname: 192.168.220.159, port: 44422
  2014-07-08 12:12:26,575 (SinkRunner-PollingRunner-FailoverSinkProcessor) [INFO - org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:126)] Attempting to create Avro Rpc client.
  2014-07-08 12:12:26,577 (SinkRunner-PollingRunner-FailoverSinkProcessor) [WARN - org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:620)] Using default maxIOWorkers
  2014-07-08 12:12:26,595 (SinkRunner-PollingRunner-FailoverSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.


The log says "All sinks failed to process". The tests above show that when configuring failover, sinks must not be given the same priority: if several sinks share a priority, only one of them takes effect.
I sent a mail to the community list about this; the thread is at http://mail-archives.apache.org/ ... 201408.mbox/browser if you want to take a look.

  Each sink needs to have a different priority. If multiple sinks have
  same priority, only one of them will be used.


That answer came from a guy at Cloudera. It means that when several sinks are configured in a failover group, each sink must have a different priority.
Scenario 4: idea (verify what that guy said, haha)
Configuration file (flume-sink.properties):


  #Name the components on this agent
  a1.sources = r1
  a1.sinks = k1 k2 k3 k4
  a1.channels = c1

  #Describe the sinkgroups
  a1.sinkgroups = g1
  a1.sinkgroups.g1.sinks = k1 k2 k3 k4
  a1.sinkgroups.g1.processor.type = failover
  a1.sinkgroups.g1.processor.priority.k1 = 10
  a1.sinkgroups.g1.processor.priority.k3 = 9
  a1.sinkgroups.g1.processor.priority.k4 = 9
  a1.sinkgroups.g1.processor.priority.k2 = 5
  a1.sinkgroups.g1.processor.maxpenalty = 10000

  #Describe/config the source
  a1.sources.r1.type = syslogtcp
  a1.sources.r1.port = 5140
  a1.sources.r1.host = localhost
  a1.sources.r1.channels = c1

  #Describe the sink
  a1.sinks.k1.type = avro
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hostname = 192.168.220.159
  a1.sinks.k1.port = 44411

  a1.sinks.k2.type = avro
  a1.sinks.k2.channel = c1
  a1.sinks.k2.hostname = 192.168.220.159
  a1.sinks.k2.port = 44422

  a1.sinks.k3.type = avro
  a1.sinks.k3.channel = c1
  a1.sinks.k3.hostname = 192.168.220.159
  a1.sinks.k3.port = 44433

  a1.sinks.k4.type = avro
  a1.sinks.k4.channel = c1
  a1.sinks.k4.hostname = 192.168.220.159
  a1.sinks.k4.port = 44444
  #Use a channel which buffers events in memory
  a1.channels.c1.type = memory
  a1.channels.c1.capacity = 1000
  a1.channels.c1.transactionCapacity = 100


I'll leave the fourth scenario for you to test yourselves. My result was as expected:
messages are received by the highest-priority sink, and as sinks are shut down one after another, the next priority level takes over.











Reposted from:
http://blog.csdn.net/weijonathan/article/details/38557959




Comments (9)

anyhuayong, posted 2014-9-16 08:29:38:
Nice article, learned something.

694809295@qq.co, posted 2015-3-12 21:42:32:
I'd like to ask about a performance problem when collecting data into HDFS with Flume. n10 is the data source, n08 is the collector, and n07 runs HDFS. The n10 configuration is roughly as follows:
  # n10 flume configuration:
  # Name the components on this agent
  a1.sources = r1
  a1.sinks = k1
  a1.channels = c1
  # Describe the sink
  a1.sinks.k1.type = logger
  ####
  a1.sources.r1.type = spooldir
  a1.sources.r1.spoolDir = /home/nids/wg/apache-flume-1.5.2-bin/ceshi12
  a1.sources.r1.fileHeader =false
  a1.sources.r1.channels = c1
  ####
  # Describe/configure the source
  #a1.sources.r1.type = avro
  a1.sources.r1.bind = localhost
  a1.sources.r1.port = 44444
  # avro sink
  a1.sinks.k1.type = avro
  a1.sinks.k1.channel = c1
  a1.sinks.k1.hostname = r09n08
  a1.sinks.k1.port = 55555
  # Use a channel which buffers events in file
  a1.channels = c1
  a1.channels.c1.type = memory
  #a1.channels.c1.checkpointDir = /home/nids/wg/apache-flume-1.5.2-bin/checkpoint
  #a1.channels.c1.dataDirs = /home/nids/wg/apache-flume-1.5.2-bin/datadir
  a1.sinks.k1.hdfs.batchSize = 10000
  #a1.channels.c1.type = memory
  a1.channels.c1.capacity = 100000
  a1.channels.c1.transactionCapacity = 10000
  # Bind the source and sink to the channel
  a1.sources.r1.channels = c1
  a1.sinks.k1.channel = c1
  ==================
  # n08 flume configuration:
  # Name the components on this agent
  a1.sources = r1
  a1.sinks = k1
  a1.channels = c1
  # Describe/configure the source
  a1.sources.r1.type = avro
  a1.sources.r1.bind = r09n08
  a1.sources.r1.port = 55555
  a1.sources.r1.interceptors = i1
  a1.sources.r1.interceptors.i1.type = timestamp
  #hdfs sink
  a1.sinks.k1.type = hdfs
  a1.sinks.k1.hdfs.path = hdfs://r09n07:8020/project/dame/input/%Y%m%d/%H
  a1.sinks.k1.hdfs.fileType = DataStream
  a1.sinks.k1.hdfs.filePrefix = hdfs-
  a1.sinks.k1.hdfs.rollInterval = 0
  #a1.sinks.k1.hdfs.fileSuffix = .log
  #a1.sinks.k1.hdfs.round = true
  #a1.sinks.k1.hdfs.roundValue = 1
  #a1.sinks.k1.hdfs.roundUnit = minute
  a1.sinks.k1.hdfs.rollSize = 67108864
  a1.sinks.k1.hdfs.rollCount = 0
  #a1.sinks.k1.hdfs.writeFormat = Text
  # Use a channel which buffers events in file
  a1.channels = c1
  a1.channels.c1.type = memory
  #a1.channels.c1.checkpointDir=/home/nids/wg/apache-flume-1.5.2-bin/checkpoint
  #a1.channels.c1.dataDirs=/home/nids/wg/apache-flume-1.5.2-bin/datadir
  a1.sinks.k1.hdfs.batchSize = 10000
  #a1.sinks.k1.hdfs.callTimeout = 6000
  #a1.sinks.k1.hdfs.appendTimeout = 6000
  #a1.channels.c1.type = memory
  a1.channels.c1.capacity = 100000
  a1.channels.c1.transactionCapacity = 10000
  a1.sources.r1.channels = c1
  a1.sinks.k1.channel = c1
  ===================


The files being collected are only a few tens of MB but take a very long time to write, and sometimes only part of the data gets written and then nothing more happens. The channel is a memory channel; I have tried many different values for transactionCapacity, capacity and the sink's batchSize without any improvement. I roll files by size at roughly 64 MB (a1.sinks.k1.hdfs.rollSize = 67108864). Where exactly is my configuration going wrong? Thanks.

Reply:

a1.sinks.k1.hdfs.rollSize = 67108864: set the rollSize parameter smaller and files will roll faster; the smaller the value, the faster the roll. (Posted 2015-3-14 23:40)
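A minimal sketch of that suggestion (the values here are illustrative, not taken from the thread):

  # roll at roughly 10 MB instead of 64 MB, and also roll on time as a safety net
  a1.sinks.k1.hdfs.rollSize = 10485760
  a1.sinks.k1.hdfs.rollCount = 0
  a1.sinks.k1.hdfs.rollInterval = 60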

babyLiyuan, posted 2015-3-14 23:30:17:
I set this up myself and have a few questions I'd like to ask, thanks. For details please see my blog post: http://www.aboutyun.com/blog-13274-1798.html

694809295@qq.co, posted 2015-3-19 18:28:17:
OP, I tried setting rollSize to roll at 1 MB or 10 MB, but throughput did not improve. Sometimes the console only prints some debug logs and no new data is written to HDFS. Do you have a reference configuration for a setup like mine, where a is the source, b is the collector and c is HDFS? I have tested all kinds of parameter combinations and performance just will not go up; the heap size in Flume's bundled conf is set to 2-10 GB and the log4j level is already error. The main errors reported are:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue
Unable to deliver event. Exception follows. org.apache.flume.EventDeliveryException: Failed to send events processing.
These are the errors I keep seeing; any help would be much appreciated.

arsenduan, posted 2015-3-19 20:00:00, replying to 694809295@qq.co (2015-3-19 18:28):

Are you using a memory channel? It sounds like you are running out of memory. Try a different channel type.
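For example, a file channel could be tried instead of the memory channel, reusing the checkpoint and data directories already commented out in the configuration above; a minimal sketch (the capacity values are illustrative):

  a1.channels.c1.type = file
  a1.channels.c1.checkpointDir = /home/nids/wg/apache-flume-1.5.2-bin/checkpoint
  a1.channels.c1.dataDirs = /home/nids/wg/apache-flume-1.5.2-bin/datadir
  a1.channels.c1.capacity = 100000
  a1.channels.c1.transactionCapacity = 10000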

horrylala, posted 2015-8-6 15:12:41, replying to 694809295@qq.co (2015-3-19 18:28):

Edit flume-env.sh and change JAVA_OPTS="-Xms100m -Xmx1024m -Dcom.sun.management.jmxremote"

天行健, posted 2015-10-19 12:55:51 (last edited by 天行健 on 2015-10-19 13:28):

#Configuration file: load_sink_case14.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2 k3 k4
a1.channels = c1 c2 c3 c4

a1.sinkgroups = g1 g2 g3

a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin

a1.sinkgroups.g2.sinks = k1 k3
a1.sinkgroups.g2.processor.type = failover
a1.sinkgroups.g2.processor.priority.k1 = 5
a1.sinkgroups.g2.processor.priority.k3 = 10
a1.sinkgroups.g2.processor.maxpenalty = 10000

a1.sinkgroups.g3.sinks = k2 k4
a1.sinkgroups.g3.processor.type = failover
a1.sinkgroups.g3.processor.priority.k2 = 5
a1.sinkgroups.g3.processor.priority.k4 = 10
a1.sinkgroups.g3.processor.maxpenalty = 10000

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 50000
a1.sources.r1.host = 192.168.4.23
a1.sources.r1.channels = c1 c2 c3 c4
  
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.4.21
a1.sinks.k1.port = 50000
   
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = 192.168.4.22
a1.sinks.k2.port = 50000

a1.sinks.k3.type = avro
a1.sinks.k3.channel = c3
a1.sinks.k3.hostname = 192.168.4.25
a1.sinks.k3.port = 50000

a1.sinks.k4.type = avro
a1.sinks.k4.channel = c4
a1.sinks.k4.hostname = 192.168.4.26
a1.sinks.k4.port = 50000


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

a1.channels.c3.type = memory
a1.channels.c3.capacity = 1000
a1.channels.c3.transactionCapacity = 100

a1.channels.c4.type = memory
a1.channels.c4.capacity = 1000
a1.channels.c4.transactionCapacity = 100

(error screenshot attached)
With load balancing and failover configured like this I get an error; can anyone point out where the configuration is wrong?




pq2527991, posted 2016-9-6 09:32:39:
I think this article is flawed. Judging from the source code, you cannot do load balancing and failover for the same sink at the same time, because when the sink groups load their configuration, a duplicated sink is not loaded again; that is exactly the error the poster above ran into.

