
Flume Study (4): Using Flume Channel Selectors

Guiding questions:
1. How do you route different projects' logs to different channels?
2. How should we understand a topology with one HDFS sink and one logger sink?
3. How do you add an extra parameter to the Log4jExtAppender.java class?




The previous posts dealt with logs from a single project; now let's consider collecting logs from multiple projects. I copied the flumedemo project, renamed the copy flumedemo2, and added a WriteLog2.java class with a slightly modified JSON output: the "reporter-api" in the old requestUrl becomes "image-api", so its output can be told apart from that of the WriteLog class. The code is as follows:
package com.besttone.flume;

import java.util.Date;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class WriteLog2 {
    protected static final Log logger = LogFactory.getLog(WriteLog2.class);

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            logger.info(new Date().getTime());
            // JSON payload; note the escaped quotes inside the string literal
            logger.info("{\"requestTime\":"
                    + System.currentTimeMillis()
                    + ",\"requestParams\":{\"timestamp\":1405499314238,\"phone\":\"02038824941\",\"cardName\":\"测试商家名称\",\"provinceCode\":\"440000\",\"cityCode\":\"440106\"},\"requestUrl\":\"/image-api/reporter/reporter12/init.do\"}");
            Thread.sleep(2000);
        }
    }
}


Now consider the following requirement: the log4j logs of the flumedemo project should go to HDFS, while the log4j logs of the flumedemo2 project should go to the agent's own log.

We still use a log4j appender to feed log4j output to the Flume source, but the requirement clearly calls for two sinks now: one HDFS sink and one logger sink. The topology therefore looks like this:
[Figure 1: one Avro source feeding two channels; channel1 goes to an HDFS sink, channel2 to a logger sink]


To implement such a topology we need channel selectors, which let logs from different projects flow through different channels to different sinks.

The official documentation lists two types of channel selectors:

Replicating Channel Selector (default)

Multiplexing Channel Selector

The difference between the two: Replicating sends every event from the source to all channels, while Multiplexing can choose which channels to send each event to. For our example, with Replicating the logs of demo and demo2 would both go to channel1 and channel2 at the same time, which clearly does not match the requirement: demo's logs should go only to channel1 and demo2's only to channel2.
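For contrast, a replicating setup would look like the fragment below (an illustrative sketch using this post's tier1/source1 names; since replicating is the default, the selector.type line can even be omitted):

```
tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=replicating
```

With this configuration every event is duplicated into both channels, which is exactly the behavior we need to avoid here.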

So we choose the Multiplexing Channel Selector. But here we run into a tricky problem: Multiplexing decides which channel to dispatch to by inspecting the value of a specified header key, and demo and demo2 run on the same server. If they ran on different servers, we could add a host interceptor on source1 (introduced in the previous post) and route events by the host header; but on the same server, host cannot distinguish where a log came from, so we must find a way to add a key to the header that identifies the log's source.

Suppose the header contained a key flume.client.log4j.logger.source; by setting its value to app1 for demo and app2 for demo2, we could configure:
tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=multiplexing
tier1.sources.source1.selector.header=flume.client.log4j.logger.source
tier1.sources.source1.selector.mapping.app1=channel1
tier1.sources.source1.selector.mapping.app2=channel2

and thereby route each project's logs to its own channel.
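To make the routing concrete, here is a minimal, self-contained Java sketch of the multiplexing idea. This is an illustration only, not Flume's actual MultiplexingChannelSelector implementation; the class and method names are made up.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: route an event to a channel based on the value of
// one header key, mirroring the selector.header / selector.mapping.*
// settings shown above.
public class SelectorSketch {
    static final String HEADER = "flume.client.log4j.logger.source";
    static final Map<String, String> MAPPING = new HashMap<String, String>();
    static {
        MAPPING.put("app1", "channel1");
        MAPPING.put("app2", "channel2");
    }

    // Returns the channel name for the event's headers, or null when no
    // mapping matches (a real selector would fall back to its configured
    // default channels).
    static String selectChannel(Map<String, String> eventHeaders) {
        return MAPPING.get(eventHeaders.get(HEADER));
    }

    public static void main(String[] args) {
        System.out.println(selectChannel(
                Collections.singletonMap(HEADER, "app1"))); // prints channel1
    }
}
```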


Following this line of thought, we hit a snag: log4jappender has no such parameter for you to set. What to do? Browsing the log4jappender source, I found it is easy to add an extra parameter, so I copied the log4jappender code into a new class named Log4jExtAppender.java and added a parameter called source. The code is as follows:

package com.besttone.flume;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.reflect.ReflectData;
import org.apache.avro.reflect.ReflectDatumWriter;
import org.apache.avro.specific.SpecificRecord;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.FlumeException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientConfigurationConstants;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.clients.log4jappender.Log4jAvroHeaders;
import org.apache.flume.event.EventBuilder;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.helpers.LogLog;
import org.apache.log4j.spi.LoggingEvent;

/**
 * Appends Log4j Events to an external Flume client which is described by the
 * Log4j configuration file. The appender takes two required parameters:
 * <p>
 * <strong>Hostname</strong> : This is the hostname of the first hop at which
 * Flume (through an AvroSource) is listening for events.
 * </p>
 * <p>
 * <strong>Port</strong> : This is the port on the above host where the Flume
 * Source is listening for events.
 * </p>
 * A sample log4j properties file which appends to a source would look like:
 *
 * <pre>
 * <p>
 * log4j.appender.out2 = org.apache.flume.clients.log4jappender.Log4jAppender
 * log4j.appender.out2.Port = 25430
 * log4j.appender.out2.Hostname = foobarflumesource.com
 * log4j.logger.org.apache.flume.clients.log4jappender = DEBUG,out2</p>
 * </pre>
 * <p>
 * <i>Note: Change the last line to the package of the class(es) that will do
 * the appending. For example, if classes from the package com.bar.foo are
 * appending, the last line would be:</i>
 * </p>
 *
 * <pre>
 * <p>log4j.logger.com.bar.foo = DEBUG,out2</p>
 * </pre>
 */
public class Log4jExtAppender extends AppenderSkeleton {

    private String hostname;
    private int port;
    private String source;

    public String getSource() {
        return source;
    }

    public void setSource(String source) {
        this.source = source;
    }

    private boolean unsafeMode = false;
    private long timeout = RpcClientConfigurationConstants.DEFAULT_REQUEST_TIMEOUT_MILLIS;
    private boolean avroReflectionEnabled;
    private String avroSchemaUrl;

    RpcClient rpcClient = null;

    /**
     * If this constructor is used programmatically rather than from a log4j
     * conf you must set the <tt>port</tt> and <tt>hostname</tt> and then call
     * <tt>activateOptions()</tt> before calling <tt>append()</tt>.
     */
    public Log4jExtAppender() {
    }

    /**
     * Sets the hostname and port. Even if these are passed the
     * <tt>activateOptions()</tt> function must be called before calling
     * <tt>append()</tt>, else <tt>append()</tt> will throw an Exception.
     *
     * @param hostname
     *            The first hop where the client should connect to.
     * @param port
     *            The port to connect on the host.
     */
    public Log4jExtAppender(String hostname, int port, String source) {
        this.hostname = hostname;
        this.port = port;
        this.source = source;
    }

    /**
     * Append the LoggingEvent, to send to the first Flume hop.
     *
     * @param event
     *            The LoggingEvent to be appended to the flume.
     * @throws FlumeException
     *             if the appender was closed, or the hostname and port were not
     *             setup, there was a timeout, or there was a connection error.
     */
    @Override
    public synchronized void append(LoggingEvent event) throws FlumeException {
        // If rpcClient is null, it means either this appender object was never
        // setup by setting hostname and port and then calling activateOptions
        // or this appender object was closed by calling close(), so we throw an
        // exception to show the appender is no longer accessible.
        if (rpcClient == null) {
            String errorMsg = "Cannot Append to Appender! Appender either closed or"
                    + " not setup correctly!";
            LogLog.error(errorMsg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(errorMsg);
        }

        if (!rpcClient.isActive()) {
            reconnect();
        }

        // Client created first time append is called.
        Map<String, String> hdrs = new HashMap<String, String>();
        hdrs.put(Log4jAvroHeaders.LOGGER_NAME.toString(), event.getLoggerName());
        hdrs.put(Log4jAvroHeaders.TIMESTAMP.toString(),
                String.valueOf(event.timeStamp));

        // Add the log source to the event headers
        if (this.source == null || this.source.equals("")) {
            this.source = "unknown";
        }
        hdrs.put("flume.client.log4j.logger.source", this.source);
        // To get the level back simply use
        // LoggerEvent.toLevel(hdrs.get(Integer.parseInt(
        // Log4jAvroHeaders.LOG_LEVEL.toString()))
        hdrs.put(Log4jAvroHeaders.LOG_LEVEL.toString(),
                String.valueOf(event.getLevel().toInt()));

        Event flumeEvent;
        Object message = event.getMessage();
        if (message instanceof GenericRecord) {
            GenericRecord record = (GenericRecord) message;
            populateAvroHeaders(hdrs, record.getSchema(), message);
            flumeEvent = EventBuilder.withBody(
                    serialize(record, record.getSchema()), hdrs);
        } else if (message instanceof SpecificRecord || avroReflectionEnabled) {
            Schema schema = ReflectData.get().getSchema(message.getClass());
            populateAvroHeaders(hdrs, schema, message);
            flumeEvent = EventBuilder
                    .withBody(serialize(message, schema), hdrs);
        } else {
            hdrs.put(Log4jAvroHeaders.MESSAGE_ENCODING.toString(), "UTF8");
            String msg = layout != null ? layout.format(event) : message
                    .toString();
            flumeEvent = EventBuilder.withBody(msg, Charset.forName("UTF8"),
                    hdrs);
        }

        try {
            rpcClient.append(flumeEvent);
        } catch (EventDeliveryException e) {
            String msg = "Flume append() failed.";
            LogLog.error(msg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(msg + " Exception follows.", e);
        }
    }

    private Schema schema;
    private ByteArrayOutputStream out;
    private DatumWriter<Object> writer;
    private BinaryEncoder encoder;

    protected void populateAvroHeaders(Map<String, String> hdrs, Schema schema,
            Object message) {
        if (avroSchemaUrl != null) {
            hdrs.put(Log4jAvroHeaders.AVRO_SCHEMA_URL.toString(), avroSchemaUrl);
            return;
        }
        LogLog.warn("Cannot find ID for schema. Adding header for schema, "
                + "which may be inefficient. Consider setting up an Avro Schema Cache.");
        hdrs.put(Log4jAvroHeaders.AVRO_SCHEMA_LITERAL.toString(),
                schema.toString());
    }

    private byte[] serialize(Object datum, Schema datumSchema)
            throws FlumeException {
        if (schema == null || !datumSchema.equals(schema)) {
            schema = datumSchema;
            out = new ByteArrayOutputStream();
            writer = new ReflectDatumWriter<Object>(schema);
            encoder = EncoderFactory.get().binaryEncoder(out, null);
        }
        out.reset();
        try {
            writer.write(datum, encoder);
            encoder.flush();
            return out.toByteArray();
        } catch (IOException e) {
            throw new FlumeException(e);
        }
    }

    // This function should be synchronized to make sure one thread
    // does not close an appender another thread is using, and hence risking
    // a null pointer exception.
    /**
     * Closes underlying client. If <tt>append()</tt> is called after this
     * function is called, it will throw an exception.
     *
     * @throws FlumeException
     *             if errors occur during close
     */
    @Override
    public synchronized void close() throws FlumeException {
        // Any append calls after this will result in an Exception.
        if (rpcClient != null) {
            try {
                rpcClient.close();
            } catch (FlumeException ex) {
                LogLog.error("Error while trying to close RpcClient.", ex);
                if (unsafeMode) {
                    return;
                }
                throw ex;
            } finally {
                rpcClient = null;
            }
        } else {
            String errorMsg = "Flume log4jappender already closed!";
            LogLog.error(errorMsg);
            if (unsafeMode) {
                return;
            }
            throw new FlumeException(errorMsg);
        }
    }

    @Override
    public boolean requiresLayout() {
        // This method is named quite incorrectly in the interface. It should
        // probably be called canUseLayout or something. According to the docs,
        // even if the appender can work without a layout, if it can work with
        // one, this method must return true.
        return true;
    }

    /**
     * Set the first flume hop hostname.
     *
     * @param hostname
     *            The first hop where the client should connect to.
     */
    public void setHostname(String hostname) {
        this.hostname = hostname;
    }

    /**
     * Set the port on the hostname to connect to.
     *
     * @param port
     *            The port to connect on the host.
     */
    public void setPort(int port) {
        this.port = port;
    }

    public void setUnsafeMode(boolean unsafeMode) {
        this.unsafeMode = unsafeMode;
    }

    public boolean getUnsafeMode() {
        return unsafeMode;
    }

    public void setTimeout(long timeout) {
        this.timeout = timeout;
    }

    public long getTimeout() {
        return this.timeout;
    }

    public void setAvroReflectionEnabled(boolean avroReflectionEnabled) {
        this.avroReflectionEnabled = avroReflectionEnabled;
    }

    public void setAvroSchemaUrl(String avroSchemaUrl) {
        this.avroSchemaUrl = avroSchemaUrl;
    }

    /**
     * Activate the options set using <tt>setPort()</tt> and
     * <tt>setHostname()</tt>
     *
     * @throws FlumeException
     *             if the <tt>hostname</tt> and <tt>port</tt> combination is
     *             invalid.
     */
    @Override
    public void activateOptions() throws FlumeException {
        Properties props = new Properties();
        props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS, "h1");
        props.setProperty(RpcClientConfigurationConstants.CONFIG_HOSTS_PREFIX
                + "h1", hostname + ":" + port);
        props.setProperty(
                RpcClientConfigurationConstants.CONFIG_CONNECT_TIMEOUT,
                String.valueOf(timeout));
        props.setProperty(
                RpcClientConfigurationConstants.CONFIG_REQUEST_TIMEOUT,
                String.valueOf(timeout));
        try {
            rpcClient = RpcClientFactory.getInstance(props);
            if (layout != null) {
                layout.activateOptions();
            }
        } catch (FlumeException e) {
            String errormsg = "RPC client creation failed! " + e.getMessage();
            LogLog.error(errormsg);
            if (unsafeMode) {
                return;
            }
            throw e;
        }
    }

    /**
     * Make it easy to reconnect on failure
     *
     * @throws FlumeException
     */
    private void reconnect() throws FlumeException {
        close();
        activateOptions();
    }
}



I then packaged this class into a jar, Log4jExtAppender.jar, and dropped it into the lib directories of flumedemo and flumedemo2.
flumedemo's log4j.properties now looks like this:
log4j.rootLogger=INFO

log4j.category.com.besttone=INFO,flume,console,LogFile

#log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume = com.besttone.flume.Log4jExtAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 44444
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.Source = app1

log4j.appender.console= org.apache.log4j.ConsoleAppender
log4j.appender.console.Target= System.out
log4j.appender.console.layout= org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern= %d{yyyy-MM-dd HH:mm:ss} %5p %c{1}: %L - %m%n

log4j.appender.LogFile= org.apache.log4j.DailyRollingFileAppender
log4j.appender.LogFile.File= logs/app.log
log4j.appender.LogFile.MaxFileSize=10KB
log4j.appender.LogFile.Append= true
log4j.appender.LogFile.Threshold= DEBUG
log4j.appender.LogFile.layout= org.apache.log4j.PatternLayout
log4j.appender.LogFile.layout.ConversionPattern= %-d{yyyy-MM-dd HH:mm:ss} [%t:%r] - [%5p] %m%n



And flumedemo2's is as follows:
log4j.rootLogger=INFO

log4j.category.com.besttone=INFO,flume,console,LogFile

#log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume = com.besttone.flume.Log4jExtAppender
log4j.appender.flume.Hostname = localhost
log4j.appender.flume.Port = 44444
log4j.appender.flume.UnsafeMode = false
log4j.appender.flume.Source = app2

log4j.appender.console= org.apache.log4j.ConsoleAppender
log4j.appender.console.Target= System.out
log4j.appender.console.layout= org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern= %d{yyyy-MM-dd HH:mm:ss} %5p %c{1}: %L - %m%n

log4j.appender.LogFile= org.apache.log4j.DailyRollingFileAppender
log4j.appender.LogFile.File= logs/app.log
log4j.appender.LogFile.MaxFileSize=10KB
log4j.appender.LogFile.Append= true
log4j.appender.LogFile.Threshold= DEBUG
log4j.appender.LogFile.layout= org.apache.log4j.PatternLayout
log4j.appender.LogFile.layout.ConversionPattern= %-d{yyyy-MM-dd HH:mm:ss} [%t:%r] - [%5p] %m%n



The log4j.appender.flume class was changed from the original org.apache.flume.clients.log4jappender.Log4jAppender to my reimplementation, com.besttone.flume.Log4jExtAppender, which adds the source parameter.

Then flumedemo sets log4j.appender.flume.Source = app1 and flumedemo2 sets log4j.appender.flume.Source = app2.
Run flumedemo's WriteLog class and flumedemo2's WriteLog2 class, then check the contents on HDFS and in the agent's log file: HDFS holds only app1's logs and the log file only app2's logs. The feature works as required.

The complete flume.conf is as follows:

tier1.sources=source1
tier1.channels=channel1 channel2
tier1.sinks=sink1 sink2
tier1.sources.source1.type=avro
tier1.sources.source1.bind=0.0.0.0
tier1.sources.source1.port=44444
tier1.sources.source1.channels=channel1 channel2
tier1.sources.source1.selector.type=multiplexing
tier1.sources.source1.selector.header=flume.client.log4j.logger.source
tier1.sources.source1.selector.mapping.app1=channel1
tier1.sources.source1.selector.mapping.app2=channel2
tier1.sources.source1.interceptors=i1 i2
tier1.sources.source1.interceptors.i1.type=regex_filter
tier1.sources.source1.interceptors.i1.regex=\\{.*\\}
tier1.sources.source1.interceptors.i2.type=timestamp
tier1.channels.channel1.type=memory
tier1.channels.channel1.capacity=10000
tier1.channels.channel1.transactionCapacity=1000
tier1.channels.channel1.keep-alive=30
tier1.channels.channel2.type=memory
tier1.channels.channel2.capacity=10000
tier1.channels.channel2.transactionCapacity=1000
tier1.channels.channel2.keep-alive=30
tier1.sinks.sink1.type=hdfs
tier1.sinks.sink1.channel=channel1
tier1.sinks.sink1.hdfs.path=hdfs://master68:8020/flume/events/%y-%m-%d
tier1.sinks.sink1.hdfs.round=true
tier1.sinks.sink1.hdfs.roundValue=10
tier1.sinks.sink1.hdfs.roundUnit=minute
tier1.sinks.sink1.hdfs.fileType=DataStream
tier1.sinks.sink1.hdfs.writeFormat=Text
tier1.sinks.sink1.hdfs.rollInterval=0
tier1.sinks.sink1.hdfs.rollSize=10240
tier1.sinks.sink1.hdfs.rollCount=0
tier1.sinks.sink1.hdfs.idleTimeout=60
tier1.sinks.sink2.type=logger
tier1.sinks.sink2.channel=channel2




Related posts:

Flume Study (1): Sending log4j logs directly to Flume

Flume Study (2): Finding the config file of a Flume installed via Cloudera Manager

Flume Study (3): Using Flume Interceptors





