Appenders

Appenders are responsible for delivering LogEvents to their destination. Every Appender must implement the Appender interface. Most Appenders will extend AbstractAppender, which adds Lifecycle and Filterable support. Lifecycle allows components to finish initialization after configuration has completed and to perform cleanup during shutdown. Filterable allows the component to have Filters attached to it which are evaluated during event processing. Appenders are usually only responsible for writing the event data to the target destination. In most cases they delegate responsibility for formatting the event to a layout. Some appenders wrap other appenders so that they can modify the LogEvent, handle a failure in an Appender, route the event to a subordinate Appender based on advanced Filter criteria, or provide similar functionality that does not directly format the event for viewing. Appenders always have a name so that they can be referenced from Loggers. In the tables below, the "Type" column corresponds to the Java type expected. For non-JDK classes, these should usually be in Log4j Core unless otherwise noted.
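To make that contract concrete, here is a minimal sketch of a custom appender built on AbstractAppender; the class, package, and plugin name are illustrative only and not part of Log4j:

package com.example;

import java.io.Serializable;
import org.apache.logging.log4j.core.Filter;
import org.apache.logging.log4j.core.Layout;
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.AbstractAppender;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginAttribute;
import org.apache.logging.log4j.core.config.plugins.PluginElement;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;

@Plugin(name = "Stub", category = "Core", elementType = "appender", printObject = true)
public final class StubAppender extends AbstractAppender {

    private StubAppender(String name, Filter filter, Layout<? extends Serializable> layout) {
        // ignoreExceptions = true: failures are reported to the status logger, not to the caller
        super(name, filter, layout, true);
    }

    @Override
    public void append(LogEvent event) {
        // Delegate formatting to the Layout, then deliver the bytes to the destination.
        byte[] formatted = getLayout().toByteArray(event);
        // ... write 'formatted' to the target destination ...
    }

    @PluginFactory
    public static StubAppender createAppender(
            @PluginAttribute("name") String name,
            @PluginElement("Layout") Layout<? extends Serializable> layout,
            @PluginElement("Filter") Filter filter) {
        return new StubAppender(name, filter, layout);
    }
}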
AsyncAppender

The AsyncAppender accepts references to other Appenders and causes LogEvents to be written to them on a separate Thread. Note that exceptions while writing to those Appenders will be hidden from the application. The AsyncAppender should be configured after the appenders it references to allow it to shut down properly. By default, AsyncAppender uses java.util.concurrent.ArrayBlockingQueue, which does not require any external libraries. Note that multi-threaded applications should exercise care when using this appender: the blocking queue is susceptible to lock contention, and our tests showed that performance may become worse when more threads are logging concurrently. Consider using lock-free Async Loggers for optimal performance.

There are also a few system properties that can be used to maintain application throughput even when the underlying appender cannot keep up with the logging rate and the queue is filling up. See the details for the system properties log4j2.AsyncQueueFullPolicy and log4j2.DiscardThreshold. A typical AsyncAppender configuration might look like: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <File name="MyFile" fileName="logs/app.log"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> </File> <Async name="Async"> <AppenderRef ref="MyFile"/> </Async> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Async"/> </Root> </Loggers> </Configuration> Starting in Log4j 2.7, a custom implementation of BlockingQueue or TransferQueue can be specified using a BlockingQueueFactory plugin. To override the default BlockingQueueFactory, specify the plugin inside an <Async/> element like so: <Configuration name="LinkedTransferQueueExample"> <Appenders> <List name="List"/> <Async name="Async" bufferSize="262144"> <AppenderRef ref="List"/> <LinkedTransferQueue/> </Async> </Appenders> <Loggers> <Root> <AppenderRef ref="Async"/> </Root> </Loggers> </Configuration> Log4j ships with the following implementations:
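ArrayBlockingQueueFactory (<ArrayBlockingQueue/>), the default implementation, which uses java.util.concurrent.ArrayBlockingQueue; DisruptorBlockingQueueFactory (<DisruptorBlockingQueue/>), which uses the Conversant Disruptor implementation of BlockingQueue and requires the Conversant disruptor library; JCToolsBlockingQueueFactory (<JCToolsBlockingQueue/>), which uses a bounded lock-free queue from JCTools and requires the JCTools library; and LinkedTransferQueueFactory (<LinkedTransferQueue/>), which uses java.util.concurrent.LinkedTransferQueue and does not use the bufferSize configuration attribute.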
CassandraAppender

The CassandraAppender writes its output to an Apache Cassandra database. A keyspace and table must be configured ahead of time, and the columns of that table are mapped in a configuration file. Each column can specify either a StringLayout (e.g., a PatternLayout) along with an optional conversion type, or only a conversion type for org.apache.logging.log4j.spi.ThreadContextMap or org.apache.logging.log4j.spi.ThreadContextStack to store the MDC or NDC in a map or list column respectively. A conversion type compatible with java.util.Date will use the log event timestamp converted to that type (e.g., use java.util.Date to fill a timestamp column type in Cassandra).
Here is an example CassandraAppender configuration: <Configuration name="CassandraAppenderTest"> <Appenders> <Cassandra name="Cassandra" clusterName="Test Cluster" keyspace="test" table="logs" bufferSize="10" batched="true"> <SocketAddress host="localhost" port="9042"/> <ColumnMapping name="id" pattern="%uuid{TIME}" type="java.util.UUID"/> <ColumnMapping name="timeid" literal="now()"/> <ColumnMapping name="message" pattern="%message"/> <ColumnMapping name="level" pattern="%level"/> <ColumnMapping name="marker" pattern="%marker"/> <ColumnMapping name="logger" pattern="%logger"/> <ColumnMapping name="timestamp" type="java.util.Date"/> <ColumnMapping name="mdc" type="org.apache.logging.log4j.spi.ThreadContextMap"/> <ColumnMapping name="ndc" type="org.apache.logging.log4j.spi.ThreadContextStack"/> </Cassandra> </Appenders> <Loggers> <Logger name="org.apache.logging.log4j.nosql.appender.cassandra" level="DEBUG"> <AppenderRef ref="Cassandra"/> </Logger> <Root level="ERROR"/> </Loggers> </Configuration> This example configuration uses the following table schema: CREATE TABLE logs ( id timeuuid PRIMARY KEY, timeid timeuuid, message text, level text, marker text, logger text, timestamp timestamp, mdc map<text,text>, ndc list<text> );

ConsoleAppender

As one might expect, the ConsoleAppender writes its output to either System.out or System.err, with System.out being the default target. A Layout must be provided to format the LogEvent.
A typical Console configuration might look like: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="STDOUT"/> </Root> </Loggers> </Configuration>

FailoverAppender

The FailoverAppender wraps a set of appenders. If the primary Appender fails, the secondary appenders will be tried in order until one succeeds or there are no more secondaries to try.
A Failover configuration might look like: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" fileName="logs/app.log" filePattern="logs/app-%d{MM-dd-yyyy}.log.gz" ignoreExceptions="false"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <TimeBasedTriggeringPolicy /> </RollingFile> <Console name="STDOUT" target="SYSTEM_OUT" ignoreExceptions="false"> <PatternLayout pattern="%m%n"/> </Console> <Failover name="Failover" primary="RollingFile"> <Failovers> <AppenderRef ref="Console"/> </Failovers> </Failover> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Failover"/> </Root> </Loggers> </Configuration>

FileAppender

The FileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter. The FileAppender uses a FileManager (which extends OutputStreamManager) to actually perform the file I/O. While FileAppenders from different Configurations cannot be shared, the FileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.
Here is a sample File configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <File name="MyFile" fileName="logs/app.log"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> </File> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="MyFile"/> </Root> </Loggers> </Configuration>

FlumeAppender

This is an optional component supplied in a separate jar. Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The FlumeAppender takes LogEvents and sends them to a Flume agent as serialized Avro events for consumption. The Flume Appender supports three modes of operation.
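1. It can act as a remote client which sends the Flume events via Avro to a Flume Agent configured with an Avro Source. 2. It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing. 3. It can persist events to a local BerkeleyDB data store and then asynchronously send the events to Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.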
Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel and then control will be immediately returned to the application. All interaction with remote agents will occur asynchronously. Setting the "type" attribute to "Embedded" will force the use of the embedded agent. In addition, configuring agent properties in the appender configuration will also cause the embedded agent to be used.
A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, and formats the body using the RFC5424Layout: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Flume name="eventLogger" compress="true"> <Agent host="192.168.10.101" port="8800"/> <Agent host="192.168.10.102" port="8800"/> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> </Flume> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="eventLogger"/> </Root> </Loggers> </Configuration> A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using the RFC5424Layout, and persists encrypted events to disk: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Flume name="eventLogger" compress="true" type="persistent" dataDir="./logData"> <Agent host="192.168.10.101" port="8800"/> <Agent host="192.168.10.102" port="8800"/> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> <Property name="keyProvider">MySecretProvider</Property> </Flume> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="eventLogger"/> </Root> </Loggers> </Configuration> A sample FlumeAppender configuration that is configured with a primary and a secondary agent, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Flume name="eventLogger" compress="true" type="Embedded"> <Agent host="192.168.10.101" port="8800"/> <Agent host="192.168.10.102" port="8800"/> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> </Flume> <Console name="STDOUT"> <PatternLayout pattern="%d [%p] %c %m%n"/> </Console> </Appenders> <Loggers> <Logger name="EventLogger" level="info"> <AppenderRef ref="eventLogger"/> </Logger> <Root level="warn"> <AppenderRef ref="STDOUT"/> </Root> </Loggers> </Configuration> A sample FlumeAppender configuration that is configured with a primary and a secondary agent using Flume configuration properties, compresses the body, formats the body using RFC5424Layout and passes the events to an embedded Flume Agent. 
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="error" name="MyApp" packages=""> <Appenders> <Flume name="eventLogger" compress="true" type="Embedded"> <Property name="channels">file</Property> <Property name="channels.file.type">file</Property> <Property name="channels.file.checkpointDir">target/file-channel/checkpoint</Property> <Property name="channels.file.dataDirs">target/file-channel/data</Property> <Property name="sinks">agent1 agent2</Property> <Property name="sinks.agent1.channel">file</Property> <Property name="sinks.agent1.type">avro</Property> <Property name="sinks.agent1.hostname">192.168.10.101</Property> <Property name="sinks.agent1.port">8800</Property> <Property name="sinks.agent1.batch-size">100</Property> <Property name="sinks.agent2.channel">file</Property> <Property name="sinks.agent2.type">avro</Property> <Property name="sinks.agent2.hostname">192.168.10.102</Property> <Property name="sinks.agent2.port">8800</Property> <Property name="sinks.agent2.batch-size">100</Property> <Property name="sinkgroups">group1</Property> <Property name="sinkgroups.group1.sinks">agent1 agent2</Property> <Property name="sinkgroups.group1.processor.type">failover</Property> <Property name="sinkgroups.group1.processor.priority.agent1">10</Property> <Property name="sinkgroups.group1.processor.priority.agent2">5</Property> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> </Flume> <Console name="STDOUT"> <PatternLayout pattern="%d [%p] %c %m%n"/> </Console> </Appenders> <Loggers> <Logger name="EventLogger" level="info"> <AppenderRef ref="eventLogger"/> </Logger> <Root level="warn"> <AppenderRef ref="STDOUT"/> </Root> </Loggers> </Configuration> JDBCAppenderThe JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to obtain JDBC connections using a JNDI DataSource or a custom factory method. Whichever approach you take, it must be backed by a connection pool. Otherwise, logging performance will suffer greatly. If batch statements are supported by the configured JDBC driver and a bufferSize is configured to be a positive number, then log events will be batched. Note that as of Log4j 2.8, there are two ways to configure log event to column mappings: the original ColumnConfig style that only allows strings and timestamps, and the new ColumnMapping plugin that uses Log4j's built-in type conversion to allow for more data types (this is the same plugin as in the Cassandra Appender).
When configuring the JDBCAppender, you must specify a ConnectionSource implementation from which the Appender gets JDBC connections. You must use exactly one of the <DataSource> or <ConnectionFactory> nested elements.
When configuring the JDBCAppender, use the nested <Column> elements to specify which columns in the table should be written to and how to write to them. The JDBCAppender uses this information to formulate a PreparedStatement to insert records without SQL injection vulnerability.
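For example, with the column mappings in the first sample configuration below, the appender prepares a statement conceptually similar to INSERT INTO dbo.application_log (eventDate, level, logger, message, exception) VALUES (?, ?, ?, ?, ?) and binds each event's values to the placeholders rather than concatenating them into the SQL text.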
Here are a couple of sample configurations for the JDBCAppender, as well as a sample factory implementation that uses Commons Pooling and Commons DBCP to pool database connections: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <JDBC name="databaseAppender" tableName="dbo.application_log"> <DataSource jndiName="java:/comp/env/jdbc/LoggingDataSource" /> <Column name="eventDate" isEventTimestamp="true" /> <Column name="level" pattern="%level" /> <Column name="logger" pattern="%logger" /> <Column name="message" pattern="%message" /> <Column name="exception" pattern="%ex{full}" /> </JDBC> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration> <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <JDBC name="databaseAppender" tableName="LOGGING.APPLICATION_LOG"> <ConnectionFactory class="net.example.db.ConnectionFactory" method="getDatabaseConnection" /> <Column name="EVENT_ID" literal="LOGGING.APPLICATION_LOG_SEQUENCE.NEXTVAL" /> <Column name="EVENT_DATE" isEventTimestamp="true" /> <Column name="LEVEL" pattern="%level" /> <Column name="LOGGER" pattern="%logger" /> <Column name="MESSAGE" pattern="%message" /> <Column name="THROWABLE" pattern="%ex{full}" /> </JDBC> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration>

package net.example.db;

import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.commons.dbcp.DriverManagerConnectionFactory;
import org.apache.commons.dbcp.PoolableConnection;
import org.apache.commons.dbcp.PoolableConnectionFactory;
import org.apache.commons.dbcp.PoolingDataSource;
import org.apache.commons.pool.impl.GenericObjectPool;

public class ConnectionFactory {

    private static interface Singleton {
        final ConnectionFactory INSTANCE = new ConnectionFactory();
    }

    private final DataSource dataSource;

    private ConnectionFactory() {
        Properties properties = new Properties();
        properties.setProperty("user", "logging");
        properties.setProperty("password", "abc123"); // or get properties from some configuration file

        GenericObjectPool<PoolableConnection> pool = new GenericObjectPool<PoolableConnection>();
        DriverManagerConnectionFactory connectionFactory = new DriverManagerConnectionFactory(
                "jdbc:mysql://example.org:3306/exampleDb", properties
        );
        // The PoolableConnectionFactory wires itself into the pool it is given, so the
        // instance does not need to be retained.
        new PoolableConnectionFactory(
                connectionFactory, pool, null, "SELECT 1", 3, false, false,
                Connection.TRANSACTION_READ_COMMITTED
        );

        this.dataSource = new PoolingDataSource(pool);
    }

    public static Connection getDatabaseConnection() throws SQLException {
        return Singleton.INSTANCE.dataSource.getConnection();
    }
}

JMS Appender

The JMS Appender sends the formatted log event to a JMS Destination. Note that in Log4j 2.0, this appender was split into a JMSQueueAppender and a JMSTopicAppender. Starting in Log4j 2.1, these appenders were combined into the JMS Appender, which makes no distinction between queues and topics. However, configurations written for 2.0 which use the <JMSQueue/> or <JMSTopic/> elements will continue to work with the new <JMS/> configuration element.
Here is a sample JMS Appender configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp"> <Appenders> <JMS name="jmsQueue" destinationBindingName="MyQueue" factoryBindingName="MyQueueConnectionFactory"> <JsonLayout properties="true"/> </JMS> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="jmsQueue"/> </Root> </Loggers> </Configuration> To map your Log4j MapMessages to JMS javax.jms.MapMessages, set the layout of the appender to MessageLayout with <MessageLayout /> (since Log4j 2.9): <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp"> <Appenders> <JMS name="jmsQueue" destinationBindingName="MyQueue" factoryBindingName="MyQueueConnectionFactory"> <MessageLayout /> </JMS> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="jmsQueue"/> </Root> </Loggers> </Configuration>

JPAAppender

The JPAAppender writes log events to a relational database table using the Java Persistence API 2.1. It requires the API and a provider implementation to be on the classpath. It also requires a decorated entity configured to persist to the desired table. The entity should either extend org.apache.logging.log4j.core.appender.db.jpa.BasicLogEventEntity (if you mostly want to use the default mappings) and provide at least an @Id property, or org.apache.logging.log4j.core.appender.db.jpa.AbstractLogEventWrapperEntity (if you want to significantly customize the mappings). See the Javadoc for these two classes for more information. You can also consult the source code of these two classes as an example of how to implement the entity.
Here is a sample configuration for the JPAAppender. The first XML sample is the Log4j configuration file, the second is the persistence.xml file. EclipseLink is assumed here, but any JPA 2.1 or higher provider will do. You should always create a separate persistence unit for logging, for two reasons. First, <shared-cache-mode> must be set to "NONE," which is usually not desired in normal JPA usage. Also, for performance reasons the logging entity should be isolated in its own persistence unit away from all other entities, and you should use a non-JTA data source. Note that your persistence unit must also contain <class> elements for all of the org.apache.logging.log4j.core.appender.db.jpa.converter converter classes. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <JPA name="databaseAppender" persistenceUnitName="loggingPersistenceUnit" entityClassName="com.example.logging.JpaLogEntity" /> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration> <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd" version="2.1"> <persistence-unit name="loggingPersistenceUnit" transaction-type="RESOURCE_LOCAL"> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapJsonAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackJsonAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MarkerAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.MessageAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.StackTraceElementAttributeConverter</class> <class>org.apache.logging.log4j.core.appender.db.jpa.converter.ThrowableAttributeConverter</class> <class>com.example.logging.JpaLogEntity</class> <non-jta-data-source>jdbc/LoggingDataSource</non-jta-data-source> <shared-cache-mode>NONE</shared-cache-mode> </persistence-unit> </persistence>

package com.example.logging;
...
@Entity
@Table(name = "application_log", schema = "dbo")
public class JpaLogEntity extends BasicLogEventEntity {
    private static final long serialVersionUID = 1L;

    private long id = 0L;

    public JpaLogEntity() {
        super(null);
    }

    public JpaLogEntity(LogEvent wrappedEvent) {
        super(wrappedEvent);
    }

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "id")
    public long getId() {
        return this.id;
    }

    public void setId(long id) {
        this.id = id;
    }

    // If you want to override the mapping of any properties mapped in BasicLogEventEntity,
    // just override the getters and re-specify the annotations.
}

package com.example.logging;
...
@Entity
@Table(name = "application_log", schema = "dbo")
public class JpaLogEntity extends AbstractLogEventWrapperEntity {
    private static final long serialVersionUID = 1L;

    private long id = 0L;

    public JpaLogEntity() {
        super(null);
    }

    public JpaLogEntity(LogEvent wrappedEvent) {
        super(wrappedEvent);
    }

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "logEventId")
    public long getId() {
        return this.id;
    }

    public void setId(long id) {
        this.id = id;
    }

    @Override
    @Enumerated(EnumType.STRING)
    @Column(name = "level")
    public Level getLevel() {
        return this.getWrappedEvent().getLevel();
    }

    @Override
    @Column(name = "logger")
    public String getLoggerName() {
        return this.getWrappedEvent().getLoggerName();
    }

    @Override
    @Column(name = "message")
    @Convert(converter = MyMessageConverter.class)
    public Message getMessage() {
        return this.getWrappedEvent().getMessage();
    }
    ...
}

HttpAppender

The HttpAppender sends log events over HTTP. A Layout must be provided to format the LogEvent. The appender sets the Content-Type header according to the layout; additional headers can be specified with embedded Property elements. The appender waits for a response from the server and throws an error if no 2xx response is received. It is implemented with HttpURLConnection.
Here is a sample HttpAppender configuration snippet: <?xml version="1.0" encoding="UTF-8"?> ... <Appenders> <Http name="Http" url="https://localhost:9200/test/log4j/"> <Property name="X-Java-Runtime" value="$${java:runtime}" /> <JsonLayout properties="true"/> <SSL> <KeyStore location="log4j2-keystore.jks" password="changeme"/> <TrustStore location="truststore.jks" password="changeme"/> </SSL> </Http> </Appenders>

KafkaAppender

The KafkaAppender logs events to an Apache Kafka topic. Each log event is sent as a Kafka record with no key.
Here is a sample KafkaAppender configuration snippet: <?xml version="1.0" encoding="UTF-8"?> ... <Appenders> <Kafka name="Kafka" topic="log-test"> <PatternLayout pattern="%date %message"/> <Property name="bootstrap.servers">localhost:9092</Property> </Kafka> </Appenders> This appender is synchronous by default and will block until the record has been acknowledged by the Kafka server; the timeout can be set with the timeout.ms property (which defaults to 30 seconds). Wrap it with an Async appender and/or set syncSend to false to log asynchronously. This appender requires the Kafka client library. Note that you need to use a version of the Kafka client library matching the Kafka server used. Note: make sure not to let org.apache.kafka log to a Kafka appender on DEBUG level, since that will cause recursive logging: <?xml version="1.0" encoding="UTF-8"?> ... <Loggers> <Root level="DEBUG"> <AppenderRef ref="Kafka"/> </Root> <Logger name="org.apache.kafka" level="INFO" /> <!-- avoid recursive logging --> </Loggers>

MemoryMappedFileAppender

New since 2.1. Be aware that this is a new addition, and although it has been tested on several platforms, it does not have as much track record as the other file appenders. The MemoryMappedFileAppender maps a part of the specified file into memory and writes log events to this memory, relying on the operating system's virtual memory manager to synchronize the changes to the storage device. The main benefit of using memory mapped files is I/O performance. Instead of making system calls to write to disk, this appender can simply change the program's local memory, which is orders of magnitude faster. Also, in most operating systems the memory region mapped actually is the kernel's page cache (file cache), meaning that no copies need to be created in user space. (TODO: performance tests that compare performance of this appender to RandomAccessFileAppender and FileAppender.) There is some overhead with mapping a file region into memory, especially very large regions (half a gigabyte or more). The default region size is 32 MB, which should strike a reasonable balance between the frequency and the duration of remap operations. (TODO: performance test remapping various sizes.) Similar to the FileAppender and the RandomAccessFileAppender, MemoryMappedFileAppender uses a MemoryMappedFileManager to actually perform the file I/O. While MemoryMappedFileAppenders from different Configurations cannot be shared, the MemoryMappedFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.
Here is a sample MemoryMappedFile configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <MemoryMappedFile name="MyFile" fileName="logs/app.log"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> </MemoryMappedFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="MyFile"/> </Root> </Loggers> </Configuration>

NoSQLAppender

The NoSQLAppender writes log events to a NoSQL database using an internal lightweight provider interface. Provider implementations currently exist for MongoDB and Apache CouchDB, and writing a custom provider is quite simple.
You specify which NoSQL provider to use by specifying the appropriate configuration element within the <NoSql> element. The types currently supported are <MongoDb> and <CouchDb>. To create your own custom provider, read the JavaDoc for the NoSQLProvider, NoSQLConnection, and NoSQLObject classes and the documentation about creating Log4j plugins. We recommend you review the source code for the MongoDB and CouchDB providers as a guide for creating your own provider.
Here are a few sample configurations for the NoSQLAppender: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <NoSql name="databaseAppender"> <MongoDb databaseName="applicationDb" collectionName="applicationLog" server="mongo.example.org" username="loggingUser" password="abc123" /> </NoSql> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration> <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <NoSql name="databaseAppender"> <MongoDb collectionName="applicationLog" factoryClassName="org.example.db.ConnectionFactory" factoryMethodName="getNewMongoClient" /> </NoSql> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration> <?xml version="1.0" encoding="UTF-8"?> <Configuration status="error"> <Appenders> <NoSql name="databaseAppender"> <CouchDb databaseName="applicationDb" protocol="https" server="couch.example.org" username="loggingUser" password="abc123" /> </NoSql> </Appenders> <Loggers> <Root level="warn"> <AppenderRef ref="databaseAppender"/> </Root> </Loggers> </Configuration> The following example demonstrates how log events are persisted in NoSQL databases if represented in a JSON format: { "level": "WARN", "loggerName": "com.example.application.MyClass", "message": "Something happened that you might want to know about.", "source": { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 }, "marker": { "name": "SomeMarker", "parent": { "name": "SomeParentMarker" } }, "threadName": "Thread-1", "millis": 1368844166761, "date": "2013-05-18T02:29:26.761Z", "thrown": { "type": "java.sql.SQLException", "message": "Could not insert record. Connection lost.", "stackTrace": [ { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1049 }, { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 }, { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 }, { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 } ], "cause": { "type": "java.io.IOException", "message": "Connection lost.", "stackTrace": [ { "className": "java.nio.channels.SocketChannel", "methodName": "write", "fileName": null, "lineNumber": -1 }, { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1032 }, { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 }, { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 }, { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 } ] } }, "contextMap": { "ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b", "username": "JohnDoe" }, "contextStack": [ "topItem", "anotherItem", "bottomItem" ] }

OutputStreamAppender

The OutputStreamAppender provides the base for many of the other Appenders, such as the File and Socket appenders, that write the event to an Output Stream. It cannot be directly configured.
Support for immediateFlush and buffering is provided by the OutputStreamAppender. The OutputStreamAppender uses an OutputStreamManager to handle the actual I/O, allowing the stream to be shared by Appenders in multiple configurations.

RandomAccessFileAppender

The RandomAccessFileAppender is similar to the standard FileAppender except it is always buffered (this cannot be switched off) and internally it uses a ByteBuffer + RandomAccessFile instead of a BufferedOutputStream. We saw a 20-200% performance improvement compared to FileAppender with "bufferedIO=true" in our measurements. Similar to the FileAppender, RandomAccessFileAppender uses a RandomAccessFileManager to actually perform the file I/O. While RandomAccessFileAppenders from different Configurations cannot be shared, the RandomAccessFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them.
Here is a sample RandomAccessFile configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RandomAccessFile name="MyFile" fileName="logs/app.log"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> </RandomAccessFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="MyFile"/> </Root> </Loggers> </Configuration>

RewriteAppender

The RewriteAppender allows the LogEvent to be manipulated before it is processed by another Appender. This can be used to mask sensitive information such as passwords or to inject information into each event. The RewriteAppender must be configured with a RewritePolicy. The RewriteAppender should be configured after any Appenders it references to allow it to shut down properly.
RewritePolicy

RewritePolicy is an interface that allows implementations to inspect and possibly modify LogEvents before they are passed to an Appender. RewritePolicy declares a single method named rewrite that must be implemented. The method is passed the LogEvent and can return the same event or create a new one.
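A custom policy is simply a plugin that implements this interface. Below is a minimal sketch of such a policy; the plugin name, package, and the password-masking rule are illustrative only and not part of Log4j:

package com.example;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.rewrite.RewritePolicy;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.config.plugins.PluginFactory;
import org.apache.logging.log4j.core.impl.Log4jLogEvent;
import org.apache.logging.log4j.message.SimpleMessage;

// Hypothetical policy that masks anything following "password=" in the message text.
@Plugin(name = "MaskPasswordRewritePolicy", category = "Core", elementType = "rewritePolicy", printObject = true)
public final class MaskPasswordRewritePolicy implements RewritePolicy {

    @Override
    public LogEvent rewrite(LogEvent source) {
        String masked = source.getMessage().getFormattedMessage()
                .replaceAll("password=\\S+", "password=****");
        // Return a copy of the event carrying the masked message.
        return new Log4jLogEvent.Builder(source)
                .setMessage(new SimpleMessage(masked))
                .build();
    }

    @PluginFactory
    public static MaskPasswordRewritePolicy createPolicy() {
        return new MaskPasswordRewritePolicy();
    }
}

Such a policy could then be referenced inside a <Rewrite> element as <MaskPasswordRewritePolicy/>.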
MapRewritePolicy

MapRewritePolicy will evaluate LogEvents that contain a MapMessage and will add or update elements of the Map. The following configuration shows a RewriteAppender configured to add a product key and its value to the MapMessage: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> <Rewrite name="Rewrite"> <AppenderRef ref="STDOUT"/> <MapRewritePolicy mode="Add"> <KeyValuePair key="product" value="TestProduct"/> </MapRewritePolicy> </Rewrite> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Rewrite"/> </Root> </Loggers> </Configuration>

PropertiesRewritePolicy

PropertiesRewritePolicy will add properties configured on the policy to the ThreadContext Map being logged. The properties will not be added to the actual ThreadContext Map. The property values may contain variables that will be evaluated when the configuration is processed as well as when the event is logged.
The following configuration shows a RewriteAppender configured with a PropertiesRewritePolicy that adds the user and env properties to each logged event: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> <Rewrite name="Rewrite"> <AppenderRef ref="STDOUT"/> <PropertiesRewritePolicy> <Property name="user">${sys:user.name}</Property> <Property name="env">${sys:environment}</Property> </PropertiesRewritePolicy> </Rewrite> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Rewrite"/> </Root> </Loggers> </Configuration>

LoggerNameLevelRewritePolicy

You can use this policy to make loggers in third party code less chatty by changing event levels. The LoggerNameLevelRewritePolicy will rewrite log event levels for a given logger name prefix. You configure a LoggerNameLevelRewritePolicy with a logger name prefix and pairs of levels, where a pair defines a source level and a target level.
The following configuration shows a RewriteAppender configured to map level INFO to DEBUG and level WARN to INFO for all loggers that start with com.foo.bar. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp"> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout pattern="%m%n"/> </Console> <Rewrite name="Rewrite"> <AppenderRef ref="STDOUT"/> <LoggerNameLevelRewritePolicy logger="com.foo.bar"> <KeyValuePair key="INFO" value="DEBUG"/> <KeyValuePair key="WARN" value="INFO"/> </LoggerNameLevelRewritePolicy> </Rewrite> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Rewrite"/> </Root> </Loggers> </Configuration>

RollingFileAppender

The RollingFileAppender is an OutputStreamAppender that writes to the File named in the fileName parameter and rolls the file over according to the TriggeringPolicy and the RolloverStrategy. The RollingFileAppender uses a RollingFileManager (which extends OutputStreamManager) to actually perform the file I/O and perform the rollover. While RollingFileAppenders from different Configurations cannot be shared, the RollingFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them. A RollingFileAppender requires a TriggeringPolicy and a RolloverStrategy. The triggering policy determines if a rollover should be performed, while the RolloverStrategy defines how the rollover should be done. If no RolloverStrategy is configured, RollingFileAppender will use the DefaultRolloverStrategy. Since log4j-2.5, a custom delete action can be configured in the DefaultRolloverStrategy to run at rollover. Since 2.8, if no file name is configured then the DirectWriteRolloverStrategy will be used instead of the DefaultRolloverStrategy. Since log4j-2.9, a custom POSIX file attribute view action can be configured in the DefaultRolloverStrategy to run at rollover; if none is defined, the POSIX file attribute view inherited from the RollingFileAppender will be applied. File locking is not supported by the RollingFileAppender.
Triggering Policies

Composite Triggering Policy

The CompositeTriggeringPolicy combines multiple triggering policies and returns true if any of the configured policies return true. The CompositeTriggeringPolicy is configured simply by wrapping other policies in a Policies element. For example, the following XML fragment defines policies that roll over the log when the JVM starts, when the log size reaches twenty megabytes, and when the current date no longer matches the log’s start date. <Policies> <OnStartupTriggeringPolicy /> <SizeBasedTriggeringPolicy size="20 MB" /> <TimeBasedTriggeringPolicy /> </Policies>

Cron Triggering Policy

The CronTriggeringPolicy triggers rollover based on a cron expression.
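For example, <CronTriggeringPolicy schedule="0 0 * * * ?"/> rolls the file over at the top of every hour, as the CronTriggeringPolicy examples later in this section show.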
OnStartup Triggering Policy

The OnStartupTriggeringPolicy causes a rollover if the log file is older than the current JVM's start time and the minimum file size is met or exceeded.
Google App Engine note: when running on Google App Engine, the OnStartup policy causes a rollover if the log file is older than the time when Log4j initialized, since Google App Engine restricts access to the classes Log4j would normally use to determine the JVM start time.

SizeBased Triggering Policy

The SizeBasedTriggeringPolicy causes a rollover once the file has reached the specified size. The size can be specified in bytes, with the suffix KB, MB or GB, for example 20MB.

TimeBased Triggering Policy

The TimeBasedTriggeringPolicy causes a rollover once the date/time pattern no longer applies to the active file. This policy accepts an interval attribute, which indicates how frequently the rollover should occur based on the time pattern, and a modulate boolean attribute. When modulate is true, rollovers are adjusted to occur on even boundaries of the interval; for example, with an hourly date pattern and an interval of 4, rollovers occur at 4 am, 8 am, noon, and so on, rather than every fourth hour from the first event.
Rollover Strategies

Default Rollover Strategy

The default rollover strategy accepts both a date/time pattern and an integer from the filePattern attribute specified on the RollingFileAppender itself. If the date/time pattern is present it will be replaced with the current date and time values. If the pattern contains an integer it will be incremented on each rollover. If the pattern contains both a date/time and an integer, the integer will be incremented until the result of the date/time pattern changes. If the file pattern ends with ".gz", ".zip", ".bz2", ".deflate", ".pack200", or ".xz", the resulting archive will be compressed using the compression scheme that matches the suffix. The formats bzip2, Deflate, Pack200 and XZ require Apache Commons Compress. In addition, XZ requires XZ for Java. The pattern may also contain lookup references that can be resolved at runtime, as shown in the example below. The default rollover strategy supports three variations for incrementing the counter. The first is the "fixed window" strategy. To illustrate how it works, suppose that the min attribute is set to 1, the max attribute is set to 3, the file name is "foo.log", and the file name pattern is "foo-%i.log".
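With those settings, the first rollover renames foo.log to foo-1.log and starts a new foo.log. On the second rollover foo-1.log is renamed to foo-2.log and foo.log is renamed to foo-1.log; on the third, foo-2.log becomes foo-3.log, foo-1.log becomes foo-2.log, and foo.log becomes foo-1.log, so foo-1.log always holds the newest archive. On subsequent rollovers the oldest archive, foo-3.log, is removed. Note that with this strategy up to max files are renamed on every rollover, so large max values make rollover more expensive.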
By way of contrast, when the fileIndex attribute is set to "max" but all the other settings are the same, the following actions will be performed.
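The first rollover renames foo.log to foo-1.log, the second renames foo.log to foo-2.log, and the third renames foo.log to foo-3.log. On the fourth rollover foo-1.log is deleted, foo-2.log and foo-3.log shift down to foo-1.log and foo-2.log, and foo.log is renamed to foo-3.log, so the file with the highest index always holds the newest archive.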
Finally, as of release 2.8, if the fileIndex attribute is set to "nomax" then the min and max values will be ignored; the file number will increment by 1 on each rollover, with no maximum number of files.
DirectWrite Rollover Strategy

The DirectWriteRolloverStrategy causes log events to be written directly to files represented by the file pattern. With this strategy file renames are not performed. If the size-based triggering policy causes multiple files to be written during the specified time period, they will be numbered starting at one and continually incremented until a time-based rollover occurs. Warning: if the file pattern has a suffix indicating compression should take place, the current file will not be compressed when the application is shut down. Furthermore, if the time changes such that the file pattern no longer matches the current file, it will not be compressed at startup either.
Below is a sample configuration that uses a RollingFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy /> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration> This second example shows a rollover strategy that will keep up to 20 files before removing them. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy /> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> <DefaultRolloverStrategy max="20"/> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration> Below is a sample configuration that uses a RollingFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip and will roll every 6 hours when the hour is divisible by 6: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy interval="6" modulate="true"/> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration> This sample configuration uses a RollingFileAppender with both the cron and size based triggering policies, and writes directly to an unlimited number of archive files. 
The cron trigger causes a rollover every hour while the file size is limited to 250MB: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" filePattern="logs/app-%d{yyyy-MM-dd-HH}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <CronTriggeringPolicy schedule="0 0 * * * ?"/> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration> This sample configuration is the same as the previous but limits the number of files saved each hour to 10: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingFile name="RollingFile" filePattern="logs/app-%d{yyyy-MM-dd-HH}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <CronTriggeringPolicy schedule="0 0 * * * ?"/> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> <DirectWriteRolloverStrategy maxFiles="10"/> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration>

Log Archive Retention Policy: Delete on Rollover

Log4j 2.5 introduces a Delete action that gives users more control over what files are deleted at rollover time than what was possible with the DefaultRolloverStrategy max attribute. The Delete action lets users configure one or more conditions that select the files to delete relative to a base directory. Note that it is possible to delete any file, not just rolled over log files, so use this action with care! With the testMode parameter you can test your configuration without accidentally deleting the wrong files.
Below is a sample configuration that uses a RollingFileAppender with the cron triggering policy configured to trigger every day at midnight. Archives are stored in a directory based on the current year and month. All files under the base directory that match the "*/app-*.log.gz" glob and are 60 days old or older are deleted at rollover time. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Properties> <Property name="baseDir">logs</Property> </Properties> <Appenders> <RollingFile name="RollingFile" fileName="${baseDir}/app.log" filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyy-MM-dd}.log.gz"> <PatternLayout pattern="%d %p %c{1.} [%t] %m%n" /> <CronTriggeringPolicy schedule="0 0 0 * * ?"/> <DefaultRolloverStrategy> <Delete basePath="${baseDir}" maxDepth="2"> <IfFileName glob="*/app-*.log.gz" /> <IfLastModified age="60d" /> </Delete> </DefaultRolloverStrategy> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration> Below is a sample configuration that uses a RollingFileAppender with both the time and size based triggering policies, will create up to 100 archives on the same day (1-100) that are stored in a directory based on the current year and month, and will compress each archive using gzip and will roll every hour. During every rollover, this configuration will delete files that match "*/app-*.log.gz" and are 30 days old or older, but keep the most recent 100 GB or the most recent 10 files, whichever comes first. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Properties> <Property name="baseDir">logs</Property> </Properties> <Appenders> <RollingFile name="RollingFile" fileName="${baseDir}/app.log" filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz"> <PatternLayout pattern="%d %p %c{1.} [%t] %m%n" /> <Policies> <TimeBasedTriggeringPolicy /> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> <DefaultRolloverStrategy max="100"> <!-- Nested conditions: the inner condition is only evaluated on files for which the outer conditions are true. --> <Delete basePath="${baseDir}" maxDepth="2"> <IfFileName glob="*/app-*.log.gz"> <IfLastModified age="30d"> <IfAny> <IfAccumulatedFileSize exceeds="100 GB" /> <IfAccumulatedFileCount exceeds="10" /> </IfAny> </IfLastModified> </IfFileName> </Delete> </DefaultRolloverStrategy> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration>
Below is a sample configuration that uses a RollingFileAppender with the cron triggering policy configured to trigger every day at midnight. Archives are stored in a directory based on the current year and month. The script returns a list of rolled over files under the base directory dated Friday the 13th. The Delete action will delete all files returned by the script. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="trace" name="MyApp" packages=""> <Properties> <Property name="baseDir">logs</Property> </Properties> <Appenders> <RollingFile name="RollingFile" fileName="${baseDir}/app.log" filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyyMMdd}.log.gz"> <PatternLayout pattern="%d %p %c{1.} [%t] %m%n" /> <CronTriggeringPolicy schedule="0 0 0 * * ?"/> <DefaultRolloverStrategy> <Delete basePath="${baseDir}" maxDepth="2"> <ScriptCondition> <Script name="superstitious" language="groovy"><![CDATA[ import java.nio.file.*; def result = []; def pattern = ~/\d*\/app-(\d*)\.log\.gz/; pathList.each { pathWithAttributes -> def relative = basePath.relativize pathWithAttributes.path statusLogger.trace 'SCRIPT: relative path=' + relative + " (base=$basePath)"; // remove files dated Friday the 13th def matcher = pattern.matcher(relative.toString()); if (matcher.find()) { def dateString = matcher.group(1); def calendar = Date.parse("yyyyMMdd", dateString).toCalendar(); def friday13th = calendar.get(Calendar.DAY_OF_MONTH) == 13 \ && calendar.get(Calendar.DAY_OF_WEEK) == Calendar.FRIDAY; if (friday13th) { result.add pathWithAttributes; statusLogger.trace 'SCRIPT: deleting path ' + pathWithAttributes; } } } statusLogger.trace 'SCRIPT: returning ' + result; result; ]]> </Script> </ScriptCondition> </Delete> </DefaultRolloverStrategy> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration>

Log Archive File Attribute View Policy: Custom File Attribute on Rollover

Log4j 2.9 introduces a PosixViewAttribute action that gives users more control over which file attribute permissions, owner and group should be applied. The PosixViewAttribute action lets users configure one or more conditions that select the eligible files relative to a base directory.
Below is a sample configuration that uses a RollingFileAppender and defines different POSIX file attribute views for current and rolled log files. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="trace" name="MyApp" packages=""> <Properties> <Property name="baseDir">logs</Property> </Properties> <Appenders> <RollingFile name="RollingFile" fileName="${baseDir}/app.log" filePattern="${baseDir}/$${date:yyyy-MM}/app-%d{yyyyMMdd}.log.gz" filePermissions="rw-------"> <PatternLayout pattern="%d %p %c{1.} [%t] %m%n" /> <CronTriggeringPolicy schedule="0 0 0 * * ?"/> <DefaultRolloverStrategy stopCustomActionsOnError="true"> <PosixViewAttribute basePath="${baseDir}/$${date:yyyy-MM}" filePermissions="r--r--r--"> <IfFileName glob="*.gz" /> </PosixViewAttribute> </DefaultRolloverStrategy> </RollingFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingFile"/> </Root> </Loggers> </Configuration>

RollingRandomAccessFileAppender

The RollingRandomAccessFileAppender is similar to the standard RollingFileAppender except it is always buffered (this cannot be switched off) and internally it uses a ByteBuffer + RandomAccessFile instead of a BufferedOutputStream. We saw a 20-200% performance improvement compared to RollingFileAppender with "bufferedIO=true" in our measurements. The RollingRandomAccessFileAppender writes to the File named in the fileName parameter and rolls the file over according to the TriggeringPolicy and the RolloverStrategy. Similar to the RollingFileAppender, RollingRandomAccessFileAppender uses a RollingRandomAccessFileManager to actually perform the file I/O and perform the rollover. While RollingRandomAccessFileAppenders from different Configurations cannot be shared, the RollingRandomAccessFileManagers can be if the Manager is accessible. For example, two web applications in a servlet container can have their own configuration and safely write to the same file if Log4j is in a ClassLoader that is common to both of them. A RollingRandomAccessFileAppender requires a TriggeringPolicy and a RolloverStrategy. The triggering policy determines if a rollover should be performed, while the RolloverStrategy defines how the rollover should be done. If no RolloverStrategy is configured, RollingRandomAccessFileAppender will use the DefaultRolloverStrategy. Since log4j-2.5, a custom delete action can be configured in the DefaultRolloverStrategy to run at rollover. File locking is not supported by the RollingRandomAccessFileAppender.
Rollover Strategies

See RollingFileAppender Rollover Strategies. Below is a sample configuration that uses a RollingRandomAccessFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy /> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> </RollingRandomAccessFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingRandomAccessFile"/> </Root> </Loggers> </Configuration> This second example shows a rollover strategy that will keep up to 20 files before removing them. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy /> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> <DefaultRolloverStrategy max="20"/> </RollingRandomAccessFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingRandomAccessFile"/> </Root> </Loggers> </Configuration> Below is a sample configuration that uses a RollingRandomAccessFileAppender with both the time and size based triggering policies, will create up to 7 archives on the same day (1-7) that are stored in a directory based on the current year and month, and will compress each archive using gzip and will roll every 6 hours when the hour is divisible by 6: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <RollingRandomAccessFile name="RollingRandomAccessFile" fileName="logs/app.log" filePattern="logs/$${date:yyyy-MM}/app-%d{yyyy-MM-dd-HH}-%i.log.gz"> <PatternLayout> <Pattern>%d %p %c{1.} [%t] %m%n</Pattern> </PatternLayout> <Policies> <TimeBasedTriggeringPolicy interval="6" modulate="true"/> <SizeBasedTriggeringPolicy size="250 MB"/> </Policies> </RollingRandomAccessFile> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="RollingRandomAccessFile"/> </Root> </Loggers> </Configuration>

RoutingAppender

The RoutingAppender evaluates LogEvents and then routes them to a subordinate Appender. The target Appender may be an appender previously configured and may be referenced by its name, or the Appender can be dynamically created as needed. The RoutingAppender should be configured after any Appenders it references to allow it to shut down properly. You can also configure a RoutingAppender with scripts: you can run a script when the appender starts and when a route is chosen for a log event.
In this example, the script causes the "ServiceWindows" route to be the default route on Windows and "ServiceOther" on all other operating systems. Note that the List Appender is one of our test appenders; any appender can be used, it is only used as shorthand. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="WARN" name="RoutingTest"> <Appenders> <Routing name="Routing"> <Script name="RoutingInit" language="JavaScript"><![CDATA[ importPackage(java.lang); System.getProperty("os.name").search("Windows") > -1 ? "ServiceWindows" : "ServiceOther";]]> </Script> <Routes> <Route key="ServiceOther"> <List name="List1" /> </Route> <Route key="ServiceWindows"> <List name="List2" /> </Route> </Routes> </Routing> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Routing" /> </Root> </Loggers> </Configuration>

Routes

The Routes element accepts a single attribute named "pattern". The pattern is evaluated against all the registered Lookups and the result is used to select a Route. Each Route may be configured with a key. If the key matches the result of evaluating the pattern then that Route will be selected. If no key is specified on a Route then that Route is the default. Only one Route can be configured as the default. The Routes element may contain a Script child element. If specified, the Script is run for each log event and returns the String Route key to use. You must specify either the pattern attribute or the Script element, but not both. Each Route must reference an Appender. If the Route contains a ref attribute then the Route will reference an Appender that was defined in the configuration. If the Route contains an Appender definition then an Appender will be created within the context of the RoutingAppender and will be reused each time a matching Appender name is referenced through a Route. This script is passed the following variables:
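configuration, the active Configuration; logEvent, the LogEvent being routed; and staticVariables, a Map shared between all invocations of the appender's scripts.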
In this example, the script runs for each log event and picks a route based on the presence of a Marker named "AUDIT" or, failing that, a "UserId" entry in the thread context map. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="WARN" name="RoutingTest"> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT" /> <Flume name="AuditLogger" compress="true"> <Agent host="192.168.10.101" port="8800"/> <Agent host="192.168.10.102" port="8800"/> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> </Flume> <Routing name="Routing"> <Routes> <Script name="RoutingInit" language="JavaScript"><![CDATA[ if (logEvent.getMarker() != null && logEvent.getMarker().isInstanceOf("AUDIT")) { return "AUDIT"; } else if (logEvent.getContextMap().containsKey("UserId")) { return logEvent.getContextMap().get("UserId"); } return "STDOUT";]]> </Script> <Route> <RollingFile name="Rolling-${mdc:UserId}" fileName="${mdc:UserId}.log" filePattern="${mdc:UserId}.%i.log.gz"> <PatternLayout> <pattern>%d %p %c{1.} [%t] %m%n</pattern> </PatternLayout> <SizeBasedTriggeringPolicy size="500" /> </RollingFile> </Route> <Route ref="AuditLogger" key="AUDIT"/> <Route ref="STDOUT" key="STDOUT"/> </Routes> <IdlePurgePolicy timeToLive="15" timeUnit="minutes"/> </Routing> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Routing" /> </Root> </Loggers> </Configuration> Purge PolicyThe RoutingAppender can be configured with a PurgePolicy whose purpose is to stop and remove dormant Appenders that have been dynamically created by the RoutingAppender. Log4j currently provides the IdlePurgePolicy as the only PurgePolicy available for cleaning up the Appenders. The IdlePurgePolicy accepts two attributes: timeToLive, the number of timeUnits the Appender should survive without receiving any events, and timeUnit, the String representation of the java.util.concurrent.TimeUnit used with the timeToLive attribute. Below is a sample configuration that uses a RoutingAppender to route all Audit events to a FlumeAppender, while all other events are routed to a RollingFileAppender that captures only the specific event type. Note that the AuditAppender was predefined while the RollingFileAppenders are created as needed. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Flume name="AuditLogger" compress="true"> <Agent host="192.168.10.101" port="8800"/> <Agent host="192.168.10.102" port="8800"/> <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/> </Flume> <Routing name="Routing"> <Routes pattern="$${sd:type}"> <Route> <RollingFile name="Rolling-${sd:type}" fileName="${sd:type}.log" filePattern="${sd:type}.%i.log.gz"> <PatternLayout> <pattern>%d %p %c{1.} [%t] %m%n</pattern> </PatternLayout> <SizeBasedTriggeringPolicy size="500" /> </RollingFile> </Route> <Route ref="AuditLogger" key="Audit"/> </Routes> <IdlePurgePolicy timeToLive="15" timeUnit="minutes"/> </Routing> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Routing"/> </Root> </Loggers> </Configuration>
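Both configurations are driven entirely by what the application logs. The sketch below (class name, user id and message contents are made up for illustration) shows code that would exercise each route; the first two calls apply to the script-based example above, the last to the pattern="$${sd:type}" example:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.Marker;
import org.apache.logging.log4j.MarkerManager;
import org.apache.logging.log4j.ThreadContext;
import org.apache.logging.log4j.message.StructuredDataMessage;

public class RoutingDemo {
    private static final Logger logger = LogManager.getLogger(RoutingDemo.class);
    private static final Marker AUDIT = MarkerManager.getMarker("AUDIT");

    public static void main(String[] args) {
        // Matched by logEvent.getMarker().isInstanceOf("AUDIT") in the Routes script,
        // so the event is routed to the AuditLogger Flume appender.
        logger.error(AUDIT, "Account created");

        // Matched by logEvent.getContextMap().containsKey("UserId"); the returned route
        // key is the user id, and a "Rolling-alice" appender is created on demand.
        ThreadContext.put("UserId", "alice");
        logger.error("User-specific event");
        ThreadContext.remove("UserId");

        // For the pattern="$${sd:type}" variant: the message type ("Audit") is what
        // the sd lookup resolves to, selecting the route keyed "Audit".
        StructuredDataMessage msg = new StructuredDataMessage("txn@18060", "Transfer complete", "Audit");
        logger.error(msg);
    }
}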
SMTPAppenderSends an e-mail when a specific logging event occurs, typically on errors or fatal errors. The number of logging events delivered in this e-mail depends on the value of the bufferSize option. The SMTPAppender keeps only the last bufferSize logging events in its cyclic buffer. This keeps memory requirements at a reasonable level while still delivering useful application context. All events in the buffer are included in the email. The buffer will contain the most recent events of level TRACE to WARN preceding the event that triggered the email. The default behavior is to trigger sending an email whenever an ERROR or higher severity event is logged, and to format it as HTML. The circumstances under which the email is sent can be controlled by setting one or more filters on the Appender. As with other Appenders, the formatting can be controlled by specifying a Layout for the Appender.
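To make the buffering concrete, here is a rough sketch (class name and messages are invented; it assumes the logger's level lets DEBUG and WARN events reach the appender): the first two events are held in the cyclic buffer and no mail is sent until the ERROR arrives, at which point a single email containing all three events goes out.

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class MailDemo {
    private static final Logger logger = LogManager.getLogger(MailDemo.class);

    public static void main(String[] args) {
        logger.debug("connecting to service"); // buffered, no email yet
        logger.warn("retrying after timeout"); // buffered, no email yet
        logger.error("giving up");             // ERROR triggers one email containing
                                               // the two buffered events plus this one
    }
}

A typical SMTPAppender configuration might look like: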
<?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <SMTP name="Mail" subject="Error Log" to="errors@logging.apache.org" from="test@logging.apache.org" smtpHost="localhost" smtpPort="25" bufferSize="50"> </SMTP> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="Mail"/> </Root> </Loggers> </Configuration> ScriptAppenderSelectorWhen the configuration is built, the ScriptAppenderSelector appender calls a Script to compute an appender name. Log4j then creates one of the appenders listed under AppenderSet, using the name of the ScriptAppenderSelector. After configuration, Log4j ignores the ScriptAppenderSelector. Log4j only builds the one selected appender from the configuration tree, and ignores the other AppenderSet child nodes. In the following example, the script selects "MyCustomWindowsAppender" on Windows and "MySyslogAppender" on all other operating systems. The selected appender is recorded under the name of the ScriptAppenderSelector ("SelectIt" in this example), not the name of the selected appender. <Configuration status="WARN" name="ScriptAppenderSelectorExample"> <Appenders> <ScriptAppenderSelector name="SelectIt"> <Script language="JavaScript"><![CDATA[ importPackage(java.lang); System.getProperty("os.name").search("Windows") > -1 ? "MyCustomWindowsAppender" : "MySyslogAppender";]]> </Script> <AppenderSet> <MyCustomWindowsAppender name="MyAppender" ... /> <SyslogAppender name="MySyslog" ... /> </AppenderSet> </ScriptAppenderSelector> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="SelectIt" /> </Root> </Loggers> </Configuration> SocketAppenderThe SocketAppender is an OutputStreamAppender that writes its output to a remote destination specified by a host and port. The data can be sent over either TCP or UDP and can be sent in any format. You can optionally secure communication with SSL.
This is an unsecured TCP configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Socket name="socket" host="localhost" port="9500"> <JsonLayout properties="true"/> </Socket> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="socket"/> </Root> </Loggers> </Configuration> This is a secured SSL configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Socket name="socket" host="localhost" port="9500"> <JsonLayout properties="true"/> <SSL> <KeyStore location="log4j2-keystore.jks" password="changeme"/> <TrustStore location="truststore.jks" password="changeme"/> </SSL> </Socket> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="socket"/> </Root> </Loggers> </Configuration> SyslogAppenderThe SyslogAppender is a SocketAppender that writes its output to a remote destination specified by a host and port in a format that conforms with either the BSD Syslog format or the RFC 5424 format. The data can be sent over either TCP or UDP.
A sample configuration with two SyslogAppenders, one using the BSD format and one using RFC 5424: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <Syslog name="bsd" host="localhost" port="514" protocol="TCP"/> <Syslog name="RFC5424" format="RFC5424" host="localhost" port="8514" protocol="TCP" appName="MyApp" includeMDC="true" facility="LOCAL0" enterpriseNumber="18060" newLine="true" messageId="Audit" id="App"/> </Appenders> <Loggers> <Logger name="com.mycorp" level="error"> <AppenderRef ref="RFC5424"/> </Logger> <Root level="error"> <AppenderRef ref="bsd"/> </Root> </Loggers> </Configuration> With SSL, this appender writes its output over a secure connection to a remote destination specified by a host and port, in a format that conforms to either the BSD Syslog format or the RFC 5424 format. <?xml version="1.0" encoding="UTF-8"?> <Configuration status="warn" name="MyApp" packages=""> <Appenders> <TLSSyslog name="bsd" host="localhost" port="6514"> <SSL> <KeyStore location="log4j2-keystore.jks" password="changeme"/> <TrustStore location="truststore.jks" password="changeme"/> </SSL> </TLSSyslog> </Appenders> <Loggers> <Root level="error"> <AppenderRef ref="bsd"/> </Root> </Loggers> </Configuration> ZeroMQ/JeroMQ AppenderThe ZeroMQ appender uses the JeroMQ library to send log events to one or more ZeroMQ endpoints. This is a simple JeroMQ configuration: <?xml version="1.0" encoding="UTF-8"?> <Configuration name="JeroMQAppenderTest" status="TRACE"> <Appenders> <JeroMQ name="JeroMQAppender"> <Property name="endpoint">tcp://*:5556</Property> <Property name="endpoint">ipc://info-topic</Property> </JeroMQ> </Appenders> <Loggers> <Root level="info"> <AppenderRef ref="JeroMQAppender"/> </Root> </Loggers> </Configuration> For the full list of supported options, please consult the JeroMQ and ZeroMQ documentation.
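The appender publishes each formatted log event as a ZeroMQ message, so a SUB socket connected to one of the configured endpoints can receive them. Below is a minimal subscriber sketch using the classic JeroMQ API; the endpoint address mirrors the configuration above, and error handling and shutdown are omitted for brevity:

import org.zeromq.ZMQ;

public class LogSubscriber {
    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
        subscriber.connect("tcp://localhost:5556"); // the appender binds tcp://*:5556
        subscriber.subscribe("".getBytes());        // empty prefix: receive every message
        while (!Thread.currentThread().isInterrupted()) {
            System.out.println(subscriber.recvStr()); // one formatted log event per message
        }
    }
}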