Delegating Appenders
Log4j Core supplies multiple appenders that do not perform any work themselves, but modify the way other appenders work. The following behaviors can be modified:
- If you want to perform all I/O from a dedicated thread, see the Async Appender.
- If you want to provide a backup appender in case an appender fails, see the Failover Appender.
- If you want to modify the log event before it is sent to the target destination, see the Rewrite Appender.
- If you want to create appenders dynamically or choose a different appender for each log event, see the Routing Appender.
Async Appender
The Async Appender stores log events in a blocking queue and forwards them to other appenders on a separate thread.
Due to the asynchronous barrier, exceptions occurring in those appenders will not be forwarded to the caller of the log statement.
The Async Appender should be configured after the appenders it references to allow it to shut down properly.
The blocking queue is susceptible to lock contention, and performance may degrade when more threads log concurrently. For optimal performance, consider using lock-free asynchronous loggers instead.
Async configuration
The Async Appender supports the following configuration options:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the appender.
Optional | | |
blocking | boolean | true | If false, the event is written to the error appender if the queue is full instead of blocking the calling thread.
bufferSize | int | 1024 | Specifies the maximum number of events that can be queued. When using a disruptor-style blocking queue, this value must be a power of 2. When the application is logging faster than the underlying appender can keep up with for a long enough time to fill up the queue, the behavior is determined by the Queue full policy.
errorRef | String | | The name of the appender to invoke if none of the appenders can be called, either due to exceptions in the appenders or because the queue is full. If not specified, errors are ignored.
includeLocation | boolean | false | If set to true, the source location of the log event is computed and included in the event. See location information for more information.
ignoreExceptions | boolean | true | If false, logging exceptions are forwarded to the caller of the log statement. Logging exceptions are always also logged to the status logger.
shutdownTimeout | int | 0 | Timeout in milliseconds to wait before stopping the asynchronous thread. A value of 0 means to wait indefinitely.
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are appended to the blocking queue. See also appender filtering stage.
AppenderRef | one or more | A list of appenders to invoke asynchronously. See appender references for more information.
BlockingQueueFactory | zero or one | The blocking queue factory implementation to use. If not provided, an ArrayBlockingQueue is used. See Blocking Queue Factories below.
As an example, you can instrument a File appender to perform asynchronous I/O by using the following appender configurations:
log4j2.xml
<File name="FILE"
fileName="app.log">
<JsonTemplateLayout/>
</File>
<Async name="ASYNC">
<AppenderRef ref="FILE"/>
</Async>
log4j2.json
"File": {
"name": "FILE",
"fileName": "app.log"
},
"Async": {
"name": "ASYNC"
}
log4j2.yaml
File:
  name: "FILE"
  fileName: "app.log"
Async:
  name: "ASYNC"
  AppenderRef:
    ref: "FILE"
log4j2.properties
Appenders.File.name = FILE
Appenders.File.fileName = app.log
Appenders.Async.name = ASYNC
Appenders.Async.AppenderRef.ref = FILE
Queue full policy
When the queue is full, the Async Appender uses an AsyncQueueFullPolicy to decide whether to:
- drop the log event,
- busy wait until the log event can be added to the queue, or
- log the event on the current thread.
The queue full policy can only be configured through configuration properties. See Async components for more details.
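For instance, a minimal sketch of a discard policy, assuming the Log4j 2 property names log4j2.asyncQueueFullPolicy and log4j2.discardThreshold (check the Async components page for the exact names used by your version), could be placed in a log4j2.component.properties file on the classpath:

# Drop less severe events instead of blocking when the queue is full
log4j2.asyncQueueFullPolicy = Discard
# Events at this level or below are discarded while the queue is full
log4j2.discardThreshold = INFO

With this sketch, INFO and less severe events are silently dropped while the queue is full, and more severe events wait until space becomes available.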
Blocking Queue Factories
The Async Appender allows you to customize the blocking queue by specifying a nested BlockingQueueFactory element.
You can specify the size of the queue using the bufferSize configuration attribute.
ArrayBlockingQueue
- This is the default implementation; it produces ArrayBlockingQueue instances.
DisruptorBlockingQueue
- This queue factory uses the Conversant Disruptor implementation of BlockingQueue.

Table 3. DisruptorBlockingQueue configuration attributes
Attribute | Type | Default value | Description
---|---|---|---
spinPolicy | SpinPolicy | | The SpinPolicy to apply when adding elements to the queue.

Additional dependencies are required to use DisruptorBlockingQueue.
Maven (we assume you use log4j-bom for dependency management):
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-conversant</artifactId>
  <scope>runtime</scope>
</dependency>
Gradle (we assume you use log4j-bom for dependency management):
runtimeOnly 'org.apache.logging.log4j:log4j-conversant'
JCToolsBlockingQueue
- This queue factory uses JCTools, specifically its MPSC bounded lock-free queue.

Additional dependencies are required to use JCToolsBlockingQueue.
Maven (we assume you use log4j-bom for dependency management):
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jctools</artifactId>
  <scope>runtime</scope>
</dependency>
Gradle (we assume you use log4j-bom for dependency management):
runtimeOnly 'org.apache.logging.log4j:log4j-jctools'
LinkedTransferQueue
- This queue factory produces LinkedTransferQueue instances. Note that this queue does not have a maximum capacity and ignores the bufferSize attribute.
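For example, the following sketch (assuming log4j-conversant is on the runtime classpath and a FILE appender is defined elsewhere) wraps that appender in an Async Appender backed by a Conversant Disruptor queue; the bufferSize is a power of 2, as disruptor-style queues require:

<Async name="ASYNC" bufferSize="262144">
  <DisruptorBlockingQueue/>
  <AppenderRef ref="FILE"/>
</Async>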
Failover Appender
The Failover Appender can protect your logging pipeline against I/O exceptions in other appenders.
During normal operations the Failover Appender forwards all log events to a primary appender.
However, if the primary appender fails, a set of secondary appenders will be checked until one succeeds.
Failover configuration
The Failover Appender supports the following configuration options:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of this appender.
primary | String | | The name of the primary appender to use.
Optional | | |
retryIntervalSeconds | int | 60 | Specifies how many seconds to wait after a failure of the primary appender before the primary appender can be used again.
ignoreExceptions | boolean | true | If false, logging exceptions are forwarded to the caller of the log statement. Logging exceptions are always also logged to the status logger.
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are handled by this appender. See also appender filtering stage.
Failovers | one | A container element for a list of AppenderRef elements pointing to the secondary appenders.
The primary appender must be configured to forward exceptions to the caller, by setting its ignoreExceptions attribute to false.
The following example shows how to configure Failover to use an appender named FILE as primary and fall back to CONSOLE if an error occurs:
log4j2.xml
<File name="FILE"
fileName="app.log"
ignoreExceptions="false"/> (1)
<Console name="CONSOLE"/>
<Failover name="FAILOVER"
primary="FILE">
<Failovers>
<AppenderRef ref="CONSOLE"/>
</Failovers>
</Failover>
log4j2.json
"File": {
"name": "FILE",
"fileName": "app.log",
"ignoreExceptions": false (1)
},
"Console": {
"name": "CONSOLE"
},
"Failover": {
"name": "FAILOVER",
"primary": "FILE",
"Failovers": {
"AppenderRef": {
"ref": "CONSOLE"
}
}
}
log4j2.yaml
File:
  name: "FILE"
  fileName: "app.log"
  ignoreExceptions: false
Console:
  name: "CONSOLE"
Failover:
  name: "FAILOVER"
  primary: "FILE"
  Failovers:
    AppenderRef:
      ref: "CONSOLE"
log4j2.properties
Appenders.File.name = FILE
Appenders.File.fileName = app.log
Appenders.File.ignoreExceptions = false
Appenders.Console.name = CONSOLE
Appenders.Failover.name = FAILOVER
Appenders.Failover.primary = FILE
Appenders.Failover.Failovers.AppenderRef = CONSOLE
1 | The primary appender must set ignoreExceptions to false . |
Rewrite Appender
The Rewrite Appender allows log events to be manipulated before they are processed by another appender.
This can be used to inject additional information into each event.
Although this appender can be used to mask sensitive information contained in log events, we strongly discourage such practice. Sensitive data like passwords and credit card numbers can appear in log files in many formats, and it is challenging to detect them all. A better approach to sensitive data is not to log them at all.
Rewrite Configuration
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of this appender.
Optional | | |
ignoreExceptions | boolean | true | If false, logging exceptions are forwarded to the caller of the log statement. Logging exceptions are always also logged to the status logger.
Type | Multiplicity | Description
---|---|---
AppenderRef | one | The reference to an appender that will perform the actual logging.
Filter | zero or one | Allows filtering log events just before they are handled by this appender. See also appender filtering stage.
RewritePolicy | one | The rewrite policy to apply to all logged events.
Rewrite Policies
A rewrite policy is a Log4j plugin that implements the RewritePolicy interface.
Rewrite policies allow you to apply arbitrary modifications to log events.
Log4j Core provides three rewrite policies out-of-the-box:
MapRewritePolicy
- The MapRewritePolicy only modifies events that contain a MapMessage. It allows adding or updating the keys of the MapMessage.

Table 8. MapRewritePolicy configuration attributes
Attribute | Type | Default value | Description
---|---|---|---
mode | String | Add | Determines which map entries to modify. If Add, all the configured map entries are added to the MapMessage, overwriting existing ones. If Update, the rewrite policy only adds entries corresponding to keys that already exist in the MapMessage.

Table 9. MapRewritePolicy nested elements
Type | Multiplicity | Description
---|---|---
KeyValuePair | one or more | A list of map entries to add to the MapMessage.
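As an illustration, the following sketch (the REWRITE and CONSOLE appender names and the hostname entry are assumptions for this example) adds a hostname key to every MapMessage before it reaches the console:

<Rewrite name="REWRITE">
  <MapRewritePolicy mode="Add">
    <KeyValuePair key="hostname" value="${env:HOSTNAME}"/>
  </MapRewritePolicy>
  <AppenderRef ref="CONSOLE"/>
</Rewrite>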
PropertiesRewritePolicy
- The PropertiesRewritePolicy will add properties to the context data of the log event.
  Only the context data of the log event will be modified. The contents of the thread context will remain unchanged.

Table 10. PropertiesRewritePolicy nested elements
Type | Multiplicity | Description
---|---|---
Property | one or more | A list of properties to add to the context data of the log event.

The value attribute of each Property element supports runtime property substitution in the global context.
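For example, the following sketch (the appender names are assumptions for this example) adds the current user name to the context data of every event passing through the Rewrite Appender:

<Rewrite name="REWRITE">
  <PropertiesRewritePolicy>
    <Property name="user" value="${sys:user.name}"/>
  </PropertiesRewritePolicy>
  <AppenderRef ref="CONSOLE"/>
</Rewrite>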
LoggerNameLevelRewritePolicy
- You can use this policy to change the log level of loggers from third-party libraries. The LoggerNameLevelRewritePolicy will rewrite the level of log events for a given logger name prefix.
  The new log levels will only be used by the filters attached to the Rewrite appender and those downstream of the appender. Filters configured on loggers will use the previous levels. See Filters for more details on filtering.

Table 11. LoggerNameLevelRewritePolicy configuration attributes
Attribute | Type | Default value | Description
---|---|---|---
logger | String | | The rewrite policy will only be applied to loggers with this logger name and their children.

Table 12. LoggerNameLevelRewritePolicy nested elements
Type | Multiplicity | Description
---|---|---
KeyValuePair | one or more | Provides a mapping between old level names and new level names.
Configuration example
If a library org.example over-evaluates the severity of its log events, you can decrease their severity with the following configuration:
log4j2.xml
<Rewrite name="REWRITE">
<LoggerNameLevelRewritePolicy logger="org.example"> (1)
<KeyValuePair key="WARN" value="INFO"/>
<KeyValuePair key="INFO" value="DEBUG"/>
</LoggerNameLevelRewritePolicy>
<AppenderRef level="INFO" ref="CONSOLE"/> (2)
</Rewrite>
log4j2.json
"Rewrite": {
"name": "REWRITE",
"LoggerNameLevelRewritePolicy": { (1)
"logger": "org.example",
"KeyValuePair": [
{
"key": "WARN",
"value": "INFO"
},
{
"key": "INFO",
"value": "DEBUG"
}
]
},
"AppenderRef": {
"level": "INFO", (2)
"ref": "CONSOLE"
}
}
log4j2.yaml
Rewrite:
  name: "REWRITE"
  LoggerNameLevelRewritePolicy: (1)
    logger: "org.example"
    KeyValuePair:
      - key: "WARN"
        value: "INFO"
      - key: "INFO"
        value: "DEBUG"
  AppenderRef:
    level: "INFO" (2)
    ref: "CONSOLE"
log4j2.properties
appender.1.type = Rewrite
appender.1.name = REWRITE
(1)
appender.1.policy.type = LoggerNameLevelRewritePolicy
appender.1.policy.logger = org.example
appender.1.policy.kv0.type = KeyValuePair
appender.1.policy.kv0.key = WARN
appender.1.policy.kv0.value = INFO
appender.1.policy.kv1.type = KeyValuePair
appender.1.policy.kv1.key = INFO
appender.1.policy.kv1.value = DEBUG
appender.1.appenderRef.type = AppenderRef
(2)
appender.1.appenderRef.level = INFO
appender.1.appenderRef.ref = CONSOLE
1 | Decreases the severity of WARN and INFO messages, so they appear with the new severity in your log viewer. |
2 | If additionally you don’t want to log DEBUG log events, you must apply a filter. |
Routing Appender
The Routing Appender evaluates log events and then routes them to one of its subordinate appenders.
The target appender may be:
- an existing appender referenced by its name.
- a new appender obtained by evaluating a configuration snippet.
The Routing Appender should be configured after any appenders it references to allow it to shut down properly.
Routing Configuration
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of this appender.
Optional | | |
ignoreExceptions | boolean | true | If false, logging exceptions are forwarded to the caller of the log statement. Logging exceptions are always also logged to the status logger.
Type | Multiplicity | Description
---|---|---
Script | zero or one | This script is run when the appender starts and has two purposes: it initializes the staticVariables map that is shared with the Routes script, and its return value determines the key of the default route. See also Scripts for more details on scripting in Log4j Core.
Filter | zero or one | Allows filtering log events before routing them to a subordinate appender. See also appender filtering stage.
PurgePolicy | zero or one | The purge policy to apply to handle the lifecycle of automatically instantiated appenders. See Purge Policy for more details.
RewritePolicy | zero or one | The rewrite policy to apply to all logged events. See Rewrite Policies above.
Routes | one | Determines the routing configuration of the appender. See Route selection for more details.
Route selection
At the base of route selection there are two configuration elements:
Routes
- The Routes element is a container for Route definitions. It provides two additional properties, which are used to determine the appropriate route for each log event:

Table 15. Routes configuration attributes
Attribute | Type | Default value | Description
---|---|---|---
pattern | String | | If present, this pattern is evaluated at each log event to determine the key of the route to use. This attribute supports runtime property substitution using the current event as context. Required, unless a nested AbstractScript is provided.

Table 16. Routes nested elements
Type | Multiplicity | Description
---|---|---
Script | zero or one | If present, this script is evaluated at each log event to determine the key of the route to use. Required, unless the pattern attribute is provided. Its bindings are listed below; see also Scripts for more details on scripting in Log4j Core.
Route | one or more | The routes to choose from. See Route below.

The Routes script has the following bindings:
- staticVariables: a Map<String, Object> that is reused between script calls. This is the same map that is passed to the AbstractScript of Routing.
- logEvent: the LogEvent being processed.
- configuration: the current Configuration object.
- statusLogger: the status logger to use to print diagnostic messages in the script.
-
The
Route
element determines the appender to use if the route is selected. The appender can be:-
A previously declared appender, from the
Appenders
section of the configuration file. -
A new appender that is instantiated based on a nested appender definition, when the route becomes active. See also Purge Policy to learn more about the lifecycle of such an appender.
Table 17. Route
configuration attributesAttribute
Type
Default value
Description
String
null
A key that is compared with the evaluation of either the
pattern
attribute or nested script of the `Routes element.String
The reference to an existing appender to use.
You cannot specify both this attribute and a nested `Appender definition.
Table 18. Route
nested elementsType
Multiplicity
Description
zero or one
The definition of an
Appender
to create, when this route is used for the first time.You cannot specify both this nested element and the
ref
configuration attribute.Lookups in the children of the
Route
component are not evaluated at configuration time. The substitution is delayed until theRoute
element is evaluated. This means that${...}
expression should not be escaped as$${...}
.The appender definition is evaluated in the context of the current event, instead of the global context.
See lazy property substitution for more details.
For each log event, the appropriate route is selected as follows:
- First, the pattern attribute or the Routes script is evaluated to obtain a key.
- The key is compared with the key attribute of each Route element.
- If there is a Route for that key, it is selected.
- Otherwise, the default Route is selected. The key of the default Route is determined by the Routing script, or is null (lack of a key attribute) if the script is absent.
Purge Policy
If your default Route element contains an appender definition, the Routing Appender can instantiate a large number of appenders, one for each value of the routing key.
These appenders might be useful only for a short period of time, but will consume system resources unless they are stopped.
The purge policy is a Log4j plugin that implements the PurgePolicy interface and handles the lifecycle of automatically instantiated appenders.
If an appender has been destroyed, it can be created again when its route is selected again.
Log4j Core provides one implementation of PurgePolicy:
IdlePurgePolicy
- This policy destroys appenders if they have not been used for a certain amount of time. It supports the following configuration attributes:

Table 19. IdlePurgePolicy configuration attributes
Attribute | Type | Default value | Description
---|---|---|---
timeToLive | long | | The number of time units that an appender can be idle before it is destroyed. Required.
checkInterval | long | | The number of time units between two runs of this purge policy.
timeUnit | TimeUnit | | The time unit to use for the other attributes.
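For example, the following sketch (the attribute values are arbitrary; a fuller example appears under Using appender definitions below) destroys appenders that have been idle for 30 minutes, checking for idle appenders every 5 minutes:

<Routing name="ROUTING">
  <Routes pattern="$${event:Marker}">
    <Route>
      <File name="${event:Marker}"
            fileName="${event:Marker:-main}.log">
        <JsonTemplateLayout/>
      </File>
    </Route>
  </Routes>
  <IdlePurgePolicy timeToLive="30" checkInterval="5" timeUnit="minutes"/>
</Routing>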
Configuration examples
Using appender references
You can deliver log events for different markers into separate log files, using the following configuration:
log4j2.xml
<Routing name="ROUTING">
<Routes pattern="$${event:Marker}">
<Route key="AUDIT" ref="AUDIT_LOG"/> (1)
<Route key="$${event:Marker}" ref="MAIN_LOG"/> (2)
<Route ref="MARKED_LOG"/> (3)
</Routes>
</Routing>
log4j2.json
"Routing": {
"name": "ROUTING",
"Routes": {
"pattern": "$${event:Marker}}",
"Route": [
{ (1)
"key": "AUDIT",
"ref": "AUDIT_LOG"
},
{ (2)
"key": "$${event:Marker}",
"ref": "MAIN_LOG"
},
{ (3)
"ref": "MARKED_LOG"
}
]
}
}
log4j2.yaml
Routing:
  name: "ROUTING"
  Routes:
    pattern: "$${event:Marker}"
    Route:
      - key: "AUDIT" (1)
        ref: "AUDIT_LOG"
      - key: "$${event:Marker}" (2)
        ref: "MAIN_LOG"
      - ref: "MARKED_LOG" (3)
log4j2.properties
Appenders.Routing.name = ROUTING
Appenders.Routing.Routes.pattern = $${event:Marker}
(1)
Appenders.Routing.Routes.Route[1].key = AUDIT
Appenders.Routing.Routes.Route[1].ref = AUDIT_LOG
(2)
Appenders.Routing.Routes.Route[2].key = $${event:Marker}
Appenders.Routing.Routes.Route[2].ref = MAIN_LOG
(3)
Appenders.Routing.Routes.Route[3].ref = MARKED_LOG
1 | This route is selected if the log event is marked with an AUDIT marker. |
2 | This route is selected if the log event has no marker.
In this case the expression ${event:Marker} evaluates to itself.
See Property evaluation for more details. |
3 | This is the default route.
It is selected if the log event has a marker, but it is not the AUDIT marker. |
Using appender definitions
If the number of appenders is high or unknown, you might want to use appender definitions instead of appender references. In the example below, a different log file is created for each marker.
log4j2.xml
<Routing name="ROUTING">
<Routes pattern="$${event:Marker}"> (1)
<Route>
<File name="${event:Marker}"
fileName="${event:Marker:-main}.log"> (2)
<JsonTemplateLayout/>
</File>
</Route>
</Routes>
<IdlePurgePolicy timeToLive="15"/> (3)
</Routing>
log4j2.json
"Routing": {
"name": "ROUTING",
"Routes": {
"pattern": "$${event:Marker}}", (1)
"Route": {
"File": { (2)
"name": "${event:Marker}",
"fileName": "${event:Marker:-main}.log",
"JsonTemplateLayout": {}
}
}
},
"IdlePurgePolicy": { (3)
"timeToLive": 15
}
}
log4j2.yaml
Routing:
  name: "ROUTING"
  Routes:
    pattern: "$${event:Marker}" (1)
    Route:
      File: (2)
        name: "${event:Marker}"
        fileName: "${event:Marker:-main}.log"
        JsonTemplateLayout: {}
  IdlePurgePolicy: (3)
    timeToLive: 15
log4j2.properties
Appenders.Routing.name = ROUTING
(1)
Appenders.Routing.Routes.pattern = $${event:Marker}
(2)
Appenders.Routing.Routes.Route.File.name = ${event:Marker}
Appenders.Routing.Routes.Route.File.fileName = ${event:Marker:-main}.log
Appenders.Routing.Routes.Route.File.layout.type = JsonTemplateLayout
(3)
Appenders.Routing.IdlePurgePolicy.timeToLive = 15
1 | The pattern attribute is evaluated at configuration time, so the ${event:Marker} lookup needs to be escaped. |
2 | The appender definition is not evaluated at configuration time, so no escaping is necessary. |
3 | To prevent resource leaks, consider using a Purge Policy. |
Using scripts
Additional runtime dependencies are required to use scripts. See Scripts for more details on scripting in Log4j Core.
If the flexibility of Lookups is not enough to express your routing logic, you can also resort to scripts. In the example below, we route messages in a round-robin fashion to three different Syslog servers:
log4j2.xml
<Routing name="ROUTING">
<Script language="groovy"> (1)
staticVariables.servers = ['server1', 'server2', 'server3'];
staticVariables.count = 0;
</Script>
<Routes>
<Script language="groovy"> (2)
int count = staticVariables.count++;
String server = staticVariables.servers[count % 3];
return configuration.properties['server'] = server;
</Script>
<Route>
<Socket name="${server}"
protocol="TCP"
host="${server}"
port="500"> (3)
<Rfc5424Layout/>
</Socket>
</Route>
</Routes>
</Routing>
log4j2.json
"Routing": {
"name": "ROUTING",
"Script": {
"language": "groovy",
(1)
"scriptText": "staticVariables.servers = ['server1', 'server2', 'server3']; staticVariables.count = 0;"
},
"Routes": {
"Script": {
"language": "groovy",
(2)
"scriptText": "int count = staticVariables.count++; String server = staticVariables.servers[count % 3]; return configuration.properties['server'] = server;"
},
"Route": {
"Socket": { (3)
"name": "${server}",
"protocol": "TCP",
"host": "${server}",
"port": "500",
"Rfc5425Layout": {}
}
}
}
}
log4j2.yaml
Routing:
  name: "ROUTING"
  Script:
    language: "groovy"
    (1)
    scriptText: |
      staticVariables.servers = ['server1', 'server2', 'server3'];
      staticVariables.count = 0;
  Routes:
    Script:
      language: "groovy"
      (2)
      scriptText: |
        int count = staticVariables.count++;
        String server = staticVariables.servers[count % 3];
        return configuration.properties['server'] = server;
    Route:
      (3)
      Socket:
        name: "${server}"
        protocol: "TCP"
        host: "${server}"
        port: 500
        Rfc5424Layout: {}
log4j2.properties
Appenders.Routing.name = ROUTING
Appenders.Routing.Script.language = groovy
(1)
Appenders.Routing.Script.scriptText = \
staticVariables.servers = ['server1', 'server2', 'server3']; \
staticVariables.count = 0;
Appenders.Routing.Routes.Script.language = groovy
(2)
Appenders.Routing.Routes.Script.scriptText = \
int count = staticVariables.count++; \
String server = staticVariables.servers[count % 3]; \
return configuration.properties['server'] = server;
(3)
Appenders.Routing.Routes.Route.Socket.name = ${server}
Appenders.Routing.Routes.Route.Socket.protocol = TCP
Appenders.Routing.Routes.Route.Socket.host = ${server}
Appenders.Routing.Routes.Route.Socket.port = 500
Appenders.Routing.Routes.Route.Socket.layout.type = Rfc5424Layout
1 | The Routing script performs the initialization of state variables. |
2 | The Routes script returns the name of the server to use.
It also exports the value as server entry in
Configuration.getProperties() . |
3 | The exported value can be used as ${server} in the appender definition. |
ScriptAppenderSelector
The ScriptAppenderSelector plugin allows using different appender definitions based on the output of a script.
At configuration time:
- The nested script element is evaluated to obtain the name of an appender.
- The plugin looks for the appropriate appender definition inside the <AppenderSet> container.
Type |
Default value |
Description |
|
The name of this appender. Required |
Type | Multiplicity | Description |
---|---|---|
one |
The script to determine the appender name. |
|
one |
A lazy container for appender definitions. |
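For illustration, here is a sketch (the SELECTOR name, the Groovy script, and the CONTAINER environment variable are assumptions for this example) that selects a Console appender inside containers and a File appender elsewhere; the chosen appender is created under the selector's name, so logger configurations reference SELECTOR:

<ScriptAppenderSelector name="SELECTOR">
  <Script language="groovy">
    System.getenv('CONTAINER') != null ? 'CONSOLE' : 'FILE'
  </Script>
  <AppenderSet>
    <Console name="CONSOLE"/>
    <File name="FILE" fileName="app.log">
      <JsonTemplateLayout/>
    </File>
  </AppenderSet>
</ScriptAppenderSelector>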