Message queue appenders
This page guides you through message queue appenders that forward log events to a message broker.
Flume Appender
Apache Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store. The Flume Appender takes log events and sends them to a Flume agent as serialized Avro events for consumption.
The Flume Appender supports three modes of operation.
AVRO
-
It can act as a remote Flume client which sends Flume events via Avro to a Flume Agent configured with an Avro Source.
EMBEDDED
-
It can act as an embedded Flume Agent where Flume events pass directly into Flume for processing.
PERSISTENT
-
It can persist events to a local BerkeleyDB data store and then asynchronously send the events to Flume, similar to the embedded Flume Agent but without most of the Flume dependencies.
Usage as an embedded agent will cause the messages to be directly passed to the Flume Channel, and then control will be immediately returned to the application.
All interaction with remote agents will occur asynchronously.
Setting the type
attribute to EMBEDDED
will force the use of the embedded agent.
In addition, configuring agent properties in the appender configuration will also cause the embedded agent to be used.
Attribute | Type | Default value | Description |
---|---|---|---|
Required | | | |
name | String | | The name of the appender. |
Optional | | | |
type | enumeration | AVRO | One of AVRO, EMBEDDED or PERSISTENT to indicate which variation of the Appender is desired. |
ignoreExceptions | boolean | true | If false, logging exceptions will be forwarded to the caller. Logging exceptions are always also logged to the Status Logger. |
connectTimeoutMillis | int | 0 | The connect timeout in milliseconds. If 0, the timeout is infinite. |
requestTimeoutMillis | int | 0 | The request timeout in milliseconds. If 0, the timeout is infinite. |
agentRetries | int | 0 | The number of times the agent should be retried before failing to a secondary. This parameter is ignored when type="persistent" is specified (agents are tried once before failing to the next). |
batchSize | int | 1 | It specifies the number of events that should be sent as a batch. |
compress | boolean | false | When set to true, the message body will be compressed using gzip. |
dataDir | String | | Directory where the Flume write-ahead log should be written. Valid only when embedded is set to true and Agent elements are used instead of Property elements. |
eventPrefix | String | | The character string to prepend to each event attribute to distinguish it from MDC attributes. |
lockTimeoutRetries | int | 5 | The number of times to retry if a LockConflictException occurs while writing to Berkeley DB. |
maxDelayMillis | int | 60000 | The maximum number of milliseconds to wait for batchSize events before publishing the batch. |
mdcExcludes | String | | A comma-separated list of MDC keys that should be excluded from the FlumeEvent. This is mutually exclusive with the mdcIncludes attribute. |
mdcIncludes | String | | A comma-separated list of MDC keys that should be included in the FlumeEvent. This option is mutually exclusive with the mdcExcludes attribute. |
mdcRequired | String | | A comma-separated list of MDC keys that must be present in the MDC. If a key is not present, a LoggingException will be thrown. |
mdcPrefix | String | mdc: | A string that should be prepended to each MDC key to distinguish it from event attributes. |
Type | Multiplicity | Description |
---|---|---|
Agent | zero or more | An array of Agents to which the logging events should be sent. If more than one agent is specified, the first Agent will be the primary and subsequent Agents will be used in the order specified as secondaries should the primary Agent fail. Each Agent definition supplies the Agent's host and port. The specification of agents and properties are mutually exclusive. |
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage. |
FlumeEventFactory | zero or one | Factory that generates the Flume events from Log4j events. The default factory is the appender itself. |
Layout | zero or one | Formats log events. If not provided, Rfc5424 Layout is used. See Layouts for more information. |
Property | zero or more | One or more Property elements that are used to configure the Flume Agent. The properties must be configured without the agent name (the appender name is used for this), and no sources can be configured. Interceptors can be specified for the source using "sources.log4j-source.interceptors". All other Flume configuration properties are allowed. Specifying both Agent and Property elements will result in an error. When used to configure in PERSISTENT mode, the valid properties are: 1. "keyProvider" to specify the name of the plugin to provide the secret key for encryption. The specification of agents and properties are mutually exclusive. |
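The MDC-related attributes above (mdcIncludes, mdcExcludes, mdcRequired and mdcPrefix) operate on entries that the application places in the Log4j ThreadContext. Below is a minimal sketch of how such entries end up on a Flume event under the default settings; the class name, logger name, and keys are illustrative:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class FlumeMdcExample {

    private static final Logger LOGGER = LogManager.getLogger(FlumeMdcExample.class);

    public static void main(String[] args) {
        // Entries added to the ThreadContext (MDC) travel with each log event.
        // With the default configuration they become Flume event attributes,
        // prefixed with the value of mdcPrefix ("mdc:" by default).
        ThreadContext.put("requestId", "42");
        ThreadContext.put("user", "alice");
        try {
            LOGGER.info("Order submitted"); // routed to the Flume Appender by the configuration
        } finally {
            ThreadContext.clearMap(); // avoid leaking entries to unrelated work on this thread
        }
    }
}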
Additional runtime dependencies are required to use the Flume Appender:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-flume-ng</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-flume-ng'
To use the Flume Appender in PERSISTENT mode, you need the following additional dependency:
-
Maven
-
Gradle
<dependency>
<groupId>com.sleepycat</groupId>
<artifactId>je</artifactId>
<version>18.3.12</version>
<scope>runtime</scope>
</dependency>
runtimeOnly 'com.sleepycat:je:18.3.12'
If you use the Flume Appender in EMBEDDED mode, you need to add the flume-ng-embedded-agent
dependency below and all the channel and sink implementations you plan to use.
See Flume Embedded Agent documentation for more details.
-
Maven
-
Gradle
<dependency>
<groupId>org.apache.flume</groupId>
<artifactId>flume-ng-embedded-agent</artifactId>
<version>1.11.0</version>
<scope>runtime</scope>
</dependency>
runtimeOnly 'org.apache.flume:flume-ng-embedded-agent:1.11.0'
Agent Addresses
The address of the Flume server is specified using the Agent
element, which supports the following configuration options:
Attribute | Type | Default value | Description |
---|---|---|---|
host | String | | The host to connect to. |
port | int | | The port to connect to. |
Flume event factories
Flume event factories are Log4j plugins that implement the
org.apache.logging.log4j.flume.appender.FlumeEventFactory
interface and allow you to customize the way log events are transformed into `org.apache.logging.log4j.flume.appender.FlumeEvent`s.
Configuration examples
A sample Flume Appender configured with a primary and a secondary agent, formatting the body using the RFC5424 Layout:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<Flume name="FLUME">
<Rfc5424Layout enterpriseNumber="18060"
includeMDC="true"
appName="MyApp"/>
<Agent host="192.168.10.101" port="8800"/> (1)
<Agent host="192.168.10.102" port="8800"/> (2)
</Flume>
log4j2.json
"Flume": {
"name": "FLUME",
"Rfc5424Layout": {
"enterpriseNumber": 18060,
"includeMDC": true,
"appName": "MyAPP"
},
"Agent": [
{ (1)
"host": "192.168.10.101",
"port": "8800"
},
{ (2)
"host": "192.168.10.102",
"port": "8800"
}
]
}
log4j2.yaml
Flume:
name: "FLUME"
Rfc5424Layout:
enterpriseNumber: 18060
includeMDC: true
appName: MyApp
Agent:
(1)
- host: "192.168.10.101"
port: 8800
(2)
- host: "192.168.10.102"
port: 8800
log4j2.properties
appender.0.type = Flume
appender.0.name = FLUME
appender.0.layout.type = Rfc5424Layout
appender.0.layout.enterpriseNumber = 18060
appender.0.layout.includeMDC = true
appender.0.layout.appName = MyApp
(1)
appender.0.primary.type = Agent
appender.0.primary.host = 192.168.10.101
appender.0.primary.port = 8800
(2)
appender.0.secondary.type = Agent
appender.0.secondary.host = 192.168.10.102
appender.0.secondary.port = 8800
1 | Primary agent |
2 | Secondary agent |
A sample Flume Appender configured with a primary and a secondary agent that compresses the body, formats it using the RFC5424 Layout, and persists encrypted events to disk:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<Flume name="FLUME"
type="PERSISTENT"
compress="true"
dataDir="./logData">
<Rfc5424Layout enterpriseNumber="18060"
includeMDC="true"
appName="MyApp"/>
<Property name="keyProvider" value="org.example.MySecretProvider"/>
<Agent host="192.168.10.101" port="8800"/>
<Agent host="192.168.10.102" port="8800"/>
</Flume>
log4j2.json
"Flume": {
"name": "FLUME",
"type": "PERSISTENT",
"compress": true,
"dataDir": "./logData",
"Rfc5424Layout": {
"enterpriseNumber": 18060,
"includeMDC": true,
"appName": "MyAPP"
},
"Property": {
"name": "keyProvider",
"value": "org.example.MySecretProvider"
},
"Agent": [
{
"host": "192.168.10.101",
"port": "8800"
},
{
"host": "192.168.10.102",
"port": "8800"
}
]
}
log4j2.yaml
Flume:
name: "FLUME"
type: "PERSISTENT"
compress: true
dataDir: "./logData"
Rfc5424Layout:
enterpriseNumber: 18060
includeMDC: true
appName: MyApp
Property:
name: "keyProvider"
value: "org.example.MySecretProvider"
Agent:
- host: "192.168.10.101"
port: 8800
- host: "192.168.10.102"
port: 8800
This example cannot be configured using Java properties.
A sample Flume Appender configured with a primary and a secondary agent that compresses the body, formats it using the RFC5424 Layout, and passes the events to an embedded Flume Agent:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<Flume name="FLUME"
type="EMBEDDED"
compress="true">
<Rfc5424Layout enterpriseNumber="18060"
includeMDC="true"
appName="MyApp"/>
<Agent host="192.168.10.101" port="8800"/>
<Agent host="192.168.10.102" port="8800"/>
</Flume>
log4j2.json
"Flume": {
"name": "FLUME",
"type": "EMBEDDED",
"compress": true,
"Rfc5424Layout": {
"enterpriseNumber": 18060,
"includeMDC": true,
"appName": "MyAPP"
},
"Agent": [
{
"host": "192.168.10.101",
"port": "8800"
},
{
"host": "192.168.10.102",
"port": "8800"
}
]
}
log4j2.yaml
Flume:
name: "FLUME"
type: "EMBEDDED"
compress: true
Rfc5424Layout:
enterpriseNumber: 18060
includeMDC: true
appName: MyApp
Agent:
- host: "192.168.10.101"
port: 8800
- host: "192.168.10.102"
port: 8800
This example cannot be configured using Java properties.
JMS Appender
The JMS Appender sends the formatted log event to a Jakarta Messaging API destination.
Due to breaking changes in the underlying API, the JMS Appender cannot be used with Jakarta Messaging API 3.0 or later.
Attribute | Type | Default value | Description |
---|---|---|---|
Required | | | |
name | String | | The name of the appender. |
factoryBindingName | String | | The JNDI name of the ConnectionFactory. Only the java: protocol is supported. |
destinationBindingName | String | | The JNDI name of the Destination, which can be either a Queue or a Topic. Only the java: protocol is supported. |
JNDI configuration (overrides system properties) | | | |
factoryName | String | | It specifies the fully qualified class name of the JNDI initial context factory. See INITIAL_CONTEXT_FACTORY for details. |
urlPkgPrefixes | String[] | | A colon-separated list of package prefixes that contain URL context factories. See URL_PKG_PREFIXES for details. |
providerURL | String | | A configuration parameter for the configured initial context factory. See PROVIDER_URL for details. |
securityPrincipalName | String | | The name of the principal to use for the initial context factory. See SECURITY_PRINCIPAL for details. |
securityCredentials | String | null | The security credentials for the principal. See SECURITY_CREDENTIALS for details. |
Optional | | | |
userName | String | | The username used to create the JMS connection. |
password | String | | The password used to create the JMS connection. |
ignoreExceptions | boolean | true | If false, logging exceptions will be forwarded to the caller. Logging exceptions are always also logged to the Status Logger. |
reconnectIntervalMillis | long | | The request timeout in milliseconds. If 0, the timeout is infinite. |
Type | Multiplicity | Description |
---|---|---|
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage. |
Layout | one | Used in the mapping process to get a JMS Message. See Mapping events to JMS messages below for more information. |
Mapping events to JMS messages
The mapping between log events and JMS messages has two steps:
-
First, the layout is used to transform a log event into an intermediary format.
-
Then, a Message is created based on the type of object returned by the layout:
String
-
Strings are converted into TextMessages.
MapMessage
-
The Log4j MapMessage type is mapped to the JMS MapMessage type.
Serializable
-
Anything else is converted into an ObjectMessage.
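For example, to take advantage of the MapMessage mapping, the application can log a Log4j StringMapMessage directly and combine it with the MessageLayout configuration shown in the examples below. A minimal sketch; the class name, logger name, and map keys are illustrative:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.message.StringMapMessage;

public class JmsMessageTypes {

    private static final Logger LOGGER = LogManager.getLogger(JmsMessageTypes.class);

    public static void main(String[] args) {
        // With a layout that returns a String (for example JsonTemplateLayout),
        // the formatted event is sent as a JMS TextMessage.
        LOGGER.info("Payment received");

        // With MessageLayout, a Log4j MapMessage is passed through unchanged
        // and mapped to a JMS MapMessage.
        LOGGER.info(new StringMapMessage()
                .with("orderId", "42")
                .with("status", "PAID"));
    }
}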
Configuration examples
Here is a sample JMS Appender configuration:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<JMS name="JMS"
factoryBindingName="jms/ConnectionFactory"
destinationBindingName="jms/Queue">
<JsonTemplateLayout/>
</JMS>
log4j2.json
"JMS": {
"name": "JMS",
"factoryBindingName": "jms/ConnectionFactory",
"destinationBindingName": "jms/Queue",
"JsonTemplateLayout": {}
}
log4j2.yaml
JMS:
name: "JMS"
factoryBindingName: "jms/ConnectionFactory"
destinationBindingName: "jms/Queue"
JsonTemplateLayout: {}
log4j2.properties
appender.0.type = JMS
appender.0.name = JMS
appender.0.factoryBindingName = jms/ConnectionFactory
appender.0.destinationBindingName = jms/Queue
appender.0.layout.type = JsonTemplateLayout
To map your Log4j MapMessage
to JMS javax.jms.MapMessage
, set the layout of the appender to MessageLayout
:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<JMS name="JMS"
factoryBindingName="jms/ConnectionFactory"
destinationBindingName="jms/Queue">
<MessageLayout/>
</JMS>
log4j2.json
"JMS": {
"name": "JMS",
"factoryBindingName": "jms/ConnectionFactory",
"destinationBindingName": "jms/Queue",
"MessageLayout": {}
}
log4j2.yaml
JMS:
name: "JMS"
factoryBindingName: "jms/ConnectionFactory"
destinationBindingName: "jms/Queue"
MessageLayout: {}
log4j2.properties
appender.0.type = JMS
appender.0.name = JMS
appender.0.factoryBindingName = jms/ConnectionFactory
appender.0.destinationBindingName = jms/Queue
appender.0.layout.type = MessageLayout
Kafka Appender
This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.
The KafkaAppender logs events to an Apache Kafka topic.
Each log event is sent as a
ProducerRecord<byte[], byte[]>
, where:
-
the record key is the configured key attribute, converted to bytes,
-
the record value is the log event formatted by the layout as a byte array.
This appender is synchronous by default and will block until the record has been acknowledged by the Kafka server.
The maximum delivery time can be configured using the
Kafka delivery.timeout.ms
property.
Wrap the appender with an
Async Appender
or set
syncSend
to false
to log asynchronously.
Attribute | Type | Default value | Description |
---|---|---|---|
Required | | | |
name | String | | The name of the appender. |
topic | String | | The Kafka topic to use. |
Optional | | | |
key | String | | The key of the Kafka ProducerRecord. Supports runtime property substitution and is evaluated in the global context. |
ignoreExceptions | boolean | true | If false, logging exceptions will be forwarded to the caller. Logging exceptions are always also logged to the Status Logger. |
syncSend | boolean | true | If true, the appender blocks until the record has been acknowledged by the Kafka server. If false, the record is sent asynchronously. |
Type | Multiplicity | Description |
---|---|---|
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage. |
Layout | one | Formats the log event as a byte array. See Layouts for more information. |
Property | one or more | These properties are forwarded directly to the Kafka producer. See Kafka producer properties for more details. |
Additional runtime dependencies are required to use the Kafka Appender:
-
Maven
-
Gradle
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>3.8.0</version>
</dependency>
runtimeOnly 'org.apache.kafka:kafka-clients:3.8.0'
Configuration examples
Here is a sample Kafka Appender configuration snippet:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<Kafka name="KAFKA"
topic="logs"
key="$${web:contextName}"> (1)
<JsonTemplateLayout/>
</Kafka>
log4j2.json
"Kafka": {
"name": "KAFKA",
"topic": "logs",
"key": "$${web:contextName}", (1)
"JsonTemplateLayout": {}
}
log4j2.yaml
Kafka:
name: "KAFKA"
topic: "logs"
key: "$${web:contextName}" (1)
JsonTemplateLayout: {}
log4j2.properties
appender.1.type = Kafka
appender.1.name = KAFKA
appender.1.topic = logs
(1)
appender.1.key = $${web:contextName}
appender.1.layout.type = JsonTemplateLayout
1 | The key attribute supports runtime lookups. |
Make sure to not let org.apache.kafka log to a Kafka appender on DEBUG level, since that will cause recursive logging. Configure the org.apache.kafka logger with a less verbose level or route it to a different appender.
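On the consuming side, the records written by this appender carry the layout output as the record value. Below is a minimal consumer sketch; the broker address, group id, and topic name are assumptions matching the example above:
import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogTopicReader {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "log-reader");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("logs")); // topic from the example above
            while (true) {
                for (ConsumerRecord<byte[], byte[]> record : consumer.poll(Duration.ofSeconds(1))) {
                    // The value is whatever the configured layout produced,
                    // e.g. a JSON document when JsonTemplateLayout is used.
                    System.out.println(new String(record.value(), StandardCharsets.UTF_8));
                }
            }
        }
    }
}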
ZeroMQ/JeroMQ Appender
This appender is planned to be removed in the next major release!
Users should consider switching to a third-party alternative.
The ZeroMQ appender uses the JeroMQ library to send log events to one or more ZeroMQ endpoints.
Attribute | Type | Default value | Description |
---|---|---|---|
Required | | | |
name | String | | The name of the appender. |
Optional | | | |
ignoreExceptions | boolean | true | If false, logging exceptions will be forwarded to the caller. Logging exceptions are always also logged to the Status Logger. |
affinity | long | | The I/O affinity of the sending thread. See ZMQ_AFFINITY. |
backlog | long | | The maximum size of the backlog. See ZMQ_BACKLOG. |
delayAttachOnConnect | boolean | | Delays the attachment of a pipe on connection. See ZMQ_DELAY_ATTACH_ON_CONNECT. |
identity | byte[] | | It sets the identity of the socket. See ZMQ_IDENTITY. |
ipv4Only | boolean | | If set, only IPv4 will be used. See ZMQ_IPV4ONLY. |
linger | long | | It sets the linger period for the socket. A value of -1 means an infinite linger period. See ZMQ_LINGER. |
maxMsgSize | long | | Size limit in bytes for inbound messages. See ZMQ_MAXMSGSIZE. |
rcvHwm | long | | It sets the high-water mark for inbound messages. See ZMQ_RCVHWM. |
receiveBufferSize | long | | It sets the OS buffer size for inbound messages. A value of 0 uses the OS default. See ZMQ_RCVBUF. |
receiveTimeOut | int | | It sets the timeout in milliseconds for receive operations. See ZMQ_RCVTIMEO. |
reconnectIVL | long | | It sets the reconnection interval. See ZMQ_RECONNECT_IVL. |
reconnectIVLMax | long | | It sets the maximum reconnection interval. See ZMQ_RECONNECT_IVL_MAX. |
sendBufferSize | long | | It sets the OS buffer size for outbound messages. A value of 0 uses the OS default. See ZMQ_SNDBUF. |
sendTimeOut | int | | It sets the timeout in milliseconds for send operations. See ZMQ_SNDTIMEO. |
sndHwm | long | | It sets the high-water mark for outbound messages. See ZMQ_SNDHWM. |
tcpKeepAlive | int | | A value of -1 uses the OS default, 0 disables TCP keep-alive, and 1 enables it. See ZMQ_TCP_KEEPALIVE. |
tcpKeepAliveCount | long | | It sets the maximum number of keep-alive probes before dropping the connection. A value of -1 uses the OS default. See ZMQ_TCP_KEEPALIVE_CNT. |
tcpKeepAliveIdle | long | | It sets the time a connection needs to remain idle before keep-alive probes are sent. The unit depends on the OS and a value of -1 uses the OS default. See ZMQ_TCP_KEEPALIVE_IDLE. |
tcpKeepAliveInterval | long | | It sets the time between two keep-alive probes. The unit depends on the OS and a value of -1 uses the OS default. See ZMQ_TCP_KEEPALIVE_INTVL. |
xpubVerbose | boolean | | If true, all subscription messages are passed upstream, not only unique ones. See ZMQ_XPUB_VERBOSE. |
Type | Multiplicity | Description |
---|---|---|
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage. |
Layout | one | Formats the log event as a byte array. See Layouts for more information. |
Property | one or more | Only properties with an "endpoint" name are supported; each value specifies a ZeroMQ endpoint to deliver events to. See the configuration examples below. |
Additional runtime dependencies are required to use the JeroMQ Appender:
-
Maven
-
Gradle
<dependency>
<groupId>org.zeromq</groupId>
<artifactId>jeromq</artifactId>
<version>0.6.0</version>
</dependency>
runtimeOnly 'org.zeromq:jeromq:0.6.0'
Configuration examples
This is a simple JeroMQ configuration:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<JeroMQ name="JEROMQ">
<JsonTemplateLayout/>
<Property name="endpoint" value="tcp://*:5556"/>
<Property name="endpoint" value="ipc://info-topic"/>
</JeroMQ>
log4j2.json
"JeroMQ": {
"name": "JEROMQ",
"JsonTemplateLayout": {},
"Property": [
{
"name": "endpoint",
"value": "tcp://*:5556"
},
{
"name": "endpoint",
"value": "ipc://info-topic"
}
]
}
log4j2.yaml
JeroMQ:
name: "JEROMQ"
JsonTemplateLayout: {}
Property:
- name: "endpoint"
value: "tcp://*:5556"
- name: "endpoint"
value: "ipc://info-topic"
log4j2.properties
appender.0.type = JeroMQ
appender.0.name = JEROMQ
appender.0.layout.type = JsonTemplateLayout
appender.0.endpoint[0].type = Property
appender.0.endpoint[0].name = endpoint
appender.0.endpoint[0].value = tcp://*:5556
appender.0.endpoint[1].type = Property
appender.0.endpoint[1].name = endpoint
appender.0.endpoint[1].value = ipc://info-topic
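For local testing, events published by this appender can be received with a small JeroMQ subscriber. A minimal sketch, assuming the appender publishes on a PUB socket bound to the tcp://*:5556 endpoint configured above, so the reader connects a SUB socket to it; the class name and connection address are illustrative:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class LogSubscriber {

    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // SUB socket connecting to the endpoint the appender is bound to.
            ZMQ.Socket socket = context.createSocket(SocketType.SUB);
            socket.connect("tcp://localhost:5556"); // assumption: appender runs on the same host
            socket.subscribe(new byte[0]); // empty prefix subscribes to every message

            while (!Thread.currentThread().isInterrupted()) {
                // Each frame is one formatted log event, e.g. a JSON document
                // when JsonTemplateLayout is configured.
                System.out.println(socket.recvStr());
            }
        }
    }
}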