Database appenders
Log4j Core provides multiple appenders to send log events directly to your database.
Common concerns
Column mapping
Since relational databases and some NoSQL databases organize data into columns, Log4j Core provides a reusable ColumnMapping configuration element for specifying the content of each column.
The ColumnMapping element supports the following configuration properties:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the column.
Optional | | |
columnType | Class | String | It specifies the Java type that will be stored in the column. If set to org.apache.logging.log4j.spi.ThreadContextMap, org.apache.logging.log4j.spi.ThreadContextStack or java.util.Date, the column is filled with the log event's context map, context stack or timestamp, respectively. For any other value, the formatted value is converted to the given type before being sent to the database.
type | Class | | Deprecated: use columnType instead.
literal | String | | If set, the value will be added directly in the insert statement of the database-specific query language.
parameter | String | | It specifies the database-specific parameter marker to use. Otherwise, the default parameter marker for the database language will be used.
pattern | String | | A shortcut configuration attribute that sets the nested Layout element to a Pattern Layout with the given pattern.
source | String | | It specifies which key of a MapMessage will be stored in the column. See Map Message handling for more details.
The ColumnMapping element also supports the following nested element:
Type | Multiplicity | Description
---|---|---
Layout | zero or one | Formats the value to store in the column. See Layouts for more information.
An example column mapping might look like this:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
(1)
<ColumnMapping name="id" literal="currval('logging_seq')"/>
(2)
<ColumnMapping name="uuid"
pattern="%uuid{TIME}"
columnType="java.util.UUID"/>
<ColumnMapping name="message" pattern="%m"/>
(3)
<ColumnMapping name="timestamp" columnType="java.util.Date"/>
<ColumnMapping name="mdc"
columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
<ColumnMapping name="ndc"
columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
(4)
<ColumnMapping name="asJson">
<JsonTemplateLayout/>
</ColumnMapping>
(5)
<ColumnMapping name="resource" source="resourceId"/>
log4j2.json
"ColumnMapping": [
(1)
{
"name": "id",
"literal": "currval('logging_seq')"
},
(2)
{
"name": "uuid",
"pattern": "%uuid{TIME}",
"columnType": "java.util.UUID"
},
{
"name": "message",
"pattern": "%m"
},
(3)
{
"name": "timestamp",
"columnType": "java.util.Date"
},
{
"name": "mdc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
},
{
"name": "ndc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
},
(4)
{
"name": "asJson",
"JsonTemplateLayout": {}
},
(5)
{
"name": "resource",
"source": "resourceId"
}
]
log4j2.yaml
ColumnMapping:
(1)
- name: "id"
literal: "currval('logging_seq')"
(2)
- name: "uuid"
pattern: "%uuid{TIME}"
columnType: "java.util.UUID"
- name: "message"
pattern: "%m"
(3)
- name: "timestamp"
columnType: "java.util.Date"
- name: "mdc"
columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
- name: "ndc"
columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
(4)
- name: "asJson"
JsonTemplateLayout: {}
(5)
- name: "resource"
source: "resourceId"
log4j2.properties
(1)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].literal = currval('logging_seq')
(2)
appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = uuid
appender.0.col[1].pattern = %uuid{TIME}
appender.0.col[1].columnType = java.util.UUID
appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = message
appender.0.col[2].pattern = %m
(3)
appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = timestamp
appender.0.col[3].columnType = java.util.Date
appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = mdc
appender.0.col[4].columnType = org.apache.logging.log4j.spi.ThreadContextMap
appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = ndc
appender.0.col[5].columnType = org.apache.logging.log4j.spi.ThreadContextStack
(4)
appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = asJson
appender.0.col[6].layout.type = JsonTemplateLayout
(5)
appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = resource
appender.0.col[7].source = resourceId
1 | A database-specific expression is added literally to the INSERT statement. |
2 | A Pattern Layout with the specified pattern is used for these columns.
The uuid column is additionally converted into a java.util.UUID before being sent to the JDBC driver. |
3 | Three special column types are replaced with the log event timestamp, context map, and context stack. |
4 | A JSON Template Layout is used to format this column. |
5 | If the global layout of the appender returns a MapMessage , the value for key resourceId will be put into the resource column. |
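For a relational target such as the JDBC Appender, the mapping above conceptually translates into an INSERT statement of the following shape. This is only a sketch: the actual statement is generated by the appender, and the table name logs is assumed here.

```sql
-- Sketch of the generated statement: the "id" column is filled by the
-- literal expression verbatim, every other column by a bound parameter.
INSERT INTO logs (id, uuid, message, timestamp, mdc, ndc, asJson, resource)
VALUES (currval('logging_seq'), ?, ?, ?, ?, ?, ?, ?)
```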
Cassandra Appender
This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.
The Cassandra Appender writes its output to an Apache Cassandra database. The appender supports the following configuration properties:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the Appender.
Optional | | |
batched | boolean | false | Whether to use batch statements to write log messages to Cassandra.
batchType | BatchStatement.Type | LOGGED | The batch type to use when using batched writes.
bufferSize | int | 0 | The number of log messages to buffer or batch before writing. If 0, writing is not buffered.
clusterName | String | | The name of the Cassandra cluster to connect to.
ignoreExceptions | boolean | true | If true, exceptions encountered while appending are logged internally and then ignored; if false, they are propagated to the caller.
keyspace | String | | The name of the keyspace containing the table that log messages will be written to.
password | String | | The password to use (along with the username) to connect to Cassandra.
table | String | | The name of the table to write log messages to.
useClockForTimestampGenerator | boolean | false | Whether to use the configured Clock as a timestamp generator.
username | String | | The username to use to connect to Cassandra. By default, no username or password is used.
useTls | boolean | false | Whether to use TLS/SSL to connect to Cassandra.
The Cassandra Appender also supports the following nested elements:
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage.
ColumnMapping | one or more | A list of column mapping configurations; database-specific restrictions apply.
SocketAddress | one or more | A list of Cassandra node addresses to connect to. If absent, the default address is used. See Socket Addresses for the configuration syntax.
Additional runtime dependencies are required for using the Cassandra Appender:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-cassandra</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-cassandra'
Socket Addresses
The address of the Cassandra server is specified using the SocketAddress
element, which supports the following configuration options:
Attribute | Type | Default value | Description
---|---|---|---
host | String | | The host to connect to.
port | int | | The port to connect to.
Configuration examples
Here is an example Cassandra Appender configuration:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<Cassandra name="CASSANDRA"
clusterName="test-cluster"
keyspace="test"
table="logs"
bufferSize="10"
batched="true"> (1)
(2)
<SocketAddress host="server1" port="9042"/>
<SocketAddress host="server2" port="9042"/>
(3)
<ColumnMapping name="id"
pattern="%uuid{TIME}"
columnType="java.util.UUID"/>
<ColumnMapping name="timestamp" columnType="java.util.Date"/>
<ColumnMapping name="level" pattern="%level"/>
<ColumnMapping name="marker" pattern="%marker"/>
<ColumnMapping name="logger" pattern="%logger"/>
<ColumnMapping name="message" pattern="%message"/>
<ColumnMapping name="mdc"
columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
<ColumnMapping name="ndc"
columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
</Cassandra>
log4j2.json
"Cassandra": {
"name": "CASSANDRA",
"clusterName": "test-cluster",
"keyspace": "test",
"table": "logs",
(1)
"bufferSize": 10,
"batched": true,
(2)
"SocketAddress": [
{
"host": "server1",
"port": "9042"
},
{
"host": "server2",
"port": "9042"
}
],
(3)
"ColumnMapping": [
{
"name": "id",
"pattern": "%uuid{TIME}",
"columnType": "java.util.UUID"
},
{
"name": "timestamp",
"columnType": "java.util.Date"
},
{
"name": "level",
"pattern": "%level"
},
{
"name": "marker",
"pattern": "%marker"
},
{
"name": "logger",
"pattern": "%logger"
},
{
"name": "message",
"pattern": "%m"
},
{
"name": "mdc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
},
{
"name": "ndc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
}
]
}
log4j2.yaml
Cassandra:
name: "CASSANDRA"
clusterName: "test-cluster"
keyspace: "test"
table: "logs"
(1)
bufferSize: 10
batched: true
(2)
SocketAddress:
- host: "server1"
port: "9042"
- host: "server2"
port: "9042"
(3)
ColumnMapping:
- name: "id"
pattern: "%uuid{TIME}"
columnType: "java.util.UUID"
- name: "timestamp"
columnType: "java.util.Date"
- name: "level"
pattern: "%level"
- name: "marker"
pattern: "%marker"
- name: "logger"
pattern: "%logger"
- name: "message"
pattern: "%message"
- name: "mdc"
columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
- name: "ndc"
columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
log4j2.properties
appender.0.type = Cassandra
appender.0.name = CASSANDRA
appender.0.clusterName = test-cluster
appender.0.keyspace = test
appender.0.table = logs
(1)
appender.0.bufferSize = 10
appender.0.batched = true
(2)
appender.0.addr[0].type = SocketAddress
appender.0.addr[0].host = server1
appender.0.addr[0].port = 9042
appender.0.addr[1].type = SocketAddress
appender.0.addr[1].host = server2
appender.0.addr[1].port = 9042
(3)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].pattern = %uuid{TIME}
appender.0.col[0].columnType = java.util.UUID
appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = timestamp
appender.0.col[1].columnType = java.util.Date
appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = level
appender.0.col[2].pattern = %level
appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = marker
appender.0.col[3].pattern = %marker
appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = logger
appender.0.col[4].pattern = %logger
appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = message
appender.0.col[5].pattern = %message
appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = mdc
appender.0.col[6].columnType = org.apache.logging.log4j.spi.ThreadContextMap
appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = ndc
appender.0.col[7].columnType = org.apache.logging.log4j.spi.ThreadContextStack
1 | Enables buffering. Messages are sent in batches of 10. |
2 | Multiple server addresses can be used. |
3 | An example of column mapping. See Column mapping for more details. |
The example above uses the following table schema:
CREATE TABLE logs
(
id timeuuid PRIMARY KEY,
level text,
marker text,
logger text,
message text,
timestamp timestamp,
mdc map<text,text>,
ndc list<text>
);
JDBC Appender
The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to get JDBC connections from different connection sources.
If batch statements are supported by the configured JDBC driver and
bufferSize
is configured to be a positive number, then log events will be batched.
The appender gets a new connection for each batch of log events. The connection source must be backed by a connection pool, otherwise the performance will suffer greatly.
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the Appender.
tableName | String | | The name of the table to use.
Optional | | |
bufferSize | int | 0 | The number of log messages to batch before writing. If 0, batching is disabled.
ignoreExceptions | boolean | true | If true, exceptions encountered while appending are logged internally and then ignored; if false, they are propagated to the caller.
immediateFail | boolean | false | When set to true, log events will not wait for a reconnection attempt, but fail immediately if the JDBC resources are not available.
reconnectIntervalMillis | long | 5000 | If set to a value greater than 0, after an error the appender will attempt to reconnect to the database after waiting the specified number of milliseconds. If the reconnection fails, an exception will be thrown and can be caught by the application if ignoreExceptions is set to false.
The JDBC Appender also supports the following nested elements:
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage.
ColumnMapping | zero or more | A list of column mapping configurations; database-specific restrictions apply. Required, unless deprecated ColumnConfig elements are used instead.
ColumnConfig | zero or more | Deprecated: an older mechanism to define column mappings.
ConnectionSource | one | It specifies how to retrieve JDBC Connection objects. See Connection Sources for more details.
Layout | zero or one | An optional layout of type Layout<? extends Message>. If supplied, MapMessage instances are handled specially. See Map Message handling for more details.
Connection Sources
When configuring the JDBC Appender, you must specify an implementation of
ConnectionSource
that the appender will use to get
Connection
objects.
The following connection sources are available out-of-the-box:
DataSource
This connection source uses JNDI to locate a JDBC DataSource.
Recent Log4j versions disable JNDI support by default; it must be explicitly enabled before this connection source can be used.
Attribute | Type | Default value | Description
---|---|---|---
jndiName | String | | It specifies the JNDI name of a JDBC DataSource. Only the java: protocol is supported. Required
ConnectionFactory
This connection source can use any factory method. The method must:
-
Be public and static.
-
Have an empty parameter list.
-
Return either Connection or DataSource.
Attribute | Type | Default value | Description
---|---|---|---
class | Class | | The fully qualified name of the class containing the factory method. Required
method | String | | The name of the factory method. Required
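For illustration, a factory method meeting these constraints might look like the following sketch. The LoggingDataSource class is hypothetical: it assumes the application creates the pooled DataSource itself at startup and stores it in the holder.

```java
import javax.sql.DataSource;

// Hypothetical holder: the application stores its pooled DataSource here at
// startup, and Log4j retrieves it through the public static factory method.
public final class LoggingDataSource {

    private static volatile DataSource dataSource;

    // Called once by the application before database logging starts.
    public static void init(DataSource ds) {
        dataSource = ds;
    }

    // Satisfies the factory method requirements:
    // public, static, no parameters, returns a DataSource.
    public static DataSource getDatabaseConnectionSource() {
        return dataSource;
    }
}
```

It would then be referenced from the configuration as, for example, <ConnectionFactory class="com.example.LoggingDataSource" method="getDatabaseConnectionSource"/>.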
DriverManager
This connection source uses DriverManager to directly create connections using a JDBC Driver.
This connection source is useful during development, but we don’t recommend it in production. Unless the JDBC driver provides connection pooling, the performance of the appender will suffer. See PoolingDriver below for a pooled alternative.
Attribute | Type | Default value | Description
---|---|---|---
connectionString | String | | The driver-specific JDBC connection string. Required
driverClassName | String | autodetected | The fully qualified class name of the JDBC driver to use. JDBC 4.0 drivers can be automatically detected by DriverManager.
userName | String | | The username to use to connect to the database.
password | String | | The password to use to connect to the database.
This connection source also supports the following nested element:
Type | Multiplicity | Description
---|---|---
Property | zero or more | A list of key/value pairs to pass to the JDBC driver when creating connections. If supplied, the userName and password attributes must not be used.
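Putting the attributes together, a DriverManager connection source inside a JDBC Appender might be configured as follows. This is a sketch: the PostgreSQL connection string, the environment variables and the single column mapping are placeholders.

```xml
<JDBC name="JDBC" tableName="logs">
  <DriverManager connectionString="jdbc:postgresql://localhost:5432/logging"
                 userName="${env:DB_USER}"
                 password="${env:DB_PASS}"/>
  <ColumnMapping name="message" pattern="%m"/>
</JDBC>
```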
PoolingDriver
The PoolingDriver
uses
Apache Commons DBCP 2
to configure a JDBC connection pool.
Attribute | Type | Default value | Description
---|---|---|---
connectionString | String | | The driver-specific JDBC connection string. Required
driverClassName | String | autodetected | The fully qualified class name of the JDBC driver to use. JDBC 4.0 drivers can be automatically detected by DriverManager.
userName | String | | The username to use to connect to the database.
password | String | | The password to use to connect to the database.
poolName | String | | The name of the connection pool.
This connection source also supports the following nested elements:
Type | Multiplicity | Description
---|---|---
Property | zero or more | A list of key/value pairs to pass to the JDBC driver when creating connections. If supplied, the userName and password attributes must not be used.
PoolableConnectionFactory | zero or one | Allows finely tuning the configuration of the DBCP 2 connection pool. The available parameters are the same as those provided by DBCP 2. See DBCP 2 configuration for more details.
Additional runtime dependencies are required for using PoolingDriver
:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-jdbc-dbcp2</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-dbcp2'
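A pooled setup looks almost identical to the DriverManager one; only the connection source element changes. This is a sketch with a placeholder connection string and column mapping.

```xml
<JDBC name="JDBC" tableName="logs">
  <PoolingDriver connectionString="jdbc:postgresql://localhost:5432/logging"
                 userName="${env:DB_USER}"
                 password="${env:DB_PASS}"/>
  <ColumnMapping name="message" pattern="%m"/>
</JDBC>
```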
Map Message handling
If the optional nested element of type Layout<? extends Message> is provided, log events containing messages of type MapMessage are treated specially: for each column mapping (except those containing literals), the source attribute is used as the key into the MapMessage, and the corresponding value is stored in the column name.
Configuration examples
Here is an example JDBC Appender configuration:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<JDBC name="JDBC"
tableName="logs"
bufferSize="10"> (1)
(2)
<DataSource jndiName="java:comp/env/jdbc/logging"/>
(3)
<ColumnMapping name="id"
pattern="%uuid{TIME}"
columnType="java.util.UUID"/>
<ColumnMapping name="timestamp" columnType="java.util.Date"/>
<ColumnMapping name="level" pattern="%level"/>
<ColumnMapping name="marker" pattern="%marker"/>
<ColumnMapping name="logger" pattern="%logger"/>
<ColumnMapping name="message" pattern="%message"/>
<ColumnMapping name="mdc"
columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
<ColumnMapping name="ndc"
columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
</JDBC>
log4j2.json
"JDBC": {
"name": "JDBC",
"tableName": "logs",
(1)
"bufferSize": 10,
(2)
"DataSource": {
"jndiName": "java:comp/env/jdbc/logging"
},
(3)
"ColumnMapping": [
{
"name": "id",
"pattern": "%uuid{TIME}",
"columnType": "java.util.UUID"
},
{
"name": "timestamp",
"columnType": "java.util.Date"
},
{
"name": "level",
"pattern": "%level"
},
{
"name": "marker",
"pattern": "%marker"
},
{
"name": "logger",
"pattern": "%logger"
},
{
"name": "message",
"pattern": "%m"
},
{
"name": "mdc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
},
{
"name": "ndc",
"columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
}
]
}
log4j2.yaml
JDBC:
name: "JDBC"
tableName: "logs"
(1)
bufferSize: 10
(2)
DataSource:
jndiName: "java:comp/env/jdbc/logging"
(3)
ColumnMapping:
- name: "id"
pattern: "%uuid{TIME}"
columnType: "java.util.UUID"
- name: "timestamp"
columnType: "java.util.Date"
- name: "level"
pattern: "%level"
- name: "marker"
pattern: "%marker"
- name: "logger"
pattern: "%logger"
- name: "message"
pattern: "%message"
- name: "mdc"
columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
- name: "ndc"
columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
log4j2.properties
appender.0.type = JDBC
appender.0.name = JDBC
appender.0.tableName = logs
(1)
appender.0.bufferSize = 10
(2)
appender.0.ds.type = DataSource
appender.0.ds.jndiName = java:comp/env/jdbc/logging
(3)
appender.0.col[0].type = ColumnMapping
appender.0.col[0].name = id
appender.0.col[0].pattern = %uuid{TIME}
appender.0.col[0].columnType = java.util.UUID
appender.0.col[1].type = ColumnMapping
appender.0.col[1].name = timestamp
appender.0.col[1].columnType = java.util.Date
appender.0.col[2].type = ColumnMapping
appender.0.col[2].name = level
appender.0.col[2].pattern = %level
appender.0.col[3].type = ColumnMapping
appender.0.col[3].name = marker
appender.0.col[3].pattern = %marker
appender.0.col[4].type = ColumnMapping
appender.0.col[4].name = logger
appender.0.col[4].pattern = %logger
appender.0.col[5].type = ColumnMapping
appender.0.col[5].name = message
appender.0.col[5].pattern = %message
appender.0.col[6].type = ColumnMapping
appender.0.col[6].name = mdc
appender.0.col[6].columnType = org.apache.logging.log4j.spi.ThreadContextMap
appender.0.col[7].type = ColumnMapping
appender.0.col[7].name = ndc
appender.0.col[7].columnType = org.apache.logging.log4j.spi.ThreadContextStack
1 | Enables buffering. Messages are sent in batches of 10. |
2 | A JNDI data source is used. |
3 | An example of column mapping. See Column mapping for more details. |
The example above uses the following table schema:
CREATE TABLE logs
(
id BIGINT PRIMARY KEY,
level VARCHAR,
marker VARCHAR,
logger VARCHAR,
message VARCHAR,
timestamp TIMESTAMP,
mdc VARCHAR,
ndc VARCHAR
);
JPA Appender
This appender is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.
The JPA Appender writes log events to a relational database table using the Jakarta Persistence API 2.2. To use the appender, you need to:
-
configure your JPA persistence unit. See Persistence configuration below.
-
configure the JPA Appender. See Appender configuration below.
Due to breaking changes in the underlying API, the JPA Appender cannot be used with Jakarta Persistence API 3.0 or later.
Persistence configuration
To store log events using JPA, you need to implement a JPA Entity that extends the
AbstractLogEventWrapperEntity
class.
To help you with the implementation, Log4j provides a
BasicLogEventEntity
class that only lacks an identity field.
A simple AbstractLogEventWrapperEntity
implementation might look like:
LogEventEntity.java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.appender.db.jpa.BasicLogEventEntity;

@Entity
@Table(name = "log")
public class LogEventEntity extends BasicLogEventEntity {
private static final long serialVersionUID = 1L;
private long id;
(1)
public LogEventEntity() {}
(2)
public LogEventEntity(final LogEvent wrapped) {
super(wrapped);
}
(3)
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "id")
public long getId() {
return id;
}
}
1 | JPA requires a public no-argument constructor. |
2 | The wrapping constructor passes the LogEvent to BasicLogEventEntity. |
3 | The entity provides the identifier field that BasicLogEventEntity lacks. |
For performance reasons, we recommend creating a separate persistence unit for logging. This allows you to optimize the unit for logging purposes. The definition of the persistence unit should look like the example below:
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd"
version="2.1">
<persistence-unit name="logging" transaction-type="RESOURCE_LOCAL">
(1)
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
(2)
<non-jta-data-source>jdbc/logging</non-jta-data-source>
(3)
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.ContextMapAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.ContextStackAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.InstantAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.LevelAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.MarkerAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.MessageAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.StackTraceElementAttributeConverter
</class>
<class>
org.apache.logging.log4j.core.appender.db.jpa.converter.ThrowableAttributeConverter
</class>
(4)
<class>
com.example.logging.LogEventEntity
</class>
(5)
<shared-cache-mode>NONE</shared-cache-mode>
</persistence-unit>
</persistence>
1 | Specify your JPA provider. |
2 | A non-JTA data source should be used for performance. |
3 | If your log event entity extends BasicLogEventEntity , you need to declare these converters. |
4 | Declare your log event entity. |
5 | Cache sharing should be set to NONE . |
Appender configuration
The JPA appender supports these configuration options:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the Appender.
tableName | String | | The name of the table to use.
persistenceUnitName | String | | The name of the persistence unit to use.
entityClassName | String | | The fully qualified name of the entity class to use. The type must extend AbstractLogEventWrapperEntity.
Optional | | |
bufferSize | int | 0 | The number of log messages to batch before writing. If 0, batching is disabled.
ignoreExceptions | boolean | true | If true, exceptions encountered while appending are logged internally and then ignored; if false, they are propagated to the caller.
The JPA Appender also supports the following nested element:
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage.
Additional runtime dependencies are required for using the JPA Appender:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-jpa</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-jpa'
Configuration examples
Using the persistence unit from section Persistence configuration, the JPA appender can be easily configured as:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<JPA name="JPA"
persistenceUnitName="logging"
entityClassName="com.example.logging.LogEventEntity"/>
log4j2.json
"JPA": {
"name": "JPA",
"persistenceUnitName": "logging",
"entityClassName": "com.example.logging.LogEventEntity"
}
log4j2.yaml
JPA:
name: "JPA"
persistenceUnitName: "logging"
entityClassName: "com.example.logging.LogEventEntity"
log4j2.properties
appender.0.type = JPA
appender.0.name = JPA
appender.0.persistenceUnitName = logging
appender.0.entityClassName = com.example.logging.LogEventEntity
NoSQL Appender
The NoSQL Appender writes log events to a document-oriented NoSQL database using an internal lightweight provider interface. It supports the following configuration options:
Attribute | Type | Default value | Description
---|---|---|---
Required | | |
name | String | | The name of the Appender.
Optional | | |
bufferSize | int | 0 | The number of log messages to batch before writing to the database. If 0, batching is disabled.
ignoreExceptions | boolean | true | If true, exceptions encountered while appending are logged internally and then ignored; if false, they are propagated to the caller.
The NoSQL Appender also supports the following nested elements:
Type | Multiplicity | Description
---|---|---
Filter | zero or one | Allows filtering log events just before they are formatted and sent. See also appender filtering stage.
KeyValuePair | zero or more | Adds a simple key/value field to the NoSQL object.
NoSqlProvider | one | The provider that performs the database-specific work. See Providers for more details.
Layout | zero or one | An optional layout of type Layout<? extends Message> used to format log events. See Formatting for more details.
Formatting
This appender transforms log events into NoSQL documents in two ways:
-
If the optional Layout configuration element is provided, the MapMessage returned by the layout will be converted into its NoSQL document.
-
Otherwise, a default conversion will be applied. You can enhance the format with additional top-level key/value pairs using nested KeyValuePair configuration elements.
An example of the default log event formatting:
{
  "level": "WARN",
  "loggerName": "com.example.application.MyClass",
  "message": "Something happened that you might want to know about.",
  "source": {
    "className": "com.example.application.MyClass",
    "methodName": "exampleMethod",
    "fileName": "MyClass.java",
    "lineNumber": 81
  },
  "marker": {
    "name": "SomeMarker",
    "parent": {
      "name": "SomeParentMarker"
    }
  },
  "threadName": "Thread-1",
  "millis": 1368844166761,
  "date": "2013-05-18T02:29:26.761Z",
  "thrown": {
    "type": "java.sql.SQLException",
    "message": "Could not insert record. Connection lost.",
    "stackTrace": [
      { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1049 },
      { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
      { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
      { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
    ],
    "cause": {
      "type": "java.io.IOException",
      "message": "Connection lost.",
      "stackTrace": [
        { "className": "java.nio.channels.SocketChannel", "methodName": "write", "fileName": null, "lineNumber": -1 },
        { "className": "org.example.sql.driver.PreparedStatement$1", "methodName": "responder", "fileName": "PreparedStatement.java", "lineNumber": 1032 },
        { "className": "org.example.sql.driver.PreparedStatement", "methodName": "executeUpdate", "fileName": "PreparedStatement.java", "lineNumber": 738 },
        { "className": "com.example.application.MyClass", "methodName": "exampleMethod", "fileName": "MyClass.java", "lineNumber": 81 },
        { "className": "com.example.application.MainClass", "methodName": "main", "fileName": "MainClass.java", "lineNumber": 52 }
      ]
    }
  },
  "contextMap": {
    "ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b",
    "username": "JohnDoe"
  },
  "contextStack": [
    "topItem",
    "anotherItem",
    "bottomItem"
  ]
}
Providers
The NoSQL Appender only handles the conversion of log events into NoSQL documents, and it delegates database-specific tasks to a NoSQL provider.
NoSQL providers are Log4j plugins that implement the
NoSqlProvider
interface.
Log4j Core currently provides the following providers:
-
Multiple providers for different versions of the MongoDB database. See MongoDB Providers below for more details.
-
A provider for the Apache CouchDB database. See Apache CouchDB provider below for more details.
MongoDB Providers
Starting with version 2.11.0, Log4j supplies providers for the MongoDB NoSQL database engine, based on the MongoDB synchronous Java driver. The choice of the provider to use depends on:
-
the major version of the MongoDB Java driver your application uses: Log4j supports all major versions starting from version 2.
-
the type of driver API used: either the Legacy API or the Modern API. See MongoDB documentation for the difference between APIs.
The list of dependencies of your application provides a hint as to which driver API your application is using.
The version of the MongoDB Java driver is not the same as the version of the MongoDB server. See the MongoDB compatibility matrix for more information.
In order to use a Log4j MongoDB appender you need to add the following dependencies to your application:
Driver version | Driver API | Log4j artifact | Notes
---|---|---|---
2 | Legacy | log4j-mongodb2 | Reached end-of-support.
3 | Legacy | log4j-mongodb3 | Reached end-of-support.
4 | Modern | log4j-mongodb4 | Deprecated.
current | Modern | log4j-mongodb | |
If you are not sure which implementation to choose, start with the log4j-mongodb artifact, which tracks the current version of the driver.
MongoDb Provider (current)
The MongoDb
provider is based on the
current version of the MongoDB Java driver.
It supports the following configuration options:
Attribute | Type | Default value | Description
---|---|---|---
connection | String | | It specifies the connection URI used to reach the server. See Connection URI documentation for its format. Required
capped | boolean | false | If true, a capped collection is used.
collectionSize | long | | It specifies the size in bytes of the capped collection.
Additional runtime dependencies are required to use the MongoDb
provider:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-mongodb</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-mongodb'
MongoDb4 Provider (deprecated)
The log4j-mongodb4 module is deprecated in favor of the current MongoDb provider.
It supports the following configuration attributes:
Attribute | Type | Default value | Description
---|---|---|---
connection | String | | It specifies the connection URI used to reach the server. See Connection URI documentation for its format. Required
capped | boolean | false | If true, a capped collection is used.
collectionSize | long | | It specifies the size in bytes of the capped collection.
Additional runtime dependencies are required to use the MongoDb4
provider:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-mongodb4</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-mongodb4'
Apache CouchDB provider
This provider is planned to be removed in the next major release! If you are using this library, please get in touch with the Log4j maintainers using the official support channels.
The CouchDb
Provider allows using the NoSQL Appender with an
Apache CouchDB database.
The provider can be configured by:
-
either providing some standard configuration attributes,
-
or providing a factory method.
Attribute | Type | Default value | Description
---|---|---|---
protocol | enumeration | | It specifies the protocol to use to connect to the server. Can be one of http and https.
server | String | | The host name of the CouchDB server.
port | int | | It specifies the TCP port to use.
databaseName | String | | The name of the database to connect to.
username | String | | The username for authentication.
password | String | | The password for authentication.
factoryClassName | String | | The fully qualified name of a class containing a factory method that returns either a client instance or a client configuration object. The class must be public.
factoryMethodName | String | | The name of the factory method. The method must be public and static.
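When the factory-method variant is used, the provider configuration reduces to the two factory attributes. This is a sketch: the class and method names are hypothetical.

```xml
<NoSql name="COUCH">
  <CouchDB factoryClassName="com.example.logging.CouchDbClients"
           factoryMethodName="getLoggingClient"/>
</NoSql>
```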
Additional runtime dependencies are required to use the CouchDb
provider:
-
Maven
-
Gradle
We assume you use log4j-bom
for dependency management.
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-couchdb</artifactId>
<scope>runtime</scope>
</dependency>
We assume you use log4j-bom
for dependency management.
runtimeOnly 'org.apache.logging.log4j:log4j-couchdb'
Configuration examples
To connect the NoSQL Appender to a MongoDB database, you only need to provide a connection string:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<NoSql name="MONGO">
<MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
</NoSql>
log4j2.json
"NoSql": {
"name": "MONGO",
"MongoDb": {
"connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
}
}
log4j2.yaml
NoSql:
name: "MONGO"
MongoDb:
connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
log4j2.properties
appender.1.type = NoSql
appender.1.name = MONGO
appender.1.provider.type = MongoDB
appender.1.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs
A similar configuration for an Apache CouchDB database looks like:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<NoSql name="COUCH">
<CouchDB protocol="https"
username="${env:DB_USER}"
password="${env:DB_PASS}"
server="localhost"
port="5984"
databaseName="logging"/>
</NoSql>
log4j2.json
"NoSql": {
  "name": "COUCH",
  "CouchDB": {
    "protocol": "https",
    "username": "${env:DB_USER}",
    "password": "${env:DB_PASS}",
    "server": "localhost",
    "port": 5984,
    "databaseName": "logging"
  }
}
log4j2.yaml
NoSql:
name: "COUCH"
CouchDB:
protocol: "https"
username: "${env:DB_USER}"
password: "${env:DB_PASS}"
server: "localhost"
port: 5984
databaseName: "logging"
log4j2.properties
appender.0.type = NoSql
appender.0.name = COUCH
appender.0.provider.type = CouchDB
appender.0.provider.protocol = https
appender.0.provider.username = ${env:DB_USER}
appender.0.provider.password = ${env:DB_PASS}
appender.0.provider.server = localhost
appender.0.provider.port = 5984
appender.0.provider.databaseName = logging
You can add additional fields to the NoSQL document using KeyValuePair
elements, for example:
-
XML
-
JSON
-
YAML
-
Properties
log4j2.xml
<NoSql name="MONGO">
<MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
<KeyValuePair key="startTime" value="${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (1)
<KeyValuePair key="currentTime" value="$${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (2)
</NoSql>
log4j2.json
"NoSql": {
"name": "MONGO",
"MongoDb": {
"connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
},
"KeyValuePair": [
{
"key": "startTime",
"value": "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
},
{
"key": "currentTime",
"value": "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
}
]
}
log4j2.yaml
NoSql:
name: "MONGO"
MongoDb:
connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
KeyValuePair:
- key: "startTime"
value: "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
- key: "currentTime"
value: "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
log4j2.properties
appender.0.type = NoSql
appender.0.name = MONGO
appender.0.provider.type = MongoDB
appender.0.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs
appender.0.kv[0].type = KeyValuePair
appender.0.kv[0].key = startTime
(1)
appender.0.kv[0].value = ${date:yyyy-MM-dd hh:mm:ss.SSS}
appender.0.kv[1].type = KeyValuePair
appender.0.kv[1].key = currentTime
(2)
appender.0.kv[1].value = $${date:yyyy-MM-dd hh:mm:ss.SSS}
1 | This lookup is evaluated at configuration time and gives the time when Log4j was most recently reconfigured. |
2 | This lookup is evaluated at runtime and gives the current date. See runtime lookup evaluation for more details. |