Database appenders

Log4j Core provides multiple appenders to send log events directly to your database.

Common concerns

Column mapping

Since relational databases and some NoSQL databases split data into columns, Log4j Core provides a reusable ColumnMapping configuration element that lets you specify the content of each column.

The ColumnMapping element supports the following configuration properties:

Table 1. ColumnMapping configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the column.

Optional

columnType

Class<?>

String

It specifies the Java type that will be stored in the column.

If set to:

org.apache.logging.log4j.util.ReadOnlyStringMap
org.apache.logging.log4j.spi.ThreadContextMap

The column will be filled with the contents of the log event’s context map.

org.apache.logging.log4j.spi.ThreadContextStack

The column will be filled with the contents of the log event’s context stack.

java.util.Date

The column will be filled with the log event’s timestamp.

For any other value:

  1. The log event will be formatted using the nested Layout.

  2. The resulting String will be converted to the specified type using a TypeConverter. See the plugin reference for a list of available type converters.

type

Class<?>

String

Deprecated: since 2.21.0 use columnType instead.

literal

String

If set, the value will be added directly to the INSERT statement of the database-specific query language.

This value is added as-is, without any validation. Never use user-provided data to determine its value.

parameter

String

It specifies the database-specific parameter marker to use. If absent, the default parameter marker for the database language will be used.

This value is added as-is, without any validation. Never use user-provided data to determine its value.

pattern

String

This is a shortcut configuration attribute to set the nested Layout element to a PatternLayout instance with the specified pattern property.

source

String

name

It specifies which key of a MapMessage will be stored in the column. This attribute is only used if the appender has a nested Layout element and the logged message is a MapMessage. See Map Message handling for more details.

Table 2. ColumnMapping nested elements
Type Multiplicity Description

Layout

zero or one

Formats the value to store in the column.

See Layouts for more information.

An example column mapping might look like this:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
(1)
<ColumnMapping name="id" literal="currval('logging_seq')"/>
(2)
<ColumnMapping name="uuid"
               pattern="%uuid{TIME}"
               columnType="java.util.UUID"/>
<ColumnMapping name="message" pattern="%m"/>
(3)
<ColumnMapping name="timestamp" columnType="java.util.Date"/>
<ColumnMapping name="mdc"
               columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
<ColumnMapping name="ndc"
               columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
(4)
<ColumnMapping name="asJson">
  <JsonTemplateLayout/>
</ColumnMapping>
(5)
<ColumnMapping name="resource" source="resourceId"/>
Snippet from an example log4j2.json
"ColumnMapping": [
  (1)
  {
    "name": "id",
    "literal": "currval('logging_seq')"
  },
  (2)
  {
    "name": "uuid",
    "pattern": "%uuid{TIME}",
    "columnType": "java.util.UUID"
  },
  {
    "name": "message",
    "pattern": "%m"
  },
  (3)
  {
    "name": "timestamp",
    "columnType": "java.util.Date"
  },
  {
    "name": "mdc",
    "columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
  },
  {
    "name": "ndc",
    "columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
  },
  (4)
  {
    "name": "asJson",
    "JsonTemplateLayout": {}
  },
  (5)
  {
    "name": "resource",
    "source": "resourceId"
  }
]
Snippet from an example log4j2.yaml
ColumnMapping:
  (1)
  - name: "id"
    literal: "currval('logging_seq')"
  (2)
  - name: "uuid"
    pattern: "%uuid{TIME}"
    columnType: "java.util.UUID"
  - name: "message"
    pattern: "%m"
  (3)
  - name: "timestamp"
    columnType: "java.util.Date"
  - name: "mdc"
    columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
  - name: "ndc"
    columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
  (4)
  - name: "asJson"
    JsonTemplateLayout: {}
  (5)
  - name: "resource"
    source: "resourceId"
Snippet from an example log4j2.properties
(1)
Appenders.JDBC.ColumnMapping[1].name = id
Appenders.JDBC.ColumnMapping[1].literal = currval('logging_seq')

(2)
Appenders.JDBC.ColumnMapping[2].name = uuid
Appenders.JDBC.ColumnMapping[2].pattern = %uuid{TIME}
Appenders.JDBC.ColumnMapping[2].columnType = java.util.UUID

Appenders.JDBC.ColumnMapping[3].name = message
Appenders.JDBC.ColumnMapping[3].pattern = %m

(3)
Appenders.JDBC.ColumnMapping[4].name = timestamp
Appenders.JDBC.ColumnMapping[4].columnType = java.util.Date

Appenders.JDBC.ColumnMapping[5].name = mdc
Appenders.JDBC.ColumnMapping[5].columnType = org.apache.logging.log4j.spi.ThreadContextMap

Appenders.JDBC.ColumnMapping[6].name = ndc
Appenders.JDBC.ColumnMapping[6].columnType = org.apache.logging.log4j.spi.ThreadContextStack

(4)
Appenders.JDBC.ColumnMapping[7].name = asJson
Appenders.JDBC.ColumnMapping[7].layout.type = JsonTemplateLayout

(5)
Appenders.JDBC.ColumnMapping[8].name = resource
Appenders.JDBC.ColumnMapping[8].source = resourceId
1 A database-specific expression is added literally to the INSERT statement.
2 A Pattern Layout with the specified pattern is used for these columns. The uuid column is additionally converted into a java.util.UUID before being sent to the JDBC driver.
3 Three special column types are replaced with the log event timestamp, context map, and context stack.
4 A JSON Template Layout is used to format this column.
5 If the global layout of the appender returns a MapMessage, the value for key resourceId will be put into the resource column.
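Putting the mappings above together, the appender generates an INSERT statement along these lines. This is an illustrative sketch only; the exact SQL depends on the database and the configured parameter markers:

```sql
-- Illustrative only: the literal column is inlined verbatim, while all
-- other columns become parameter markers bound through the JDBC driver.
INSERT INTO logs (id, uuid, message, "timestamp", mdc, ndc, asJson, resource)
VALUES (currval('logging_seq'), ?, ?, ?, ?, ?, ?, ?);
```

This is why literal and parameter values must never come from user-provided data: they are spliced into the statement text itself, outside the driver's parameter binding.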

JDBC Appender

The JDBCAppender writes log events to a relational database table using standard JDBC. It can be configured to get JDBC connections from different connection sources.

If the configured JDBC driver supports batch statements and bufferSize is set to a positive number, log events will be batched.

The appender gets a new connection for each batch of log events. The connection source must be backed by a connection pool, otherwise the performance will suffer greatly.
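The effect of bufferSize can be sketched as follows. This is an illustrative model of the buffering behavior, not the actual JdbcDatabaseManager code:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch: events accumulate until bufferSize is reached,
// then the whole batch is written in one go over a single connection.
class BatchingSketch {
    private final int bufferSize;
    private final List<String> buffer = new ArrayList<>();
    final List<List<String>> batchesWritten = new ArrayList<>();

    BatchingSketch(int bufferSize) {
        this.bufferSize = bufferSize;
    }

    void append(String event) {
        if (bufferSize <= 0) {
            // Batching disabled: each event is written individually.
            batchesWritten.add(List.of(event));
            return;
        }
        buffer.add(event);
        if (buffer.size() >= bufferSize) {
            flush();
        }
    }

    void flush() {
        if (!buffer.isEmpty()) {
            // In the real appender, this is where a connection is obtained
            // and a batch INSERT is executed.
            batchesWritten.add(List.copyOf(buffer));
            buffer.clear();
        }
    }
}
```

Because each flush acquires a fresh connection, a pooled connection source keeps the per-batch cost low.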

Table 3. JDBC Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

tableName

String

The name of the table to use.

Optional

bufferSize

int

0

The number of log messages to batch before writing. If 0, batching is disabled.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

immediateFail

boolean

false

When set to true, log events will not wait for a reconnection attempt and will fail immediately if the JDBC resources are unavailable.

reconnectIntervalMillis

long

5000

If set to a value greater than 0, after an error, the JdbcDatabaseManager will attempt to reconnect to the database after waiting the specified number of milliseconds.

If reconnection fails, an exception will be thrown, which can be caught by the application if ignoreExceptions is set to false.

Table 4. JDBC Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

ColumnMapping

zero or more

A list of column mapping configurations; database-specific restrictions may apply.

Required, unless ColumnConfig is used.

ColumnConfig

zero or more

Deprecated: an older mechanism to define column mappings.

📖 Plugin reference for ColumnConfig

ConnectionSource

one

It specifies how to retrieve JDBC Connection objects.

See Connection Sources for more details.

Layout

zero or one

An optional Layout<? extends Message> implementation that formats a log event as a log Message.

If supplied, MapMessages will be treated in a special way.

See Map Message handling for more details.

Additional runtime dependencies are required to use the JDBC Appender:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jdbc</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-jdbc'

Connection Sources

When configuring the JDBC Appender, you must specify an implementation of ConnectionSource that the appender will use to get Connection objects.

The following connection sources are available out-of-the-box:

DataSource

This connection source uses JNDI to locate a JDBC DataSource.

As of Log4j 2.17.0 you need to enable the DataSource connection source explicitly by setting the log4j.jndi.enableJdbc configuration property to true.
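For example, assuming the property is read from a log4j2.component.properties file on the class path (an equivalent JVM system property also works), the opt-in looks like this:

```properties
# Explicitly opt in to the JNDI-based DataSource connection source,
# which is disabled by default for security reasons.
log4j.jndi.enableJdbc = true
```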

Table 5. DataSource configuration attributes
Attribute Type Default value Description

jndiName

Name

It specifies the JNDI name of a JDBC DataSource.

Only the java: JNDI protocol is supported.

Required

Additional runtime dependencies are required to use DataSource:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jdbc-jndi</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-jndi'

ConnectionFactory

This connection source can use any factory method to obtain connections. The method must be public and static, take no parameters, and return either a java.sql.Connection or a javax.sql.DataSource.

Table 6. ConnectionFactory configuration attributes
Attribute Type Default value Description

class

Class<?>

The fully qualified class name of the class containing the factory method.

Required

method

String

The name of the factory method.

Required
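A minimal factory class might look like this. The class name, method name, connection string, and credentials are all hypothetical placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical factory class referenced from a ConnectionFactory
// connection source: the method is public, static, parameterless,
// and returns a java.sql.Connection.
class LoggingConnectionFactory {

    public static Connection getDatabaseConnection() throws SQLException {
        // Placeholder connection string and credentials.
        // In production, obtain the connection from a pool instead.
        return DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/logging", "logger", "secret");
    }
}
```

The corresponding configuration element would then name this class and method, e.g. `<ConnectionFactory class="LoggingConnectionFactory" method="getDatabaseConnection"/>`.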

DriverManager

This connection source uses DriverManager to directly create connections using a JDBC Driver.

This connection source is useful during development, but we don’t recommend it in production. Unless the JDBC driver provides connection pooling, the performance of the appender will suffer.

See PoolingDriver for a variant of this connection source that uses a connection pool.

Table 7. DriverManager configuration attributes
Attribute Type Default value Description

connectionString

String

The driver-specific JDBC connection string.

Required

driverClassName

String

autodetected

The fully qualified class name of the JDBC driver to use.

JDBC 4.0 drivers can be automatically detected by DriverManager. See DriverManager for more details.

userName

String

The username to use to connect to the database.

password

String

The password to use to connect to the database.

Table 8. DriverManager nested elements
Type Multiplicity Description

Property

zero or more

A list of key/value pairs to pass to DriverManager.

If supplied, the userName and password attributes will be ignored.

PoolingDriver

The PoolingDriver uses Apache Commons DBCP 2 to configure a JDBC connection pool.

Table 9. PoolingDriver configuration attributes
Attribute Type Default value Description

connectionString

String

The driver-specific JDBC connection string.

Required

driverClassName

String

autodetected

The fully qualified class name of the JDBC driver to use.

JDBC 4.0 drivers can be automatically detected by DriverManager. See DriverManager for more details.

userName

String

The username to use to connect to the database.

password

String

The password to use to connect to the database.

poolName

String

example

The name of the connection pool. Following DBCP 2 conventions, connections from the pool can also be obtained through DriverManager using the jdbc:apache:commons:dbcp:<poolName> connection string.

Table 10. PoolingDriver nested elements
Type Multiplicity Description

Property

zero or more

A list of key/value pairs to pass to DriverManager.

If supplied, the userName and password attributes will be ignored.

PoolableConnectionFactory

zero or one

Allows finely tuning the configuration of the DBCP 2 connection pool. The available parameters are the same as those provided by DBCP 2. See DBCP 2 configuration for more details.

📖 Plugin reference for PoolableConnectionFactory
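For example, a PoolingDriver connection source with a tuned pool might be configured like this. The connection string, credentials, and tuning values are placeholders; the PoolableConnectionFactory attribute names mirror the corresponding DBCP 2 setters, so check the plugin reference for the exact set available:

```xml
<JDBC name="JDBC" tableName="logs">
  <PoolingDriver connectionString="jdbc:postgresql://localhost:5432/logging"
                 userName="logger"
                 password="secret"
                 poolName="loggingPool">
    <PoolableConnectionFactory validationQuery="SELECT 1"
                               maxConnLifetimeMillis="30000"/>
  </PoolingDriver>
  <ColumnMapping name="message" pattern="%m"/>
</JDBC>
```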

Additional runtime dependencies are required for using PoolingDriver:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-jdbc-dbcp2</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-jdbc-dbcp2'

Map Message handling

If the optional nested element of type Layout<? extends Message> is provided, log events containing messages of type MapMessage will be treated specially: for each column mapping (except those containing literals), the source attribute will be used as the key of the MapMessage value to store in the column name.
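Conceptually, the per-column value resolution for a MapMessage can be sketched with a plain java.util.Map standing in for the message. This is an illustration, not the actual appender code:

```java
import java.util.Map;

// Illustrative sketch: the mapping's source key (defaulting to the
// column name) selects which entry of the message's map is stored
// in the column.
class MapMessageColumnSketch {

    static String resolve(Map<String, String> mapMessage, String columnName, String source) {
        String key = (source != null) ? source : columnName;
        return mapMessage.get(key);
    }
}
```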

Configuration examples

Here is an example JDBC Appender configuration:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<JDBC name="JDBC"
      tableName="logs"
      bufferSize="10"> (1)
  (2)
  <DataSource jndiName="java:comp/env/jdbc/logging"/>
  (3)
  <ColumnMapping name="id"
                 pattern="%uuid{TIME}"
                 columnType="java.util.UUID"/>
  <ColumnMapping name="timestamp" columnType="java.util.Date"/>
  <ColumnMapping name="level" pattern="%level"/>
  <ColumnMapping name="marker" pattern="%marker"/>
  <ColumnMapping name="logger" pattern="%logger"/>
  <ColumnMapping name="message" pattern="%message"/>
  <ColumnMapping name="mdc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextMap"/>
  <ColumnMapping name="ndc"
                 columnType="org.apache.logging.log4j.spi.ThreadContextStack"/>
</JDBC>
Snippet from an example log4j2.json
"JDBC": {
  "name": "JDBC",
  "tableName": "logs",
  (1)
  "bufferSize": 10,
  (2)
  "DataSource": {
    "jndiName": "java:comp/env/jdbc/logging"
  },
  (3)
  "ColumnMapping": [
    {
      "name": "id",
      "pattern": "%uuid{TIME}",
      "columnType": "java.util.UUID"
    },
    {
      "name": "timestamp",
      "columnType": "java.util.Date"
    },
    {
      "name": "level",
      "pattern": "%level"
    },
    {
      "name": "marker",
      "pattern": "%marker"
    },
    {
      "name": "logger",
      "pattern": "%logger"
    },
    {
      "name": "message",
      "pattern": "%m"
    },
    {
      "name": "mdc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextMap"
    },
    {
      "name": "ndc",
      "columnType": "org.apache.logging.log4j.spi.ThreadContextStack"
    }
  ]
}
Snippet from an example log4j2.yaml
JDBC:
  name: "JDBC"
  tableName: "logs"
  (1)
  bufferSize: 10
  (2)
  DataSource:
    jndiName: "java:comp/env/jdbc/logging"
  (3)
  ColumnMapping:
    - name: "id"
      pattern: "%uuid{TIME}"
      columnType: "java.util.UUID"
    - name: "timestamp"
      columnType: "java.util.Date"
    - name: "level"
      pattern: "%level"
    - name: "marker"
      pattern: "%marker"
    - name: "logger"
      pattern: "%logger"
    - name: "message"
      pattern: "%message"
    - name: "mdc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextMap"
    - name: "ndc"
      columnType: "org.apache.logging.log4j.spi.ThreadContextStack"
Snippet from an example log4j2.properties
Appenders.0.type = JDBC
Appenders.0.name = JDBC
Appenders.0.tableName = logs
(1)
Appenders.0.bufferSize = 10

(2)
Appenders.0.ds.type = DataSource
Appenders.0.ds.jndiName = java:comp/env/jdbc/logging

(3)
Appenders.0.col[0].type = ColumnMapping
Appenders.0.col[0].name = id
Appenders.0.col[0].pattern = %uuid{TIME}
Appenders.0.col[0].columnType = java.util.UUID

Appenders.0.col[1].type = ColumnMapping
Appenders.0.col[1].name = timestamp
Appenders.0.col[1].columnType = java.util.Date

Appenders.0.col[2].type = ColumnMapping
Appenders.0.col[2].name = level
Appenders.0.col[2].pattern = %level

Appenders.0.col[3].type = ColumnMapping
Appenders.0.col[3].name = marker
Appenders.0.col[3].pattern = %marker

Appenders.0.col[4].type = ColumnMapping
Appenders.0.col[4].name = logger
Appenders.0.col[4].pattern = %logger

Appenders.0.col[5].type = ColumnMapping
Appenders.0.col[5].name = message
Appenders.0.col[5].pattern = %message

Appenders.0.col[6].type = ColumnMapping
Appenders.0.col[6].name = mdc
Appenders.0.col[6].columnType = org.apache.logging.log4j.spi.ThreadContextMap

Appenders.0.col[7].type = ColumnMapping
Appenders.0.col[7].name = ndc
Appenders.0.col[7].columnType = org.apache.logging.log4j.spi.ThreadContextStack
1 Enables buffering. Messages are sent in batches of 10.
2 A JNDI data source is used.
3 An example of column mapping. See Column mapping for more details.

The example above uses the following table schema:

CREATE TABLE logs
(
    id        BIGINT PRIMARY KEY,
    level     VARCHAR,
    marker    VARCHAR,
    logger    VARCHAR,
    message   VARCHAR,
    timestamp TIMESTAMP,
    mdc       VARCHAR,
    ndc       VARCHAR
);

NoSQL Appender

The NoSQL Appender writes log events to a document-oriented NoSQL database using an internal lightweight provider interface. It supports the following configuration options:

Table 11. NoSQL Appender configuration attributes
Attribute Type Default value Description

Required

name

String

The name of the Appender.

Optional

bufferSize

int

0

The number of log messages to batch before writing to the database. If 0, batching is disabled.

ignoreExceptions

boolean

true

If false, logging exceptions will be forwarded to the caller of the logging statement. Otherwise, they will be ignored.

Table 12. NoSQL Appender nested elements
Type Multiplicity Description

Filter

zero or one

Allows filtering log events just before they are formatted and sent.

See also appender filtering stage.

KeyValuePair

Zero or more

Adds a simple key/value field to the NoSQL object.

The value attribute of the pair supports runtime property substitution using the current event as context.

Layout

zero or one

An optional Layout<? extends MapMessage> implementation that formats a log event as MapMessage.

See Formatting for more details.

Formatting

This appender transforms log events into NoSQL documents in two ways:

  • If the optional Layout configuration element is provided, the MapMessage returned by the layout will be converted into a NoSQL document.

  • Otherwise, a default conversion will be applied. You can enhance the format with additional top-level key/value pairs using nested KeyValuePair configuration elements.

    Example of default log event formatting:
    {
      "level": "WARN",
      "loggerName": "com.example.application.MyClass",
      "message": "Something happened that you might want to know about.",
      "source": {
        "className": "com.example.application.MyClass",
        "methodName": "exampleMethod",
        "fileName": "MyClass.java",
        "lineNumber": 81
      },
      "marker": {
        "name": "SomeMarker",
        "parent": {
          "name": "SomeParentMarker"
        }
      },
      "threadName": "Thread-1",
      "millis": 1368844166761,
      "date": "2013-05-18T02:29:26.761Z",
      "thrown": {
        "type": "java.sql.SQLException",
        "message": "Could not insert record. Connection lost.",
        "stackTrace": [
          {
            "className": "org.example.sql.driver.PreparedStatement$1",
            "methodName": "responder",
            "fileName": "PreparedStatement.java",
            "lineNumber": 1049
          },
          {
            "className": "org.example.sql.driver.PreparedStatement",
            "methodName": "executeUpdate",
            "fileName": "PreparedStatement.java",
            "lineNumber": 738
          },
          {
            "className": "com.example.application.MyClass",
            "methodName": "exampleMethod",
            "fileName": "MyClass.java",
            "lineNumber": 81
          },
          {
            "className": "com.example.application.MainClass",
            "methodName": "main",
            "fileName": "MainClass.java",
            "lineNumber": 52
          }
        ],
        "cause": {
          "type": "java.io.IOException",
          "message": "Connection lost.",
          "stackTrace": [
            {
              "className": "java.nio.channels.SocketChannel",
              "methodName": "write",
              "fileName": null,
              "lineNumber": -1
            },
            {
              "className": "org.example.sql.driver.PreparedStatement$1",
              "methodName": "responder",
              "fileName": "PreparedStatement.java",
              "lineNumber": 1032
            },
            {
              "className": "org.example.sql.driver.PreparedStatement",
              "methodName": "executeUpdate",
              "fileName": "PreparedStatement.java",
              "lineNumber": 738
            },
            {
              "className": "com.example.application.MyClass",
              "methodName": "exampleMethod",
              "fileName": "MyClass.java",
              "lineNumber": 81
            },
            {
              "className": "com.example.application.MainClass",
              "methodName": "main",
              "fileName": "MainClass.java",
              "lineNumber": 52
            }
          ]
        }
      },
      "contextMap": {
        "ID": "86c3a497-4e67-4eed-9d6a-2e5797324d7b",
        "username": "JohnDoe"
      },
      "contextStack": [
        "topItem",
        "anotherItem",
        "bottomItem"
      ]
    }

Providers

The NoSQL Appender only handles the conversion of log events into NoSQL documents, and it delegates database-specific tasks to a NoSQL provider. NoSQL providers are Log4j plugins that implement the NoSqlProvider interface.

Log4j Core 3 provides a single MongoDB Provider, but version 2 providers can also be used.

MongoDB Provider

Starting with version 2.11.0, Log4j supplies providers for the MongoDB NoSQL database engine, based on the MongoDB synchronous Java driver. The choice of the provider to use depends on:

  • the major version of the MongoDB Java driver your application uses: Log4j supports all major versions starting from version 2.

  • the type of driver API used: either the Legacy API or the Modern API. See MongoDB documentation for the difference between APIs.

The list of dependencies of your application provides a hint as to which driver API your application is using. If your application contains any one of these dependencies, it might use the Legacy API:

  • org.mongodb:mongo-java-driver

  • org.mongodb:mongodb-driver-legacy

If your application only uses org.mongodb:mongodb-driver-sync, it uses the Modern API.

The version of the MongoDB Java driver is not the same as the version of the MongoDB server. See MongoDB compatibility matrix for more information.

To use a Log4j MongoDB provider, you need to add one of the following dependencies to your application:

Table 13. MongoDB providers compatibility table
Driver version Driver API Log4j artifact Notes

2.x

Legacy

log4j-mongodb2

Reached end-of-support.

Last released version: 2.12.4

3.x

Legacy

log4j-mongodb3

Reached end-of-support.

Last released version: 2.23.1

4.x

Modern

log4j-mongodb4

5.x or later

Modern

log4j-mongodb

If you are not sure which implementation to choose, log4j-mongodb is the recommended choice.

MongoDb Provider (current)

The MongoDb provider is based on the current version of the MongoDB Java driver. It supports the following configuration options:

Table 14. MongoDb Provider configuration attributes
Attribute Type Default value Description

connection

ConnectionString

It specifies the connection URI used to reach the server.

See Connection URI documentation for its format.

Required

capped

boolean

false

If true, a capped collection will be used.

collectionSize

long

512 MiB

It specifies the size of the capped collection in bytes.

Additional runtime dependencies are required to use the MongoDb provider:

  • Maven

  • Gradle

We assume you use log4j-bom for dependency management.

<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-mongodb</artifactId>
  <scope>runtime</scope>
</dependency>

We assume you use log4j-bom for dependency management.

runtimeOnly 'org.apache.logging.log4j:log4j-mongodb'

Configuration examples

To connect the NoSQL Appender to a MongoDB database, you only need to provide a connection string:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<NoSql name="MONGO">
  <MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
</NoSql>
Snippet from an example log4j2.json
"NoSql": {
  "name": "MONGO",
  "MongoDb": {
    "connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  }
}
Snippet from an example log4j2.yaml
NoSql:
  name: "MONGO"
  MongoDb:
    connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
Snippet from an example log4j2.properties
Appenders.1.type = NoSql
Appenders.1.name = MONGO
Appenders.1.provider.type = MongoDb
Appenders.1.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs

Make sure not to let org.bson and com.mongodb log to a MongoDB database at the DEBUG level, since that will cause recursive logging:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<Root level="INFO">
  <AppenderRef ref="MONGO"/>
</Root>
<Logger name="org.bson"
        level="WARN"
        additivity="false"> (1)
  <AppenderRef ref="FILE"/>
</Logger>
<Logger name="com.mongodb"
        level="WARN"
        additivity="false"> (1)
  <AppenderRef ref="FILE"/>
</Logger>
Snippet from an example log4j2.json
"Root": {
  "level": "INFO",
  "AppenderRef": {
    "ref": "MONGO"
  }
},
"Logger": [
  {
    "name": "org.bson",
    "level": "WARN",
    "additivity": false, (1)
    "AppenderRef": {
      "ref": "FILE"
    }
  },
  {
    "name": "com.mongodb",
    "level": "WARN",
    "additivity": false, (1)
    "AppenderRef": {
      "ref": "FILE"
    }
  }
]
Snippet from an example log4j2.yaml
Root:
  level: "INFO"
  AppenderRef:
    ref: "MONGO"
Logger:
  - name: "org.bson"
    level: "WARN"
    additivity: false (1)
    AppenderRef:
      ref: "FILE"
  - name: "com.mongodb"
    level: "WARN"
    additivity: false (1)
    AppenderRef:
      ref: "FILE"
Snippet from an example log4j2.properties
Loggers.Root.level = INFO
Loggers.Root.AppenderRef.ref = MONGO

Loggers.Logger[1].name = org.bson
Loggers.Logger[1].level = WARN
(1)
Loggers.Logger[1].additivity = false
Loggers.Logger[1].AppenderRef.ref = FILE

Loggers.Logger[2].name = com.mongodb
Loggers.Logger[2].level = WARN
(1)
Loggers.Logger[2].additivity = false
Loggers.Logger[2].AppenderRef.ref = FILE
1 Remember to set the additivity configuration attribute to false.

You can add extra fields to the NoSQL document using KeyValuePair elements, for example:

  • XML

  • JSON

  • YAML

  • Properties

Snippet from an example log4j2.xml
<NoSql name="MONGO">
  <MongoDb connection="mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"/>
  <KeyValuePair key="startTime" value="${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (1)
  <KeyValuePair key="currentTime" value="$${date:yyyy-MM-dd hh:mm:ss.SSS}"/> (2)
</NoSql>
Snippet from an example log4j2.json
"NoSql": {
  "name": "MONGO",
  "MongoDb": {
    "connection": "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  },
  "KeyValuePair": [
    {
      "key": "startTime",
      "value": "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
    },
    {
      "key": "currentTime",
      "value": "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
    }
  ]
}
Snippet from an example log4j2.yaml
NoSql:
  name: "MONGO"
  MongoDb:
    connection: "mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs"
  KeyValuePair:
    - key: "startTime"
      value: "${date:yyyy-MM-dd hh:mm:ss.SSS}" (1)
    - key: "currentTime"
      value: "$${date:yyyy-MM-dd hh:mm:ss.SSS}" (2)
Snippet from an example log4j2.properties
Appenders.0.type = NoSql
Appenders.0.name = MONGO
Appenders.0.provider.type = MongoDb
Appenders.0.provider.connection = mongodb://${env:DB_USER}:${env:DB_PASS}@localhost:27017/logging.logs

Appenders.0.kv[0].type = KeyValuePair
Appenders.0.kv[0].key = startTime
(1)
Appenders.0.kv[0].value = ${date:yyyy-MM-dd hh:mm:ss.SSS}

Appenders.0.kv[1].type = KeyValuePair
Appenders.0.kv[1].key = currentTime
(2)
Appenders.0.kv[1].value = $${date:yyyy-MM-dd hh:mm:ss.SSS}
1 This lookup is evaluated at configuration time and gives the time when Log4j was most recently reconfigured.
2 This lookup is evaluated at runtime and gives the current date. See runtime lookup evaluation for more details.