By default, the connectors produce a fair amount of useful information when they start up,
but then produce very few log messages while they are keeping up with the source databases.
This is often sufficient when the connector is operating normally,
but may not be enough when the connector is behaving unexpectedly.
In such cases, you can change the logging level so that the connector generates much more verbose log messages describing what the connector is doing and what it is not doing.
Each log message produced by the application is sent to a specific _logger_
(for example, `io.debezium.connector.mysql`).
Loggers are arranged in hierarchies.
For example, the `io.debezium.connector.mysql` logger is the child of the `io.debezium.connector` logger,
which is the child of the `io.debezium` logger.
At the top of the hierarchy,
the _root logger_ defines the default logger configuration for all of the loggers beneath it.
[discrete]
=== Log levels
Every log message produced by the application also has a specific _log level_:
1. `ERROR` - errors, exceptions, and other significant problems
2. `WARN` - _potential_ problems and issues
3. `INFO` - status and general activity (usually low-volume)
4. `DEBUG` - more detailed activity that would be useful in diagnosing unexpected behavior
5. `TRACE` - very verbose and detailed activity (usually very high-volume)
[discrete]
=== Appenders
An _appender_ is essentially a destination where log messages are written.
Each appender controls the format of its log messages,
giving you even more control over what the log messages look like.
To configure logging, you specify the desired level for each logger and the appenders to which those log messages should be written.
Because loggers are hierarchical, the configuration of the root logger serves as the default for all of the loggers below it,
although you can override the configuration of any child (or descendant) logger.
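For example, a minimal Log4j 1.x configuration along the following lines sets the root logger to `INFO` and directs all messages to a single console appender.
This is an illustrative sketch; the exact conversion pattern and appender name are assumptions, not a required configuration:

[source,properties,options="nowrap"]
----
# Root logger: default level INFO, write to the "stdout" appender
log4j.rootLogger=INFO, stdout

# The "stdout" appender writes log messages to the console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender

# Use a pattern layout to format each log message
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %-5p  %X{dbz.connectorType}|%X{dbz.connectorName}|%X{dbz.connectorContext}  %m   [%c]%n
----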
|Directs the `stdout` appender to write log messages to the console, as opposed to a file.
|3
|Specifies that the `stdout` appender uses a pattern matching algorithm to format log messages.
|4
|The pattern that the `stdout` appender uses (see the https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/PatternLayout.html[Log4j documentation] for details).
In general, {prodname} connectors send their log messages to loggers with names that match the fully-qualified name of the Java class that is generating the log message.
{prodname} uses packages to organize code with similar or related functions.
This means that you can control all of the log messages for a specific class or for all of the classes within or under a specific package.
|This pair of `log4j.additivity.io` entries disables https://logging.apache.org/log4j/2.x/manual/configuration.html#additivity[additivity].
If you use multiple appenders, set `additivity` values to `false` to prevent duplicate log messages from being sent to the appenders of the parent loggers.
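For example, if the `io.debezium` loggers write to their own appender, entries along the following lines prevent the same messages from also being written by the parent loggers' appenders.
The specific logger names here are illustrative:

[source,properties,options="nowrap"]
----
# Do not also send io.debezium messages to the appenders of the parent loggers
log4j.additivity.io.debezium=false
log4j.additivity.io.debezium.connector=false
----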
=== Dynamically setting the logging level with the Kafka Connect REST API
You can use the Kafka Connect REST API to set logging levels for a connector dynamically at runtime.
Unlike log level changes that you set in `log4j.properties`, changes that you make via the API take effect immediately, and do not require you to restart the worker.
The log level setting that you specify in the API applies only to the worker at the endpoint that receives the request.
The log levels of other workers in the cluster remain unchanged.
The specified level is not persisted after the worker restarts.
To make persistent changes to the logging level, set the log level in `log4j.properties` by xref:changing-logging-level[configuring loggers] or xref:adding-mapped-diagnostic-contexts[adding mapped diagnostic contexts].
.Procedure
* Set the log level by sending a PUT request to the `admin/loggers` endpoint that specifies the following information:
** The package for which you want to change the log level.
** The log level that you want to set for the package.
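For example, assuming a Kafka Connect worker that listens on `localhost:8083`, a request along the following lines sets the loggers under `io.debezium.connector.mysql` to `DEBUG`.
The host, port, and logger name are illustrative:

[source,shell,options="nowrap"]
----
curl -s -X PUT -H "Content-Type: application/json" \
  -d '{"level": "DEBUG"}' \
  http://localhost:8083/admin/loggers/io.debezium.connector.mysql
----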
The configuration in the preceding example produces log messages similar to the ones in the following output:
[source,shell,options="nowrap"]
----
...
2017-02-07 20:49:37,692 INFO MySQL|dbserver1|snapshot Starting snapshot for jdbc:mysql://mysql:3306/?useInformationSchema=true&nullCatalogMeansCurrent=false&useSSL=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&zeroDateTimeBehavior=convertToNull with user 'debezium' [io.debezium.connector.mysql.SnapshotReader]
2017-02-07 20:49:37,696 INFO MySQL|dbserver1|snapshot Snapshot is using user 'debezium' with these MySQL grants: [io.debezium.connector.mysql.SnapshotReader]
2017-02-07 20:49:37,697 INFO MySQL|dbserver1|snapshot GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'debezium'@'%' [io.debezium.connector.mysql.SnapshotReader]
...
----
Each line in the log includes the connector type (for example, `MySQL`), the name of the connector (for example, `dbserver1`), and the activity of the thread (for example, `snapshot`).
--
ifdef::product[]
// Category: debezium-using
// Type: concept
[id="debezium-logging-on-openshift"]
== {prodname} logging on OpenShift
If you are using {prodname} on OpenShift, you can use the Kafka Connect loggers to configure the {prodname} loggers and logging levels.
For more information about configuring logging properties in a Kafka Connect schema, see link:{LinkStreamsOpenShift}#type-KafkaConnectSpec-schema-reference[{NameStreamsOpenShift}].
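For example, a `KafkaConnect` custom resource can configure logging inline.
The following sketch raises the {prodname} loggers to `DEBUG`; the resource name and the choice of logger are assumptions for illustration:

[source,yaml,options="nowrap"]
----
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  # ... other Kafka Connect configuration ...
  logging:
    type: inline
    loggers:
      log4j.logger.io.debezium: DEBUG
----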
== Configuring the log level in the {prodname} container images
The {prodname} container images for ZooKeeper, Kafka, and Kafka Connect all set up their `log4j.properties` file to configure the {prodname}-related loggers.
All log messages are sent to the Docker container's console (and thus the Docker logs).
The log messages are also written to files under the `/kafka/logs` directory.
The containers use a `LOG_LEVEL` environment variable to set the log level for the root logger.
You can use this environment variable to set the log level for the service running in the container.
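For example, assuming the `debezium/connect` image, you might start a Kafka Connect container with `DEBUG` logging as follows.
The topic names and broker address are illustrative placeholders:

[source,shell,options="nowrap"]
----
docker run -it --rm --name connect \
  -e LOG_LEVEL=DEBUG \
  -e GROUP_ID=1 \
  -e CONFIG_STORAGE_TOPIC=my_connect_configs \
  -e OFFSET_STORAGE_TOPIC=my_connect_offsets \
  -e BOOTSTRAP_SERVERS=kafka:9092 \
  debezium/connect
----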