Fundamentals of the Python Logging Library

by Alex

As an application grows and becomes more complex, a log becomes invaluable for debugging, understanding problems, and analyzing application performance. Python's standard library ships with the logging module, which offers most of the major logging features. Configured correctly, a log record carries a lot of useful information: when and where it was emitted, and in what context, for example, the running process or thread. Despite these benefits, the logging module is often overlooked because it takes some time to set up properly. In my opinion, the official logging documentation does not really showcase best practices and does not highlight some of the module's surprises. Note that the code snippets in this article assume that you have already imported the logging module:

import logging

Python Logging Concepts

This section provides an overview of some of the concepts that are often found in the logging module.

Python Logging levels

The level of a log corresponds to its importance: an ERROR log is more important than a WARNING log, while a DEBUG log should only be used when debugging the application. Python offers six log levels; each level is associated with a number that indicates its importance: NOTSET=0, DEBUG=10, INFO=20, WARNING=30, ERROR=40 and CRITICAL=50. The hierarchy is intuitive: DEBUG < INFO < WARNING < ERROR < CRITICAL, except for NOTSET, which we will get to know later.
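The numeric values behind these constants can be checked directly; levels compare as plain integers:

```python
import logging

# Print the numeric value behind each level constant
for name in ("NOTSET", "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))

# Levels are plain integers, so the hierarchy is just integer ordering
assert logging.DEBUG < logging.INFO < logging.WARNING < logging.ERROR < logging.CRITICAL
```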

Formatting a log in Python

Log formatting supplements the message with contextual information. It is useful to know when the log was emitted, where from (Python file, line number, function, etc.), and additional context such as the thread and process, extremely useful data when debugging a multithreaded application. Take this format string:

"%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"

It turns into:

2019-01-16 10:35:12,468 - keyboards - ERROR - <module>:1 - hello world
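A minimal sketch of how such a format string is wired up: a Formatter is created from the string (using the standard LogRecord attribute names such as %(asctime)s and %(levelname)s) and attached to a handler. The logger name "keyboards" here just mirrors the sample output above:

```python
import logging
import sys

# Build a formatter from the format string shown above
formatter = logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
)

# Formatters are attached to handlers, not to loggers directly
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(formatter)

logger = logging.getLogger("keyboards")
logger.addHandler(handler)
logger.error("hello world")  # emits a line shaped like the sample above
```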

Python Log Handler

The log handler is the component that actually writes out the log record: to the console (via StreamHandler), to a file (via FileHandler), by sending an email (via SMTPHandler), and so on. There are 2 important fields in each log handler:

  • A formatter that adds contextual information to the log.
  • The log importance level, which filters out logs whose levels are lower. Therefore a log handler with INFO level will not handle DEBUG logs.

The standard library contains several handlers, which are sufficient for most cases; the most common are StreamHandler and FileHandler:

console_handler = logging.StreamHandler()
file_handler = logging.FileHandler("MyLogFile.txt")
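The two fields from the list above can be set on such a handler; a minimal sketch using the console handler:

```python
import logging

console_handler = logging.StreamHandler()

# Field 1: a formatter that adds context to every record this handler emits
console_handler.setFormatter(
    logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
)

# Field 2: a level; this handler will now drop DEBUG records
console_handler.setLevel(logging.INFO)
```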

Python Logger

The logger is the object you will use most often in code, and it is also the most complex. A new logger is obtained as follows:

toto_logger = logging.getLogger("toto")

The logger has three main fields:

  • Propagate: determines whether the log should propagate to the logger's parent. The default is True.
  • Level: like the log handler level, the logger level is used to filter out “less important” logs. However, unlike the handler level, the logger level is only checked on the logger that receives the call; once a record has propagated to the logger's ancestors, their levels are no longer checked. This is rather counter-intuitive behavior.
  • Handlers: the list of handlers a record is passed to when it reaches this logger. This makes log handling flexible: for example, you can attach a file handler that stores DEBUG logs and an SMTPHandler that is used only for CRITICAL logs. In this respect, the logger-handler relationship resembles publisher-consumer: once the level check passes, the record is passed to all of the logger's handlers.
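The steps above can be sketched with two handlers at different levels. The logger name "orders" and the log file path are made up for the example, and a StreamHandler stands in for the SMTPHandler so the snippet stays self-contained:

```python
import logging
import os
import tempfile

# "orders" is just an example logger name
logger = logging.getLogger("orders")
logger.setLevel(logging.DEBUG)

# Handler 1: everything from DEBUG up goes to a file
log_path = os.path.join(tempfile.gettempdir(), "orders_debug.log")
file_handler = logging.FileHandler(log_path)
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# Handler 2: only CRITICAL records reach the console
# (an SMTPHandler could be used here instead to send an email)
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.CRITICAL)
logger.addHandler(console_handler)

logger.debug("goes to the file only")
logger.critical("goes to the file and the console")
```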

A logger is identified by its name, which means that once a logger named foo has been created, subsequent calls to logging.getLogger("foo") return the same object:

assert id(logging.getLogger("foo")) == id(logging.getLogger("foo"))

As you might have guessed, loggers form a hierarchy. At the top of the hierarchy is the root logger, which can be accessed through logging.root. This is the logger used when module-level functions such as logging.debug() are called. By default, the root logger's level is WARNING, so every record with a lower level is ignored (for example, one sent via logging.info()). Another peculiarity of the root logger is that its default handler is created lazily, on the first call to a module-level logging function. Using the root logger via functions such as logging.debug() is not recommended.
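A quick check of these defaults (note that calling any module-level function such as logging.info() will attach a default handler to the root logger via logging.basicConfig()):

```python
import logging

# The root logger is available directly; its default level is WARNING
assert logging.root.level == logging.WARNING
assert logging.getLogger() is logging.root

logging.info("ignored: INFO is below the root logger's WARNING level")
logging.warning("printed to stderr by the root logger's default handler")
```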

lab = logging.getLogger("f.r")
assert lab.parent == logging.root # lab.parent is really the root logger

That said, loggers use dot notation in their names: a logger named f.r will be a child of logger f. However, this is only true once logger f has actually been created; until then, the parent of f.r is still the root logger.

la = logging.getLogger("f")
assert lab.parent == la # lab's parent is now la instead of root

Effective Logger Level

When a logger decides whether a record should be output according to its importance (a record whose level is lower than the logger's level is discarded), it uses its “effective level” instead of its actual level. The effective level is the same as the logger's level, unless the level is NOTSET; in that case, the effective level is that of the first ancestor whose level is not NOTSET. By default, a new logger has level NOTSET, and since the root logger has level WARNING, the new logger's effective level is WARNING. Therefore, even if a new logger is attached to some handlers, these handlers will not be called unless the record's level reaches WARNING:

foo_logger = logging.getLogger("foo")
assert foo_logger.level == logging.NOTSET # a new logger starts at NOTSET
assert foo_logger.getEffectiveLevel() == logging.WARNING # so its effective level is the root logger's WARNING
# attach a console handler to foo_logger
console_handler = logging.StreamHandler()
foo_logger.addHandler(console_handler)
foo_logger.debug("debug message") # nothing is displayed: DEBUG is below foo's effective level
foo_logger.setLevel(logging.DEBUG)
foo_logger.debug("debug message") # now "debug message" is displayed

By default, then, it is the effective level that decides whether a record is output: if the record's level is lower than the logger's effective level, the record is not taken into account.

Recommendations for working with Python Logging

The logging module is really handy, but it has peculiarities that can cause hours of headache even for experienced Python developers. Here are my recommendations for using it:

  • Set up the root logger, but never use it in your code: never call module-level functions such as logging.debug() directly. If you want to capture error messages from the libraries you use, be sure to configure the root logger to write to a file, for example, to make debugging easier. By default, the root logger only outputs to stderr, so the log can easily get lost.
  • To use logging, create a new logger with logging.getLogger(logger_name). I use __name__ as the logger name, but you can pick other options. To attach more than one handler, write a helper that configures and returns the logger:
import logging
import sys
from logging.handlers import TimedRotatingFileHandler

FORMATTER = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
LOG_FILE = "my_app.log"

def get_console_handler():
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setFormatter(FORMATTER)
    return console_handler

def get_file_handler():
    file_handler = TimedRotatingFileHandler(LOG_FILE, when='midnight')
    file_handler.setFormatter(FORMATTER)
    return file_handler

def get_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.DEBUG) # better to have more logs than not enough
    logger.addHandler(get_console_handler())
    logger.addHandler(get_file_handler())
    # the handlers above already cover output, so don't propagate to root
    logger.propagate = False
    return logger

After that, you can create a new logger and use it:

my_logger = get_logger("module name for example")
my_logger.debug("debug message")
  • Use the rotating handler classes, such as RotatingFileHandler or the TimedRotatingFileHandler from the example, instead of FileHandler: RotatingFileHandler automatically starts a new file when the current one reaches a maximum size, while TimedRotatingFileHandler rotates on a schedule, for example every day.
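The two rotation strategies are configured like this (the log path is made up for the example; delay=True just postpones opening the file until the first record is written):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

log_path = os.path.join(tempfile.gettempdir(), "my_app.log")

# Rotate by size: start a new file after ~1 MB, keep 5 old copies
size_handler = RotatingFileHandler(
    log_path, maxBytes=1_000_000, backupCount=5, delay=True
)

# Rotate by time: start a new file every midnight
time_handler = TimedRotatingFileHandler(log_path, when="midnight", delay=True)
```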
  • Use tools like Sentry, Airbrake, Raygun, etc. to automatically catch error logs. This is especially useful in web applications, where logging can be very verbose and error logs can easily get lost. Another advantage of these tools is that they collect the values of the variables related to the error: which URL triggered it, which user, and so on.
