Getting familiar with Python logging

logging introduction

Logs can be used to record bank transfers, aircraft flight data, the steps an event goes through, and so on. In Python, logging is a built-in standard library module whose main job is to emit runtime logs. It lets you set the output log level, the log file path, log file rollover, and more.

1, Log level

1. DEBUG: used when debugging a problem
2. INFO: used when the program is running normally
3. WARNING: something did not go as expected, but it is not an error, e.g. a mobile phone number that has already been registered
4. ERROR: used when the program hits an error, such as an I/O operation failing
5. CRITICAL: a particularly serious problem that stops the program from running, such as a server going down or the disk being full

Level order, low → high: DEBUG < INFO < WARNING < ERROR < CRITICAL. The default level is WARNING, so only messages at or above WARNING are recorded.
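To make the ordering concrete: the named levels are plain integers, and the root logger really does default to WARNING. A minimal check, using only the standard library:

import logging

# Each named level is an integer; higher means more severe.
print(logging.DEBUG, logging.INFO, logging.WARNING,
      logging.ERROR, logging.CRITICAL)        # 10 20 30 40 50

# The root logger defaults to WARNING, so debug()/info() calls are dropped.
print(logging.getLogger().getEffectiveLevel() == logging.WARNING)  # True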

2, Log display data format

%(levelno)s: numeric value of the log level
%(levelname)s: name of the log level
%(pathname)s: path of the currently running program, i.e. sys.argv[0]
%(filename)s: file name of the currently running program
%(funcName)s: function that issued the log call
%(lineno)d: line number of the log call
%(asctime)s: time of the log call
%(thread)d: thread ID
%(threadName)s: thread name
%(process)d: process ID
%(message)s: the log message itself
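As a quick illustration (a minimal sketch, not part of the original examples), several of the fields above can be combined in one format string:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s [%(process)d:%(threadName)s] %(filename)s:%(lineno)d %(funcName)s - %(message)s'
)

def do_work():
    logging.debug('inside do_work')   # %(funcName)s will show "do_work" here

do_work()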

3, Advanced tutorial

Application phase I (console, file output)

1. Plain (no configuration)

import logging

logging.debug('This is a debug level log message')
logging.info('This is an info level log message')
logging.warning('This is a warning level log message')
logging.error('This is an error level log message')
logging.critical('This is a critical level log message')

Note: output goes to the console. Since the default level is WARNING, only the last three lines are shown.
2. Styled (add a format)

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s'
)

logging.debug('This is a debug level log message')
logging.info('This is an info level log message')
logging.warning('This is a warning level log message')
logging.error('This is an error level log message')
logging.critical('This is a critical level log message')

Note: output to console
3. Relocated (change the output destination)

import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s',
    filename='log.txt',
    filemode='a'
)

logging.debug('This is a debug level log message')
logging.info('This is an info level log message')
logging.warning('This is a warning level log message')
logging.error('This is an error level log message')
logging.critical('This is a critical level log message')

Note: output to file
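One caveat worth knowing when experimenting with the three variants above: basicConfig() only configures the root logger on its first effective call; once handlers exist, later calls are silently ignored. On Python 3.8+ you can pass force=True to replace the existing handlers, roughly like this (a sketch, not part of the original):

import logging

logging.basicConfig(level=logging.DEBUG)    # first call configures the root logger
logging.basicConfig(filename='log.txt')     # ignored: handlers already exist

# Python 3.8+ only: force=True removes the existing handlers and reconfigures
logging.basicConfig(filename='log.txt', filemode='a', force=True)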

Application phase II (separate console and file output)

import logging

# The first step is to create a logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)  # Log level master switch is INFO at this time

# The second step is to create a handler for output to the console
terHandler = logging.StreamHandler()
terHandler.setLevel(logging.WARNING)   # Level threshold for output to the console

# Step 3: create another handler for writing to the log file
fileName = 'log.txt'   # The file path can be configured as needed
fileHandler = logging.FileHandler(fileName, mode='a')  # open mode is one of r/w/a; 'a' (append) is used here and is also the default
fileHandler.setLevel(logging.DEBUG)  # Level threshold for output to the file

# Step 4: define the output format of the handlers (time, file, line number, level, message)
formatter = logging.Formatter("%(asctime)s - %(filename)s: %(lineno)s - %(levelname)s: %(message)s")
terHandler.setFormatter(formatter)
fileHandler.setFormatter(formatter)

# Step 5: add the handlers to the logger
logger.addHandler(terHandler)
logger.addHandler(fileHandler)

# log level
logger.debug('This is logger debug message')
logger.info('This is logger info message')
logger.warning('This is logger warning message')
logger.error('This is logger error message')
logger.critical('This is logger critical message')
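Why does the console only show warning and above while the file also gets info? A record is filtered twice: first by the logger's own level (INFO here), then by each handler's level. A quick sanity check, assuming the logger configured above:

# DEBUG is already rejected by the logger, so it reaches neither handler;
# INFO passes the logger and the file handler (DEBUG), but not the console handler (WARNING).
print(logger.isEnabledFor(logging.DEBUG))    # False
print(logger.isEnabledFor(logging.INFO))     # True - written to the file only
print(logger.isEnabledFor(logging.WARNING))  # True - written to both file and console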

Application phase III (log file rollover)

What is log rollover:

Log output goes to a single file. As the application keeps running, that file grows larger and larger, which eventually hurts performance, so the log file needs to be split according to some condition.

Trigger conditions for splitting logs: size, date, or size plus date.

In practice, when the log file reaches the trigger condition, it is renamed and a new, empty file with the original name is created; subsequent log records are written to the new file.

What about old files? When the number of rotated log files reaches the configured limit, the oldest one is deleted.
The logging library provides two classes that can be used for log rollover:

1) TimedRotatingFileHandler, which rotates based on time. In practice, time-based rotation is the most common choice.

# Phase II code
fileHandler = logging.FileHandler(fileName, mode='a')  # open mode is one of r/w/a; 'a' (append) is used here and is also the default
# Change to
# 1. Add the required import
from logging.handlers import TimedRotatingFileHandler, RotatingFileHandler
# 2. Time-based rollover: one new file per day, keeping at most the last 60 files
fileHandler = TimedRotatingFileHandler(fileName, when='D', interval=1, backupCount=60)  # No read/write mode argument; data is always appended
# 3. Set the suffix used for rotated files
fileHandler.suffix = "%Y-%m-%d_%H-%M-%S.log"

Note: if `when` is set to days, the suffix must be written as "%Y-%m-%d.log"; other formats will stop the deletion of old files from working. This behavior can be seen in the handler's source code.

Parameter explanation
①, when
"S": seconds
"M": minutes
"H": hours
"D": days
"W0"-"W6": weekday (0 = Monday)
"midnight": roll over at midnight
②, interval
How many units of `when` to wait between rollovers. At each rollover the handler creates a new file named filename + suffix; if that name collides with an earlier rotated file, the earlier file is overwritten, so the suffix must not repeat within a single `when` interval.
③, backupCount
Number of rotated files to keep. The default of 0 means nothing is deleted automatically; with a value of 3, the handler checks at each rollover whether more than 3 rotated files exist and removes the oldest ones.
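Putting the pieces together, here is a self-contained sketch of the phase II setup with the plain FileHandler swapped for TimedRotatingFileHandler (the file name, levels, and retention count are arbitrary choices for illustration):

import logging
from logging.handlers import TimedRotatingFileHandler

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Roll over once per day, keep at most the last 60 rotated files
fileHandler = TimedRotatingFileHandler('log.txt', when='D', interval=1, backupCount=60)
fileHandler.suffix = "%Y-%m-%d.log"   # must match the 'D' granularity, see the note above
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(logging.Formatter(
    "%(asctime)s - %(filename)s: %(lineno)s - %(levelname)s: %(message)s"))
logger.addHandler(fileHandler)

logger.info('time-based rotation is configured')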

2) RotatingFileHandler, which rotates based on the size of the log file.

# Phase II code
fileHandler = logging.FileHandler(fileName, mode='a')  # open mode is one of r/w/a; 'a' (append) is used here and is also the default
# Change to
# 1. Add the required import
from logging.handlers import TimedRotatingFileHandler, RotatingFileHandler
# 2. Size-based rollover: roll over once the file exceeds 500 bytes, keeping at most 5 backup files
fileHandler = RotatingFileHandler(fileName, maxBytes=500, backupCount=5)  # No read/write mode argument; data is always appended
# 3. No suffix is needed: RotatingFileHandler names its backups by appending .1, .2, ... to the base file name
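For completeness, a runnable sketch of size-based rotation (file name and limits chosen here purely for illustration):

import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Roll over once the file would exceed 500 bytes, keeping log.txt.1 ... log.txt.5
fileHandler = RotatingFileHandler('log.txt', maxBytes=500, backupCount=5)
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(logging.Formatter(
    "%(asctime)s - %(filename)s: %(lineno)s - %(levelname)s: %(message)s"))
logger.addHandler(fileHandler)

for i in range(50):
    logger.info('message %d', i)   # enough output to trigger several rollovers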

4, The end. I hope this was helpful!
