Hands-On Notes => Python logging Module: How It Works and How to Use It

logging module

The logging module is a large module with a complete logging system.
It is mainly divided into: Logger (main logger) - Handler (processor) - Formatter

logging is a Python built-in module and requires no installation.
To import it, import logging is enough.

Log levels (weakest -> strongest)

DEBUG < INFO < WARNING < ERROR < FATAL

DEBUG :  Information for development and debugging (print-style debugging...)
INFO:    Important information about the program's execution (don't overdo it)
WARNING: A warning about a minor problem that does not affect the program's operation. Record it so it can be resolved later.
ERROR:   Affects the program; fairly serious. Needs handling, otherwise the program may well crash.
FATAL:   Seriously affects the program; re-check and fix the code immediately.
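Under the hood, each level name is just an integer constant, which is what makes the "weaker/stronger" comparison possible. A quick check (standard library only):

```python
import logging

# Each named level maps to an integer; filtering simply compares these numbers.
levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.FATAL]
print(levels)  # [10, 20, 30, 40, 50]

# FATAL is an alias for CRITICAL in the logging module.
print(logging.FATAL == logging.CRITICAL)  # True

# getLevelName maps a number back to its name.
print(logging.getLevelName(30))  # WARNING
```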

logging Architecture Components

The commonly used components are: (containment relationship from the outside in)

  1. Logger (main logger class)
    The outermost container, with Handlers inside it
  2. Handler (processor class)
    Has a Formatter installed inside it
  3. Formatter (formatter class)
    The format syntax for the printed information

Next, for convenience, I will explain these components "from the inside out".
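Before diving into each component, here is the containment relationship in code form; a minimal sketch, where the logger name 'demo' and the format string are just placeholders:

```python
import logging

formatter = logging.Formatter('%(levelname)s: %(message)s')  # innermost: the format
handler = logging.StreamHandler()   # middle layer: decides where the log goes
handler.setFormatter(formatter)     # the formatter lives inside the handler

logger = logging.getLogger('demo')  # outermost: the logger itself
logger.addHandler(handler)          # the handler lives inside the logger
logger.setLevel('INFO')

logger.info('hello')                # writes "INFO: hello" to stderr
```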

Formatter (Formatter class)

Formatter: a syntax used to define the format of the printed log string.
Initialize a formatter:

fmt = '[%(asctime)s] [%(filename)s: %(lineno)d] [%(levelname)s] => %(message)s' # format
console_formatter = logging.Formatter(fmt=fmt)   # Instantiate the formatter and pass in the format

To help you understand:

  1. The "format" line alone is probably hard to understand. You can compose this format freely; see the official documentation below for the full list of parameters:
    Official format reference: https://docs.python.org/3.7/l...
  2. Open the official docs and you will see "Format" in the second column of the table. The formats there can be copied verbatim, e.g. %(asctime)s
    Then you can splice these formats together with whatever symbols you like, e.g. "Date => %(asctime)s"
  3. You may wonder why this %(...)s format is recognized instead of being treated as a plain string.
    Note that in the second line of code, the fmt format string is only a parameter of Formatter(), which parses it automatically. Don't worry about that.
  4. I'll post the result of the example above; seeing it may make things clearer:

     > [2019-09-10 18:23:19,347] [logging11.py: 15] [WARNING] => Haha
          asctime is the date/time
          filename is the file name
          lineno is the line number in the code
          levelname is the log level name (i.e. INFO, WARNING, ERROR, etc.)
          message is the log message you want to print (we will see this below; just note it for now)
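You can watch the substitution happen by feeding a hand-built record straight to a formatter; normally the Logger builds this record for you, and the name/path values below are just made up for illustration:

```python
import logging

fmt = '[%(filename)s: %(lineno)d] [%(levelname)s] => %(message)s'
formatter = logging.Formatter(fmt=fmt)

# A LogRecord is what a Logger builds internally for every call like log.warning(...).
record = logging.LogRecord(
    name='demo', level=logging.WARNING,
    pathname='logging11.py', lineno=15,
    msg='Haha', args=None, exc_info=None,
)
print(formatter.format(record))
# [logging11.py: 15] [WARNING] => Haha
```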

Handler (Processor Class)

Handler: used to hold the "Formatter" mentioned above and to process logs. (There are many kinds of handlers; just pick what you need. The two below are the most common.)
Initialize a stream handler (the more common one):

handler = logging.StreamHandler()

Or initialize a "file handler" (the source code clearly shows it inherits from the StreamHandler above). It is usually used for log persistence:

handler = logging.FileHandler('mylog.log', mode='a', encoding='utf-8')
# No explanation needed; this API should look familiar. Isn't it just the open() syntax we use for files?

Load the Formatter. The formatter we instantiated earlier hasn't been used yet; it gets loaded here:

handler.setFormatter(console_formatter)

Note: although the handler object can use setLevel() to set a log level, I don't recommend setting it here. Keep reading.

Logger (main logger class)

Logger: used to hold the "Handlers".
There are two ways to instantiate Logger:
Method 1: (non-shared creation)

log = logging.Logger(name='my_log', level='INFO')
# name is the name given to the Logger
# level is the log level (capitalization matters); these are the WARNING, INFO, ERROR, etc. from the beginning

Method 2: (shared via the logger pool, recommended)

log = logging.getLogger(name='console')   # If a logger with this name exists, it is returned; otherwise it is created.

The difference between the two methods:

  1. Non-shared: it must be recreated every time, and configured from scratch.
  2. Logger-pool shared: a reference is fetched from the pool and operated on (like passing a reference into a function).
    Every time you configure a logger you fetched, the changes are saved in the pool.
    The next time you call getLogger() with the same name (from any file in the program), you get back the logger you configured before.
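The pool behavior is easy to verify; getLogger() with the same name always returns the very same object, while direct Logger() construction bypasses the pool:

```python
import logging

a = logging.getLogger('console')
b = logging.getLogger('console')   # same name => same object from the pool
print(a is b)                      # True

c = logging.Logger('console')      # direct construction bypasses the pool
print(a is c)                      # False
```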

Loading the "Handler": (almost forgot — the handler defined above hasn't been used yet; it gets used here):

log.addHandler(handler)

Setting the log level (this step is optional)

log.setLevel('ERROR')

In fact, when we instantiated the Logger above, we already passed a level parameter and set the log level.
So log.setLevel() can be skipped here (the same goes for the handler's handler.setLevel() mentioned earlier).

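One caveat worth knowing: a logger fetched with getLogger() that never had setLevel() called on it inherits its effective level from the root logger, which defaults to WARNING, so DEBUG/INFO messages are silently dropped until a level is set somewhere. The logger name below is arbitrary, and this assumes basicConfig() has not been called:

```python
import logging

log = logging.getLogger('fresh_logger')
print(log.level)                 # 0  (NOTSET: no level of its own)
print(log.getEffectiveLevel())   # 30 (WARNING, inherited from the root logger)

log.setLevel('DEBUG')
print(log.getEffectiveLevel())   # 10 (DEBUG)
```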
Now let's start outputting log messages. There is an API corresponding to each of the log levels:

log.debug("This is a debug log")
log.info("This is a log showing the main information")
log.warning('This is a warning log')
log.error("This is an error log")
log.fatal("This is a fatal error log")

Looking back at the Formatter we covered earlier

The first thing we covered was the Formatter, along with the common format fields.
One of them is %(message)s, which is filled by the argument passed to the APIs above.
    eg: log.info('ha ha ha ha')
        the %(message)s field is filled with "ha ha ha ha" in the output

There is also %(levelname)s, which is filled by the name of the API method called above.
    eg: log.info('xxx')
        the %(levelname)s field is filled with "INFO" in the output

If the role of log levels still feels obscure, you must look at my next example!!!!!!!!
As I mentioned at the beginning: log levels (weakest -> strongest) => (DEBUG < INFO < WARNING < ERROR < FATAL)

You set a log level. A log record is processed only if the level of the API you called is "stronger than or equal to" the level you set.
The more formally that is worded, the harder it is to understand, so let's just look at the following example~~

A small example of log level understanding:

log.setLevel('WARNING')   # you can see the log level we set is WARNING

------------Gorgeous dividing line------------ Now the actual logging begins.

log.debug("This is a debug log")
    # For this debug(), take another look at the "log levels" at the top.
    # debug is weaker than warning, so this log will not be processed.
    # (Plain-language version: "I allow WARNING; your DEBUG level is too low, the problem isn't serious enough to be worth recording.")

log.info("This is a log showing the main information.")
    # Similarly, info is weaker than warning, so this log will not be processed.

log.warning('This is a warning log.')
    # warning equals warning ("stronger than or equal to", as I said earlier), so this log will be processed.

log.error("This is an error log.")
    # error is stronger than warning, so this log will be processed.

log.fatal("This is a fatal error log.")
    # fatal is stronger than warning, so this log will be processed.
    # Put more plainly: "The tolerance you gave me was WARNING, and your log is a fatal error. Of course I'll handle it."

Think about it! In the example I kept saying "this log will be processed".
So what does this "processing" actually mean?
Look back at the "Handler" section above. That's right: these logs are processed by the handlers.

  1. If you defined a logging.StreamHandler, it outputs the log to the terminal.
  2. If you defined a logging.FileHandler, it automatically saves the log to a file for persistence.
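The level filtering described above can be verified by pointing a stream handler at an in-memory buffer instead of the terminal; io.StringIO and the logger name 'level_demo' are just for illustration:

```python
import io
import logging

buf = io.StringIO()
handler = logging.StreamHandler(buf)  # this handler writes into the buffer instead of the terminal

log = logging.getLogger('level_demo')
log.addHandler(handler)
log.setLevel('WARNING')               # the threshold from the example above

log.debug('debug msg')       # weaker than WARNING -> dropped
log.info('info msg')         # weaker than WARNING -> dropped
log.warning('warning msg')   # equal to WARNING -> processed
log.error('error msg')       # stronger than WARNING -> processed

print(buf.getvalue().splitlines())
# ['warning msg', 'error msg']
```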

Comprehensive case:

The business requirements are as follows (made up casually, not necessarily realistic):

  1. Logs at DEBUG or above but below WARNING (excluding WARNING) are output only to the terminal.
    No particular output format is required for this type of log.
  2. Logs at WARNING or above (including WARNING) are output to both the terminal and a file.
    This type of log must use the following format: [date] [file name: line number] [log level] => log content

The code is as follows (if you use it yourself, it's better to wrap it up for reuse):

import logging
# Log levels (weakest -> strongest): DEBUG < INFO < WARNING < ERROR < FATAL

fmt = '[%(asctime)s] [%(filename)s: %(lineno)d] [%(levelname)s] => %(message)s'  # format
file_formatter = logging.Formatter(fmt=fmt)    # Define the formatter and pass in the format
file_handler = logging.FileHandler('mylog.log', mode='a', encoding='utf-8')  # Define the file handler
file_handler.setFormatter(file_formatter)      # Set the formatter on the file handler
file_handler.setLevel('WARNING')               # Set the log level for this handler


console_handler = logging.StreamHandler()      # Define a stream handler that outputs to the terminal
# The StreamHandler has no formatter; it defaults to %(message)s, i.e. only the log content, no date, file name, etc.
console_handler.setLevel('DEBUG')              # Set the log level for the stream handler

log = logging.getLogger(name='file_log')       # Fetch a logger from the pool (created if it doesn't exist)
log.setLevel('DEBUG')                          # Without this, the logger inherits root's WARNING level and drops the INFO message below
log.addHandler(file_handler)                   # Add the file handler (formatted output to a file)
log.addHandler(console_handler)                # Add the stream handler (unformatted output to the terminal)

log.info('I only output to the terminal.')     # INFO passes console_handler's DEBUG threshold but not file_handler's WARNING threshold
log.error('I output to both the terminal and the file.')
# error is stronger than the DEBUG set on console_handler, and also stronger than the WARNING set on file_handler.

Operation results:

Terminal output:
    I only output to the terminal.
    I output to both the terminal and the file.

In the mylog.log file:
    [2019-09-10 23:54:59,055] [logging11.py: 20] [ERROR] => I output to both the terminal and the file.

------------------------------------------------------------------------------------------------------------------------------------------------

A shortcut method (not recommended; the rest of this section is optional reading)

This method is slightly more convenient, but not flexible.

We put in a lot of effort above:

  1. First, we defined a formatter.
  2. We also defined a stream handler and a file handler.
  3. We fetched a Logger from the logger pool.
  4. Then we wired them all together and set the log levels. It looks like a lot, but once you've been through it, it's really not complicated.

In fact, the logging system has a default initial Logger called the root Logger.
We don't need to instantiate it, nor a formatter, nor a handler.
One line of API is enough (by default it outputs to the terminal):

import logging

fmt = '[%(asctime)s] [%(filename)s: %(lineno)d] [%(levelname)s] => %(message)s'  # format

logging.basicConfig(    # This configures the root Logger by default
    level='DEBUG',      # Set log level to DEBUG
    format=fmt,         # Set format
)
logging.info('I only export it to the terminal.')


Operation results:
>> [2019-09-11 00:15:55,219] [logging11.py: 30] [INFO] => I only export it to the terminal.

If you want to output to a file, you only need to add two parameters, filename and filemode:

import logging

fmt = '[%(asctime)s] [%(filename)s: %(lineno)d] [%(levelname)s] => %(message)s'  # format

logging.basicConfig(
    level='DEBUG',
    format=fmt,
    filename='mylog.log',   # file name
    filemode='a'            # file open mode
)
logging.info('I only export it to the terminal.')   # in the original post this message was Chinese, hence the garbled file content below

Operation results:
mylog.log file:
    [2019-09-11 00:20:22,396] [logging11.py: 32] [INFO] => ��ֻ��������ն�

But did you notice? The output written to the file is garbled. You can easily guess why: we never configured the encoding...
However, basicConfig() has no encoding parameter (it only gained one in Python 3.9+). So what now?
It does have a parameter called handlers. Handlers we already know: that's the "handler" we talked about above. It takes a list, so you can pass more than one.

logging.basicConfig(
    level='DEBUG',
    format=fmt,
    handlers=[logging.FileHandler(filename='mylog.log', mode='a', encoding='utf-8')]
    # Look at the handler definition here: it's exactly the same as before, and here we can set encoding
)
# So no more garbled text.

Note: the above implements simple logging with logging.basicConfig().
I called it a shortcut because, besides the file-encoding problem, it lacks flexibility.
Say you want to output different levels of logs in different formats: you can't do that with a single basicConfig() line.
So the getLogger() combination is recommended.

Concluding remarks

The logging module actually has many more features:
Filter: (there is actually one more component, but I haven't used it, so I won't cover it.)
Formatter: (go back to the official docs; there are also fields for formatting process/thread information into the log, such as %(process)d, %(thread)d, %(processName)s, %(threadName)s.)
Handlers: (I only covered StreamHandler and FileHandler, but there are many more. They are all in the logging.handlers module:)

from logging.handlers import (
    RotatingFileHandler,       # By setting the file size threshold, beyond that threshold, the log is dumped into a new file.
    TimedRotatingFileHandler,  # Set a time interval, and every time that interval passes, the log is dumped into a new file
    HTTPHandler,               # Logs are exported to remote servers via HTTP protocol (only GET and POST are supported)
    SMTPHandler,               # Through the SMTP protocol, the log is exported to the remote mailbox
    SocketHandler,             # Send to remote server through TCP protocol.
    DatagramHandler,           # Send to remote server through UDP protocol...
    QueueHandler,              # Send it to the queue (if you want to send RabbitMQ or something, you can go to github to find someone else's finished product)
)
# These are also very simple to use. Look at the official docs, or Ctrl+click in PyCharm to jump into the source and check the __init__() parameters.
# Once instantiated, add it with log.addHandler(xxx) and you can use it (just like the FileHandler and StreamHandler usage above)
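As one example from that list, here is a small sketch of RotatingFileHandler rolling the log over once the file exceeds maxBytes; the file name, sizes, and logger name are arbitrary choices for illustration:

```python
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    'mylog.log',
    maxBytes=1024,     # roll over once the file would exceed ~1 KB
    backupCount=3,     # keep mylog.log.1 .. mylog.log.3, then drop the oldest
    encoding='utf-8',
)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))

log = logging.getLogger('rotating_demo')
log.addHandler(handler)
log.setLevel('INFO')

for i in range(100):
    log.info('message number %d', i)   # older messages end up in the backup files
```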

Logging can also be configured from various kinds of config files: https://docs.python.org/3/lib...
Official docs: https://docs.python.org/3/how...


Added by ub_kh on Wed, 11 Sep 2019 06:35:41 +0300