MySQL slow queries and the slow query log, and the mysqldumpslow and pt-query-digest slow query log analysis tools

Slow queries and the slow query log

Checking and enabling the slow query log

Check whether the slow query log is enabled

mysql> show variables like 'slow_query_log';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| slow_query_log | OFF   |
+----------------+-------+
1 row in set (0.00 sec)

Enable the slow query log

# For easier testing, also record queries that do not use indexes, regardless of how long they take
mysql> set global log_queries_not_using_indexes=on;
Query OK, 0 rows affected (0.00 sec)

mysql> set global slow_query_log=on;
Query OK, 0 rows affected (0.04 sec)
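
Changes made with SET GLOBAL are lost when MySQL restarts. To make them permanent, the same options can be placed in my.cnf; a minimal sketch (assuming the [mysqld] section and the data directory used in this article):

[mysqld]
# record statements slower than long_query_time in the slow log
slow_query_log = ON
slow_query_log_file = /usr/local/mysql/data/administrator-slow.log
long_query_time = 10
# also record statements that do not use indexes
log_queries_not_using_indexes = ON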

Checking and setting the slow log location

View all log-related variables

mysql> show variables like '%log%';
+--------------------------------+-----------------------------------------------+
| Variable_name                  | Value                                         |
+--------------------------------+-----------------------------------------------+
| log_queries_not_using_indexes  | OFF                                           |
| slow_query_log                 | OFF                                           |
| slow_query_log_file            | /usr/local/mysql/data/administrator-slow.log  |
+--------------------------------+-----------------------------------------------+
74 rows in set (0.00 sec)

Set the location of the slow query log

mysql> set global slow_query_log_file='/usr/local/mysql/data/administrator-slow.log';
Query OK, 0 rows affected (0.00 sec)

The slow log threshold and how to change it

Threshold: by default, SQL statements whose query time exceeds 10 seconds are recorded

mysql> show variables like 'long_query_time';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+

Queries that take longer than long_query_time are written to the slow log. Setting it to 0 records every query, which can write a large amount of data to disk, so only do this for testing:

mysql> set global long_query_time=0; 
Query OK, 0 rows affected (0.00 sec)
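
Note that SET GLOBAL only changes the global value; sessions that were already open keep their old session-level long_query_time. A quick sketch for checking both scopes:

-- the global value reflects the change immediately
show global variables like 'long_query_time';
-- an already-open session still uses its previous session value
show session variables like 'long_query_time';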

Generate and view slow query log entries

mysql> show variables like '%log%';
+--------------------------------+-----------------------------------------------+
| Variable_name                  | Value                                         |
+--------------------------------+-----------------------------------------------+
| log_queries_not_using_indexes  | ON                                            |
| slow_query_log                 | ON                                            |
| slow_query_log_file            | /usr/local/mysql/data/administrator-slow.log  |
+--------------------------------+-----------------------------------------------+
74 rows in set (0.01 sec)

mysql> select *  from user where  name='Li Bai';
+----+--------+------+
| id | name   | age  |
+----+--------+------+
|  1 | Li Bai |   22 |
+----+--------+------+
1 row in set (0.00 sec)

[root@administrator ~]# more /usr/local/mysql/data/administrator-slow.log
/usr/local/mysql/bin/mysqld, Version: 5.7.32 (MySQL Community Server (GPL)). started with:
Tcp port: 3306  Unix socket: /tmp/mysql.sock
Time                 Id Command    Argument
# Time: 2020-11-19T14:14:58.342982Z
# User@Host: root[root] @ localhost []  Id:     3
# Query_time: 0.000216  Lock_time: 0.000112 Rows_sent: 1  Rows_examined: 1
use demo;
SET timestamp=1605795298;
select * from user;
# Time: 2020-11-19T14:15:27.538174Z
# User@Host: root[root] @ localhost []  Id:     3
# Query_time: 0.000202  Lock_time: 0.000110 Rows_sent: 1  Rows_examined: 1
SET timestamp=1605795327;
select * from user;
# Time: 2020-11-19T14:19:37.986218Z
# User@Host: root[root] @ localhost []  Id:     3
# Query_time: 0.000247  Lock_time: 0.000138 Rows_sent: 1  Rows_examined: 1
SET timestamp=1605795577;
select *  from user where  name='Li Bai';

Slow log format description

# Time: 2020-11-19T14:19:37.986218Z -- time the query was executed
# User@Host: root[root] @ localhost []  Id: 3 -- user and host that executed the SQL
# Query_time: 0.000247 -- execution time of the SQL
# Lock_time: 0.000138 -- time spent waiting for locks
# Rows_sent: 1 -- number of rows returned to the client
# Rows_examined: 1 -- number of rows scanned
SET timestamp=1605795577; -- Unix timestamp of when the SQL was executed
select *  from user where  name='Li Bai'; -- the SQL statement that was executed
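
When reproducing a slow query interactively, it can be convenient to watch new entries arrive in real time; a simple shell sketch using the log path configured earlier:

# follow the slow log as new entries are appended
tail -f /usr/local/mysql/data/administrator-slow.log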

Slow query log analysis tools

Once enabled, the slow query log can grow very large. An analysis tool can summarize the log into a report, and optimization work can then be driven by that report.

mysqldumpslow

This is the most commonly used tool. It is installed together with MySQL, but the statistics it produces are relatively limited.

Notes on invoking the command

mysqldumpslow (requires a symbolic link into /usr/bin, e.g. ln -s /usr/local/mysql/bin/mysqldumpslow /usr/bin)

./mysqldumpslow (run from the bin directory under the MySQL installation path)

View detailed usage

View detailed usage: mysqldumpslow --help

[root@administrator bin]# ./mysqldumpslow -help
Option h requires an argument
ERROR: bad option

Usage: mysqldumpslow [ OPTS... ] [ LOGS... ]

Parse and summarize the MySQL slow query log. Options are

  --verbose    verbose
  --debug      debug
  --help       write this text to standard output

  -v           verbose
  -d           debug
  -s ORDER     what to sort by (al, at, ar, c, l, r, t), 'at' is default
                al: average lock time
                ar: average rows sent
                at: average query time
                 c: count
                 l: lock time
                 r: rows sent
                 t: query time  
  -r           reverse the sort order (largest last instead of first)
  -t NUM       just show the top n queries
  -a           don't abstract all numbers to N and strings to 'S'
  -n NUM       abstract numbers with at least n digits within names
  -g PATTERN   grep: only consider stmts that include this string
  -h HOSTNAME  hostname of db server for *-slow.log filename (can be wildcard),
               default is '*', i.e. match all
  -i NAME      name of server instance (if using mysql.server startup script)
  -l           don't subtract lock time from total time

[root@administrator bin]#

Usage example

View the top 20 queries in the slow log

[root@administrator bin]# ./mysqldumpslow  -t 20 /usr/local/mysql/data/administrator-slow.log 

Count: 1  Time=0.02s (0s)  Lock=0.03s (0s)  Rows=1.0 (1), root[root]@[117.176.186.42]
  SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'S' UNION SELECT COUNT(*) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'S' UNION SELECT COUNT(*) FROM information_schema.ROUTINES WHERE ROUTINE_SCHEMA = 'S'

Count: 2  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=0.0 (0), root[root]@[117.176.186.42]
  INSERT INTO `user`.`user`(`name`, `age`) VALUES ('S', N)

Count: 1  Time=0.00s (0s)  Lock=0.00s (0s)  Rows=1.0 (1), root[root]@localhost
  select *  from user where  name='S'

Description of the analysis output format

Count: number of times the statement was executed

Time: average execution time (total time in parentheses)

Lock: average lock time (total lock time in parentheses)

Rows: average number of rows sent (total rows in parentheses)
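
A few more invocations built from the options listed in the help output above, reusing the same log path (illustrative combinations, not an exhaustive list):

# top 10 statements by execution count
./mysqldumpslow -s c -t 10 /usr/local/mysql/data/administrator-slow.log

# top 10 statements by rows sent
./mysqldumpslow -s r -t 10 /usr/local/mysql/data/administrator-slow.log

# slowest statements containing "left join", keeping literal values instead of abstracting them
./mysqldumpslow -s t -a -g "left join" /usr/local/mysql/data/administrator-slow.log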

pt-query-digest

pt-query-digest is a more powerful slow query log analysis tool. It is a third-party tool that can analyze the binlog, the general log and the slow log, as well as MySQL protocol data captured via SHOW PROCESSLIST or tcpdump, and it can write the analysis result to a file. During analysis it first parameterizes the conditions of each query, then groups and aggregates the parameterized queries, calculating the execution time, execution count and percentage of each query. With the help of this report, problems can be found and optimized.

Official website: https://www.percona.com/downloads/percona-toolkit/LATEST/

Install pt-query-digest

Download

wget https://downloads.percona.com/downloads/percona-toolkit/3.3.1/binary/redhat/7/x86_64/percona-toolkit-3.3.1-1.el7.x86_64.rpm

Install

yum localinstall -y percona-toolkit-3.3.1-1.el7.x86_64.rpm

pt-query-digest is essentially a Perl script. During installation you may be prompted to install the Perl modules it depends on; these prompts can be confirmed manually.

Check that the installation is complete

pt-query-digest --help

[root@administrator root]# pt-query-digest --help
pt-query-digest analyzes MySQL queries from slow, general, and binary log files.
It can also analyze queries from C<SHOW PROCESSLIST> and MySQL protocol data
from tcpdump.  By default, queries are grouped by fingerprint and reported in
descending order of query time (i.e. the slowest queries first).  If no C<FILES>
are given, the tool reads C<STDIN>.  The optional C<DSN> is used for certain
options like L<"--since"> and L<"--until">.  For more details, please use the
--help option, or try 'perldoc /usr/bin/pt-query-digest' for complete
documentation.

Usage: pt-query-digest [OPTIONS] [FILES] [DSN]

Options:

  --ask-pass                   Prompt for a password when connecting to MySQL
  --attribute-aliases=a        List of attribute|alias,etc (default db|Schema)
  --attribute-value-limit=i    A sanity limit for attribute values (default 0)
  --charset=s              -A  Default character set
  --config=A                   Read this comma-separated list of config files;
                               if specified, this must be the first option on
                               the command line
  --[no]continue-on-error      Continue parsing even if there is an error (

Analyze the slow query log (core)

pt-query-digest /www/server/data/mysql-slow.log

The report includes details such as total time, minimum time, maximum time, average time, 95th-percentile time, standard deviation and median.

Things to focus on:

1. SQL that is executed many times and takes a long time each time: look at the first few queries in the profile (Rank, Query ID), and pay attention to each statement's execution count and its percentage of the total; statements that account for a large share of executions deserve scrutiny.

2. SQL with heavy IO: check the Rows examine item; the more rows scanned, the heavier the IO.

3. SQL that may be missing an index: a large gap between Rows examine and Rows sent.

# Profile
# Rank Query ID                            Response time   Calls R/Call   V/M   Item
# ==== =================================== =============== ===== ======== ===== ===============
#    1 0xAC8DD5BBF3975693C05247449313884D  208.7543 100.0%     1 208.7543  0.00 CALL insert_temp

# Query 1: 0 QPS, 0x concurrency, ID 0xAC8DD5BBF3975693C05247449313884D at byte 0
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.00
# Time range: all events occurred at 2021-12-23T03:22:14
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count        100       1
# Exec time    100    209s    209s    209s    209s    209s       0    209s
# Lock time    100   139us   139us   139us   139us   139us       0   139us
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   0       0       0       0       0       0       0       0
# Query size   100      18      18      18      18      18       0      18
# String:
# Databases    demo
# Hosts        125.71.203.164
# Users        root
# Query_time distribution
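
Some commonly used variants of the analysis command (the dates and paths below are only placeholders): --since/--until restrict the analysis to a time window, --limit controls how many query fingerprints are reported, and the report can simply be redirected to a file:

# analyze one day's entries and save the report to a file
pt-query-digest --since '2021-12-22 00:00:00' --until '2021-12-23 00:00:00' /www/server/data/mysql-slow.log > slow_report.txt

# report only the top 5 query fingerprints
pt-query-digest --limit 5 /www/server/data/mysql-slow.log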

Other common commands

View server information: pt-summary

View disk overhead information: pt-diskstats

View MySQL database information: pt-mysql-summary --user=root --password=123456

View deadlock information: pt-deadlock-logger --run-time=10 --interval=3 --create-dest-table --dest D=test,t=deadlocks u=root,p=123456

Analyze index usage from the slow query log: pt-index-usage --user=root --password=123456 --host=localhost /www/server/data/mysql-slow.log

Find duplicate indexes in database tables: pt-duplicate-key-checker --host=localhost --user=root --password=123456

Check the current active IO overhead of MySQL tables and files with pt-ioprofile; avoid running it during peak hours

pt-find finds MySQL tables and executes commands against them

1. Find tables larger than 1MB in the database: pt-find --user=root --password=123456 --tablesize +1M

2. Show table and index sizes, sorted: pt-find --user=root --password=123456 --printf "%T\t%D.%N\n" | sort -rn

pt-kill kills MySQL queries or connections that match the given criteria

1. Print queries that have been running for more than 3 seconds: pt-kill --user=root --password=123456 --busy-time 3 --print

2. Kill queries that have been running for more than 3 seconds: pt-kill --user=root --password=123456 --busy-time 3 --kill
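
To reduce the risk of killing writes, pt-kill can also filter on the statement text with --match-info; a sketch, where the regex is only an example:

# only consider long-running SELECT statements, print them and kill them
pt-kill --user=root --password=123456 --busy-time 3 --match-info '^(select|SELECT)' --print --kill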

Cluster related commands

View slave libraries and replication status: pt-slave-find --host=localhost --user=root --password=123456

Compare differences between configuration files: pt-config-diff /etc/my.cnf /root/my_master.cnf

View grants, and generate grant statements for copying authorization to another server:
pt-show-grants --user=root --password=123456

pt-show-grants --user=root --password=123456 --separate --revoke

Verify the integrity of master-slave replication; run it after replication has been set up: pt-table-checksum --user=root --password=123456
