1, Slow query log learning
1. What is slow query log?
- The slow query log is a type of log provided by MySQL. It records the statements whose response time exceeds a threshold: any statement whose running time is longer than the value of long_query_time is written to the slow query log.
- The default value of long_query_time is 10, which means statements that run for more than 10 seconds are recorded.
- The slow query log is used to find the SQL that exceeds our maximum tolerable time. For example, if a statement runs for more than 5 seconds, we want to collect all such statements and analyze them comprehensively together with the EXPLAIN output covered earlier (a simple illustration follows below).
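As a quick illustration (a sketch, assuming the default threshold of 10 seconds and that the log has already been enabled as described below), a deliberately slow statement would be captured like this:

```sql
-- Sleeps for 11 seconds, which exceeds the default long_query_time of 10,
-- so this statement would be written to the slow query log.
SELECT SLEEP(11);
```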
Special note
By default, the MySQL database does not enable the slow query log; this parameter has to be set manually.
If it is not needed for tuning, it is generally not recommended to enable it, because the slow query log has a small but real performance cost. The slow query log supports writing its records to a file.
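A related check (an addition, not part of the original notes): the Slow_queries status counter reports how many statements have exceeded long_query_time since the server started, and it is maintained even when the slow query log itself is disabled.

```sql
-- Number of statements that exceeded long_query_time since server start
SHOW GLOBAL STATUS LIKE 'Slow_queries';
```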
2. Check whether and how the slow query log is enabled
- Check whether the slow query log is enabled: SHOW VARIABLES LIKE '%slow_query_log%';
- Enable the slow query log: SET GLOBAL slow_query_log = 1;. Enabling it this way only affects the currently running instance; once MySQL restarts, the setting is lost (see the sketch below).
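A minimal runtime sketch combining enable, file location, and verification (the file path is only an example; both settings revert to the configured defaults after a restart):

```sql
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';  -- example path
SHOW VARIABLES LIKE '%slow_query_log%';                      -- verify both settings
```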
If you want to enable the slow query log permanently, you need to modify the my.cnf file and add or modify the parameters under [mysqld].
```ini
# my.cnf
[mysqld]
# 1. Enable the slow query log. Note that ON must be capitalized
slow_query_log=ON
# 2. The file that stores the slow queries. If this file does not exist, you need to create it yourself
slow_query_log_file=/var/lib/mysql/slow.log
```
After the slow query log is enabled, what kind of SQL will be recorded in the slow query log?
This is determined by the parameter long_query_time. By default, the value of long_query_time is 10 seconds.
View long_query_time in MySQL: SHOW VARIABLES LIKE 'long_query_time%';
To modify long_query_time, either set it at runtime or change it in the my.cnf configuration file:
```sql
-- Method 1: set it at runtime
SET GLOBAL long_query_time = 1;
```

```ini
# Method 2: in my.cnf under [mysqld]
[mysqld]
# Slow query threshold; here it is set to 1 second
long_query_time=1
# Enable the slow query log
slow_query_log=1
# File that stores the slow query log
slow_query_log_file=/var/lib/mysql/atguigu-slow.log
# Write the log to a file instead of the console
log_output=FILE
```
View the log file: cat /var/lib/mysql/slow.log
```
/usr/sbin/mysqld, Version: 5.7.34 (MySQL Community Server (GPL)). started with:
Tcp port: 3306  Unix socket: /var/run/mysqld/mysqld.sock
Time                 Id Command    Argument
# Time: 2021-10-27T14:03:33.201362Z
# User@Host: root[root] @ DESKTOP-2JU9EV3.lan [192.168.199.132]  Id:    11
# Query_time: 4.001308  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
use db01;
SET timestamp=1635343413;
select sleep(4);
```
3. Use of the log analysis tool mysqldumpslow
Log analysis tool mysqldumpslow: in a production environment, manually reading the log to find and analyze slow SQL is obviously laborious manual work, so MySQL provides the log analysis tool mysqldumpslow.
```
# 1. mysqldumpslow --help shows the help information of mysqldumpslow
root@1dcb5644392c:/usr/bin# mysqldumpslow --help
Usage: mysqldumpslow [ OPTS... ] [ LOGS... ]

Parse and summarize the MySQL slow query log. Options are

  --verbose    verbose
  --debug      debug
  --help       write this text to standard output

  -v           verbose
  -d           debug
  -s ORDER     what to sort by (al, at, ar, c, l, r, t), 'at' is default
                 al: average lock time
                 ar: average rows sent
                 at: average query time
                  c: count (number of occurrences)
                  l: lock time
                  r: rows sent (records returned)
                  t: query time
  -r           reverse the sort order (largest last instead of first)
  -t NUM       just show the top n queries
  -a           don't abstract all numbers to N and strings to 'S'
  -n NUM       abstract numbers with at least n digits within names
  -g PATTERN   grep: only consider stmts that include this string
  -h HOSTNAME  hostname of db server for *-slow.log filename (can be wildcard),
               default is '*', i.e. match all
  -i NAME      name of server instance (if using mysql.server startup script)
  -l           don't subtract lock time from total time

# 2. Examples
# 2.1 Get the 10 SQL statements that return the most records
mysqldumpslow -s r -t 10 /var/lib/mysql/slow.log
# 2.2 Get the 10 most frequently executed SQL statements
mysqldumpslow -s c -t 10 /var/lib/mysql/slow.log
# 2.3 Get the top 10 query statements containing "left join", sorted by query time
mysqldumpslow -s t -t 10 -g "left join" /var/lib/mysql/slow.log
# 2.4 It is recommended to combine these commands with | and more, otherwise the output floods the screen
mysqldumpslow -s r -t 10 /var/lib/mysql/slow.log | more
```
2, Batch data script
Because the slow query log and bin-log are enabled, we must set a parameter that allows functions to be created; otherwise an error will be reported when creating the functions below.
```
# Set in mysql
# log_bin_trust_function_creators is OFF by default and needs to be turned on manually
mysql> SHOW VARIABLES LIKE 'log_bin_trust_function_creators';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| log_bin_trust_function_creators | OFF   |
+---------------------------------+-------+
1 row in set (0.00 sec)

mysql> SET GLOBAL log_bin_trust_function_creators=1;
Query OK, 0 rows affected (0.00 sec)
```
The modification above becomes invalid after MySQL is restarted. Modifying the my.cnf configuration file makes it permanent:
```ini
[mysqld]
log_bin_trust_function_creators=ON
```
2. Create function
```sql
# 1. Function: generate a random string
DELIMITER $$
CREATE FUNCTION rand_string(n INT) RETURNS VARCHAR(255)
BEGIN
    DECLARE chars_str VARCHAR(100) DEFAULT 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ';
    DECLARE return_str VARCHAR(255) DEFAULT '';
    DECLARE i INT DEFAULT 0;
    WHILE i < n DO
        SET return_str = CONCAT(return_str, SUBSTRING(chars_str, FLOOR(1 + RAND() * 52), 1));
        SET i = i + 1;
    END WHILE;
    RETURN return_str;
END $$

# 2. Function: randomly generate a department number
DELIMITER $$
CREATE FUNCTION rand_num() RETURNS INT(5)
BEGIN
    DECLARE i INT DEFAULT 0;
    SET i = FLOOR(100 + RAND() * 10);
    RETURN i;
END $$
```
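Before building the stored procedures, the two functions can be sanity-checked with a quick call (a sketch, not part of the original script):

```sql
DELIMITER ;
SELECT rand_string(10);  -- a random 10-character string
SELECT rand_num();       -- an integer between 100 and 109
```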
3. Create stored procedure
```sql
# 1. Procedure: batch insert into the dept table
DELIMITER $$
CREATE PROCEDURE insert_dept(IN START INT(10), IN max_num INT(10))
BEGIN
    DECLARE i INT DEFAULT 0;
    SET autocommit = 0;
    REPEAT
        SET i = i + 1;
        INSERT INTO dept(deptno, dname, loc)
        VALUES((START + i), rand_string(10), rand_string(8));
    UNTIL i = max_num END REPEAT;
    COMMIT;
END $$

# 2. Procedure: batch insert into the emp table
DELIMITER $$
CREATE PROCEDURE insert_emp(IN START INT(10), IN max_num INT(10))
BEGIN
    DECLARE i INT DEFAULT 0;
    SET autocommit = 0;
    REPEAT
        SET i = i + 1;
        INSERT INTO emp(empno, ename, job, mgr, hiredata, sal, comm, deptno)
        VALUES((START + i), rand_string(6), 'SALESMAN', 0001, CURDATE(), 2000, 400, rand_num());
    UNTIL i = max_num END REPEAT;
    COMMIT;
END $$
```
4. Call stored procedure
```sql
# 1. Call the stored procedure to insert 10 departments into the dept table
DELIMITER ;
CALL insert_dept(100,10);

# 2. Call the stored procedure to insert 500000 rows into the emp table
CALL insert_emp(100001,500000);
```
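After the calls finish, the inserted data can be verified with a couple of simple queries (a sketch; the row counts assume the parameters used above):

```sql
SELECT COUNT(*) FROM dept;   -- expect 10 newly inserted departments
SELECT COUNT(*) FROM emp;    -- expect 500000 rows
SELECT * FROM emp LIMIT 5;   -- spot-check a few rows
```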
3, Show Profile
Show Profile is provided by MySQL to analyze the resource consumption of statement execution in the current session.
The measurements can be used for SQL tuning. By default the feature is turned off, and when enabled it keeps the results of the last 15 statements.
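If 15 saved runs are not enough, the history size can be raised (an optional tweak, not required for the steps below; the upper limit is 100):

```sql
-- Keep the last 50 profiled statements instead of the default 15
SET profiling_history_size = 50;
```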
Analysis steps
- 1. Check whether Show Profile is supported and enabled
```
# Check whether the Show Profile function is enabled
mysql> SHOW VARIABLES LIKE 'profiling';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| profiling     | OFF   |
+---------------+-------+
1 row in set (0.00 sec)
```
- 2. Turn on the Show Profile function. It is off by default and needs to be turned on before use.
```
# Turn on the Show Profile function
mysql> SET profiling=ON;
Query OK, 0 rows affected, 1 warning (0.00 sec)
```
- 3. Run SQL
```sql
SELECT * FROM `emp` GROUP BY `id`%10 LIMIT 150000;
SELECT * FROM `emp` GROUP BY `id`%20 ORDER BY 5;
```
- 4. View the results and execute SHOW PROFILES;
Duration: the total execution time of each statement.
```
mysql> SHOW PROFILES;
+----------+------------+---------------------------------------------------+
| Query_ID | Duration   | Query                                             |
+----------+------------+---------------------------------------------------+
|        1 | 0.00156100 | SHOW VARIABLES LIKE 'profiling'                   |
|        2 | 0.56296725 | SELECT * FROM `emp` GROUP BY `id`%10 LIMIT 150000 |
|        3 | 0.52105825 | SELECT * FROM `emp` GROUP BY `id`%10 LIMIT 150000 |
|        4 | 0.51279775 | SELECT * FROM `emp` GROUP BY `id`%20 ORDER BY 5   |
+----------+------------+---------------------------------------------------+
4 rows in set, 1 warning (0.00 sec)
```
- 5. Diagnose SQL, SHOW PROFILE cpu,block io FOR QUERY Query_ID;
```
# Here 3 is the Query_ID from step 4.
# SHOW PROFILE shows the complete life cycle of the SQL statement.
mysql> SHOW PROFILE cpu,block io FOR QUERY 3;
+----------------------+----------+----------+------------+--------------+---------------+
| Status               | Duration | CPU_user | CPU_system | Block_ops_in | Block_ops_out |
+----------------------+----------+----------+------------+--------------+---------------+
| starting             | 0.000097 | 0.000090 | 0.000002   |            0 |             0 |
| checking permissions | 0.000010 | 0.000009 | 0.000000   |            0 |             0 |
| Opening tables       | 0.000039 | 0.000058 | 0.000000   |            0 |             0 |
| init                 | 0.000046 | 0.000046 | 0.000000   |            0 |             0 |
| System lock          | 0.000011 | 0.000000 | 0.000000   |            0 |             0 |
| optimizing           | 0.000005 | 0.000000 | 0.000000   |            0 |             0 |
| statistics           | 0.000023 | 0.000037 | 0.000000   |            0 |             0 |
| preparing            | 0.000014 | 0.000000 | 0.000000   |            0 |             0 |
| Creating tmp table   | 0.000041 | 0.000053 | 0.000000   |            0 |             0 |
| Sorting result       | 0.000005 | 0.000000 | 0.000000   |            0 |             0 |
| executing            | 0.000003 | 0.000000 | 0.000000   |            0 |             0 |
| Sending data         | 0.520620 | 0.516267 | 0.000000   |            0 |             0 |
| Creating sort index  | 0.000060 | 0.000051 | 0.000000   |            0 |             0 |
| end                  | 0.000006 | 0.000000 | 0.000000   |            0 |             0 |
| query end            | 0.000011 | 0.000000 | 0.000000   |            0 |             0 |
| removing tmp table   | 0.000006 | 0.000000 | 0.000000   |            0 |             0 |
| query end            | 0.000004 | 0.000000 | 0.000000   |            0 |             0 |
| closing tables       | 0.000009 | 0.000000 | 0.000000   |            0 |             0 |
| freeing items        | 0.000032 | 0.000064 | 0.000000   |            0 |             0 |
| cleaning up          | 0.000019 | 0.000000 | 0.000000   |            0 |             0 |
+----------------------+----------+----------+------------+--------------+---------------+
20 rows in set, 1 warning (0.00 sec)
```
Show Profile query parameter remarks (a combined usage sketch follows this list):
- ALL: display all overhead information.
- BLOCK IO: display block-IO related overhead (commonly used).
- CONTEXT SWITCHES: display context-switch related overhead.
- CPU: display CPU related overhead (commonly used).
- IPC: display overhead related to sending and receiving messages.
- MEMORY: display memory related overhead.
- PAGE FAULTS: display page-fault related overhead.
- SOURCE: display Source_function, Source_file, and Source_line.
- SWAPS: display swap-count related overhead.
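Several of these types can be combined in a single statement. A sketch, reusing Query_ID 3 from the example above:

```sql
SHOW PROFILE CPU, BLOCK IO, MEMORY FOR QUERY 3;
```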
- 6. Show Profile status list: conclusions for daily development (see the query sketch after this list):
- converting HEAP to MyISAM: the query result is too large to fit in memory, so it is being moved to disk.
- Creating tmp table: creating a temporary table (data is copied into the temporary table and deleted after use), which costs database performance.
- Copying to tmp table on disk: the in-memory temporary table is being copied to disk, which is dangerous!!!
- locked: the statement is blocked waiting for locks.
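The same profiling data can also be queried with plain SQL when these warning states need to be checked programmatically (a sketch; the information_schema.PROFILING table exists in MySQL 5.7, though it is deprecated in later versions):

```sql
SELECT state, duration
FROM information_schema.profiling
WHERE query_id = 3      -- the Query_ID reported by SHOW PROFILES
ORDER BY seq;
```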
4, Global query log
It should only be used in a test environment; never enable it in production.
In MySQL's my.cnf, configure it as follows:

```ini
# Enable the general query log
general_log=1
# Path to the log file
general_log_file=/path/logfile
# Output format
log_output=FILE
```
Alternatively, enable it at runtime and write the log to a table:

```sql
set global general_log=1;
set global log_output='TABLE';
-- SQL statements executed afterwards are recorded in the general_log table of the mysql database
-- and can be viewed with:
select * from mysql.general_log;
```
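When testing is finished, the global query log should be switched off again and the recorded statements can be cleared (a sketch):

```sql
SET GLOBAL general_log = 0;          -- stop recording every statement
TRUNCATE TABLE mysql.general_log;    -- optionally discard what was recorded
```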