stress is a simple stress-testing tool for Linux.
I have seen people use this tool to build resource-exhaustion scenarios, and others use it for chaos testing. Note that it does not simulate business problems, only system-level resource problems, so what it produces can be very different from a real business scenario.
Because in performance work people often misuse tools they don't understand, I'll take this tool apart and explain it here.
1, Install stress
yum install -y stress
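On Debian- or Ubuntu-based systems, the same package is generally available from the default repositories (assuming your distribution carries it):

apt-get install -y stress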
2, stress parameters
[root@7DGroupT1 ~]# stress
`stress' imposes certain types of compute stress on your system

Usage: stress [OPTION [ARG]] ...
 -?, --help         show this help statement
     --version      show version statement
 -v, --verbose      be verbose
 -q, --quiet        be quiet
 -n, --dry-run      show what would have been done
 -t, --timeout N    timeout after N seconds
     --backoff N    wait factor of N microseconds before work starts
 -c, --cpu N        spawn N workers spinning on sqrt()
 -i, --io N         spawn N workers spinning on sync()
 -m, --vm N         spawn N workers spinning on malloc()/free()
     --vm-bytes B   malloc B bytes per vm worker (default is 256MB)
     --vm-stride B  touch a byte every B bytes (default is 4096)
     --vm-hang N    sleep N secs before free (default none, 0 is inf)
     --vm-keep      redirty memory instead of freeing and reallocating
 -d, --hdd N        spawn N workers spinning on write()/unlink()
     --hdd-bytes B  write B bytes per hdd worker (default is 1GB)

Example: stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 10s

Note: Numbers may be suffixed with s,m,h,d,y (time) or B,K,M,G (size).
The parameters are quite simple. At a glance, you can see that it can simulate consumption of the common, important resources: CPU, IO, memory, and disk.
Let's take a look.
3, Simulating CPU
[root@7DGroupT1 ~]# stress -c 4 -t 100
top - 10:48:11 up 63 days, 23:57,  2 users,  load average: 0.67, 1.41, 4.21
Tasks: 122 total,   5 running, 117 sleeping,   0 stopped,   0 zombie
%Cpu0  : 99.7 us,  0.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :100.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  8010528 total,  5550792 free,  1866688 used,   593048 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  5762564 avail Mem
The parameters for simulating CPU are very concise. Print the stack of one of the stress processes and have a look:
[root@s6 ~]# pstack 29253
#0  0x00007f123634761b in random () from /usr/lib64/libc.so.6
#1  0x00007f1236347b39 in rand () from /usr/lib64/libc.so.6
#2  0x0000557e9ea32dbd in hogcpu ()
#3  0x0000557e9ea3180a in main ()
[root@s6 ~]#
In fact, the code is very simple: it is the hogcpu function. The source is as follows:
int
hogcpu (void)
{
  while (1)
    sqrt (rand ());   /* spin forever: draw a random number and take its square root */

  return 0;
}
Do you think you could write one yourself after reading it? Isn't it just a while loop?
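For illustration, here is a minimal, self-contained sketch of the same idea, using fork() to spawn N workers the way stress does. This is my own sketch, not the stress source; the program name and compile command are made up for the example.

/* cpuhog.c -- a minimal sketch of `stress -c N`: fork N workers,
 * each spinning on sqrt(rand()).
 * Compile: gcc -o cpuhog cpuhog.c -lm
 * Run:     ./cpuhog 4     (stop with Ctrl-C or kill)
 */
#include <math.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int workers = (argc > 1) ? atoi(argv[1]) : 1;
    int i;

    for (i = 0; i < workers; i++) {
        if (fork() == 0) {        /* child process: burn CPU forever */
            while (1)
                sqrt(rand());
        }
    }
    for (i = 0; i < workers; i++)
        wait(NULL);               /* parent blocks until the children are killed */
    return 0;
}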
4, Simulating memory
[root@7DGroupT1 ~]# stress --vm 30 --vm-bytes 1G --vm-hang 50 --timeout 50s
[root@7DGroupT1 ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff   cache   si   so     bi    bo    in    cs us sy id wa st
 0  0      0 5606796   6828  534736    0    0   4548   212   457   710  0  0 99  1  0
 0  0      0 5597852   6976  544360    0    0   9712    52   666  1163  0  0 99  1  0
 0  0      0 5595060   7136  545828    0    0   1752     0   440   580  0  0 98  2  0
30  0      0 2125872   7136  546040    0    0      8     0  1098   522  0 21 79  0  0
 0 14      0  100896    200  211224    0    0 529748  2932 25058 43164  1 51  4 44  0

[root@7DGroupT1 ~]# sar -B 1
Linux 3.10.0-514.21.1.el7.x86_64 (7DGroupT1)  10/03/2019  _x86_64_  (4 CPU)

10:52:49 AM  pgpgin/s pgpgout/s   fault/s  majflt/s   pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff
10:52:50 AM      8.00     68.00    122.00      0.00    1143.00      0.00      0.00      0.00      0.00
10:52:51 AM      0.00      0.00     29.00      0.00      25.00      0.00      0.00      0.00      0.00
10:52:52 AM      0.00      0.00    184.00      0.00      45.00      0.00      0.00      0.00      0.00
10:52:53 AM      0.00      0.00   2482.00      0.00     804.00      0.00      0.00      0.00      0.00
10:52:54 AM    870.77   2436.92 172704.62      2.31   92710.00  38558.46  26820.00  61888.46     94.66
10:52:56 AM  76853.61    618.56  34391.24     82.47 1297422.16 205238.14 404672.68  14717.53      2.41
10:52:57 AM 125560.00    300.00   4875.00    110.00    5040.00      0.00      0.00      0.00      0.00
10:52:58 AM 111080.00      0.00   8940.00     68.00    4723.00      0.00      0.00      0.00      0.00
10:52:59 AM  80944.00      0.00   5725.00     40.00    1636.00      0.00      0.00      0.00      0.00
10:53:00 AM  26224.00    300.00  37293.00      2.00     534.00      0.00      0.00      0.00      0.00
10:53:01 AM   7344.00    180.00   1092.00      0.00   17475.00      0.00      0.00      0.00      0.00
10:53:02 AM  24576.00    224.00   5725.00     41.00    1866.00      0.00      0.00      0.00      0.00

[root@7DGroupT1 ~]# sar -r 1
Linux 3.10.0-514.21.1.el7.x86_64 (7DGroupT1)  10/03/2019  _x86_64_  (4 CPU)

10:56:55 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
10:57:13 AM   5582520   2428008     30.31      7648    447240   5643636     70.45   1780508    443288      3400
10:57:14 AM    101544   7908984     98.73      6736    375896  37108100    463.24   7328268    367028      3404
From the data above we can indeed see large numbers of page faults, including major faults (the majflt/s column), along with heavy page scanning and reclaiming (pgscank/s, pgscand/s, pgsteal/s). This is an inevitable phenomenon when simulating memory consumption. As I have stressed before, to judge whether memory is sufficient, these faults are what you look at.
How does stress simulate memory? Take a look at the source:
int
hogvm (long long bytes, long long stride, long long hang, int keep)
{
  long long i;
  char *ptr = 0;
  char c;
  int do_malloc = 1;

  while (1)
    {
      if (do_malloc)
        {
          dbg (stdout, "allocating %lli bytes ...\n", bytes);
          if (!(ptr = (char *) malloc (bytes * sizeof (char))))
            {
              err (stderr, "hogvm malloc failed: %s\n", strerror (errno));
              return 1;
            }
          if (keep)
            do_malloc = 0;
        }

      dbg (stdout, "touching bytes in strides of %lli bytes ...\n", stride);
      for (i = 0; i < bytes; i += stride)
        ptr[i] = 'Z';           /* Ensure that COW happens.  */

      if (hang == 0)
        {
          dbg (stdout, "sleeping forever with allocated memory\n");
          while (1)
            sleep (1024);
        }
      else if (hang > 0)
        {
          dbg (stdout, "sleeping for %llis with allocated memory\n", hang);
          sleep (hang);
        }

      for (i = 0; i < bytes; i += stride)
        {
          c = ptr[i];
          if (c != 'Z')
            {
              err (stderr, "memory corruption at: %p\n", ptr + i);
              return 1;
            }
        }

      if (do_malloc)
        {
          free (ptr);
          dbg (stdout, "freed %lli bytes\n", bytes);
        }
    }

  return 0;
}
It is an infinite loop around malloc(): each worker allocates the requested bytes, touches one byte per stride so that physical pages actually get committed, sleeps for --vm-hang seconds, verifies the contents, and then frees and reallocates (unless --vm-keep is set).
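One detail worth pausing on: malloc() alone only reserves virtual address space; on Linux, physical pages are committed when they are first written, which is why hogvm touches one byte per stride. Here is a minimal sketch of my own (not part of stress) that makes the effect visible; watch the RES column in top while it runs. It assumes Linux with 4096-byte pages, the same default stride stress uses.

/* memhog.c -- show that malloc() reserves, while writing commits.
 * Compile: gcc -o memhog memhog.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t bytes = 256UL * 1024 * 1024;   /* 256 MB, stress's default */
    size_t i;
    char *ptr = malloc(bytes);
    if (!ptr) { perror("malloc"); return 1; }

    printf("allocated but not touched -- RSS stays small\n");
    sleep(10);

    for (i = 0; i < bytes; i += 4096)
        ptr[i] = 'Z';                     /* fault in every page */

    printf("touched every page -- RSS is now ~256 MB\n");
    sleep(30);

    free(ptr);
    return 0;
}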
5, Simulating disk
[root@7DGroupT1 ~]# stress --hdd 5 --hdd-bytes 1G
[root@7DGroupT1 ~]# top
top - 10:35:15 up 63 days, 23:44,  2 users,  load average: 9.14, 8.49, 8.29
Tasks: 124 total,   2 running, 122 sleeping,   0 stopped,   0 zombie
%Cpu0  :  0.0 us,  5.8 sy,  0.0 ni,  0.0 id, 94.2 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu1  :  1.0 us,  1.0 sy,  0.0 ni, 14.4 id, 83.6 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu2  :  0.0 us,  4.4 sy,  0.0 ni,  0.0 id, 95.6 wa,  0.0 hi,  0.0 si,  0.0 st
%Cpu3  :  0.0 us,  4.1 sy,  0.0 ni,  0.0 id, 95.9 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  8010528 total,  1940088 free,  1891792 used,  4178648 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  5687416 avail Mem

[root@7DGroupT1 ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free    buff   cache   si   so   bi     bo    in    cs us sy id wa st
 1  7      0 1474132  146392 4499176    0    0    0     12     0     1  0  0 100 0  0
 0  7      0 1384720  146392 4589204    0    0    0 107624   815   966  0  3  8 88  0
 0  7      0 1292432  146392 4681232    0    0    0  98920  1036   987  0  3 13 84  0
 0  7      0 1194932  146392 4777968    0    0    0 115344  1033  1207  0  4  0 96  0
 1  6      0 1094312  146392 4880044    0    0    0 105260   928   930  0  3  5 92  0
 1  6      0  998756  146416 4974944    0    0    0 102812   862   928  0  3  0 97  0
 0  7      0  897960  146448 5075492    0    0    0 131200  1268  1565  1  4  5 91  0
 3  4      0 1626628  146472 4347076    0    0    0  82804  1444  1206  0  8 10 81  0
 0  7      0 2354208  146656 3620344    0    0    0 118112  2229  1256  0 27  2 70  0
 0  7      0 3264136  146804 2709776    0    0    0 110632  1761  1506  0  9  0 90  0
 0  7      0 3143120  146940 2831728    0    0    4 106896  1211  1112  0  5  0 95  0
 0  7      0 2961456  146940 3016064    0    0    0  91456  1484  1298  0  6  0 94  0
^C

[root@7DGroupT1 ~]# sar -d 1
10:54:19 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:54:20 AM  dev253-0    307.00     24.00 229976.00    749.19    126.47    454.24      3.26    100.00
10:54:20 AM   dev11-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

10:54:20 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:54:21 AM  dev253-0    369.00     48.00 228528.00    619.45    127.11    347.24      2.71    100.00
10:54:21 AM   dev11-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

10:54:21 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:54:22 AM  dev253-0    274.00     24.00 203760.00    743.74    127.45    404.61      3.65    100.00
10:54:22 AM   dev11-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

10:54:22 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:54:23 AM  dev253-0    262.00      0.00 202000.00    770.99    127.61    486.35      3.82    100.10
10:54:23 AM   dev11-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

10:54:23 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
10:54:24 AM  dev253-0    288.00      0.00 232352.00    806.78    127.92    479.62      3.47    100.00
10:54:24 AM   dev11-0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

[root@7DGroupT1 ~]# iostat -x -d 1
Device: rrqm/s  wrqm/s   r/s    w/s   rkB/s     wkB/s    avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
vda       0.00 8603.00  4.00 255.00   16.00 101840.00     786.53   125.75 471.71    3.00  479.06   3.86 100.00
scd0      0.00    0.00  0.00   0.00    0.00      0.00       0.00     0.00   0.00    0.00    0.00   0.00   0.00

Device: rrqm/s  wrqm/s   r/s    w/s   rkB/s     wkB/s    avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
vda       0.00 8952.00  4.00 294.00   16.00 117176.00     786.52   127.03 470.79    3.75  477.14   3.36 100.00
scd0      0.00    0.00  0.00   0.00    0.00      0.00       0.00     0.00   0.00    0.00    0.00   0.00   0.00

Device: rrqm/s  wrqm/s   r/s    w/s   rkB/s     wkB/s    avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
vda       0.00 9059.00  1.00 266.00    4.00 101900.00     763.33   126.55 433.52    0.00  435.15   3.75 100.00
scd0      0.00    0.00  0.00   0.00    0.00      0.00       0.00     0.00   0.00    0.00    0.00   0.00   0.00

Device: rrqm/s  wrqm/s   r/s    w/s   rkB/s     wkB/s    avgrq-sz avgqu-sz  await r_await w_await  svctm  %util
vda       0.00 6234.00  0.00 273.00    0.00  96784.00     709.04   127.94 391.18    0.00  391.18   3.67 100.10
scd0      0.00    0.00  0.00   0.00    0.00      0.00       0.00     0.00   0.00    0.00    0.00   0.00   0.00

[root@7DGroupT1 ~]# iotop
Total DISK READ :       0.00 B/s | Total DISK WRITE :     102.23 M/s
Actual DISK READ:       0.00 B/s | Actual DISK WRITE:      75.93 M/s
  TID  PRIO  USER      DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
 8681 be/4  root        0.00 B/s   20.93 M/s  0.00 %  98.97 %  stress --hdd 5 --hdd-bytes 1G
 8677 be/4  root        0.00 B/s   20.23 M/s  0.00 %  95.92 %  stress --hdd 5 --hdd-bytes 1G
 8680 be/4  root        0.00 B/s   20.23 M/s  0.00 %  95.59 %  stress --hdd 5 --hdd-bytes 1G
 8679 be/4  root        0.00 B/s   20.61 M/s  0.00 %  95.17 %  stress --hdd 5 --hdd-bytes 1G
 8678 be/4  root        0.00 B/s   20.23 M/s  0.00 %  95.16 %  stress --hdd 5 --hdd-bytes 1G
 7298 be/4  root        0.00 B/s    0.00 B/s  0.00 %  86.91 %  [kworker/u8:1]
  285 be/3  root        0.00 B/s    0.00 B/s  0.00 %  86.72 %  [jbd2/vda1-8]
16384 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  YDService
    1 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  systemd --syst~deserialize 21
    2 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kthreadd]
    3 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [ksoftirqd/0]
  516 be/4  libstora    0.00 B/s    0.00 B/s  0.00 %   0.00 %  lsmd -d
    5 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kworker/0:0H]
  518 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  acpid
    7 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [migration/0]
    8 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [rcu_bh]
    9 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [rcu_sched]
   10 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [watchdog/0]
   11 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [watchdog/1]
   12 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [migration/1]
   13 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [ksoftirqd/1]
  526 be/4  polkitd     0.00 B/s    0.00 B/s  0.00 %   0.00 %  polkitd --no-d~ Sour~ Thread]
   15 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kworker/1:0H]
   16 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [watchdog/2]
   17 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [migration/2]
   18 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [ksoftirqd/2]
  515 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  python -Es /us~in/tuned -l -P
   20 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kworker/2:0H]
   21 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [watchdog/3]
   22 rt/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [migration/3]
   23 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [ksoftirqd/3]
   25 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kworker/3:0H]
   27 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [kdevtmpfs]
   28 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [netns]
   29 be/4  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [khungtaskd]
   30 be/0  root        0.00 B/s    0.00 B/s  0.00 %   0.00 %  [writeback]
Simulating disk also seems to work very well. Let's look at the source code.
int
hoghdd (long long bytes)
{
  long long i, j;
  int fd;
  int chunk = (1024 * 1024) - 1;        /* Minimize slow writing.  */
  char buff[chunk];

  /* Initialize buffer with some random ASCII data.  */
  dbg (stdout, "seeding %d byte buffer with random data\n", chunk);
  for (i = 0; i < chunk - 1; i++)
    {
      j = rand ();
      j = (j < 0) ? -j : j;
      j %= 95;
      j += 32;
      buff[i] = j;
    }
  buff[i] = '\n';

  while (1)
    {
      char name[] = "./stress.XXXXXX";

      if ((fd = mkstemp (name)) == -1)
        {
          err (stderr, "mkstemp failed: %s\n", strerror (errno));
          return 1;
        }

      dbg (stdout, "opened %s for writing %lli bytes\n", name, bytes);

      dbg (stdout, "unlinking %s\n", name);
      if (unlink (name) == -1)
        {
          err (stderr, "unlink of %s failed: %s\n", name, strerror (errno));
          return 1;
        }

      dbg (stdout, "fast writing to %s\n", name);
      for (j = 0; bytes == 0 || j + chunk < bytes; j += chunk)
        {
          if (write (fd, buff, chunk) == -1)
            {
              err (stderr, "write failed: %s\n", strerror (errno));
              return 1;
            }
        }

      dbg (stdout, "slow writing to %s\n", name);
      for (; bytes == 0 || j < bytes - 1; j++)
        {
          if (write (fd, &buff[j % chunk], 1) == -1)
            {
              err (stderr, "write failed: %s\n", strerror (errno));
              return 1;
            }
        }
      if (write (fd, "\n", 1) == -1)
        {
          err (stderr, "write failed: %s\n", strerror (errno));
          return 1;
        }
      ++j;

      dbg (stdout, "closing %s after %lli bytes\n", name, j);
      close (fd);
    }

  return 0;
}
An infinite loop plus for loops that keep calling write(): each worker creates a temp file in the current working directory ("./stress.XXXXXX"), unlinks it immediately, and then writes the requested bytes into it over and over. This matches the monitoring data we saw above.
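The mkstemp()-plus-unlink() pattern deserves a note: the file name is removed from the directory right away, so you never see the file with ls, yet writes through the still-open file descriptor generate real disk I/O, and the space is reclaimed as soon as the descriptor is closed. Here is a minimal sketch of that pattern (my own illustration, not the stress source; file and program names are made up):

/* diskhog.c -- the write()/unlink() pattern behind `stress --hdd`.
 * Writes 1 GB through an already-unlinked temp file in the current
 * directory, so the I/O shows up in iostat/iotop but no file is visible.
 * Compile: gcc -o diskhog diskhog.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char name[] = "./sketch.XXXXXX";
    static char buf[1024 * 1024];      /* 1 MB write chunk */
    int i;

    int fd = mkstemp(name);            /* create a unique temp file */
    if (fd == -1) { perror("mkstemp"); return 1; }
    unlink(name);                      /* remove the name immediately */

    memset(buf, 'Z', sizeof(buf));
    for (i = 0; i < 1024; i++) {       /* 1024 x 1 MB = 1 GB of writes */
        if (write(fd, buf, sizeof(buf)) == -1) {
            perror("write");
            return 1;
        }
    }
    close(fd);                         /* disk space is reclaimed here */
    return 0;
}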
6, Summary
To sum up what these source snippets show: if your goal is only to consume system-level resources and then observe how an application performs when fewer resources are available, a tool like this is appropriate.
But if you want to simulate problems in your business layer, I advise you not to use such a tool.