Concept
The purpose of concurrent programming is to make programs faster: given the same workload, two people get it done sooner than one. There is a premise, though. Let's walk through the idea with an example.
For instance:
10 bottles of beer and 1 screwdriver: one person opens 1 bottle per second with the screwdriver, so 10 bottles take 10 seconds;
With two screwdrivers but still only one person, that person can only use one screwdriver at a time, so no matter how many screwdrivers there are, it still takes 10 seconds;
With two people and two screwdrivers, each person takes a screwdriver and opens five bottles, working at the same time, so it only takes five seconds.
Think of the people as our program: one person is a single thread, several people are multiple threads. Think of each beer bottle as a task to complete, and the screwdriver as a CPU core: just as you need a screwdriver to open a bottle, you need a CPU core to execute a thread. When there are enough screwdrivers and enough people, many bottles get opened at the same time; when there are enough CPU cores and enough threads, tasks finish faster. That is the theory. Now look at the following example:
```java
public class SpeedTest {

    /** number of iterations */
    private static long count = 10000 * 10000 * 10;

    public static void main(String[] args) throws InterruptedException {
        // parallel
        concurrent();
        // serial
        serial();
    }

    private static void concurrent() throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread thread = new Thread(() -> {
            long a = 0L;
            for (int i = 0; i < count; i++) {
                a += 1;
            }
            System.out.printf("concurrent, a=%s%n", a);
        });
        thread.start();
        long b = 0L;
        for (int i = 0; i < count; i++) {
            b += 1;
        }
        thread.join();
        long end = System.currentTimeMillis();
        System.out.printf("concurrent, b=%s, count=%s, time=%sms%n", b, count, end - start);
    }

    /** serial */
    private static void serial() {
        long a = 0L;
        long b = 0L;
        long start = System.currentTimeMillis();
        for (int i = 0; i < count; i++) {
            a += 1;
        }
        System.out.printf("serial, a=%s%n", a);
        for (int i = 0; i < count; i++) {
            b += 1;
        }
        long end = System.currentTimeMillis();
        System.out.printf("serial, b=%s, count=%s, time=%sms%n", b, count, end - start);
    }
}
```
Time consumption for different values of count:

count = 100,000,000 (1e8):

```
concurrent, b=100000000, count=100000000, time=127ms
serial, b=100000000, count=100000000, time=90ms
```

count = 1,000,000,000 (1e9):

```
concurrent, b=1000000000, count=1000000000, time=527ms
serial, b=1000000000, count=1000000000, time=852ms
```
So with count = 100 million, serial is still faster; only at 1 billion does the concurrent version pull ahead. The reason is that creating a thread and switching between threads has a fixed cost, and for a small workload that overhead outweighs the gain from running on two cores.
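Part of that fixed cost is simply starting and joining the thread. A minimal sketch of measuring it (the class and method names here are my own, not from the original code; the number is machine-dependent):

```java
public class ThreadOverhead {
    // time, in nanoseconds, to create, start and join one thread that does nothing
    static long oneThreadCost() throws InterruptedException {
        long start = System.nanoTime();
        Thread t = new Thread(() -> { });
        t.start();
        t.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        oneThreadCost(); // warm-up
        long cost = oneThreadCost();
        System.out.printf("starting and joining an empty thread took ~%d µs%n", cost / 1_000);
    }
}
```

Even an empty thread typically costs tens of microseconds to start, which is why tiny workloads are better done serially.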
CPU time slice allocation
Think about it carefully: when there is only one screwdriver, can more than one person still work? Of course. The screwdriver is used in turns: you open a bottle, then I open a bottle. It is the same with the CPU: on a single-core CPU, multiple threads take turns executing, first yours, then mine. Dividing the CPU's time into short slices and handing them out to threads in turn is called CPU time slice allocation.
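A small sketch of this idea: start far more CPU-bound threads than the machine has cores. Thanks to time slicing they all still run to completion, even though they cannot all hold a core at once (class and method names here are illustrative, not from the original post):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TimeSlicing {
    // starts `threads` busy workers and returns how many of them finished
    static int runWorkers(int threads) throws InterruptedException {
        AtomicInteger finished = new AtomicInteger();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                long sum = 0;
                for (int j = 0; j < 1_000_000; j++) {
                    sum += j; // pure CPU work, no blocking
                }
                finished.incrementAndGet();
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        int cores = Runtime.getRuntime().availableProcessors();
        int threads = cores * 4; // deliberately far more threads than cores
        System.out.printf("%d of %d threads finished on %d cores%n",
                runWorkers(threads), threads, cores);
    }
}
```

All the workers finish because the scheduler keeps rotating the available cores among them, exactly like the shared screwdriver.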
Context switching
The operating system allocates time slices to threads according to a scheduling algorithm. To continue the analogy: suppose I hand you the screwdriver when my bottle cap is half open. However you use it, I do not start over when the screwdriver comes back to me, because before handing it over I recorded which bottle I was on and how far the cap had come; I resume from exactly that position. The CPU works the same way: when a thread's time slice runs out, the thread's task state is saved, and when the scheduler runs that thread again, execution continues from where it stopped. This process of saving the state and later reloading it is called a context switch. It follows that multithreading is not necessarily faster, because context switching itself takes time: when CPU resources are insufficient, opening more threads can actually make the program slower.
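One rough way to feel the cost of a context switch is a wait/notify "ping-pong" between two threads, where every hand-off of the lock forces the scheduler to suspend one thread and resume the other. This is only a sketch (names are mine, and the measured time also includes lock and wake-up overhead, so treat the number as indicative, not exact):

```java
public class ContextSwitchCost {
    private static final Object lock = new Object();
    private static int turn = 0; // whose turn it is: 0 = main thread, 1 = second thread

    // forces roughly 2 * rounds hand-offs between two threads,
    // returns the average nanoseconds per hand-off
    static long measure(int rounds) throws InterruptedException {
        Thread other = new Thread(() -> run(1, rounds));
        long start = System.nanoTime();
        other.start();
        run(0, rounds);
        other.join();
        return (System.nanoTime() - start) / (2L * rounds);
    }

    private static void run(int me, int rounds) {
        synchronized (lock) {
            for (int i = 0; i < rounds; i++) {
                while (turn != me) {
                    try {
                        lock.wait(); // give up the lock until it is our turn again
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                turn = 1 - me; // pass the "screwdriver" to the other thread
                lock.notify(); // wake it up
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.printf("~%d ns per hand-off%n", measure(100_000));
    }
}
```

Each hand-off typically costs on the order of microseconds, which is exactly the overhead that made the concurrent version lose at count = 100 million above.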
Threads and processes
One more example: a group of people in one room is opening beer bottles, a group in the next room is opening cola, and there is only one screwdriver, handed back and forth between the rooms. That is a single-core CPU executing two processes, and the people inside each room are the threads. So a process can contain multiple threads, and threads always live inside a process. Because the CPU runs very fast and each time slice is on the order of milliseconds, even a single-core CPU can serve multiple programs at what appears to be the same time, without anyone noticing the switching.
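The "rooms and people" relationship can also be seen from code. The sketch below (my own example, requires Java 9+ for `ProcessHandle`) prints the same process id from three different threads, showing one process containing several threads:

```java
public class ProcessAndThreads {
    public static void main(String[] args) throws InterruptedException {
        long pid = ProcessHandle.current().pid(); // id of this process (the "room")
        Runnable who = () -> System.out.printf("pid=%d, thread=%s%n",
                pid, Thread.currentThread().getName());

        who.run(); // runs on the main thread
        Thread a = new Thread(who, "opener-1");
        Thread b = new Thread(who, "opener-2");
        a.start();
        b.start();
        a.join();
        b.join();
        // every line shows the same pid: one process, three threads
    }
}
```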