Nginx event-driven model (understand it in seconds + the most complete in history)

Nginx event-driven model

The event-driven model is one of the key mechanisms by which the Nginx server achieves complete functionality and good performance.

Event-driven model overview

Strictly speaking, "event-driven" is not a term that belongs only to computer programming. It is an old model for responding to events, widely used in computer programming, public relations, economic activity, and other fields. As the name suggests, being event-driven means that, in the course of continuous processing, the events occurring at the current point in time drive the mobilization of available resources to carry out the related tasks. In computer programming, the event-driven model corresponds to a programming style called event-driven programming.
An event-driven model generally consists of three basic units: the event collector, the event sender, and the event processor.
The event collector is responsible for gathering all events, including those from users (such as mouse clicks and keyboard input), from hardware (such as clock events), and from software (the operating system, the application itself, and so on).
The event sender is responsible for dispatching the events gathered by the collector to the target object. The target object is where the event processor lives; the event processor is responsible for the actual response to a specific event, and its behavior is often not fully determined until the implementation stage.
There are many ways to implement an event-driven mechanism in programming. A useful contrast is batch programming, a relatively primitive style in which the program flow is fixed by the programmer while writing the code, so the order of operations during execution is largely predetermined. Event-driven design, by contrast, emphasizes the randomness of events: it gives the application considerable flexibility to handle all kinds of discrete, random events coming from users, hardware, and the system, which greatly improves interactivity and the flexibility of user operations.
An event-driven mechanism can be implemented in any programming language, although the difficulty varies. A system built on the event-driven model basically has the following architecture: a pre-designed event loop forms the "event collector", which continuously checks for event information to be processed and then uses the "event sender" to pass it to the "event processor". The "event processor" is usually implemented through a virtual-function (callback) mechanism.
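To make the collector → sender → processor structure concrete, here is a minimal, illustrative C event loop. This is not Nginx source code; every name (event_t, collect(), on_any_event()) is invented for the example, and the "collector" simply fabricates a few events instead of reading them from the operating system.

/* Minimal illustrative event loop: collector -> sender -> processor. */
#include <stdio.h>

typedef struct {
    int type;                      /* e.g. 0 = timer, 1 = read, 2 = write  */
    void (*handler)(int type);     /* the "event processor" for this event */
} event_t;

static void on_any_event(int type) {          /* event processor */
    printf("handling event of type %d\n", type);
}

/* "Event collector": a real server would call select/poll/epoll here;
 * this stub just fabricates one event per iteration. */
static int collect(event_t *ev, int round) {
    if (round >= 3) return 0;      /* pretend there are no more events */
    ev->type = round;
    ev->handler = on_any_event;
    return 1;
}

int main(void) {
    event_t ev;
    /* the event loop: collect, then dispatch ("event sender") to the
     * registered processor */
    for (int i = 0; collect(&ev, i); i++) {
        ev.handler(ev.type);
    }
    return 0;
}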

Operating system event-driven processing libraries

The process by which the Nginx server responds to and handles web requests is based on the event-driven model, and it also contains the three basic units: the event collector, the event sender, and the event processor. Nginx's implementations of the "event collector" and "event sender" are not particularly distinctive, so the focus here is on its "event processor".
Generally, when writing a server processing model based on the event-driven model, the "event processor" inside the "target object" can be implemented in the following ways:

  • Each time the "event sender" passes in a request, the "target object" creates a new process and calls the "event processor" within it to handle the request.
  • Each time the "event sender" passes in a request, the "target object" creates a new thread and calls the "event processor" within it to handle the request.
  • Each time the "event sender" passes in a request, the "target object" places it on a list of pending events and uses non-blocking I/O to call the "event processor" to handle the request.

These three approaches each have their own characteristics. The first leads to poor server performance because creating a new process is expensive, but it is relatively simple to implement; a minimal sketch of this process-per-request approach follows below.
The second involves thread synchronization, so it may run into problems such as deadlock, and the coding is more complex.
With the third approach, the program logic is more complex than with the first two.
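As a concrete illustration of the first approach, here is a minimal, hypothetical C sketch of a process-per-request TCP server. It is not Nginx code; the port number and reply message are invented for the example. It shows why the approach is simple but expensive: every accepted connection pays the cost of a fork().

#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);              /* illustrative port */
    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 128);
    signal(SIGCHLD, SIG_IGN);                 /* let the kernel reap children */

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);    /* the "event": a new request */
        if (cfd < 0) continue;
        if (fork() == 0) {                    /* expensive: one process per request */
            close(lfd);
            const char *msg = "hello\n";      /* the "event processor" */
            write(cfd, msg, strlen(msg));
            close(cfd);
            _exit(0);
        }
        close(cfd);                           /* parent goes back to accepting */
    }
}

A thread-per-request version would replace fork() with pthread_create(), trading the process-creation cost for the synchronization issues mentioned above.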

Most web servers adopt the third approach, which has gradually evolved into what are known as "event-driven processing libraries".
Event-driven processing libraries are also called multiplexed I/O methods. The most common are the select model, the poll model, and the epoll model. The Nginx server additionally supports the rtsig model, the kqueue model, the /dev/poll model, and the eventport model. Through its configuration, the Nginx server can use any of these event-driven processing models; they are described in detail below.

The select library

The select library is a basic event-driven model library supported by various versions of the Linux and Windows platforms; the interface definition is essentially the same everywhere, with only minor differences in the meaning of some parameters. The steps for using the select library are as follows.
First, create descriptor sets for the events of interest. For a descriptor you can watch read events, write events, and exception events, so three descriptor sets are created to collect the descriptors for read, write, and exception events respectively.
Second, call the select() function provided by the underlying system and wait for events to occur. Note that whether select() blocks has nothing to do with whether the descriptors are set to non-blocking I/O.
Then, poll every descriptor in each descriptor set, check whether the corresponding event has occurred, and handle it if so.
If no other high-performance event-driven model library is specified when Nginx is compiled, this library is compiled in automatically. The --with-select_module and --without-select_module configure options can be used to force Nginx to compile, or not compile, this library.
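The following is a minimal C sketch of the select() workflow described above, reduced to a single descriptor; listen_fd is assumed to be an already-created listening socket, and the 5-second timeout is arbitrary.

/* Minimal select() sketch: watch one descriptor for a read event. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

int wait_for_readable(int listen_fd) {
    fd_set read_set;                      /* the "read event" descriptor set */
    FD_ZERO(&read_set);
    FD_SET(listen_fd, &read_set);

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };  /* 5 s timeout */

    /* select() blocks until an event occurs or the timeout expires,
     * regardless of whether the descriptor itself is non-blocking. */
    int ready = select(listen_fd + 1, &read_set, NULL, NULL, &tv);
    if (ready > 0 && FD_ISSET(listen_fd, &read_set)) {
        printf("fd %d is readable, handle the event\n", listen_fd);
        return 1;
    }
    return 0;                             /* timeout or error */
}

Note that the interest set has to be rebuilt before every call, and if write and exception events were also of interest, each of the three sets would need its own scan after select() returns.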

The poll library

The poll library is another basic event-driven model on the Linux platform; it was introduced in Linux 2.1.23. The Windows platform does not support the poll library.
poll works basically the same way as select: create a set of descriptors for the events of interest, wait for those events to occur, and then poll the descriptor set to check whether events have occurred, handling them if so.
The main difference between the poll library and the select library is that select needs separate descriptor sets for read events, write events, and exception events, so all three sets have to be polled separately at the end, whereas poll needs only one set, with the read, write, or exception events of interest recorded on the structure associated with each descriptor, so only that single set has to be polled at the end. The poll library can therefore be regarded as an optimized implementation of the select library.
During compilation, if no other high-performance event-driven model library is specified for Nginx, this library is compiled in automatically. The --with-poll_module and --without-poll_module configure options can be used to force Nginx to compile, or not compile, this library.
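A minimal C sketch of the poll() workflow, for contrast with the select example above; listen_fd and client_fd are assumed to be existing sockets, and the 5000 ms timeout is arbitrary.

/* Minimal poll() sketch: one pollfd array carries all events of interest. */
#include <poll.h>
#include <stdio.h>

int wait_with_poll(int listen_fd, int client_fd) {
    struct pollfd fds[2];
    fds[0].fd = listen_fd;
    fds[0].events = POLLIN;               /* read event on the listener */
    fds[1].fd = client_fd;
    fds[1].events = POLLIN | POLLOUT;     /* read or write event */

    int ready = poll(fds, 2, 5000);       /* wait up to 5000 ms */
    if (ready <= 0) return 0;             /* timeout or error */

    for (int i = 0; i < 2; i++) {         /* single pass over one set */
        if (fds[i].revents & POLLIN)
            printf("fd %d readable\n", fds[i].fd);
        if (fds[i].revents & POLLOUT)
            printf("fd %d writable\n", fds[i].fd);
    }
    return ready;
}

Because the requested events and the returned events live on the same pollfd structure, a single array describes every descriptor's interests, which is exactly the optimization over select described above.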

The epoll library

The epoll library is one of the high-performance event-driven libraries supported by the Nginx server. It is widely regarded as an excellent event-driven model and differs significantly from the poll and select libraries. epoll is a variant of the poll library; it was introduced in Linux 2.5.44 and is available in Linux 2.6 and later. In practice, the biggest difference from the poll and select libraries is efficiency.
As described above, select and poll work by creating a list of pending events and passing the list to the kernel; when the call returns, the list is polled to determine which events occurred. This is relatively inefficient when there are many descriptors. A better approach is to hand management of the descriptor list over to the kernel: as soon as an event occurs, the kernel reports the descriptors on which events happened to the process, avoiding a scan of the whole descriptor list. The epoll library is such a model.
First, the epoll library asks the kernel, through the relevant system call, to create an event list that can hold N descriptors. Then the events of interest are set on these descriptors, and they are added to the kernel's event list. During coding, descriptors in the event list can also be modified or removed through the corresponding calls.
Once this is set up, the epoll library waits for the kernel to report events. When events occur, the kernel returns the list of descriptors on which events happened, and the epoll library, holding that list, processes the events.
On Linux, the epoll library is the most efficient of these models. It allows a single process to watch a very large number of event descriptors, the upper limit being the maximum number of files the system can open. Moreover, epoll's I/O efficiency does not degrade linearly as the number of descriptors grows, because it operates only on the "active" descriptors reported by the kernel.
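A minimal C sketch of the epoll workflow just described: ask the kernel for an event list, register interest, and wait for the kernel to report only the active descriptors. listen_fd is assumed to be an existing listening socket; the timeout and MAX_EVENTS value are arbitrary.

/* Minimal epoll sketch: kernel-managed event list, then wait for reports. */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

#define MAX_EVENTS 64

int run_epoll_once(int listen_fd) {
    int epfd = epoll_create1(0);          /* kernel-managed event list */
    if (epfd < 0) return -1;

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;                  /* interested in read events */
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);   /* add to the list */

    struct epoll_event events[MAX_EVENTS];
    /* only descriptors with activity are returned; no full-list scan */
    int n = epoll_wait(epfd, events, MAX_EVENTS, 5000);
    for (int i = 0; i < n; i++) {
        if (events[i].events & EPOLLIN)
            printf("fd %d has a pending event\n", events[i].data.fd);
    }
    close(epfd);
    return n;
}

Unlike select and poll, the kernel keeps the interest list between calls, and epoll_wait() returns only the descriptors that actually have activity.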

rtsig model

rtsig is an abbreviation of real-time signal. Strictly speaking, the rtsig model is not a commonly used event-driven model; rather, Nginx uses real-time signals as a way of responding to events. In the official documentation, however, the rtsig model is listed alongside the other event-driven models.
When the rtsig model is used, a worker process has the system kernel maintain an rtsig queue that stores the signals marking the occurrence of events (in an Nginx server, chiefly the arrival of client requests). Each time an event occurs, the kernel generates a signal and places it in the rtsig queue for the worker process to handle.
Note that the rtsig queue has a length limit; beyond it the queue overflows. By default, the Linux event signal queue can hold at most 1024 event signals at the same time. Before Linux 2.6.6-mm2, the event signal queues of all processes were managed globally by the kernel, and the length could be customized through the kernel parameter /proc/sys/kernel/rtsig-max. From Linux 2.6.6-mm2 onward this kernel parameter was removed, and each process has its own event signal queue, whose size is determined by the RLIMIT_SIGPENDING limit when the setrlimit() system call is executed. For this case Nginx provides the worker_rlimit_sigpending directive to adjust the length of the event signal queue.
When the rtsig queue overflows, Nginx temporarily stops using the rtsig model and switches to the poll library to process the outstanding events; only after the rtsig queue has been completely drained does it switch back to the rtsig model, so as to prevent a new overflow.
Nginx exposes the rtsig model's settings through the relevant directives in the configuration file. When compiling the Nginx server, the --with-rtsig_module configure option enables compilation of the rtsig model.
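For orientation, here is a minimal C sketch of the generic Linux real-time-signal I/O mechanism that the rtsig model builds on. This is an illustrative assumption, not Nginx's internal code; the choice of SIGRTMIN + 1 is arbitrary, and listen_fd is assumed to be an existing listening socket.

/* Minimal real-time-signal I/O sketch (Linux-specific, requires _GNU_SOURCE). */
#define _GNU_SOURCE                        /* needed for F_SETSIG */
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int wait_with_rtsig(int listen_fd) {
    int sig = SIGRTMIN + 1;                /* an arbitrary real-time signal */

    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, sig);
    sigprocmask(SIG_BLOCK, &mask, NULL);   /* queue signals instead of running a handler */

    fcntl(listen_fd, F_SETOWN, getpid());  /* deliver signals to this process */
    fcntl(listen_fd, F_SETSIG, sig);       /* use our real-time signal */
    fcntl(listen_fd, F_SETFL,
          fcntl(listen_fd, F_GETFL) | O_ASYNC | O_NONBLOCK);

    siginfo_t info;
    /* each queued signal carries the descriptor the event happened on */
    if (sigwaitinfo(&mask, &info) == sig) {
        printf("event on fd %d\n", info.si_fd);
        return info.si_fd;
    }
    return -1;                             /* interrupted or error */
}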

Other event-driven models

In addition to the four main event-driven models above, the Nginx server also supports event-driven models for specific operating system platforms, mainly the kqueue model, the /dev/poll model, and the eventport model.

  • The kqueue model is an efficient event-driven model for the BSD family of platforms. It is mainly used on FreeBSD 4.1 and above, OpenBSD 2.9 and above, NetBSD 2.0 and above, and Mac OS X. This model is also a variant of the poll library and does not differ essentially from the epoll library in how it processes events; it gains its efficiency by avoiding polling. The model supports both level-triggered notification (an event fires as long as the condition holds) and edge-triggered notification (an event fires whenever the state changes). When running the Nginx server on these platforms, this model is the recommended choice for request processing; a sketch of the kqueue calls follows this list.

  • The /dev/poll model is an efficient event-driven model for Unix-derived platforms. It is mainly used on Solaris 7 11/99 and above, HP/UX 11.22 and above, IRIX 6.5.15 and above, and Tru64 UNIX 5.1A and above. This model was Sun's way of providing an event-driven mechanism while developing the Solaris series of platforms. It uses a virtual /dev/poll device: developers add the file descriptors to be monitored to this device and then obtain event notifications through ioctl() calls. On the platforms listed above, this model is recommended for request processing.

  • The eventport model is an efficient event-driven model for Solaris 10 and above. It was also proposed by Sun as an event-driven mechanism while developing the Solaris series of platforms, and it can effectively prevent kernel crashes. The Nginx server supports this model.
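As referenced in the kqueue item above, here is a minimal C sketch of the kqueue calls on BSD/macOS; it is illustrative rather than Nginx's internal code, and listen_fd is assumed to be an existing listening socket.

/* Minimal kqueue sketch: register interest and wait for reported events. */
#include <stdio.h>
#include <sys/event.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int wait_with_kqueue(int listen_fd) {
    int kq = kqueue();                    /* kernel event queue */
    if (kq < 0) return -1;

    struct kevent change;
    /* register interest in read events on the listening socket */
    EV_SET(&change, listen_fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);

    struct kevent events[64];
    struct timespec timeout = { .tv_sec = 5, .tv_nsec = 0 };

    /* one call both applies the change list and waits for events */
    int n = kevent(kq, &change, 1, events, 64, &timeout);
    for (int i = 0; i < n; i++) {
        if (events[i].filter == EVFILT_READ)
            printf("fd %d readable\n", (int)events[i].ident);
    }
    close(kq);
    return n;
}

Adding EV_CLEAR to the registration flags would switch it to edge-triggered behaviour, corresponding to the two trigger modes mentioned above.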

These, then, are the event-driven libraries supported by the Nginx server. As can be seen, Nginx provides a range of event-driven models for different Linux and Unix-derived platforms, making full use of each platform's own strengths to maximize its ability to process client request events. In practice, the appropriate event-driven model should be chosen for the specific situation and application scenario so that the Nginx server runs efficiently.

Nginx default event handling library

Nginx's connection-processing mechanism uses different I/O models on different operating systems, so the event processing model should be chosen to match the system. The available event processing models include kqueue, rtsig, epoll, /dev/poll, select, and poll. select and poll are the standard models, while kqueue and epoll are the high-efficiency models; the difference is that epoll is used on the Linux platform and kqueue on BSD systems.

(1) Under Linux, Nginx uses epoll's I/O multiplexing model

(2) Under FreeBSD, Nginx uses kqueue's I/O multiplexing model

(3) Under Solaris, Nginx uses the I/O multiplexing model of / dev/poll mode

(4) Under Windows, Nginx uses the IOCP I/O multiplexing model

cat /usr/local/nginx/conf/nginx.conf

......

events {

use epoll;

}

Recommended Nginx configuration for high-concurrency scenarios

#user  nobody;
#worker_processes  1;
worker_processes  8;  # Set this according to how many CPU cores the hardware has
#development environment 
#error_log  logs/error.log  debug;
#production environment 
error_log  logs/error.log;


pid     logs/nginx.pid;


events {
  use epoll;
  worker_connections  1024000;
}


http {
  default_type 'text/html';
  charset utf-8;

  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" "$http_x_forwarded_for"';

  #access_log  logs/access_main.log  main;
  access_log off;   # Turn off the access log
  # sendfile: set to on to enable the efficient file transfer mode.
  # sendfile lets Nginx transfer file data directly between the disk and the TCP socket.
  # With this enabled the data does not pass through a user-space buffer, i.e. zero-copy is used; recommended for production environments
  sendfile        off;
  #sendfile        on;
  #tcp_nopush     on;

  #keepalive_timeout  0;
  keepalive_timeout  65;

  #gzip  on;
  gzip off;
  # Minimum length for gzip, usually 1K; responses smaller than 1K are not compressed, because compressing them can make them larger
  gzip_min_length 1024;
  # gzip compression level: 1 gives the least compression but the fastest processing; 9 gives the most compression but the slowest processing (faster transfer at the cost of CPU)
  gzip_comp_level 1;
  # MIME types to compress
  gzip_types text/plain application/json application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml  font/woff;
  gzip_vary on;
  gzip_disable "MSIE [1-6]\.";
  # Number and size of the buffers used to hold the gzip compression result stream. For example, '4 4k' requests memory in 4k units, up to 4 times the original data size rounded up in 4k units; '4 8k' requests memory in 8k units, up to 4 times the original data size rounded up in 8k units.
  # If not set, the default is to request a buffer of the same size as the original data to store the gzip result.
  gzip_buffers 4 8k;
  # Minimum HTTP version required for gzip; the default is 1.1, lowered here to 1.0. Modern browsers generally need no special handling.
  gzip_http_version 1.0;
  # When Nginx acts as a reverse proxy, controls whether responses returned by the back-end server are compressed. Matching requires the back-end response to contain a "Via" header.
  # off - disables compression of all proxied responses
  # expired - enables compression if the response contains an "Expires" header
  # no-cache - enables compression if the response contains a "Cache-Control: no-cache" header
  # no-store - enables compression if the response contains a "Cache-Control: no-store" header
  # private - enables compression if the response contains a "Cache-Control: private" header
  # no_last_modified - enables compression if the response does not contain a "Last-Modified" header
  # no_etag - enables compression if the response does not contain an "ETag" header
  # auth - enables compression if the request contains an "Authorization" header
  # any - enables compression unconditionally
  gzip_proxied expired no-cache no-store private auth;


  # Shared-dictionary cache used by the Lua scripts
  lua_shared_dict ngx_cache 128m;
  # Shared-dictionary cache for the seckill (flash-sale) data
  lua_shared_dict seckill_cache 128m;
  # Lock so that only one thread rebuilds the cache from Redis or MySQL at a time
  lua_shared_dict cache_lock 100k;
  # Lua extension loading (package paths)

  # for linux
  # lua_package_path "./?.lua;/vagrant/LuaDemoProject/src/?.lua;/usr/local/ZeroBraneStudio-1.80/?/?.lua;/usr/local/ZeroBraneStudio-1.80/?.lua;;";
  # lua_package_cpath "/usr/local/ZeroBraneStudio-1.80/bin/clibs/?.so;;";
  lua_package_path "./?.lua;/vagrant/LuaDemoProject/src/?.lua;/vagrant/LuaDemoProject/vendor/template/?.lua;/vagrant/LuaDemoProject/src/?/?.lua;/usr/local/openresty/lualib/?/?.lua;/usr/local/openresty/lualib/?.lua;;";
  lua_package_cpath "/usr/local/openresty/lualib/?/?.so;/usr/local/openresty/lualib/?.so;;";

  # for windows
  # lua_package_path "./?.lua;C:/dev/refer/LuaDemoProject/src/vendor/jwt/?.lua;C:/dev/refer/LuaDemoProject/src/?.lua;E:/tool/ZeroBraneStudio-1.80/lualibs/?/?.lua;E:/tool/ZeroBraneStudio-1.80/lualibs/?.lua;E:/tool/openresty-1.13.6.2-win32/lualib/?.lua;;";
  # lua_package_cpath "E:/tool/ZeroBraneStudio-1.80/bin/clibs/?.dll;E:/tool/openresty-1.13.6.2-win32/lualib/?.dll;;";


  # Initialize project
  init_by_lua_file luaScript/initial/loading_config.lua;

  # Debug mode (i.e. disable the Lua script cache)
  #lua_code_cache off;
  lua_code_cache on;

  # Upstream for the internal gateway; the gateway performs token authentication
  upstream zuul {
    # idea development environment
    #	 server 192.168.56.121:7799;
    # centos self authentication environment
    server "cdh1:8888";
    keepalive 1000;
  }

  underscores_in_headers on;

  # Rate-limiting zones (limit_req_zone):
  #   skuzone        - keyed by the sku_id query argument, 6 requests per minute
  #   userzone       - keyed by the user_id request header, 6 requests per minute
  #   perip          - keyed by the client IP address, 6 requests per minute
  #   perserver      - keyed by the server name, 10 requests per second
  #   seckill_server - keyed by the server name, 20000 requests per second
  limit_req_zone  $arg_sku_id  zone=skuzone:10m      rate=6r/m;
  limit_req_zone  $http_user_id  zone=userzone:10m      rate=6r/m;
  limit_req_zone  $binary_remote_addr  zone=perip:10m      rate=6r/m;
  limit_req_zone  $server_name        zone=perserver:1m   rate=10r/s;
  limit_req_zone  $server_name        zone=seckill_server:1m   rate=20000r/s;

  server {
    listen       80;
    server_name  admin.nginx.server;
    default_type 'text/html';
    charset utf-8;


    limit_req  zone=perip;
    limit_req  zone=perserver;

    #web pages for the administration console
    location  / {
      if ($request_method = 'OPTIONS') {
        add_header Access-Control-Max-Age 600000;
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Credentials true;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT, X-Mx-ReqToken, Keep-Alive, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, token';
        return 204;
      }
      # web page of IDEA management console
      proxy_pass http://192.168.56.121:8066/ ;
      # proxy_pass http://zuul;
    }
  }


  server {
    listen       80 default;
    server_name  nginx.server *.nginx.server;
    default_type 'text/html';
    charset utf-8;

    # Forward zuul
    location / {
      if ($request_method = 'OPTIONS') {
        add_header Access-Control-Max-Age 600000;
        add_header Access-Control-Allow-Origin *;
        add_header Access-Control-Allow-Credentials true;
        add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT, X-Mx-ReqToken, Keep-Alive, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, token';
        return 204;
      }

      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header X-Nginx-Proxy true;
      proxy_pass http://zuul;
    }

    # Development and debugging: user services
    location  ^~ /uaa-provider/ {
      # idea development environment
      #  proxy_pass http://192.168.56.121:7702/;
      # centos self authentication environment
      proxy_pass http://192.168.56.121:7702/uaa-provider/ ;
    }

    # Development and debugging: seckill (flash-sale) service
    location  ^~ /seckill-provider/ {
      proxy_pass http://192.168.56.121:7701/seckill-provider/ ;
    }



    # Development and debugging: management services
    location  ^~ /backend-provider/ {
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Host $http_host;
      proxy_set_header backend  'true'; # Add a custom header to back-end requests to help distinguish sessions
      proxy_set_header X-Nginx-Proxy true;
      # Point to micro service
      proxy_pass http://192.168.56.121:6600/backend-provider/ ;

      # Point to the gateway
      # proxy_pass http://zuul;
    }

    # Reverse proxy: seckill web page
    location  ^~ /seckill-web/ {
      proxy_pass http://192.168.56.121:6601/seckill-web/ ;
    }

    # Nginx+Lua seckill: obtain the seckill token
    location = /seckill-provider/api/seckill/redis/token/v2 {
      default_type 'application/json';
      charset utf-8;
      # Rate-limiting Lua script
      access_by_lua_file luaScript/module/seckill/getToken_access_limit.lua;
      # Lua script that issues the seckill token
      content_by_lua_file luaScript/module/seckill/getToken.lua;
    }


    #  ratelimit by sku id
    location  = /ratelimit/sku {
      limit_req  zone=skuzone;
      echo "Normal response";
    }

    #  ratelimit by user id
    location  = /ratelimit/demo {
      limit_req  zone=userzone;
      echo "Normal response";
    }


    location = /50x.html{
      echo "Degraded content after current limiting";
    }

    error_page 502 503 =200 /50x.html;

    #  Access path http://cdh1/ratelimit/demo2?seckillSkuId=3
    #  ratelimit by sku id
    location  = /ratelimit/demo2 {
      # Rate-limiting Lua script
      access_by_lua_file luaScript/module/seckill/getToken_access_limit.lua;

      echo "Normal response";
    }




  }

  server {
    listen       8080 default;
    server_name  nginx.server *.nginx.server ;
    #limit_req  zone=seckill_server;


    # The first Lua script: helloworld
    location /helloworld {
      default_type 'text/html';
      charset utf-8;
      content_by_lua_file luaScript/module/demo/helloworld.lua;
    }

    location / {        # Default location: serve the static home page
      ## During development, do not cache pages ending in .html or .htm
      add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";
      index index.html;
      root /vagrant/LuaDemoProject/src/www/static; #Server path
      default_type 'text/html';
    }

    location ~ .*\.(htm|html)$ {        # Match .htm and .html files
      ## During development, do not cache pages ending in .html or .htm
      add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";
      root /vagrant/LuaDemoProject/src/www/static; #Server path
      default_type 'text/html';
    }


    location ~ .*\.(js|script)$ {        # Match .js files
      add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";
      root /vagrant/LuaDemoProject/src/www/static; #Server path
      default_type 'application/javascript';
    }




    location ~ .*\.(css)$ {        # Match .css files
      root /vagrant/LuaDemoProject/src/www/static; #Server path
      default_type 'text/css';
    }


    # Development and debugging: inventory service
    location  ^~ /stock-provider/ {
      proxy_set_header Host $host:$server_port;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_pass http://zuul/stock-provider/ ;
    }

    # Development and debugging: seckill service
    location  ^~ /seckill-provider/ {
      #      proxy_pass http://localhost:7701/seckill-provider/ ;
      # Point to the gateway
      proxy_set_header Host $host:$server_port;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_pass http://zuul/seckill-provider/;
    }



    # Nginx+lua: get product information
    location = /stock-lua/gooddetail {
      default_type 'application/json';
      charset utf-8;
      # Rate-limiting Lua script (disabled)
      #access_by_lua_file luaScript/module/seckill/getToken_access_limit.lua;
      # Lua script that renders the product detail
      content_by_lua_file luaScript/module/seckill/good_detail.lua;
    }


    # Nginx+Lua seckill: obtain the seckill token
    location  ~ /seckill-lua/(.*)/getToken/v3 {
      default_type 'application/json';
      charset utf-8;
      set $skuId $1;
      limit_req  zone=userzone;
      limit_req  zone=seckill_server;

      # Rate-limiting Lua script (disabled)
      # access_by_lua_file luaScript/module/seckill/getToken_access_limit.lua;
      # Lua script that issues the seckill token
      content_by_lua_file luaScript/module/seckill/getToken_v3.lua;
    }


    location = /echo {
      echo "echo";
    }
    location = /50x.html{
      echo "Degraded content after current limiting";
    }

    error_page 502 503 =200 /50x.html;
  }

}
