Once the front-end static pages are deployed to a server with a public IP address and the port is exposed, the site can be reached from anywhere on the Internet. The following is the concrete implementation and an analysis of the configuration.
1. First of all, we need a server with a public IP address. The one I chose is an Alibaba Cloud instance with 2 cores, 4 GB of RAM and 5 Mbps of bandwidth, which is enough for an ordinary website deployment.
2. Install nginx on the server. Here nginx runs on Ubuntu 20.04:
nginx version: nginx/1.18.0 (Ubuntu)
Installation:
1. You can install nginx directly with apt install nginx.
Let's look at the detailed configuration below
After installing nginx, its configuration can be found in /etc/nginx.
nginx.conf is the main configuration file; it includes all .conf files under conf.d/.
The site configuration itself lives in sites-enabled/default. Once you have located that file, you can configure your site in the server {} block inside it, as sketched below.
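For orientation, these are the include directives you would typically find inside the http block of /etc/nginx/nginx.conf on Ubuntu (a sketch of the standard Debian/Ubuntu package layout; check your own file, for example via nginx -t):

http {
    ...
    # pull in extra configuration and the enabled sites
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}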
2. There is another installation method: upload the source package to the server, extract it, and then build it with ./configure && make && make install.
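A sketch of the build-from-source steps (the version number is only an example; use the tarball you actually downloaded):

# build prerequisites may be needed first, e.g. gcc, make, libpcre3-dev, zlib1g-dev
tar -zxvf nginx-1.18.0.tar.gz
cd nginx-1.18.0
./configure
make && make install
# installs to /usr/local/nginx by default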
Here are some related commands
Start: nginx, or nginx -c /path/to/nginx.conf
stop: nginx -s stop
restart: nginx -s reload
check configuration file: nginx -t
check nginx startup: ps -ef | grep nginx
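On Ubuntu, the apt package also registers nginx as a systemd service, so the following equivalents can be used as well (a sketch; optional if you prefer calling the nginx binary directly):

systemctl start nginx      # start the service
systemctl reload nginx     # reload the configuration
systemctl status nginx     # check whether it is running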
If an error is reported
nginx: [error] invalid PID number "" in "/run/nginx.pid"
Need to execute first
nginx -c /etc/nginx/nginx.conf
The path to the nginx.conf file is shown in the output of nginx -t.
nginx -s reload
If the above does not help, kill the nginx processes directly.
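A sketch of force-stopping nginx by hand (the PID is a placeholder taken from the ps output; prefer a graceful stop before resorting to -9):

ps -ef | grep nginx              # note the PID of the master process
kill -QUIT <master_pid>          # graceful shutdown
# or, as a last resort:
pkill -9 nginx
nginx -c /etc/nginx/nginx.conf   # start again with an explicit config file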
Configuration file
Note: when nginx is built from source, its configuration file is in the conf folder under the nginx installation directory (by default /usr/local/nginx/conf).
(1) The configuration file after a fresh installation looks like this:
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
The above is the configuration file shipped with the source package (i.e. the one you get with the second installation method).
The configuration produced by the first installation method (apt) is more decoupled: it is split across several files, as described above in the part about conf.d and sites-enabled.
(2) Configuration details
1. Based on the configuration file above, an nginx configuration can be broken down into the following structure:

...                  # global block
events {             # events block
}
http {               # http block
    server {         # server block
        location {   # location block
        }
    }
}
2. Structure and function of each block
(1) Global block: directives that affect nginx as a whole, such as the user/group that runs the worker processes, the path where the nginx pid file is stored, the log paths, included configuration files, and the number of worker processes to spawn.
(2) events block: configuration that affects the connections between the nginx server and its clients, such as the maximum number of connections per worker process, which event-driven model is used to handle connection requests, whether a worker may accept several connections at once, and whether connection accepts are serialized.
(3) http block: can contain multiple server blocks and holds the configuration for proxying, caching, log definitions and third-party modules, e.g. file includes, MIME type definitions, custom logs, whether to transfer files with sendfile, connection timeouts, and the number of requests allowed per connection.
(4) server block: parameters of a virtual host. One http block can contain several server blocks.
(5) location block: configures how requests are routed and how different pages are handled.
3. Detailed explanation of nginx configuration file
########### Every directive must end with a semicolon. #################
#user administrator administrators;  # user or group that runs nginx; the default is nobody
#worker_processes 2;                 # number of worker processes to spawn; the default is 1
#pid /nginx/pid/nginx.pid;           # where the pid file of the nginx process is stored
error_log log/error.log debug;       # log path and level; allowed in the global, http and server blocks; levels: debug|info|notice|warn|error|crit|alert|emerg

events {
    accept_mutex on;          # serialize connection accepts to avoid the thundering-herd problem; the default is on
    multi_accept on;          # whether a worker may accept several connections at once; the default is off
    #use epoll;               # event-driven model: select|poll|kqueue|epoll|resig|/dev/poll|eventport
    worker_connections 1024;  # maximum number of connections per worker; the default is 512
}

http {
    include mime.types;                     # mapping table from file extensions to MIME types
    default_type application/octet-stream;  # default MIME type; the built-in default is text/plain
    #access_log off;                        # disable the access log
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';  # custom log format
    access_log log/access.log myFormat;     # access log; "combined" is the default log format
    sendfile on;              # allow sendfile() to transfer files; the default is off; allowed in the http, server and location blocks
    sendfile_max_chunk 100k;  # upper limit per sendfile() call for each worker; the default 0 means no limit
    keepalive_timeout 65;     # keep-alive connection timeout; the default is 75s; allowed in the http, server and location blocks

    upstream mysvr {
        server 127.0.0.1:7878;
        server 192.168.10.121:3333 backup;  # hot standby
    }
    error_page 404 https://www.baidu.com;   # error page

    server {
        keepalive_requests 120;   # maximum number of requests per connection
        listen 4545;              # listening port
        server_name 127.0.0.1;    # listening address
        location ~*^.+$ {         # request URL filtering with a regular expression; ~ is case sensitive, ~* is case insensitive
            #root path;           # root directory
            #index vv.txt;        # default page
            proxy_pass http://mysvr;  # forward requests to the server list defined in mysvr
            deny 127.0.0.1;       # denied ip
            allow 172.18.5.54;    # allowed ip
        }
    }
}
The above is reference material collected before things went wrong. Below are some points that came up and became clear during the actual configuration process.
Configuration file structure when nginx is installed from the command line (apt):
Two main groups of configuration files are included from /etc/nginx:
1. The first is the files under conf.d/.
2. The second is the files under sites-enabled/. The files in sites-enabled are symbolic links to the files in sites-available, so you can simply edit the files in sites-available directly.
Initially there is no default file in conf.d/, so you have to write the configuration yourself; its format can be copied from the files in sites-enabled. This is exactly where I was stuck for a long time: the format I was using was always wrong. A sketch of the usual workflow follows.
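A sketch of the usual way to add a site with this layout (the file name mysite is only an example):

# write the server {} block in sites-available
sudo nano /etc/nginx/sites-available/mysite
# enable it by linking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite
# check the syntax and reload
sudo nginx -t && sudo nginx -s reload

Alternatively, a file ending in .conf dropped into /etc/nginx/conf.d/ is picked up as well, as long as it contains a complete server {} block.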
##
# You should look at the following URL's in order to grasp a solid understanding
# of Nginx configuration files in order to fully unleash the power of Nginx.
# https://www.nginx.com/resources/wiki/start/
# https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
# https://wiki.debian.org/Nginx/DirectoryStructure
#
# In most cases, administrators will remove this file from sites-enabled/ and
# leave it as reference inside of sites-available where it will continue to be
# updated by the nginx packaging team.
#
# This file will automatically load configuration files provided by other
# applications, such as Drupal or Wordpress. These applications will be made
# available underneath a path with that package name, such as /drupal8.
#
# Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples.
##

# Default server configuration
#
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    # SSL configuration
    #
    # listen 443 ssl default_server;
    # listen [::]:443 ssl default_server;
    #
    # Note: You should disable gzip for SSL traffic.
    # See: https://bugs.debian.org/773332
    #
    # Read up on ssl_ciphers to ensure a secure configuration.
    # See: https://bugs.debian.org/765782
    #
    # Self signed certs generated by the ssl-cert package
    # Don't use them in a production server!
    #
    # include snippets/snakeoil.conf;

    # This is very important: the built static files must live in this directory.
    # Combined with "location /" below, the site can then be reached directly
    # with the server address plus the port number.
    root /home/web/hexo/public;

    # Add index.php to the list if you are using PHP
    index index.html index.htm index.nginx-debian.html;

    server_name **.**.***.**;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ =404;
    }

    # pass PHP scripts to FastCGI server
    #
    #location ~ \.php$ {
    #    include snippets/fastcgi-php.conf;
    #
    #    # With php-fpm (or other unix sockets):
    #    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
    #    # With php-cgi (or other tcp sockets):
    #    fastcgi_pass 127.0.0.1:9000;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}
If you do not need the commented-out cases in the file above (PHP, .htaccess and so on), this configuration on its own is enough for nginx.
In this way, the server address plus the port maps directly to the whole static site directory.
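Stripped of all the comments, a minimal server block for serving the static site looks roughly like this (a sketch; the root path follows the example above, the catch-all server_name and port are assumptions):

server {
    listen 80;
    server_name _;
    root /home/web/hexo/public;   # directory containing the built static files
    index index.html index.htm;

    location / {
        # serve the file, then the directory, otherwise return 404
        try_files $uri $uri/ =404;
    }
}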
Firewall related commands
When nginx runs on the Linux server it may not be reachable from outside by default; because of the firewall, the relevant ports need to be opened.
The commands below (using firewalld) cover starting and stopping the firewall, viewing the open ports, and opening or closing specific ports.
Turn on the firewall
systemctl start firewalld
Turn off the firewall
systemctl stop firewalld
View open ports
firewall-cmd --list-ports
Open a specific port
firewall-cmd --zone=public --add-port=81/tcp --permanent
Close a specific port
firewall-cmd --remove-port=80/tcp --permanent
Restart the firewall after changing the configuration
systemctl reload firewalld
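For reference, a typical sequence to expose port 80 for nginx and verify the result (a sketch assuming firewalld is the active firewall; on a cloud server the provider's security-group rules must allow the port as well):

firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --reload
firewall-cmd --list-ports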
Load balancing
1. Preparation
(1) Prepare two Tomcat servers, one listening on port 8080 and one on 8081.
(2) In the webapps directory of each Tomcat, create an edu folder and put a test page a.html in it, as sketched below.
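A sketch of the test setup (the Tomcat installation paths are assumptions; adjust them to where your two instances actually live):

# first instance, listening on 8080
mkdir -p /usr/local/tomcat-8080/webapps/edu
echo "served by 8080" > /usr/local/tomcat-8080/webapps/edu/a.html

# second instance, listening on 8081
mkdir -p /usr/local/tomcat-8081/webapps/edu
echo "served by 8081" > /usr/local/tomcat-8081/webapps/edu/a.html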
2. Configure load balancing in the nginx configuration file
upstream myserver {
    server ***.***.***.***:port1;
    server ***.***.***.***:port2;
}
server {
    listen 80;
    server_name ********;
    location / {
        proxy_pass http://myserver;   # forward to the upstream group defined above
        #root  ...;    # root directory
        #index ...;    # default page
    }
}
3. nginx load-balancing policies
The first: round robin (default)
Each request is assigned to the back-end servers one by one in order; if a back-end server goes down, it is removed automatically.
The second: weight
The default weight is 1; the higher a server's weight, the more requests it is assigned.
The third: ip_hash
Each request is assigned according to a hash of the client ip, so a given visitor always reaches the same back-end server.
The fourth: fair (third party)
Requests are assigned according to the response time of the back-end servers; the server with the shortest response time is preferred. A sketch of how these policies look in the configuration follows.
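A sketch of how the weight and ip_hash policies are written in the upstream block (the addresses and ports are placeholders):

upstream myserver_weighted {
    # a server with weight 10 receives roughly twice as many requests as one with weight 5
    server 192.168.10.121:8080 weight=5;
    server 192.168.10.122:8080 weight=10;
}

upstream myserver_iphash {
    ip_hash;    # pin each client ip to one back-end server
    server 192.168.10.121:8080;
    server 192.168.10.122:8080;
}

# fair requires the third-party nginx-upstream-fair module to be compiled in:
# upstream myserver_fair {
#     fair;
#     server 192.168.10.121:8080;
#     server 192.168.10.122:8080;
# }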