nginx simple tutorial
Catalog
- Summary
- Installation and use
  - install
    - Compiling Nginx from source code
    - Windows Installation
  - Use
- nginx configuration in practice
  - http reverse proxy configuration
  - Load balancing configuration
  - Web site has multiple webapp configurations
  - https reverse proxy configuration
  - Static Site Configuration
- Reference resources
Summary
What is nginx?
Nginx (engine x) is a lightweight Web server, reverse proxy server and e-mail (IMAP/POP3) proxy server.
What is reverse proxy?
A reverse proxy is a proxy server that accepts connection requests from the Internet, forwards them to a server on the internal network, and returns that server's response to the client that made the request. To the outside world, the proxy server itself appears to be the server.
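To make this concrete, here is a minimal sketch of what a reverse proxy looks like in nginx terms (the addresses are hypothetical; a full configuration is worked through later in this article):

server {
    listen 80;
    location / {
        #Requests that arrive here are forwarded to an internal server, and its response is returned to the client
        proxy_pass http://192.168.0.10:8080;
    }
}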
Installation and use
install
Official releases are available for both Linux and Windows.
You can also download the source code, compile it, and then run it.
Compiling Nginx from source code
After decompressing the source code, run the following commands in the terminal:
$ ./configure
$ make
$ sudo make install
By default, Nginx is installed to /usr/local/nginx. You can change this and other build settings through compilation options.
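For example, here is a minimal sketch of passing compilation options to configure (the prefix and the modules chosen here are assumptions; pick whatever your deployment actually needs):

$ ./configure --prefix=/opt/nginx \
              --with-http_ssl_module \
              --with-http_stub_status_module
$ make
$ sudo make install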
Windows Installation
To install Nginx/Win32, download it first, then unzip it and run it. The following example assumes it is unpacked in the root directory of the C: drive.
cd C:\nginx-0.8.54
start nginx
Nginx/Win32 runs as a console program, not as a Windows service; service mode is still under development.
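To confirm the console process is actually running, the standard Windows tasklist command can be used (a quick check, not something from the original article):

tasklist /fi "imagename eq nginx.exe"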
Use
Using nginx is fairly simple; it comes down to a handful of commands.
Commonly used commands are as follows:
- nginx -s stop: shuts Nginx down quickly; pending information may not be saved and the web service terminates immediately.
- nginx -s quit: shuts Nginx down gracefully, saving relevant information and ending the web service in an orderly way.
- nginx -s reload: reloads the configuration, used when the nginx configuration has changed and needs to be re-read.
- nginx -s reopen: reopens the log files.
- nginx -c filename: starts Nginx with the specified configuration file instead of the default one.
- nginx -t: does not run nginx; it only tests the configuration file, checking its syntax and trying to open the files it references.
- nginx -v: shows the nginx version.
- nginx -V: shows the nginx version, compiler version, and configure parameters.
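For example, a typical workflow after editing nginx.conf is to test the new configuration first and then reload it without fully stopping the service (a minimal sketch; run it from the nginx installation directory):

nginx -t
nginx -s reload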
If you don't want to type these commands every time, you can add a startup.bat file to the nginx installation directory and simply double-click it to run. Its contents are as follows:
@echo off
rem If nginx was started earlier and recorded a pid file, kill that process first
nginx.exe -s stop
rem Test the syntax correctness of configuration files
nginx.exe -t -c conf/nginx.conf
rem display version information
nginx.exe -v
rem Start nginx with the specified configuration file
nginx.exe -c conf/nginx.conf
If you run nginx under Linux, you can write a shell script along much the same lines.
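A rough Linux equivalent might look like the sketch below (the configuration path is an assumption; adjust it to your installation):

#!/bin/sh
# Stop a previously started nginx, if any (ignore the error when none is running)
nginx -s stop 2>/dev/null
# Test the syntax of the configuration file
nginx -t -c /usr/local/nginx/conf/nginx.conf
# Display version information
nginx -v
# Start nginx with the specified configuration file
nginx -c /usr/local/nginx/conf/nginx.conf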
nginx configuration in practice
I have always believed that explaining the configuration of a development tool in the context of real scenarios makes it much easier to understand.
http reverse proxy configuration
Let's start with a small goal: to complete an http reverse proxy without considering complex configuration.
The nginx.conf configuration file is as follows:
Note: conf/nginx.conf is nginx's default configuration file. You can also specify your own configuration file with nginx -c filename.
#Running users
#user somebody;
#Number of worker processes, usually set equal to the number of CPUs
worker_processes 1;
#Global error log
error_log D:/Tools/nginx-1.10.1/logs/error.log;
error_log D:/Tools/nginx-1.10.1/logs/notice.log notice;
error_log D:/Tools/nginx-1.10.1/logs/info.log info;
#PID file to record the process ID of nginx currently started
pid D:/Tools/nginx-1.10.1/logs/nginx.pid;
#Working mode and upper limit of connection number
events {
worker_connections 1024; #Maximum number of concurrent links for a single background worker process
}
#Setting up http server and using its reverse proxy function to provide load balancing support
http {
#Set MIME types; the type mapping is defined in the mime.types file
include D:/Tools/nginx-1.10.1/conf/mime.types;
default_type application/octet-stream;
#Setting logs
log_format main '[$remote_addr] - [$remote_user] [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log D:/Tools/nginx-1.10.1/logs/access.log main;
rewrite_log on;
#The sendfile directive specifies whether nginx calls the sendfile function (zero-copy mode) to output files. For ordinary applications,
#it should be set to on. For heavy disk I/O applications such as downloads, it can be set to off to balance disk and network I/O processing speed and reduce system load.
sendfile on;
#tcp_nopush on;
#Connection timeout
keepalive_timeout 120;
tcp_nodelay on;
#gzip compression switch
#gzip on;
#Setting the actual server list
upstream zp_server1{
server 127.0.0.1:8089;
}
#HTTP Server
server {
#Listen on port 80, which is a well-known port number for HTTP protocol
listen 80;
#Define access using www.xx.com
server_name www.helloworld.com;
#home page
index index.html;
#Directory pointing to webapp
root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;
#Coding format
charset utf-8;
#Agent configuration parameters
proxy_connect_timeout 180;
proxy_send_timeout 180;
proxy_read_timeout 180;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $remote_addr;
#The path of the reverse proxy (bound to upstream), and the path of the mapping is set after location
location / {
proxy_pass http://zp_server1;
}
#Static file, nginx handles by itself
location ~ ^/(images|javascript|js|css|flash|media|static)/ {
root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
#Cache for 30 days. Static files are rarely updated; set the expiry longer if they change infrequently and shorter if they change often.
expires 30d;
}
#Set the address to view Nginx status
location /NginxStatus {
stub_status on;
access_log on;
auth_basic "NginxStatus";
auth_basic_user_file conf/htpasswd;
}
#Deny access to .htxxx files
location ~ /\.ht {
deny all;
}
#Error handling page (optional configuration)
#error_page 404 /404.html;
#error_page 500 502 503 504 /50x.html;
#location = /50x.html {
# root html;
#}
}
}
Well, let's try it.
- Start the webapp, making sure the port it binds to matches the port set in nginx's upstream block.
- Edit the hosts file: add a DNS record to the hosts file in the C:\Windows\System32\drivers\etc directory:
127.0.0.1 www.helloworld.com
- Run the startup.bat script described earlier.
- Visit www.helloworld.com in your browser; if all went well, the site is now reachable.
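If you would rather not edit the hosts file, one simple check (an alternative not used in the original steps) is to send the Host header explicitly with curl and confirm the proxied webapp answers:

curl -H "Host: www.helloworld.com" http://127.0.0.1/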
Load balancing configuration
In the previous example, the proxy pointed to only one server.
However, when a website is actually running, most of the time the same app runs on several servers, and load balancing is needed to distribute the traffic.
nginx can also achieve simple load balancing functions.
Assume an application scenario: the application is deployed on three Linux servers, 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The domain name of the website is www.helloworld.com and the public IP is 192.168.1.11. nginx is deployed on the server holding the public IP and load balances all requests.
nginx.conf is configured as follows:
http {
#Set the mime type, which is defined by the mime.types file
include /etc/nginx/mime.types;
default_type application/octet-stream;
#Setting Log Format
access_log /var/log/nginx/access.log;
#Setting the list of servers for load balancing
upstream load_balance_server {
#The weight parameter represents the weight; the higher the weight, the greater the probability of being assigned a request.
server 192.168.1.11:80 weight=5;
server 192.168.1.12:80 weight=1;
server 192.168.1.13:80 weight=6;
}
#HTTP Server
server {
#Listen on port 80
listen 80;
#Define access using www.xx.com
server_name www.helloworld.com;
#Load balance all requests
location / {
root /root; #Define the default site root location for the server
index index.html index.htm; #Define the name of the index file on the home page
proxy_pass http://load_balance_server; #Forward the request to the list of servers defined by load_balance_server
#Here are some configuration of the reverse proxy (optional configuration)
#proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
#Back-end Web servers can obtain the user's real IP through X-Forwarded-For
proxy_set_header X-Forwarded-For $remote_addr;
proxy_connect_timeout 90; #nginx connection timeout with back-end server (proxy connection timeout)
proxy_send_timeout 90; #Back-end server data return time (proxy send timeout)
proxy_read_timeout 90; #Response time of the back-end server after a successful connection (proxy read timeout)
proxy_buffer_size 4k; #Set up the size of the buffer where the proxy server (nginx) saves user header information
proxy_buffers 4 32k; #proxy_buffers: if the average page size is under 32k, set it like this
proxy_busy_buffers_size 64k; #Buffer size under high load (proxy_buffers*2)
proxy_temp_file_write_size 64k; #Set the cache folder size; data larger than this value is passed on from the upstream server
client_max_body_size 10m; #Maximum size of a single file allowed in a client request
client_body_buffer_size 128k; #Maximum number of bytes of a client request body that the proxy will buffer
}
}
}
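With the weights above, the default round-robin distribution is proportional: out of every 12 requests, roughly 5 go to 192.168.1.11, 1 goes to 192.168.1.12, and 6 go to 192.168.1.13.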
Web site has multiple webapp configurations
When a website has more and more features, it often becomes necessary to split some relatively independent modules out and maintain them separately. In that case, there are usually multiple webapps.
For example, suppose the www.helloworld.com site hosts several webapps: finance, product, and admin. These applications are accessed through different context paths:
www.helloworld.com/finance/
www.helloworld.com/product/
www.helloworld.com/admin/
We know that the default port number for HTTP is 80. The three webapps cannot all listen on port 80 on the same server at the same time, so each of them has to bind a different port number.
The problem, then, is that users visiting www.helloworld.com expect to reach each webapp without typing a port number. Once again, a reverse proxy is the answer.
Configuration is not difficult, let's see how to do it.
http {
#Some basic configurations are omitted here
upstream product_server{
server www.helloworld.com:8081;
}
upstream admin_server{
server www.helloworld.com:8082;
}
upstream finance_server{
server www.helloworld.com:8083;
}
server {
#Some basic configurations are omitted here
#Default server pointing to product
location / {
proxy_pass http://product_server;
}
location /product/{
proxy_pass http://product_server;
}
location /admin/ {
proxy_pass http://admin_server;
}
location /finance/ {
proxy_pass http://finance_server;
}
}
}
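One detail worth knowing about this configuration: with proxy_pass http://product_server; (no URI part), the /product/ prefix is passed through to the backend unchanged. If a backend is instead deployed at its own root, adding a trailing slash to proxy_pass makes nginx replace the matched prefix, for example (a hedged variant, not part of the original configuration):

location /product/ {
    #With a URI part ("/") on proxy_pass, /product/foo is forwarded to the backend as /foo
    proxy_pass http://product_server/;
}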
https reverse proxy configuration
Some sites with higher security requirements may use HTTPS (a secure HTTP protocol based on the SSL/TLS standard).
This article will not explain the HTTP protocol or the SSL standard in depth. However, there are a few things you need to know to configure HTTPS with nginx:
- The fixed port number of HTTPS is 443, which is different from the 80 port of HTTP.
- The SSL standard requires a security certificate, so in nginx.conf you need to specify the certificate and its corresponding key.
Others are basically the same as http reverse proxies, except that they are configured differently in the Server section.
#HTTPS Server
server {
#Listen on port 443. 443 is a well-known port number. It is mainly used in HTTPS protocol.
listen 443 ssl;
#Define access using www.xx.com
server_name www.helloworld.com;
#ssl certificate file location (common certificate file format: crt/pem)
ssl_certificate cert.pem;
#ssl certificate key location
ssl_certificate_key cert.key;
#ssl configuration parameters (optional configuration)
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
#Cipher suites (anonymous ciphers and MD5 are excluded)
ssl_ciphers HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
location / {
root /root;
index index.html index.htm;
}
}
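If plain HTTP requests should also be redirected to the HTTPS site, a common addition (assumed here, not part of the original configuration) is a second server block:

server {
    listen 80;
    server_name www.helloworld.com;
    #Permanently redirect every HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}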
Static Site Configuration
Sometimes we need to configure static sites (that is, html files and a bunch of static resources).
For example, if all the static resources are in the / app/dist directory, we just need to specify the home page and the host of the site in nginx.conf.
The configuration is as follows:
worker_processes 1;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
gzip on;
gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
gzip_vary on;
server {
listen 80;
server_name static.zp.cn;
location / {
root /app/dist;
index index.html;
#Forward any request to index.html
}
}
}
Then, add a hosts entry:
127.0.0.1 static.zp.cn
Now, by opening static.zp.cn in a local browser, you can reach the static site.
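Note that the "Forward any request to index.html" comment in the configuration above is not backed by a directive. If the static site is a single-page application and every path should fall back to index.html, a try_files line (an assumption on my part, not in the original configuration) would implement it:

location / {
    root /app/dist;
    index index.html;
    #Serve the requested file if it exists, otherwise fall back to index.html
    try_files $uri $uri/ /index.html;
}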
Reference resources
Author: Silence and Void
Source: http://www.cnblogs.com/jingmoxukong/
You are welcome to reproduce it in any form, but you must indicate the source.
Given the limits of my knowledge, if there are any inappropriate expressions in the article or code, please do not hesitate to point them out.