Using the OpenResty Redis connection pool correctly (set_keepalive)

Recently I was working on an OpenResty project in which every request has to read Redis through OpenResty to decide whether access should be allowed.

Problem:

If a new Redis connection is established for every request, the number of connections explodes under high concurrency, hurting both performance and system stability.

Scheme I:

Create a Redis connection once in init_by_lua, then use it directly in access_by_lua:

init.lua:

-- plain Lua redis client; the ngx_lua cosocket API is not available in init_by_lua
local redis = require "redis"
-- stored as a global so the access_by_lua code can reuse the same connection
client = redis.connect('127.0.0.1', 6379)

filter.lua (demo: a simple blacklist. If the request's remote IP exists as a key in Redis, 403 is returned):

-- "client" is the global redis connection created in init.lua

function redisHasKey(client,keyname)
	-- any truthy reply means the key exists
	if(client:get(keyname))
	then
		return true;
	else
		return false;
	end
end

local keyname = ngx.var.remote_addr;
-- pcall guards against redis errors; note that the key ends up being looked up twice
if(pcall(redisHasKey,client,keyname))then
	if(redisHasKey(client,keyname))
	then
		ngx.exit(403);
	else
		return
	end
else
	return
end

A single connection is created when nginx starts and only that connection is ever used, so there is no problem with too many connections.
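For reference, the nginx.conf wiring for this scheme would look roughly like the snippet below; the listen port matches the logs shown later, but the script paths and the lua_package_path value are assumptions rather than the original configuration:

http {
    lua_package_path "/etc/nginx/lua/?.lua;;";

    # runs once when nginx loads its configuration; creates the global "client"
    init_by_lua_file /etc/nginx/lua/init.lua;

    server {
        listen 8082;

        location / {
            # runs for every request, before the content phase
            access_by_lua_file /etc/nginx/lua/filter.lua;
            content_by_lua_block { ngx.say("ok") }
        }
    }
}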

Problems with Scheme I:

1. If Redis fails, the feature becomes unavailable. Even after Redis is restarted, nginx has to be restarted to obtain a new connection.

2. A long-lived Linux TCP connection cannot be guaranteed to stay up indefinitely, and once it is interrupted it is never re-established.

3. Under high concurrency, a single connection becomes a bottleneck: requests have to wait for the connection to become idle before operating on it, or simply fail with errors.

Scheme II:

Use a connection pool to manage the number of connections.

OpenResty officially provides connection pool management (set_keepalive):

syntax: ok, err = red:set_keepalive(max_idle_timeout, pool_size)

Let's try it. This time only filter.lua is used:

local redis = require "resty.redis"
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)
red:set_keepalive(1000, 20)
red:connect("127.0.0.1", 6379)
client = red;

function redisHasKey(client,keyname)
	if(client:get(keyname))
	then
		return true;
	else
		return false;
	end
end

local keyname = ngx.var.remote_addr;
if(pcall(redisHasKey,client,keyname))then
	if(redisHasKey(client,keyname))
	then
		ngx.exit(403);
	else
		return
	end
else
	return
end

Problem:

A more serious problem showed up in use: the number of connections was not controlled by the connection pool at all, and a large number of errors were reported:

2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F79D0, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F79D0, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
2021/05/19 13:57:57 [alert] 2734932#0: *2019 socket() failed (24: Too many open files), client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"
2021/05/19 13:57:57 [error] 2734932#0: *2019 attempt to send data on a closed socket: u:00000000401F6290, c:0000000000000000, ft:0 eof:0, client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "127.0.0.1:8082"

The connection was already closed by the time it was used.

Solution:

Look at the official usage instructions for set_keepalive:

set_keepalive

syntax: ok, err = red:set_keepalive(max_idle_timeout, pool_size)

Puts the current Redis connection immediately into the ngx_lua cosocket connection pool.

You can specify the max idle timeout (in ms) when the connection is in the pool and the maximal size of the pool every nginx worker process.

In case of success, returns 1. In case of errors, returns nil with a string describing the error.

Only call this method in the place you would have called the close method instead. Calling this method will immediately turn the current redis object into the closed state. Any subsequent operations other than connect() on the current object will return the closed error.

In short, set_keepalive is called in the place where close would otherwise be called. Once set_keepalive has been called, any operation other than connect on the object will return an error.

It's important to read the official documentation!
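Put differently, the order the documentation implies is: connect first (which may transparently hand back a pooled connection), run the Redis commands, and only then call set_keepalive where close would otherwise go. A minimal sketch of that order, with the error checks the snippets above skipped (the key name is illustrative):

local redis = require "resty.redis"
local red = redis:new()

-- 1) connect first; this may reuse an idle connection from the worker's pool
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
	ngx.log(ngx.ERR, "failed to connect to redis: ", err)
	return
end

-- 2) run the commands while the connection is live
local val, err = red:get("some_key")
if err then
	ngx.log(ngx.ERR, "redis get failed: ", err)
end

-- 3) only now hand the connection back, in place of close()
local ok, err = red:set_keepalive(10000, 100)
if not ok then
	ngx.log(ngx.ERR, "failed to set keepalive: ", err)
end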

Improved version:

local redis = require "resty.redis"
local red = redis:new()
--red:set_timeouts(1000, 1000, 1000)
--red:set_keepalive(1000, 20)   -- no longer called before connect
red:connect("127.0.0.1", 6379)
client = red;

function redisHasKey(client,keyname)
	if(client:get(keyname))
	then
		return true;
	else
		return false;
	end
end

local keyname = ngx.var.remote_addr;
if(pcall(redisHasKey,client,keyname))then
	if(redisHasKey(client,keyname))
	then
		-- every exit path hands the connection back to the pool before leaving
		red:set_keepalive(1000, 200)
		ngx.exit(403);
	else
		red:set_keepalive(1000, 200)
		return
	end
else
	red:set_keepalive(1000, 200)
	return
end

Although it is ugly to use, it does solve the problem.
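The repetition can be reduced without changing the behaviour. One possible refactoring (my own sketch, reusing the same pool numbers; the helper name "release" is illustrative) puts the set_keepalive call in a small helper and evaluates the lookup only once:

local redis = require "resty.redis"
local red = redis:new()

-- one place that hands the connection back to the pool
local function release(conn)
	local ok, err = conn:set_keepalive(1000, 200)
	if not ok then
		ngx.log(ngx.ERR, "failed to set keepalive: ", err)
	end
end

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
	ngx.log(ngx.ERR, "failed to connect to redis: ", err)
	return
end

-- ngx.null means the key does not exist; a real value means the IP is blacklisted
local val = red:get(ngx.var.remote_addr)
release(red)

if val and val ~= ngx.null then
	ngx.exit(403)
end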

When load testing with ab, the number of connections is not strictly capped at set_keepalive's pool_size value; it scales with the concurrency of the test, because pool_size only limits how many idle connections are kept per worker, not how many connections can be open at the same time.
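To check whether requests are really picking connections up from the pool rather than opening new ones, lua-resty-redis provides get_reused_times(); a small logging probe (a sketch) placed right after a successful connect() shows it:

local times, err = red:get_reused_times()
if not times then
	ngx.log(ngx.ERR, "failed to get reused times: ", err)
elseif times == 0 then
	ngx.log(ngx.INFO, "fresh redis connection")   -- newly opened, not from the pool
else
	ngx.log(ngx.INFO, "redis connection reused ", times, " times")
end

Seeing the reuse counter climb during an ab run confirms that idle connections are being kept and handed out again, even though the total connection count still follows the test concurrency.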

Keywords: Database Nginx Redis lua

Added by soulzllc on Fri, 31 Dec 2021 09:36:41 +0200