re-accepting backends was really common.
|
New backend connections returned 'conntimeout' whether the timeout hit
while establishing the TCP connection or while waiting for the
"version\r\n" validation response. Now a 'readvalidate' error is given if
the TCP connection was already established.
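As a rough illustration only (not part of the original commit; the
parameter names follow the per-backend options shown further down this
log, and the values are made up):

    -- 'connecttimeout' covers the TCP connect stage, which still reports
    -- 'conntimeout'; the read of the "version\r\n" validation response is
    -- covered by the read timeout and now reports 'readvalidate'.
    local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11211,
                             connecttimeout = 1, readtimeout = 0.5 })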
|
A long sleep in the unix socket startup code made backends hit the
connection timeout before they were configured.
Make all the proxy tests use the unix socket instead of listening on a
hardcoded port. The proxy code is completely equivalent from the client's
standpoint.
This fix should make the whole test suite run a bit faster too.
|
The connect timeout won't fire when these tests block a backend from
connecting: the proxy still establishes the TCP connection, sends a
version command to validate it, and then times out on the read.
With the read timeout set to 0.1 seconds it would sometimes fail before
the restart finished, spamming log lines and causing test failures.
Now we wait for the watcher, remove a sleep, and use a longer read
timeout.
|
When mcp.pool() is called in its two-argument form, i.e. mcp.pool({b1,
b2}, { foo = bar }), backend objects would not be properly cached
internally, causing objects to leak.
Further, it was setting the objects into the cache table keyed by the
object itself, so they would not be cleaned up by garbage collection.
The bug was introduced as part of 6442017c (allow workers to run IO
optionally).
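For reference, a minimal sketch of the affected two-argument call (the
backends and the option shown are illustrative, not taken from the
commit):

    local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11212 })
    local b2 = mcp.backend({ label = "b2", host = "127.0.0.1", port = 11213 })
    -- list of backends plus an options table; with the fix the backend
    -- objects are cached internally and released by garbage collection.
    local p = mcp.pool({ b1, b2 }, { iothread = true })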
|
Adds:
mcp.active_req_limit(count)
mcp.buffer_memory_limit(kilobytes)
Each limit is divided by the number of worker threads to create a
per-worker-thread limit on the number of concurrent proxy requests and on
the bytes used specifically for value data. This does not represent total
memory usage but will be close.
Buffer memory for inbound set requests is not accounted for until after
the object has been read from the socket; this will be improved in a
future update. It should be fine unless clients send just the SET request
and then hang without sending further data.
Limits should be live-adjustable via configuration reloads.
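A hedged configuration sketch (the values and the placement inside
mcp_config_pools are assumptions, not from the commit):

    function mcp_config_pools()
        -- 1000 requests and 64MB of value memory in total; divided across
        -- 4 worker threads that is ~250 requests and ~16MB per thread.
        mcp.active_req_limit(1000)
        mcp.buffer_memory_limit(65536) -- kilobytes
        -- ... define and return backends/pools as usual ...
    end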
|
Use a specific error when timeouts happen during the connection stage vs.
the read/write stage. It even had a test!
|
Somehow missed from an earlier change for marking dead backends.
|
`mcp.pool(p, { dist = etc, iothread = true })`
By default the IO thread is not used; instead a backend connection is
created for each worker thread. This can be overridden by setting
`iothread = true` when creating a pool.
`mcp.pool(p, { dist = etc, beprefix = "etc" })`
If a `beprefix` is added to pool arguments, it will create unique
backend connections for this pool. This allows you to create multiple
sockets per backend by making multiple pools with unique prefixes.
There are legitimate use cases for sharing backend connections across
different pools, which is why that is the default behavior.
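Putting the two options together, a small sketch (the backend list and
the prefix string are placeholders):

    -- default: no IO thread, one connection per worker thread, and the
    -- backend connections are shared with other pools that use them.
    local shared = mcp.pool({ b1, b2 })
    -- opt back in to routing this pool's traffic through the IO thread.
    local via_io = mcp.pool({ b1, b2 }, { iothread = true })
    -- give this pool its own uniquely prefixed backend connections,
    -- i.e. extra sockets per backend.
    local own = mcp.pool({ b1, b2 }, { beprefix = "set_" })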
|
The event handling code was unoptimized and temporary; it was slated for
a later rewrite aimed at performance and non-critical bugs. However, the
old code may be causing critical bugs, so it is being rewritten now.
Fixes:
- backend disconnects are detected immediately instead of the next time
  the backend is used
- backend reconnects happen _after_ the retry timeout, not before
- a persistent read handler and a temporary write handler are used to
  avoid constantly calling epoll_ctl, for a potential performance boost
Updated some tests in proxyconfig.t since it now picks up the disconnects
immediately.
Also resolved an unrelated timing issue in the benchmark.
|
i.e.:

    local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
                             connecttimeout = 1, retrytimeout = 0.5,
                             readtimeout = 0.1, failurelimit = 11 })

... to allow overriding connect/retry/etc. tunables on a per-backend
basis. If not passed in, the global settings are used.
|
Uses mocked backend servers so we can test:
- end-to-end client-to-backend proxying
- Lua API functions
- configuration reload
- various error conditions