path: root/t/proxyconfig.t
Commit log for t/proxyconfig.t (commit message, author, age, files, lines changed):
* proxy: remove redundancy in test code (dormando, 2023-04-28, 1 file, -33/+19)

  Re-accepting backends was really common.
* proxy: return 'readvalidate' on be read timeout (dormando, 2023-04-28, 1 file, -1/+1)

  New backend connections returned 'conntimeout' whether they timed out
  establishing the TCP connection or died while waiting for the "version\r\n"
  validation response. Now a 'readvalidate' error is returned if the
  connection was already properly established.
* proxy: let tests use unix socket for proxy daemon (dormando, 2023-04-28, 1 file, -2/+2)

  A long sleep in the unix startup code made backends hit the connection
  timeout before the backends were configured. Make all the proxy tests use
  the unix socket instead of listening on a hardcoded port. Proxy code is
  completely equivalent from the client standpoint. This fix should make
  the whole test suite run a bit faster too.
* proxy: fix flaky test in proxyconfig.t (dormando, 2023-04-17, 1 file, -5/+6)

  The connect timeout won't fire when blocking a backend from connecting in
  these tests; it will still connect, send a version command to validate,
  then time out on read. With the read timeout set to 0.1 seconds it would
  sometimes fail before the restart finished, clogging log lines and
  causing test failures. Now we wait for the watcher, remove a sleep, and
  use a longer read timeout.
* proxy: fix backend leak with 2-arg mcp.pool() (dormando, 2023-04-10, 1 file, -0/+6)

  When mcp.pool() is called in its two-argument form, i.e.
  mcp.pool({b1, b2}, { foo = bar }), backend objects would not be properly
  cached internally, causing objects to leak. Further, it was setting the
  objects into the cache table indexed by the object itself, so they would
  not be cleaned up by garbage collection. The bug was introduced as part
  of 6442017c (allow workers to run IO optionally).
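  For reference, a minimal sketch of the two-argument mcp.pool() form this
  fix covers, assuming the usual mcp_config_pools entry point; the backend
  labels, addresses, and pool option shown here are placeholders:

      -- Sketch only: backends and the option value are illustrative.
      function mcp_config_pools()
          local b1 = mcp.backend("b1", "127.0.0.1", 11511)
          local b2 = mcp.backend("b2", "127.0.0.1", 11512)
          -- two-argument form: backend list plus an options table
          return mcp.pool({ b1, b2 }, { iothread = true })
      end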
* proxy: add request and buffer memory limits (dormando, 2023-03-26, 1 file, -1/+176)

  Adds:
  - mcp.active_req_limit(count)
  - mcp.buffer_memory_limit(kilobytes)

  Each limit is divided by the number of worker threads, creating a
  per-worker-thread limit on the number of concurrent proxy requests and on
  how many bytes are used specifically for value data. This does not
  represent total memory usage but will be close.

  Buffer memory for inbound set requests is not accounted for until after
  the object has been read from the socket; to be improved in a future
  update. This should be fine unless clients send just the SET request and
  then hang without sending further data.

  Limits should be live-adjustable via configuration reloads.
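  A short sketch of how these limits might be set during a config load or
  reload, assuming they are callable from the mcp_config_pools entry point;
  the numbers and backend address are placeholders:

      -- Sketch only: values are illustrative, not recommendations.
      function mcp_config_pools()
          mcp.active_req_limit(4096)      -- concurrent proxy requests, split across workers
          mcp.buffer_memory_limit(32768)  -- value buffer memory in kilobytes, split across workers
          local b1 = mcp.backend("b1", "127.0.0.1", 11511)
          return mcp.pool({ b1 })
      end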
* proxy: add conntimeout error (dormando, 2023-03-21, 1 file, -1/+1)

  Use a specific error when timeouts happen during the connection stage vs
  the read/write stage. It even had a test!
* proxy: repair t/proxyconfig.t (dormando, 2023-03-09, 1 file, -0/+1)

  Somehow missed from an earlier change with marking dead backends.
* core: fix another dtrace compilation issue (tag: 1.6.19) (dormando, 2023-03-08, 1 file, -4/+4)
* proxy: allow workers to run IO optionally (dormando, 2023-02-24, 1 file, -0/+101)

  `mcp.pool(p, { dist = etc, iothread = true })`

  By default the IO thread is not used; instead a backend connection is
  created for each worker thread. This can be overridden by setting
  `iothread = true` when creating a pool.

  `mcp.pool(p, { dist = etc, beprefix = "etc" })`

  If a `beprefix` is added to pool arguments, it will create unique backend
  connections for this pool. This allows you to create multiple sockets per
  backend by making multiple pools with unique prefixes.

  There are legitimate use cases for sharing backend connections across
  different pools, which is why that is the default behavior.
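  A combined sketch of the two options described above; the backend labels,
  addresses, and prefix string are placeholders:

      -- Sketch only: addresses and names are illustrative.
      function mcp_config_pools()
          local b1 = mcp.backend("b1", "127.0.0.1", 11511)
          local b2 = mcp.backend("b2", "127.0.0.1", 11512)

          -- Route this pool's IO through the dedicated IO thread
          -- (the default is per-worker backend connections).
          local shared = mcp.pool({ b1, b2 }, { iothread = true })

          -- beprefix gives this pool its own private backend connections,
          -- so each backend gets a second socket instead of sharing.
          local private = mcp.pool({ b1, b2 }, { beprefix = "priv_" })

          return { shared = shared, private = private }
      end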
* proxy: redo libevent handling code (dormando, 2023-02-22, 1 file, -4/+6)

  The event handling code was unoptimized and temporary; it was slated for
  a rewrite for performance and non-critical bugs alone. However, the old
  code may be causing critical bugs, so it's being rewritten now.

  Fixes:
  - backend disconnects are detected immediately instead of the next time
    they are used.
  - backend reconnects happen _after_ the retry timeout, not before.
  - use a persistent read handler and a temporary write handler to avoid
    constantly calling epoll_ctl syscalls, for a potential performance
    boost.

  Updated some tests in proxyconfig.t as it was picking up the disconnects
  immediately. Unrelatedly, a timing issue was resolved in the benchmark.
* proxy: add mcp.backend(t) for more overrides (dormando, 2023-02-01, 1 file, -1/+37)

  i.e.:

      local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
                               connecttimeout = 1, retrytimeout = 0.5,
                               readtimeout = 0.1, failurelimit = 11 })

  ... to allow overriding connect/retry/etc tunables on a per-backend
  basis. If not passed in, the global settings are used.
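  A brief usage sketch mixing a table-form backend that overrides its
  tunables with one that falls back to the globals; the second backend and
  the pool call are placeholders:

      -- Sketch only: b2 and the pool are illustrative.
      function mcp_config_pools()
          local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
                                   connecttimeout = 1, retrytimeout = 0.5,
                                   readtimeout = 0.1, failurelimit = 11 })
          local b2 = mcp.backend("b2", "127.0.0.1", 11512) -- uses global tunables
          return mcp.pool({ b1, b2 })
      end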
* proxy: new integration tests. (dormando, 2023-01-25, 1 file, -0/+147)

  Uses mocked backend servers so we can test:
  - end to end client to backend proxying
  - lua API functions
  - configuration reload
  - various error conditions
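  For orientation, a minimal sketch of the kind of proxy config these tests
  load and then reload, assuming the standard mcp_config_pools and
  mcp_config_routes entry points; the mock backend address and the route
  attachment are placeholders:

      -- Sketch only: forwards all storage commands to one mocked backend.
      function mcp_config_pools()
          local mock = mcp.backend("mock1", "127.0.0.1", 11611)
          return mcp.pool({ mock })
      end

      function mcp_config_routes(pool)
          mcp.attach(mcp.CMD_ANY_STORAGE, function(r)
              return pool(r)
          end)
      end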