| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
A few code paths were returning SERVER_ERROR (a retryable error)
when they should have returned CLIENT_ERROR (bad protocol syntax).
|
|
|
|
|
|
|
|
|
|
| |
Cleans up the logic around response handling in general, and allows returning
server-sent error messages upstream for handling.
In general, SERVER_ERROR means we can keep the connection to the backend.
The rest of the errors are protocol errors; while some are perfectly
safe to whitelist, clients should not be causing those sorts of errors,
so we should cycle the backend regardless.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Adds:
mcp.active_req_limit(count)
mcp.buffer_memory_limit(kilobytes)
Each limit is divided by the number of worker threads to create a
per-worker-thread limit on the number of concurrent proxy requests and on
the number of bytes used specifically for value data. This does not
represent total memory usage but will be close.
Buffer memory for inbound set requests is not accounted for until after
the object has been read from the socket; this will be improved in a future
update. It should be fine unless clients send just the SET command and
then hang without sending further data.
Limits should be live-adjustable via configuration reloads.
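For illustration only, a minimal config sketch (the limit values are arbitrary, and the mcp_config_pools entry point is assumed to be where these are called from):
    function mcp_config_pools()
        mcp.active_req_limit(1000)     -- illustrative: cap concurrent proxy requests
        mcp.buffer_memory_limit(65536) -- illustrative: ~64MB of value buffer memory, in kilobytes
        -- pool and backend definitions would follow here
    end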
|
|
|
|
|
|
|
|
|
|
| |
Also changes the way the global context and thread contexts are fetched
from Lua: via the VM extra space instead of upvalues, which is a little
faster and more universal.
It was always erroneous to run many of the config functions from routes
and vice versa, but there was no consistent strictness, so users could
get into trouble.
|
|
|
|
|
|
|
|
| |
local res = mcp.internal(r) - takes a request object and executes it
against the proxy's internal cache instance.
Experimental as of this commit. Needs more test coverage and
benchmarking.
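A hedged usage sketch (the route handler shape is an illustrative assumption):
    mcp.attach(mcp.CMD_GET, function(r)
        -- serve the get from the proxy's embedded cache instance
        local res = mcp.internal(r)
        return res
    end)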
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
`mcp.pool(p, { dist = etc, iothread = true })`
By default the IO thread is not used; instead a backend connection is
created for each worker thread. This can be overridden by setting
`iothread = true` when creating a pool.
`mcp.pool(p, { dist = etc, beprefix = "etc" })`
If a `beprefix` is added to the pool arguments, it will create unique
backend connections for this pool. This allows you to create multiple
sockets per backend by making multiple pools with unique prefixes.
There are legitimate use cases for sharing backend connections across
different pools, which is why that is the default behavior.
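For illustration only (the backend list `p` and the prefix strings are hypothetical):
    -- two pools over the same backend list, each with its own sockets
    local main  = mcp.pool(p, { beprefix = "main" })
    local batch = mcp.pool(p, { beprefix = "batch", iothread = true })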
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The event handling code was unoptimized and temporary; it was slated for
a later rewrite to address performance and non-critical bugs. However, the
old code may be causing critical bugs, so it's being rewritten now.
Fixes:
- backend disconnects are detected immediately instead of the next
time they are used.
- backend reconnects happen _after_ the retry timeout, not before.
- use a persistent read handler and a temporary write handler to avoid
constantly issuing epoll_ctl syscalls, for a potential performance boost.
Updated some tests in proxyconfig.t since it now picks up the
disconnects immediately.
Unrelatedly, also resolved a timing issue in the benchmark.
|
|
|
|
|
|
|
|
|
|
| |
i.e.:
local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
    connecttimeout = 1, retrytimeout = 0.5, readtimeout = 0.1,
    failurelimit = 11 })
... to allow overriding connect/retry/etc tunables on a per-backend
basis. If they are not passed in, the global settings are used.
|
|
|
|
|
|
|
| |
Logs any backgrounded requests that resulted in an error.
Note that this may be a temporary interface, and could be deprecated in
the future.
|
|
|
|
|
|
|
|
|
|
|
| |
- specifically, the WSTAT_DECR in proxy_await.c's return code could
potentially use the wrong thread's lock.
This is why I've been swapping c with thread as lock/function arguments
all over the code lately; it's very accident prone.
I'm reasonably sure this causes the deadlock, but it needs further
verification.
|
|
|
|
|
|
| |
We were previously duck typing the response code for a coroutine yield,
and piling ad-hoc logic for overriding IOs on top in certain cases. This
change makes everything explicit and clearer.
|
|
|
|
|
|
|
|
| |
- removes unused "completed" IO callback handler
- moves primary post-IO callback handlers from the queue definition to
the actual IO objects.
- allows IO object callbacks to be handled generically instead of based
on the queue they were submitted from.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We have a bug where updating a token and then requesting it again returns
the previous token. We also have other branches which require use of the
flattened request.
Removes the extra allocation space for lua request objects since we're no
longer flattening into the end of that memory.
I was originally doing this with a lot of lua, but just copying the
string a few times has better properties:
1) it should actually be faster, with less lua and fewer allocations
2) it can be optimized to do minimal copying (avoid keys, append new flags,
etc)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Originally I envisioned taking an inbound request object, tagging it
with the time, and calling logging at the very end of a function. This
would give you the total time of the "backend" part of a request.
On rethinking, the timing information that's most useful from the proxy's
perspective is the time it takes for a response to happen, plus the status
of that response. One request may generate several sub-responses, and with
the original approach it would be impossible to check the timing of each of
those and log outliers.
You can no longer get the total time elapsed in a function, but I
believe that is less useful information to the user of a proxy. The best
picture of latency will still be from the client, and response latency
can educate the proxy on issues with backends.
resp:elapsed() has been added as a compromise; it returns the elapsed
microseconds that a response took, so you can add the times together and
get an approximation of total time (if running req/resp's sequentially).
This change also means that calling mcp.await() and waiting for multiple
responses will give the timing of each sub-response accurately.
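A hedged sketch of adding the times together (the pool objects and fallback logic are hypothetical):
    mcp.attach(mcp.CMD_GET, function(r)
        local total = 0
        local res = near_pool(r)           -- first sub-request
        total = total + res:elapsed()
        if not res:hit() then
            res = far_pool(r)              -- hypothetical fallback sub-request
            total = total + res:elapsed()
        end
        -- 'total' approximates the backend portion of this request, in microseconds
        return res
    end)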
|
|
|
|
|
|
|
|
| |
Updates the io_uring code to match the updates on the libevent side.
Needs more work before merge:
- audit error conditions
- try harder at some code deduplication
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
A backend's connection object is technically owned by the IO thread
after it has been created. An error in how this was done led to invalid
backends being infinitely retried despite the underlying object being
collected.
This change adds an extra indirection to backend objects: a backend_wrap
object, which just turns the backend connection into an arbitrary
pointer instead of lua memory owned by the config VM.
- When backend connections are created, this pointer is shipped to the
IO thread to have its connection instantiated.
- When the wrap object is garbage collected (i.e. no longer referenced by
any pool object), the backend connection pointer is again shipped to the
IO thread, which then removes any pending events, closes the socket, and
frees data.
|
|
|
|
|
|
|
| |
1) more IOVs per syscall
2) if a backend got a large stack of pending IOs plus continual writes, the
CPU usage of the IO thread would bloat while looping past already-flushed
IO objects.
|
|
|
|
|
|
| |
mcp.await(request, pools, 0, mcp.AWAIT_BACKGROUND) will, instead of
waiting on any request to return, simply return an empty table as soon
as the background requests are dispatched.
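A hedged usage sketch (the pool names and the direct string return to the client are illustrative assumptions):
    mcp.attach(mcp.CMD_SET, function(r)
        -- dispatch the set to every listed pool and return immediately
        mcp.await(r, { near_pool, far_pool }, 0, mcp.AWAIT_BACKGROUND)
        return "STORED\r\n" -- respond to the client without waiting on the backends
    end)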
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Improvements to the handling of new and failed backend socket connections.
Previously connections were initiated immediately, and initially from
the config thread, yet completion of opening sockets wouldn't happen
until a request tried to use that backend.
Now we open connections via the IO thread, and validate new
connections with a "version\r\n" command.
Also fixes a couple of error conditions (parsing, backend disconnect)
where clients could hang waiting for a retry timeout.
Now connections should re-establish immediately and dead backends should
flip into a bad fast-fail state more quickly.
|
|
|
|
| |
returns early on a hit, else waits for N non-error responses.
|
|
|
|
| |
should make isolation/testing easier.
|
|
|
|
|
|
|
|
| |
Allows using tagged listeners (e.g. `-l tag[test]:127.0.0.1:11212`) to
select a top-level route function.
This expects there not to be dozens of listeners, but for a handful it will
be faster than a hash table lookup.
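A hedged sketch of routing by listener tag (it assumes the tag string is passed as an extra argument to mcp.attach; the pool name is hypothetical):
    -- requests arriving on the listener tagged "test" use this handler
    mcp.attach(mcp.CMD_GET, function(r)
        return test_pool(r)
    end, "test")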
|
|
|
|
|
|
| |
The function accepts a flag and returns (bool, token|nil).
The bool indicates whether the flag exists, and if the flag has a token it
is returned as the second value instead of nil.
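A hedged usage sketch, inside a route handler where r is the request (the method name flag_token and the flag letter are assumptions, since the function isn't named above):
    -- check meta-flag "O" (opaque) and grab its token if one was supplied
    local has_o, token = r:flag_token("O")
    if has_o and token then
        -- the flag was present and carried a token
    end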
|
|
|
|
|
|
|
|
|
| |
Add mcp.request function for quickly checking if a flag exists in a
request string.
Also updates the internal code for checking the length of a token to use the
endcap properly, and uses that for the r:token(n) requests as well, which
fixes a subtle bug of the token length being too long.
|
| |
|
|
|
|
|
|
|
|
|
| |
Well, I tested it a few times during the code split PR, but apparently
not with the final PR. This updates the API and also fixes some
compilation issues.
There will still be bugs until I reorganize the sqe/cqe lifecycle, but it
can still be tested.
|
|
|
|
|
|
|
|
|
|
|
| |
Experimental change. *io_uring mode is presently broken.*
There are some potential protocol desync bugs due to mcmc handling its
own buffers and the newer event handler handling its own syscalls.
This change should give a better separation of code for the buffer
tracking. If this change works I will add some optimizations to reduce
memmove calls.
|
|
|
|
|
|
|
|
|
| |
Deletes the magic logging and requires mcp.log_req* to be used if you want
those types of entries to appear. Keeps a data stream separate from
"proxyuser" just in case that's useful.
proxycmds wasn't able to get enough context to autogenerate useful log
lines, so I'd rather not have it in there at all.
|
|
|
|
|
|
| |
Lua-level API for logging the full context of a request/response. Provides
log_req() for simple logging and log_reqsample() for conditional
logging.
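A hedged usage sketch (the argument list shown is an assumption based on the description above; the pool name is hypothetical):
    mcp.attach(mcp.CMD_GET, function(r)
        local res = main_pool(r)
        mcp.log_req(r, res, "main_route") -- log request/response context with a detail string
        return res
    end)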
|
|
|
|
|
|
|
|
| |
Previously mcp.await() only worked if it was called before any other
dispatches.
Also fixes a bug when the supplied pool table was key=value instead of an
array-style table.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Avoids sending the response to the client, in most cases. Works by
stripping the noreply status from the request before sending it along,
so the proxy itself knows when to move the request forward.
Has sharp edges:
- it only looks at the request object that's actually sent to the
backend, instead of the request object that created the coroutine.
- overriding tokens in lua to re-set the noreply mode would break the
protocol.
So this change helps us validate the feature, but solidifying it requires
moving it to the "edges" of processing: before the coroutine and after
any command assembly (or within the command assembly).
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is ketama-based, with options for minor compatibility changes with major
libraries.
It does _not_ support weights. The weights bits in the original ketama
broke the algorithm, as changing the number of points would shift
unrelated servers when the list changed.
This also changes backends to take a "name" specifically, instead of an
"ip address". Note that if a hostname is supplied instead of an IP, there
might be inline DNS lookups on reconnects.
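For illustration only (the `dist` identifier and the host names below are hypothetical placeholders, not the actual option names from this change):
    local servers = {
        mcp.backend({ label = "b1", host = "cache1.example.internal", port = 11211 }),
        mcp.backend({ label = "b2", host = "cache2.example.internal", port = 11211 }),
    }
    -- hypothetical distribution identifier; see the pool docs for the real option
    local main = mcp.pool(servers, { dist = mcp.dist_ring_hash })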
|
|
Now's a good time to at least shove functional subsections of code into
their own files. Some further work to clearly separate the APIs will
help, but it doesn't look too terrible.
The big bonus is getting the backend handling code away from the frontend
handling code, which should make it easier to follow.
|