path: root/proxy_lua.c
* proxy: move mcp.stats and mcp.request to routes (dormando, 2023-04-26; 1 file, -2/+2)
  These functions landed on the wrong side of the "pool or routes" move commit a while back. They are lacking in test coverage.
* proxy: fix data corruption bug (dormando, 2023-04-12; 1 file, -3/+6)
  Bug introduced in 6c80728: a use-after-free of the response buffer while under concurrency. The await code has a different method of wrapping up a lua coroutine than a standard response, so it was not managing the lifecycle of the response object properly, causing data buffers to be reused before being written back to the client. This fix separates the accounting of memory from the freeing of the buffer, so there is no longer a race. Further restructuring is needed to both make this less bug prone and keep memory accounting in lock step with the freeing of memory.
* proxy: fix backend leak with 2-arg mcp.pool() (dormando, 2023-04-10; 1 file, -1/+1)
  When mcp.pool() is called in its two-argument form, ie: mcp.pool({b1, b2}, { foo = bar }), backend objects would not be properly cached internally, causing objects to leak. Further, it was setting the objects into the cache table indexed by the object itself, so they would not be cleaned up by garbage collection. The bug was introduced as part of 6442017c (allow workers to run IO optionally).
* proxy: rip out io_uring code (dormando, 2023-03-26; 1 file, -24/+0)
  With the event handler rewrite the IO thread scales much better (up to 8-12 worker threads), leaving the io_uring code in the dust. Realistically io_uring won't be able to beat the event code unless you're running kernel 6.2 or newer, which is brand new. Instead of carrying all this code around and having people randomly try it to get more performance, I want to rip it out of the way and add it back later when it makes sense. I am using mcshredder as a platform to learn and keep up to date with io_uring, and will port over its usage pattern when it's time.
* proxy: overhaul backend error handling (dormando, 2023-03-26; 1 file, -0/+4)
  Cleans up logic around response handling in general. Allows returning server-sent error messages upstream for handling. In general SERVER_ERROR means we can keep the connection to the backend. The rest of the errors are protocol errors, and while some are perfectly safe to whitelist, clients should not be causing those sorts of errors and we should cycle the backend regardless.
* proxy: add request and buffer memory limits (dormando, 2023-03-26; 1 file, -0/+49)
  Adds:
  mcp.active_req_limit(count)
  mcp.buffer_memory_limit(kilobytes)
  Each limit is divided by the number of worker threads, creating a per-worker-thread cap on the number of concurrent proxy requests and on how many bytes may be used specifically for value data. This does not represent total memory usage but will be close. Buffer memory for inbound set requests is not accounted for until after the object has been read from the socket; to be improved in a future update. This should be fine unless clients send just the SET command line and then hang without sending further data. Limits should be live-adjustable via configuration reloads.
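  A minimal sketch of how these limits might be set; the mcp_config_pools entry point is the proxy's standard config hook, while the concrete numbers here are assumptions for illustration, not part of this commit.

      -- sketch: global limits, split internally across worker threads
      function mcp_config_pools(old)
          mcp.active_req_limit(5000)      -- cap on concurrent proxied requests
          mcp.buffer_memory_limit(65536)  -- cap on value buffer memory, in kilobytes
          -- ... pool and backend setup would continue here ...
      end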
* proxy: restrict functions for lua config vs route (dormando, 2023-03-26; 1 file, -62/+77)
  Also changes the way the global context and thread contexts are fetched from lua: via the VM extra space instead of upvalues, which is a little faster and more universal. It was always erroneous to run a lot of the config functions from routes and vice versa, but there was no consistent strictness, so users could get into trouble.
* proxy: add mcp.internal(r) API (dormando, 2023-02-25; 1 file, -0/+8)
  local res = mcp.internal(r) - takes a request object and executes it against the proxy's internal cache instance. Experimental as of this commit; needs more test coverage and benchmarking.
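  A small sketch of how this might be used from a route function; the mcp_config_routes/mcp.attach shape comes from the API described elsewhere in this log, and the rest is an assumed illustration.

      -- sketch: serve GETs from the proxy's internal cache instance
      function mcp_config_routes(pools)
          mcp.attach(mcp.CMD_GET, function(r)
              return mcp.internal(r)
          end)
      end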
* proxy: allow workers to run IO optionally (dormando, 2023-02-24; 1 file, -35/+112)
  `mcp.pool(p, { dist = etc, iothread = true })`
  By default the IO thread is not used; instead a backend connection is created for each worker thread. This can be overridden by setting `iothread = true` when creating a pool.
  `mcp.pool(p, { dist = etc, beprefix = "etc" })`
  If a `beprefix` is added to pool arguments, it will create unique backend connections for this pool. This allows you to create multiple sockets per backend by making multiple pools with unique prefixes. There are legitimate use cases for sharing backend connections across different pools, which is why that is the default behavior.
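  For illustration, a sketch of the variants side by side; `p` is assumed to be a table of mcp.backend objects and the prefix string is a placeholder:

      local perworker = mcp.pool(p)                              -- default: one connection per worker thread
      local iopool    = mcp.pool(p, { iothread = true })         -- route backend IO through the IO thread
      local owned     = mcp.pool(p, { beprefix = "settraffic" }) -- dedicated sockets just for this pool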
* proxy: redo libevent handling code (dormando, 2023-02-22; 1 file, -1/+3)
  The event handling code was unoptimized and temporary; it was slated for a rewrite for performance and non-critical bugs alone. However, the old code may be causing critical bugs, so it is being rewritten now.
  Fixes:
  - backend disconnects are detected immediately instead of the next time the backend is used.
  - backend reconnects happen _after_ the retry timeout, not before.
  - use a persistent read handler and a temporary write handler to avoid constantly calling epoll_ctl syscalls, for a potential performance boost.
  Updated some tests in proxyconfig.t as it was now picking up the disconnects immediately. Separately, resolved an unrelated timing issue in the benchmark.
* proxy: disallow overriding mn command (dormando, 2023-02-01; 1 file, -0/+5)
  When using CMD_ANY_STORAGE to enable the proxy, the MN command no longer works as intended; the proxy eats the command and does not flush the client response pipeline.
* proxy: add mcp.backend(t) for more overrides (dormando, 2023-02-01; 1 file, -10/+112)
  ie:
  local b1 = mcp.backend({ label = "b1", host = "127.0.0.1", port = 11511,
                           connecttimeout = 1, retrytimeout = 0.5,
                           readtimeout = 0.1, failurelimit = 11 })
  ... to allow overriding connect/retry/etc tunables on a per-backend basis. If not passed in, the global settings are used.
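  The resulting backend object is then dropped into a pool as usual; a brief assumed usage sketch:

      -- sketch: mix a backend with overrides and a default backend in one pool
      local b2 = mcp.backend({ label = "b2", host = "127.0.0.1", port = 11512 })
      local p = mcp.pool({ b1, b2 })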
* proxy: add mcp.await_logerrors() (dormando, 2023-01-27; 1 file, -0/+1)
  Logs any backgrounded requests that resulted in an error. Note that this may be a temporary interface, and could be deprecated in the future.
* proxy: clean logic around lua yielding (dormando, 2023-01-12; 1 file, -1/+2)
  We were previously duck-typing the response code for a coroutine yield, and random logic for overriding IOs kept piling up in certain cases. This now makes everything explicit and clearer.
* proxy: exposed resp:elapsed() (dormando, 2023-01-09; 1 file, -0/+1)
  I'm a bit concerned that the compile warning popped up only after I started changing something else. Need to upgrade my OS again? :(
* proxy: log time now relative to resp lifetime (dormando, 2023-01-05; 1 file, -8/+10)
  Originally I envisioned taking an inbound request object, tagging it with the time, and calling logging at the very end of a function. This would give you the total time of the "backend" part of a request.
  On rethinking, the timing information that's most useful from the proxy's perspective is the time it takes for a response to happen, plus the status of that response. One request may generate several sub-responses, and with that approach it is impossible to check the timing of each of those and log outliers.
  You now cannot get the total time elapsed in a function anymore, but I believe that is less useful information to the user of a proxy. The best picture of latency will still be from the client, and response latency can educate the proxy on issues with backends.
  resp:elapsed() has been added as a compromise; it returns the elapsed microseconds that a response took, so you can add the times together and get an approximation of total time (if running req/resp's sequentially).
  This change also means that calling mcp.await() and waiting for multiple responses will give the timing of each sub-response accurately.
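  A small sketch of the compromise in use, assuming the common pattern where pool objects are called directly from a route function; the pool names are placeholders:

      -- sketch: sum per-response elapsed times across sequential lookups
      local res1 = pool_a(r)
      local res2 = pool_b(r)
      local total_us = res1:elapsed() + res2:elapsed()  -- approximate total backend time, in microseconds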
* proxy: fix lifecycle of backend connections (dormando, 2022-12-12; 1 file, -42/+129)
  A backend's connection object is technically owned by the IO thread after it has been created. An error in how this was done led to invalid backends being infinitely retried despite the underlying object being collected.
  This change adds an extra indirection to backend objects: a backend_wrap object, which just turns the backend connection into an arbitrary pointer instead of lua memory owned by the config VM.
  - When backend connections are created, this pointer is shipped to the IO thread to have its connection instantiated.
  - When the wrap object is garbage collected (ie; no longer referenced by any pool object), the backend connection pointer is again shipped to the IO thread, which then removes any pending events, closes the socket, and frees data.
* proxy: add mcp.AWAIT_BACKGROUND (dormando, 2022-12-01; 1 file, -0/+1)
  mcp.await(request, pools, 0, mcp.AWAIT_BACKGROUND) will, instead of waiting on any request to return, simply return an empty table as soon as the background requests are dispatched.
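  A hedged sketch of fire-and-forget replication; the pool names and the choice of CMD_DELETE are assumptions for illustration:

      -- sketch: background-replicate deletes to secondary pools, answer from the primary
      mcp.attach(mcp.CMD_DELETE, function(r)
          mcp.await(r, { backup_pool_a, backup_pool_b }, 0, mcp.AWAIT_BACKGROUND)
          return primary_pool(r)
      end)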
* proxy: fix crash when backends are gc'd (dormando, 2022-10-20; 1 file, -2/+4)
  Crept in on an earlier cleanup commit. For some reason pools get gc'ed pretty quickly, but backends take several HUPs, so I didn't see this until I sat here HUP'ing the proxy 10+ times in a row.
* proxy: backend connection improvement (dormando, 2022-10-20; 1 file, -14/+18)
  Improvements to handling of new and failed backend socket connections. Previously connections were initiated immediately, and initially from the config thread, yet completion of opening sockets wouldn't happen until a request tried to use that backend. Now we open connections via the IO thread, as well as validate new connections with a "version\r\n" command.
  Also fixes a couple of error conditions (parsing, backend disconnect) where clients could hang waiting for a retry time in certain conditions. Now connections should re-establish immediately and dead backends should flip into a bad fast-fail state quicker.
* proxy: add mcp.await FASTGOOD flag (dormando, 2022-09-27; 1 file, -0/+1)
  Returns early on a hit, else waits for N non-error responses.
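  A sketch of the intended pattern inside a route function; the exact flag constant (assumed here to be mcp.AWAIT_FASTGOOD) and the pool names are illustrative assumptions:

      -- sketch: return as soon as any zone reports a hit, else wait for 2 non-error responses
      local results = mcp.await(r, { zone_a, zone_b, zone_c }, 2, mcp.AWAIT_FASTGOOD)
      return results[1]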
* proxy: remove most references to settings global (dormando, 2022-09-15; 1 file, -13/+13)
  Should make isolation/testing easier.
* proxy: update mcmc and calling conventions (dormando, 2022-09-02; 1 file, -2/+2)
  Upstream fixes: mcmc would return OK for garbage responses, which was probably causing issues in the past. This also removes MCMC_CODE_MISS and replaces it with MCMC_CODE_END.
* proxy: mcp.attach(CMD, r, "tag") (dormando, 2022-08-24; 1 file, -10/+76)
  Allows using tagged listeners (ex: `-l tag[test]:127.0.0.1:11212`) to select a top level route for a function. Expects there to not be dozens of listeners, but for a handful this will be faster than a hash table lookup.
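  An assumed usage sketch pairing the tagged listener above with a dedicated route function (the router names are hypothetical):

      -- sketch: requests arriving on the listener tagged "test" use a separate GET route
      mcp.attach(mcp.CMD_GET, test_router, "test")
      mcp.attach(mcp.CMD_GET, default_router)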
* proxy: allow mcp.pool to ignore a nil second arg (dormando, 2022-08-23; 1 file, -1/+3)
  Just helps make the lua cleaner. If you pass in a non-nil wrong second argument it'll still error out.
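  In other words (a small assumed illustration), these forms now behave the same:

      local opts = nil                       -- e.g. an options table that may or may not be filled in
      local p1 = mcp.pool({ b1, b2 }, opts)  -- a nil second argument is now ignored
      local p2 = mcp.pool({ b1, b2 })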
* proxy: backend object cache was broken (dormando, 2022-08-16; 1 file, -1/+1)
  A few versions back the indexes changed. Should've been counting up from the bottom anyway... now reloads should stay fast.
* proxy: add req:flag_token("F") (dormando, 2022-08-03; 1 file, -0/+1)
  The function accepts a flag and returns (bool, token|nil). The bool indicates whether the flag exists, and if the flag has a token it is returned instead of nil as the second value.
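  A quick assumed example, using the meta-protocol "O" (opaque) flag:

      -- sketch: pull the opaque token off an inbound meta request, if present
      local has_opaque, token = r:flag_token("O")
      if has_opaque and token then
          -- token holds the value that followed the O flag
      end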
* proxy: mcp.response code and rline API (dormando, 2022-08-03; 1 file, -0/+34)
  Adds resp:code(), which returns a code you can compare with mcp.MCMC_CODE_*.
  Adds resp:line(), which returns the raw response line after the command. This can be used in lua while the flag/token API is missing for the response object.
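  A short assumed sketch of checking a backend response inside a route function; the pool-call pattern and the END comparison are illustrative:

      -- sketch: inspect the backend response before handing it to the client
      local res = pool(r)
      if res:code() == mcp.MCMC_CODE_END then
          -- no value came back; res:line() holds the raw response line if more detail is needed
      end
      return res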
* proxy: add r:has_flag(), fix r:token() length (dormando, 2022-08-03; 1 file, -0/+1)
  Adds a function to the mcp request object for quickly checking if a flag exists in the request string. Also updates internal code for checking the length of a token to use the endcap properly, and uses that for the r:token(n) requests as well, which fixes a subtle bug of the token length being too long.
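  Another small assumed example, this time just testing for a flag's presence:

      -- sketch: detect quiet-mode meta requests
      if r:has_flag("q") then
          -- the client asked for noreply-style semantics
      end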
* proxy: mcp.request improvements (dormando, 2022-08-03; 1 file, -0/+18)
  - errors if a string value is missing the "\r\n" terminator
  - properly uses a value from a response object
  - allows passing in a request object for the value as well
  - also adds r:vlen() for recovering the value length of a response
  Think this still needs r:flags() or similar?
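  A hedged sketch of building a new request that reuses the value from an existing response; the meta-set command line, key, and pool names are assumptions:

      -- sketch: copy a fetched value into a second pool via a generated set
      local res = pool(r)
      local setreq = mcp.request("ms backupkey " .. res:vlen() .. "\r\n", res)
      backup_pool(setreq)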
* proxy: rework backend buffer handling (dormando, 2022-07-24; 1 file, -1/+1)
  Experimental change. *io_uring mode is presently broken*
  There are some potential protocol desync bugs due to mcmc handling its own buffers and the newer event handler handling its own syscalls. This change should have better separation of code for the buffer tracking. If this change works I will add some optimizations to reduce memmove's.
* 'proxyreqs' does not work unless 'proxyuser' also provided (Sailesh Mukil, 2022-04-12; 1 file, -2/+2)
  Ideally we should be able to see the request logs after just: "watch proxyreqs"
* proxy: mcp.log_req* API interface (dormando, 2022-04-08; 1 file, -0/+121)
  Lua level API for logging full context of a request/response. Provides log_req() for simple logging and log_reqsample() for conditional logging.
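  A hedged sketch of the simple form; the argument order (request, response, detail string) and the detail label are assumptions here, not something this log states:

      -- sketch: record the request/response pair with a free-form detail tag
      local res = pool(r)
      mcp.log_req(r, res, "main_route")
      return res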
* proxy: add ring_hash builtin (dormando, 2022-02-24; 1 file, -17/+18)
  This is ketama-based, with options for minor compat changes with major libraries. It does _not_ support weights; the weights bits in the original ketama broke the algorithm, as changing the number of points would shift unrelated servers when the list changes.
  This also changes backends to take a "name" specifically, instead of an "ip address". Note that if supplying a hostname instead of an IP there might be inline DNS lookups on reconnects.
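  A hedged sketch of selecting the builtin for a pool; the option value (assumed here to be mcp.dist_ring_hash) builds on the `dist` pool option shown elsewhere in this log, and the host names are placeholders:

      -- sketch: two named backends hashed with the ketama-style ring
      local b1 = mcp.backend({ label = "b1", host = "cache-a.internal", port = 11211 })
      local b2 = mcp.backend({ label = "b2", host = "cache-b.internal", port = 11211 })
      local p = mcp.pool({ b1, b2 }, { dist = mcp.dist_ring_hash })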
* proxy: pull chunks into individual c files (dormando, 2022-02-18; 1 file, -0/+752)
  Now's a good time to at least shove functional subsections of code into their own files. Some further work to clearly separate the APIs will help, but it looks not too terrible. The big bonus is getting the backend handling code away from the frontend handling code, which should make it easier to follow.