path: root/proxy_request.c
* proxy: some TODO/FIXME updates (dormando, 2023-03-26, 1 file, -1/+1)
The rest seem honestly reasonable; no huge red flags anymore.
* proxy: add request and buffer memory limits (dormando, 2023-03-26, 1 file, -0/+12)
Adds:

    mcp.active_req_limit(count)
    mcp.buffer_memory_limit(kilobytes)

Divides by the number of worker threads and creates a per-worker-thread limit for the number of concurrent proxy requests, and for how many bytes are used specifically for value bytes. This does not represent total memory usage, but will be close.

Buffer memory for inbound set requests is not accounted for until after the object has been read from the socket; this is to be improved in a future update. It should be fine unless clients send just the SET request and then hang without sending further data.

Limits should be live-adjustable via configuration reloads.
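A minimal sketch of how these limits might be set from the Lua configuration; the mcp_config_pools entry point and the specific numbers are assumptions, and only the two limit calls come from this change:

    -- hypothetical config sketch; entry point name and numbers are
    -- illustrative, only the two limit calls are from this change
    function mcp_config_pools()
        mcp.active_req_limit(4096)      -- cap concurrent proxy requests,
                                        -- divided across worker threads
        mcp.buffer_memory_limit(32768)  -- ~32MB for value bytes
                                        -- (argument is in kilobytes)
    end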
* proxy: iterate modified request handling (dormando, 2023-01-09, 1 file, -100/+73)
We have a bug where updating a token and then requesting it again returns the previous token. We also have other branches which require use of the flattened request.

Removes the extra allocation space for lua request objects, as we're no longer flattening into the end of the memory.

I was originally doing this using a lot of lua, but just copying the string a few times has some better properties:

1) should actually be faster, with less lua + fewer allocations
2) can be optimized to do minimal copying (avoid keys, append new flags, etc)
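As a sketch, the stale-token pattern described above would look something like this (the request object r and the "O" flag are hypothetical):

    -- replace the O flag's token, then read it back
    local exists, old = r:flag_token("O", "Onewtoken")
    local _, cur = r:flag_token("O")  -- with the bug present, this could
                                      -- still report the previous token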
* proxy: log time now relative to resp lifetime (dormando, 2023-01-05, 1 file, -2/+0)
Originally I envisioned taking an inbound request object, tagging it with the time, and logging at the very end of a function. This would give you the total time of the "backend" part of a request.

On rethinking, the timing information that's most useful from the proxy's perspective is the time it takes for a response to happen, plus the status of that response. One request may generate several sub-responses, and it was impossible to check the timing of each of those and log outliers.

You now cannot get the total time elapsed in a function anymore, but I believe that is less useful information to the user of a proxy. The best picture of latency will still be from the client, and response latency can educate the proxy on issues with backends.

resp:elapsed() has been added as a compromise; it returns the elapsed microseconds that a response took, so you can add the times together and get an approximation of total time (if running req/resps sequentially).

This change also means that calling mcp.await() and waiting for multiple responses will give the timing of each sub-response accurately.
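A sketch of the compromise in use, assuming two sequential backend calls inside a route function (the pool handles and route shape are assumptions; resp:elapsed() itself is from this change):

    local res1 = pool_a(r)
    local res2 = pool_b(r)
    -- elapsed() returns microseconds per response; summing approximates
    -- the old whole-function timing when sub-requests run sequentially
    local total_us = res1:elapsed() + res2:elapsed()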
* proxy: req:flag_token("F", "Freplacement") (dormando, 2022-08-09, 1 file, -2/+28)
Returns (exists, previous_token). The optional second argument will replace the flag/token with the supplied flag/token, or remove it entirely if "" is passed.
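A sketch of the replacement form (the flag letters and tokens are illustrative):

    -- swap in a new flag/token, receiving the previous token back
    local exists, prev = r:flag_token("F", "Fnewtoken")
    -- passing "" removes the flag/token entirely
    r:flag_token("F", "")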
* proxy: fix subtle parser bug for some cmds [tag: 1.6.16] (dormando, 2022-08-03, 1 file, -2/+14)
In the request parser, the new endcap token was always the end of the line, but it should be the start of the next token. For some use cases the scanning stops early, or we have too many tokens; either way the final token could end up looking like the rest of the line.
* proxy: add req:flag_token("F") (dormando, 2022-08-03, 1 file, -0/+37)
The function accepts a flag and returns (bool, token|nil). The bool indicates whether the flag exists; if the flag has a token, it is returned as the second value instead of nil.
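A sketch of the read-only form (the "O" flag and request object r are illustrative):

    local has_o, o_token = r:flag_token("O")
    if has_o and o_token ~= nil then
        -- the flag is present and carries a token value
    end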
* proxy: add r:has_flag(), fix r:token() length (dormando, 2022-08-03, 1 file, -8/+31)
Adds an mcp.request function for quickly checking whether a flag exists in a request string. Also updates the internal code for checking the length of a token to use the endcap properly, and uses that for r:token(n) requests as well, which fixes a subtle bug where the token length could be too long.
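A sketch of the flag check (the flag letter and token index are illustrative assumptions):

    if r:has_flag("q") then
        -- r:token(n) now returns a correctly bounded token rather than
        -- one that runs past its end
        local tok = r:token(2)
    end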
* proxy: mcp.request improvements (dormando, 2022-08-03, 1 file, -5/+15)
- errors if a string value is missing the "\r\n" terminator
- properly uses a value from a response object
- allows passing in a request object for the value as well
- also adds r:vlen() for recovering the value length of a response

I think this still needs r:flags() or similar?
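A sketch of the listed behaviors; the command strings are illustrative, res is assumed to be a response object, and treating the missing-terminator error as a catchable Lua error (hence the pcall) is an assumption:

    local ok = pcall(mcp.request, "set /t/k 0 0 2\r\n", "hi") -- fails: no "\r\n"
    local req = mcp.request("set /t/k 0 0 2\r\n", "hi\r\n")   -- terminated value
    local req2 = mcp.request("set /t/k 0 0 2\r\n", res)       -- value from response
    local len = res:vlen()  -- recover the response's value length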
* proxy: mcplib_request_token() doesn't delimit the final token in a request (Sailesh Mukil, 2022-07-25, 1 file, -7/+4)
mcplib_request_token() allows us to parse each token in a request string. The existing implementation delimits tokens using <whitespace>. This approach works for every token but the last one, which will be followed by "\r\n".

This patch uses the token offsets present within the request parser to calculate the token boundaries, instead of looping until the next delimiter.
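The user-visible effect, sketched (assuming the Lua token accessor is backed by mcplib_request_token(), and that token indexes are 1-based):

    local r = mcp.request("get foo\r\n")
    local key = r:token(2)  -- "foo": the final token no longer drags the
                            -- "\r\n" terminator along with it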
* proxy: mcp.request(cmd, [val | resp]) (dormando, 2022-03-01, 1 file, -1/+13)
mcp.request can now take a response object and internally copy the value; a bit faster than doing it through C.

Iterating from here should allow taking a reference to the resp object and directly pointing to its value, but we need to make resp objects immutable first.
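A sketch of the new form (the pool handle and key are assumptions):

    local res = pool(r)  -- assumed backend call returning a response
    -- the response's value is copied into the new request internally
    local new = mcp.request("set /t/copy 0 0 0\r\n", res)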
* proxy: hacky method of supporting noreply/quiet (dormando, 2022-03-01, 1 file, -3/+26)
Avoids sending the response to the client, in most cases.

Works by stripping the noreply status from the request before sending it along, so the proxy itself knows when to move the request forward.

Has sharp edges:

- only looking at the request object that's actually sent to the backend, instead of the request object that created the coroutine.
- overriding tokens in lua to re-set the noreply mode would break the protocol.

So this change helps us validate the feature, but solidifying it requires moving it to the "edges" of processing: before the coroutine and after any command assembly (or within the command assembly).
* proxy: pull chunks into individual c files (dormando, 2022-02-18, 1 file, -0/+672)
Now's a good time to at least shove functional subsections of code into their own files. Some further work to clearly separate the APIs will help, but it looks not too terrible.

The big bonus is getting the backend handling code away from the frontend handling code, which should make it easier to follow.