|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To reduce the time before the connected client sees data, we now attempt to
flush output from the application thread as soon as there is new data to
write to the socket.
This avoids having to wake up the main thread, which would return from
select(), process all sockets, look for the ones that are writable, and
then call select() again; only when that second select() returned would it
finally start writing data to the remote socket.
There was also no guarantee that the main thread would acquire the lock
for the output buffers; if it could not, it would write no data at all,
looping on select() until the application thread had written enough data
to the buffers to hit the high water mark, or until the response was fully
buffered, potentially overflowing from memory buffers to disk.
If the socket is not ready for data, due to it being non-blocking, we do
not flush any data and instead notify/wake up the main thread to start
sending the data once the socket is ready.
Delivery of the first byte from the WSGI application to the remote client
is now faster, and it may alleviate buffer pressure, especially when the
remote client is connected over localhost, as is the case with a load
balancer in front of waitress.
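The opportunistic flush described above can be sketched roughly as
follows. This is a hypothetical illustration, not waitress's actual code:
the helper name `try_flush` is invented, and the real implementation also
has to hold the output-buffer lock and honor the high water mark.

```python
import socket


def try_flush(sock: socket.socket, buf: bytearray) -> bool:
    """Attempt to flush buffered output directly from the application
    thread.  Returns True if the buffer was fully drained, False if the
    non-blocking socket refused data, in which case the main (select)
    thread must be woken to finish sending once the socket is writable.
    """
    while buf:
        try:
            sent = sock.send(buf)
        except BlockingIOError:
            # Socket not ready for data: stop flushing here and let the
            # main thread pick it up when select() reports writability.
            return False
        del buf[:sent]  # drop the bytes that were actually sent
    return True
```

When `try_flush` returns True the main thread never needs to be involved
in delivering that data, which is where the first-byte latency win comes
from.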
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This inserts a callable `waitress.client_disconnected` into the
environment that allows the task, at strategic points in its execution,
to check whether the client disconnected while waiting for the response,
allowing it to cancel the operation.
It requires setting the new adjustment `channel_request_lookahead` to a
value larger than 0, which continues to read requests from a channel even
if a request is already being processed on that channel, up to the given
count, since a client disconnect is detected by reading from a readable
socket and receiving an empty result.
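A WSGI application might use the callable like this. The environ key
`waitress.client_disconnected` and the `channel_request_lookahead`
adjustment are from the entry above; `generate_report` is a hypothetical
stand-in for long-running work done in steps:

```python
def generate_report():
    # Hypothetical stand-in for expensive work produced in steps.
    for i in range(3):
        yield f"chunk {i}\n".encode()


def app(environ, start_response):
    # The callable is only present when channel_request_lookahead > 0;
    # fall back to "never disconnected" so the app works without it.
    disconnected = environ.get("waitress.client_disconnected", lambda: False)

    chunks = []
    for part in generate_report():
        if disconnected():
            # Client went away: abandon the remaining expensive work.
            raise ConnectionError("client disconnected")
        chunks.append(part)

    body = b"".join(chunks)
    start_response(
        "200 OK",
        [("Content-Type", "text/plain"),
         ("Content-Length", str(len(body)))],
    )
    return [body]


# Served with the lookahead enabled, for example:
#   waitress.serve(app, channel_request_lookahead=5)
```

Checking between work steps rather than once up front is the point: the
disconnect can only be noticed while the response is still being produced.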
|