| Commit message | Author | Age | Files | Lines |
Change-Id: Ib864c7dc6c8c7bb849f4f97a1239eb5cc04c424c
User-provided keys are needed to debug those tracebacks/timeouts when
clients talk to memcached, in order to associate those failures
with specific memcache usages within swift services.
Change-Id: I07491bb4ebc3baa13cf09f64a04a61011d561409
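A minimal sketch of what such per-call labeling might look like — the wrapper and its `usage` parameter are hypothetical names for illustration, not Swift's actual memcache API:

```python
import logging

def labeled_memcache_get(client, key, usage='unknown'):
    # Tag every memcached call with the swift usage site (e.g.
    # 'container.shard_listing') so a traceback or timeout in the
    # logs can be traced back to a specific caller.
    try:
        return client.get(key)
    except Exception:
        logging.exception('memcached get failed (key=%r, usage=%s)',
                          key, usage)
        return None
```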
Create a unit test to verify client timeout for multiple requests
Change-Id: I974e01cd2cb18f4ea87c3966dbf4b06bff22ed39
Change-Id: Ib3c9b274bbd2e643f3febbdf54a8a43f4775944b
Replicated, unencrypted metadata is written down differently on py2
vs py3, and has been since we started supporting py3. Fortunately,
we can inspect the raw xattr bytes to determine whether the pickle
was written using py2 or py3, so we can properly read legacy py2 meta
under py3 rather than hitting a unicode error.
Closes-Bug: #2012531
Change-Id: I5876e3b88f0bb1224299b57541788f590f64ddd4
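The general shape of such a fallback can be sketched as follows — the real fix inspects the raw xattr bytes directly, whereas this hypothetical helper simply retries with a byte-preserving text encoding:

```python
import pickle

def read_legacy_metadata(raw):
    """Load pickled object metadata that may have been written under
    py2 (hypothetical helper, not Swift's actual read_metadata)."""
    try:
        # metadata written under py3 round-trips cleanly
        return pickle.loads(raw)
    except UnicodeDecodeError:
        # py2 str values holding non-ASCII bytes fail the default
        # ASCII decoding; latin-1 maps bytes 0-255 one-to-one
        return pickle.loads(raw, encoding='latin1')
```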
Change-Id: Ie2a8e4eced6688e5a98aa37c3c7b0c13fd2ddeee
Clients sometimes hold open connections "just in case" they might later
pipeline requests. This can cause issues for proxies, especially if
operators restrict max_clients in an effort to improve response times
for the requests that *do* get serviced.
Add a new keepalive_timeout option to give proxies a way to drop these
established-but-idle connections without impacting active connections
(as may happen when reducing client_timeout). Note that this requires
eventlet 0.33.4 or later.
Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3
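In proxy-server.conf this might look like the following (the values here are illustrative, not recommendations):

```ini
[DEFAULT]
# close connections that are established but idle after 15 seconds;
# requires eventlet 0.33.4 or later
keepalive_timeout = 15
# client_timeout continues to govern reads on active connections
client_timeout = 60
```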
... and clean up WatchDog start a little.
If this pattern proves useful we could consider extending it.
Change-Id: Ia85f9321b69bc4114a60c32a7ad082cae7da72b3
Previously swift.common.utils monkey patched logging.thread,
logging.threading, and logging._lock upon import with eventlet
threading modules, but that is no longer reasonable or necessary.
With py3, the existing logging._lock is not patched by eventlet,
unless the logging module is reloaded. The existing lock is not
tracked by the gc so would not be found by eventlet's
green_existing_locks().
Instead we group all monkey patching into a utils function and apply
it consistently across daemons and WSGI servers.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Closes-Bug: #1380815
Change-Id: I6f35ad41414898fb7dc5da422f524eb52ff2940f
The updating shard range cache has been restructured and upgraded to
v2, which only persists the essential attributes in memcache (see
Related-Change). This follow-up patch restructures the listing shard
range cache for object listing in the same way.
UpgradeImpact
=============
The cache key for listing shard ranges in memcached is renamed
from 'shard-listing/<account>/<container>' to
'shard-listing-v2/<account>/<container>', and cache data is
changed to be a list of [lower bound, name]. As a result, this
will invalidate all existing listing shard ranges stored in the
memcache cluster.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
Change-Id: I54a32fd16e3d02b00c18b769c6f675bae3ba8e01
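A sketch of the renamed key and the compact cache entry described above (the attribute names on the shard range object are assumptions, not Swift's exact ones):

```python
def shard_listing_cache_key(account, container):
    # v2 key; bumping the name invalidates all v1 entries in memcache
    return 'shard-listing-v2/%s/%s' % (account, container)

def compact_shard_listing(shard_ranges):
    # persist only the essential attributes: [lower bound, name]
    return [[sr.lower, sr.name] for sr in shard_ranges]
```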
X-Backend-* headers were previously passed to the backend server with
only a subset of all request types:
* all object requests
* container GET, HEAD
* account GET, HEAD
In these cases, X-Backend-* headers were transferred to backend
requests implicitly as a consequence of *all* the headers in the
request that the proxy is handling being copied to the backend
request.
With this change, X-Backend-* headers are explicitly copied from the
request that the proxy is handling to the backend request, for every
request type.
Note: X-Backend-* headers are typically added to a request by the
proxy app or middleware, prior to creating a backend request.
X-Backend-* headers are removed from client requests by the gatekeeper
middleware, so clients cannot send X-Backend-* headers to backend
servers. An exception is an InternalClient that does not have
gatekeeper middleware, deliberately so that internal daemons such as
the sharder can send X-Backend-* headers to the backend servers.
Also, BaseController.generate_request_headers() is fixed to prevent
accessing a None type when transfer is True but the orig_req is None.
Change-Id: I05fb9a3e1c98d96bbe01da2ee28474e0f57297e6
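The explicit copy described above amounts to something like the following simplified sketch (the real logic lives in BaseController.generate_request_headers):

```python
def copy_backend_headers(orig_headers, backend_headers):
    """Copy X-Backend-* headers from the request being handled to the
    backend request, regardless of request type."""
    if orig_headers is None:
        # guard for the transfer=True but orig_req is None case
        return backend_headers
    for name, value in orig_headers.items():
        if name.lower().startswith('x-backend-'):
            backend_headers[name] = value
    return backend_headers
```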
Simplify ECFragGetter by removing code that guards against the policy
fragment_size being None or zero.
Policy fragment_size must be > 0: the fragment_size is based on the
ec_segment_size, which is verified as > 0 when constructing an EC
policy. This is asserted by test_parse_storage_policies in
test.unit.common.test_storage_policy.TestStoragePolicies.
Also, rename client_chunk_size to fragment_size for clarity.
Change-Id: Ie1efaab3bd0510275d534b5c023cb73c98bec90d
This affects both daemon config parsing and paste-deploy config parsing
when using conf.d. When the WSGI servers were loaded from a flat file
they have always been case-sensitive. This difference was surprising
(who wants anything case-insensitive?) and potentially dangerous for
values like RECLAIM_AGE.
UpgradeImpact:
Previously the option keys in swift's configuration .ini files were
sometimes parsed in a case-insensitive manner, so you could use
CLIENT_TIMEOUT and the daemons would recognize you meant client_timeout.
Now upper-case or mixed-case option names, such as CLIENT_TIMEOUT or
Client_Timeout, will be ignored.
Change-Id: Idd8e552d9fe98b84d7cee1adfa431ea3ae93345d
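The behavioral difference comes down to the parser's option-name transform; a minimal illustration with the stdlib parser (not Swift's actual readconf code):

```python
from configparser import ConfigParser

parser = ConfigParser()
parser.optionxform = str   # preserve case, as flat-file parsing does
parser.read_string("""
[app:proxy-server]
CLIENT_TIMEOUT = 5
client_timeout = 60
""")
# Only the lower-case spelling matches what the daemons look up;
# the upper-case option is now simply ignored.
value = parser.get('app:proxy-server', 'client_timeout')
```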
This puts replication info in antique rings loaded with metadata_only=True.
Closes-Bug: #1696837
Change-Id: Idf263a7f7a984a1307bd74040ac8f8bb1651bc79
Closes-Bug: #2017021
Change-Id: If422f99a77245b35ab755857f9816c1e401a4e22
The log message phrase 'ChunkWriteTimeout fetching fragments' implies
that the timeout occurred while getting a fragment from the backend
object server, when in fact the timeout occurred while waiting to
yield the fragment to the app iter. Hence, change the message to
'ChunkWriteTimeout feeding fragments'.
Change-Id: Ic0813e6a9844da1130091d27e3dbe272ea871d11
Change-Id: Ibb82555830b88962cc765fc88281ca42a9ce9d9c
Refactor and add some targeted unit tests. No behavioral change.
Change-Id: I153528b8a1709f3756c261cf3eb2acfd5de10f9c
The test claimed to assert that ChunkWriteTimeouts are logged, but
the test would in fact pass if the timeouts were not logged.
Change-Id: Ic9d119858397e8aeccaf7f89487f9e62f16ee453
`much_older` has to be much older than `older`, or the test gets
flaky. See
- test_cleanup_ondisk_files_reclaim_non_data_files,
- test_cleanup_ondisk_files_reclaim_with_data_files, and
- test_cleanup_ondisk_files_reclaim_with_data_files_legacy_durable
for a more standard definition of "much_older".
Closes-Bug: #2017024
Change-Id: I1eaa501827f4475ddc0c20d82cf0a6d4a5e98f75
Use a double underscore as the separator to ensure old code blows up
rather than misinterprets encoded offsets.
Change-Id: Idf9b5118e9b64843e0c4dd7088b498b165f33db4
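The idea can be illustrated with a hypothetical encoder — the field layout here is invented purely for illustration, not Swift's actual format:

```python
def encode_with_offset(value, offset):
    # '__' cannot appear in the old single-field format, so old
    # readers fail loudly instead of silently misparsing the offset
    return '%s__%x' % (value, offset) if offset else str(value)
```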
Not sure how we didn't catch this before; py2 gate jobs still seem to run these
tests and they'd pass??
Change-Id: I24a5680d19af609b92588249610e4a1f128bdad3
The SsyncSender encodes object file timestamps in a compact form and
the SsyncReceiver decodes the timestamps and compares them to its
object file set.
The encoding represents the meta file timestamp as a delta from the
data file timestamp, NOT INCLUDING the data file timestamp offset.
Previously, the decoding was erroneously calculating the meta file
timestamp as the sum of the delta plus the data file timestamp
INCLUDING the offset.
For example, if the SsyncSender has object file timestamps:
ts_data = t0_1.data
ts_meta = t1.data
then the receiver would erroneously perceive that the sender has:
ts_data = t0_1.data
ts_meta = t1_1.data
As described in the referenced bug report, this erroneous decoding
could cause the SsyncReceiver to request that the SsyncSender sync an
object that is already in sync, which results in a 409 Conflict at the
receiver. The 409 causes the ssync session to terminate, and the same
process repeats on the next attempt.
Closes-Bug: #2007643
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I74a0aac0ac29577026743f87f4b654d85e8fcc80
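A sketch of the corrected decoding rule, modeling a timestamp as a hypothetical (normal, offset) pair rather than Swift's Timestamp class:

```python
def decode_sender_meta(ts_data, meta_delta):
    """Return the sender's meta timestamp implied by the encoded delta.
    The delta is relative to the data timestamp NOT including its
    offset, so the decoded meta timestamp carries no offset."""
    normal, offset = ts_data
    # the buggy decode was effectively (normal + meta_delta, offset),
    # turning ts_data=t0_1, ts_meta=t1 into a perceived t1_1
    return (normal + meta_delta, 0)
```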
allow_modify_pipeline is no longer supported, but if a caller is still
setting it to True then raise ValueError, because the InternalClient
instance will no longer behave in the way the caller previously
expected.
Change-Id: I24015b8becc7289a7d72f9a5863d201e27bcc955
The internal client is supposed to be internal to the cluster, and as
such we rely on it not to remove any headers we decide to send. However,
if the allow_modify_pipeline option is set, the gatekeeper middleware is
added to the internal client's proxy pipeline.
So firstly, this patch removes the allow_modify_pipeline option from the
internal client constructor, and when calling loadapp
allow_modify_pipeline is always passed as False.
Further, an operator could directly put the gatekeeper middleware into
the internal client config. The internal client constructor will now
check the pipeline and raise a ValueError if one has been placed in the
pipeline.
To do this, there is now a check_gatekeeper_loaded staticmethod that
walks the pipeline, called from the InternalClient.__init__ method.
To enable walking the pipeline, we now stash the WSGI pipeline in each
filter so that we don't have to rely on 'app' naming conventions to
iterate it.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: Idcca7ac0796935c8883de9084d612d64159d9f92
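The pipeline walk might look roughly like this — the `is_gatekeeper` flag and the `.app` chaining are assumptions for illustration, not Swift's exact attributes:

```python
def check_gatekeeper_loaded(app):
    # Walk the stashed pipeline via each filter's reference to the
    # next app, refusing to construct an InternalClient whose
    # pipeline contains gatekeeper.
    while app is not None:
        if getattr(app, 'is_gatekeeper', False):
            raise ValueError(
                'gatekeeper middleware is not allowed in the '
                'InternalClient proxy pipeline')
        app = getattr(app, 'app', None)
```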
Partial-Bug: #2015274
Change-Id: I3e26f8d4e5de0835212ebc2314cac713950c85d7
Partial-Bug: #2015274
Change-Id: I5b7ab3b2c150ec1513b3e6ebc4b27808d5df042c