Change-Id: Ib864c7dc6c8c7bb849f4f97a1239eb5cc04c424c
User-provided keys are needed to debug tracebacks and timeouts that
occur when clients talk to memcached, in order to associate those
failures with specific memcache usages within swift services.
Change-Id: I07491bb4ebc3baa13cf09f64a04a61011d561409
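A minimal sketch of the idea, with illustrative names (this is not
Swift's actual MemcacheRing API): log the affected key alongside any
memcached failure so the error can be traced to a specific cache usage.

```python
import logging

def cache_get(client, key, logger=logging.getLogger('memcache')):
    """Fetch a key, logging any failure together with the key itself."""
    try:
        return client.get(key)
    except Exception:
        # Including the key ties the traceback/timeout to a specific
        # usage, e.g. a shard-range cache entry vs. an auth token.
        logger.exception('Error talking to memcached: get(%r)', key)
        return None
```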
Change-Id: I9fdc74d26fd830f463c077c912cdcf00eaab1dfa
As it was, it would hide issues in the logging or ratelimiter
implementations.
Change-Id: I9e557442401ef17b753f45b9e1cb181e71784ccf
Clients sometimes hold open connections "just in case" they might later
pipeline requests. This can cause issues for proxies, especially if
operators restrict max_clients in an effort to improve response times
for the requests that *do* get serviced.
Add a new keepalive_timeout option to give proxies a way to drop these
established-but-idle connections without impacting active connections
(as may happen when reducing client_timeout). Note that this requires
eventlet 0.33.4 or later.
Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3
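A hedged example of how the new option might look in proxy-server.conf
(the section shown and the value are illustrative):

```ini
[DEFAULT]
# Drop connections that sit idle between requests for 20 seconds.
# Requires eventlet 0.33.4 or later; active requests continue to be
# governed by client_timeout.
keepalive_timeout = 20
```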
... and clean up WatchDog start a little.
If this pattern proves useful we could consider extending it.
Change-Id: Ia85f9321b69bc4114a60c32a7ad082cae7da72b3
Previously swift.common.utils monkey patched logging.thread,
logging.threading, and logging._lock upon import with eventlet
threading modules, but that is no longer reasonable or necessary.
With py3, the existing logging._lock is not patched by eventlet
unless the logging module is reloaded. The existing lock is not
tracked by the gc, so it would not be found by eventlet's
green_existing_locks().
Instead, we group all monkey patching into a utils function and apply
patching consistently across daemons and WSGI servers.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Closes-Bug: #1380815
Change-Id: I6f35ad41414898fb7dc5da422f524eb52ff2940f
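A minimal sketch of the consolidated-patching pattern the commit
describes (the function name and the exact patch set are assumptions,
not Swift's actual code):

```python
import eventlet.patcher

def monkey_patch():
    """Apply all eventlet monkey patching in one place, so that daemons
    and WSGI servers are patched identically and exactly once."""
    eventlet.patcher.monkey_patch(all=True)

# Every entry point would then start with the same call, e.g. at the
# top of a daemon's run loop or a WSGI server's startup.
monkey_patch()
```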
The updating shard range cache has been restructured and upgraded to
v2, which only persists the essential attributes in memcache (see
Related-Change). This follow-up patch restructures the listing shard
range cache for object listings in the same way.
UpgradeImpact
=============
The cache key for listing shard ranges in memcached is renamed
from 'shard-listing/<account>/<container>' to
'shard-listing-v2/<account>/<container>', and cache data is
changed to be a list of [lower bound, name]. As a result, this
will invalidate all existing listing shard ranges stored in the
memcache cluster.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
Change-Id: I54a32fd16e3d02b00c18b769c6f675bae3ba8e01
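An illustrative helper (not Swift's actual code) showing the renamed
key; proxies still running the old code simply miss on the new key
until it is repopulated:

```python
def listing_cache_key(account, container):
    # v2 keys are disjoint from the old 'shard-listing/...' namespace,
    # so stale v1 entries are never read and can simply age out.
    return 'shard-listing-v2/%s/%s' % (account, container)
```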
This affects both daemon config parsing and paste-deploy config parsing
when using conf.d. When the WSGI servers were loaded from a flat file,
parsing was always case-sensitive. This difference was surprising
(who wants anything case-insensitive?) and potentially dangerous for
values like RECLAIM_AGE.
UpgradeImpact:
Previously the option keys in swift's configuration .ini files were
sometimes parsed in a case-insensitive manner, so you could use
CLIENT_TIMEOUT and the daemons would recognize that you meant
client_timeout. Now upper-case or mixed-case option names, such as
CLIENT_TIMEOUT or Client_Timeout, will be ignored.
Change-Id: Idd8e552d9fe98b84d7cee1adfa431ea3ae93345d
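For reference, this mirrors stock Python behavior: ConfigParser
lower-cases option names unless optionxform is overridden. A small
self-contained demonstration:

```python
import configparser

parser = configparser.ConfigParser()
parser.optionxform = str   # preserve case instead of lower-casing keys
parser.read_string('[app:proxy-server]\nCLIENT_TIMEOUT = 5\n')

# With case-sensitive parsing, the upper-case key no longer aliases
# the lower-case option the daemon actually reads:
print(parser.has_option('app:proxy-server', 'client_timeout'))  # False
print(parser.has_option('app:proxy-server', 'CLIENT_TIMEOUT'))  # True
```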
This puts replication info in antique rings loaded with metadata_only=True.
Closes-Bug: #1696837
Change-Id: Idf263a7f7a984a1307bd74040ac8f8bb1651bc79
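Illustrative usage of the metadata_only load path the commit fixes
(the ring path is a placeholder):

```python
from swift.common.ring.ring import RingData

# With this fix, even old-format rings loaded this way carry their
# replication info in the metadata.
ring_data = RingData.load('/etc/swift/object.ring.gz', metadata_only=True)
```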
Change-Id: Ibb82555830b88962cc765fc88281ca42a9ce9d9c
Change-Id: I7ab605d48972e8dc06e630d160c745baeea91355
allow_modify_pipeline is no longer supported, but if a caller is still
setting it to True then a ValueError is raised, because the
InternalClient instance will no longer behave in the way the caller
previously expected.
Change-Id: I24015b8becc7289a7d72f9a5863d201e27bcc955
The internal client is supposed to be internal to the cluster, and as
such we rely on it not to remove any headers we decide to send.
However, if the allow_modify_pipeline option is set, the gatekeeper
middleware is added to the internal client's proxy pipeline.
So firstly, this patch removes the allow_modify_pipeline option from
the internal client constructor, and when calling loadapp,
allow_modify_pipeline is always passed as False.
Further, an operator could put the gatekeeper middleware directly into
the internal client config. The internal client constructor will now
check the pipeline and raise a ValueError if the middleware has been
placed in the pipeline.
To do this, there is now a check_gatekeeper_loaded staticmethod that
walks the pipeline and is called from the InternalClient.__init__
method. To enable this walk through the pipeline, we now stash the
wsgi pipeline in each filter so that we don't have to rely on 'app'
naming conventions to iterate the pipeline.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: Idcca7ac0796935c8883de9084d612d64159d9f92
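A sketch of the pipeline walk, assuming each filter has the stashed
pipeline available under a hypothetical wsgi_pipeline attribute (names
are illustrative, not Swift's exact implementation):

```python
def check_gatekeeper_loaded(app):
    """Raise ValueError if gatekeeper middleware is anywhere in the
    InternalClient's proxy pipeline."""
    # 'wsgi_pipeline' stands in for the pipeline stashed on each filter.
    for filter_app in getattr(app, 'wsgi_pipeline', [app]):
        if type(filter_app).__name__ == 'GatekeeperMiddleware':
            raise ValueError('Gatekeeper middleware is not allowed in '
                             'the InternalClient proxy pipeline')
```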
Partial-Bug: #2015274
Change-Id: I3e26f8d4e5de0835212ebc2314cac713950c85d7
Partial-Bug: #2015274
Change-Id: I5b7ab3b2c150ec1513b3e6ebc4b27808d5df042c
Change-Id: I71ad4de0ab3af8e7e865cb924f96e5c415935654
Related-Bug: #2015274
Change-Id: I6e7c1a19a39f51e4520dabfcfad65817534b42a2
Reseller admins can set new headers on accounts like
X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
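Illustrative usage with python-swiftclient (the endpoint, token, and
'gold' policy name are placeholders):

```python
from swiftclient import client

# As a reseller admin, cap the account's usage of the 'gold' policy
# at 10 GiB, independent of any overall X-Account-Meta-Quota-Bytes.
client.post_account(
    url='http://saio:8080/v1/AUTH_test',
    token='<reseller-admin-token>',
    headers={'X-Account-Quota-Bytes-Policy-gold': str(10 * 1024 ** 3)})
```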
There may be circumstances when an internal client wishes to modify
container sysmeta that is hidden from the user. It is desirable that
this happens without modifying the put-timestamp and therefore the
last-modified time that is reported in responses to client HEADs and
GETs.
This patch modifies the container server so that a POST will not
update the container put_timestamp if an X-Backend-No-Timestamp-Update
header is included with the request and has a truthy value.
Note: there are already circumstances in which container sysmeta is
modified without changing the put_timestamp:
- PUT requests with shard range content do not update put_timestamp.
- the sharder updates sysmeta directly via the ContainerBroker without
modifying put_timestamp.
Change-Id: I835b2dd58bc1d4fb911629e4da2ea4b9697dd21b
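A sketch of how an internal client might use the new header (the conf
path, account/container, and sysmeta name are placeholders):

```python
from swift.common.internal_client import InternalClient

swift = InternalClient('/etc/swift/internal-client.conf',
                       'example-daemon', request_tries=3)
# Update hidden sysmeta without bumping the container's put_timestamp,
# so the Last-Modified reported to clients is unchanged.
swift.make_request(
    'POST', '/v1/AUTH_test/cont',
    headers={'X-Backend-No-Timestamp-Update': 'true',
             'X-Container-Sysmeta-Example': 'value'},
    acceptable_statuses=(2,))
```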
Also:
- move some tests to test_utils.TestNamespace.
- move the ShardName class within the file (no change to the class).
- move the end_marker method from ShardRange to Namespace.
Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
Change-Id: Ibd5614d378ec5e9ba47055ba8b67a42ab7f7453c
Restructure the shard ranges stored in memcache for object updates so
that only the essential attributes of shard ranges (lower bounds and
names) are persisted; the aggregate of memcache values is much smaller
and retrieval is much faster too.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
UpgradeImpact
=============
The cache key for updating shard ranges in memcached is renamed
from 'shard-updating/<account>/<container>' to
'shard-updating-v2/<account>/<container>', and cache data is
changed to be a list of [lower bound, name]. As a result, this
will invalidate all existing updating shard ranges stored in the
memcache cluster.
Change-Id: If98af569f99aa1ac79b9485ce9028fdd8d22576b
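An illustrative sketch of the compact representation (not Swift's
actual code): persist sorted [lower bound, name] pairs and pick the
shard for an update by bisecting on the lower bounds.

```python
import bisect

def compact(shard_ranges):
    # shard_ranges are sorted by lower bound; each has .lower and .name
    return [[str(sr.lower), str(sr.name)] for sr in shard_ranges]

def shard_for_update(compacted, obj_name):
    lowers = [lower for lower, _name in compacted]
    # Lower bounds are exclusive, so an object name equal to a bound
    # belongs to the preceding range.
    return compacted[bisect.bisect_left(lowers, obj_name) - 1][1]
```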
Change-Id: I1ab7376b5e68a3deaad5aca113ad55bde00b2238
Closes-Bug: #1697860
Change-Id: I500a86de390b24b9d08a478d695a7d62c447e779
Normally, the proxy object controller would be adding these, but when
objects are encrypted, there won't be any headers in the
x-object-meta-* namespace.
Closes-Bug: #1868045
Change-Id: I8e708a60ee63f679056300fc9d68227e46d605e8
Add a "use_replication" field to the node dict, and a helper function
that sets the use_replication value on a copy of a node dict by
looking up the x-backend-use-replication-network header value.
Change-Id: Ie05af464765dc10cf585be851f462033fc6bdec7
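An illustrative helper in the spirit of the commit (the function name
is an assumption):

```python
from swift.common.utils import config_true_value

def node_with_replication_flag(node, req):
    """Return a copy of the ring node dict with 'use_replication' set
    from the request's X-Backend-Use-Replication-Network header."""
    node = dict(node)  # never mutate the ring's own node dict
    node['use_replication'] = config_true_value(
        req.headers.get('X-Backend-Use-Replication-Network', 'false'))
    return node
```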
Change-Id: Ifccedbe7662925db55d0d8cd9e2e66a03126f661
Closes-Bug: #1816181
The SQLite in-memory databases have been great for testing, but as the
swift DatabaseBrokers have become more complex, the limitations of
in-memory databases are being reached, mostly due to the introduction
of container sharding, where a broker sometimes needs to make multiple
connections to the same database at the same time.
Rather than rework the real broker logic to better support in-memory
testing, it's actually easier to just remove the in-memory broker tests
and use a "real" broker in a tempdir. This allows us to better test how
brokers behave in real life, pending files and all.
This patch replaces all the :memory: brokers in the tests with real
ones placed in a tempdir. To achieve this, a new base unittest class
`TestDBBase` has been added that creates and cleans up the db and
provides some helper methods to manage the db path and location.
Further, all references to :memory: in the database brokers have been
removed.
Change-Id: I5983132f776b84db634fef39c833d5cfdce11980
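A minimal sketch of such a base class (the real TestDBBase provides
more helpers):

```python
import os
import shutil
import tempfile
import unittest

class TestDBBase(unittest.TestCase):
    """Create a real SQLite db path in a tempdir instead of ':memory:',
    and clean it up after each test."""

    def setUp(self):
        self.tempdir = tempfile.mkdtemp()
        self.db_path = os.path.join(self.tempdir, 'test.db')

    def tearDown(self):
        shutil.rmtree(self.tempdir, ignore_errors=True)
```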