Change-Id: If8fbcaff8e5676accb10e6c3c49387bc0de0cdb9
Related-Bug: #2015274
Change-Id: I6e7c1a19a39f51e4520dabfcfad65817534b42a2
The ContainerSharder._send_shard_ranges method sets an
'x-backend-use-replication-network' header with value 'true', so if
the PUT to the root container fails the log message should show the
replication IP and port of the container server.
Change-Id: I8c84f6ee15e6999f71b092bbeed414065a22ee8b
Fix missing space between the epoch time and "DB state:" in the log
message.
Change-Id: Ib654ba58cdcbf245458816460a15c964dfdb073c
Reseller admins can set new headers on accounts like
X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
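
As a hedged illustration (the policy name, endpoint and quota value below are invented, and python-swiftclient is just one way to send the header), a reseller admin might set a per-policy quota like this:

    # Sketch: set a per-policy byte quota as a reseller admin.
    from swiftclient.client import Connection

    conn = Connection(authurl='http://saio:8080/auth/v1.0',
                      user='.reseller_admin:admin', key='admin')
    # Limit the (hypothetical) all-flash 'gold' policy to ~10 GB, while
    # X-Account-Meta-Quota-Bytes still caps the account as a whole.
    conn.post_account(headers={
        'X-Account-Quota-Bytes-Policy-gold': '10000000000',
    })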
There may be circumstances when an internal client wishes to modify
container sysmeta that is hidden from the user. It is desirable that
this happens without modifying the put-timestamp and therefore the
last-modified time that is reported in responses to client HEADs and
GETs.
This patch modifies the container server so that a POST will not
update the container put_timestamp if an X-Backend-No-Timestamp-Update
header is included with the request and has a truthy value.
Note: there are already circumstances in which container sysmeta is
modified without changing the put_timestamp:
- PUT requests with shard range content do not update put_timestamp.
- the sharder updates sysmeta directly via the ContainerBroker without
modifying put_timestamp.
Change-Id: I835b2dd58bc1d4fb911629e4da2ea4b9697dd21b
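
For illustration only (the conf path, user-agent and sysmeta name are placeholders; this is a sketch rather than the patch's own code), an internal client might use the header like so:

    # Sketch: POST container sysmeta without bumping put_timestamp.
    from swift.common.internal_client import InternalClient

    swift = InternalClient('/etc/swift/internal-client.conf',
                           'sysmeta-updater', 3)
    path = swift.make_path('AUTH_test', 'container')
    swift.make_request('POST', path, {
        'X-Backend-No-Timestamp-Update': 'true',
        'X-Container-Sysmeta-Example': 'value',
    }, acceptable_statuses=(2,))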
Also:
- move some tests to test_utils.TestNamespace.
- move the ShardName class within the file (no change to the class)
- move end_marker method from ShardRange to Namespace
Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
Change-Id: Ibd5614d378ec5e9ba47055ba8b67a42ab7f7453c
Restructure the shard ranges that are stored in memcache for
object updating to only persist the essential attributes of
shard ranges in memcache (lower bounds and names), so the
aggregate of memcache values is much smaller and retrieval
will be much faster too.
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
UpgradeImpact
=============
The cache key for updating shard ranges in memcached is renamed
from 'shard-updating/<account>/<container>' to
'shard-updating-v2/<account>/<container>', and cache data is
changed to be a list of [lower bound, name] pairs. As a result, this
will invalidate all existing updating shard ranges stored in the
memcache cluster.
Change-Id: If98af569f99aa1ac79b9485ce9028fdd8d22576b
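
As a hedged sketch of the new cache shape (the account, container and shard names are invented; only the key format and the [lower bound, name] pairs come from the message above):

    # Sketch: what a cached value might look like, and how an object
    # update could be routed with it.
    key = 'shard-updating-v2/AUTH_test/container'
    cached = [
        ['', '.shards_AUTH_test/container-a'],   # lower bound '' = start
        ['m', '.shards_AUTH_test/container-m'],  # handles names >= 'm'
    ]

    def find_shard(obj_name, shards):
        # Pick the last shard whose lower bound is below the object name.
        best = None
        for lower, name in shards:
            if obj_name > lower:
                best = name
        return best

    assert find_shard('kitten', cached) == '.shards_AUTH_test/container-a'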
Change-Id: I1ab7376b5e68a3deaad5aca113ad55bde00b2238
Following a memcache restart in a SAIO, I've seen the following happen
during an object HEAD:
- etag_quoter wants to get account/container info to decide whether to
quote-wrap or not
- account info is a cache miss, so we make a no-auth'ed HEAD to the next
filter in the pipeline
- eventually this gets down to ratelimit, which *also* wants to get
account info
- still a cache miss, so we make a *separate* HEAD that eventually talks
to the backend and populates cache
- ratelimit realizes it can't ratelimit the request and lets the
original HEAD through to the backend
There's a related bug about how something similar can happen when the
backend gets overloaded, but *everything is working* -- we just ought to
be talking straight to the proxy app.
Note that there's likely something similar going on with container info,
but the hardcoded 10% sampling rate makes it harder to see if you're
monitoring raw metric streams.
I thought I fixed this in the related change, but no :-/
Change-Id: I49447c62abf9375541f396f984c91e128b8a05d5
Related-Change: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Related-Bug: #1883214
Make all logs that are associated with a ContainerBroker include the
path to the DB file and the namespace path to the container.
UpgradeImpact:
There is a change in the format of sharder log messages, including
some warning and error level logs.
Change-Id: I7d2fe064175f002055054a72f348b87dc396772b
Closes-Bug: #1697860
Change-Id: I500a86de390b24b9d08a478d695a7d62c447e779
The related change fixed a bug in the ContainerSharder
yield_objects_to_shard_range method by replacing two DB lookups for
misplaced objects, one for undeleted rows and one for deleted rows,
with a single lookup for both deleted and undeleted rows. This
significantly increased the time to make the lookup because it could
no longer take advantage of the object table 'deleted' field index.
This patch reinstates the separate lookups for undeleted and deleted
rows in yield_objects. In isolation that change would re-introduce the
bug that was fixed by the Related-Change. The bug is therefore now
addressed by changing the yield_objects_to_shard_range implementation
so that both undeleted and deleted objects are consumed from the
yield_objects generator.
Change-Id: I337f4e54d1bcd4c5484fe56cfc886b16077982f5
Related-Change: Ie8404f0c7e84d3916f0e0fa62afc54f1f43a4d06
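
A hedged sketch of the distinction (table and column names follow the container schema as described above; the SQL is illustrative, not the broker's actual query text):

    # Sketch: two lookups that can each use the index on 'deleted',
    # versus one combined lookup that cannot.
    import sqlite3

    def misplaced(conn: sqlite3.Connection, lower: str, upper: str):
        for deleted in (0, 1):  # separate, index-friendly lookups
            yield from conn.execute(
                'SELECT name FROM object'
                ' WHERE deleted = ? AND name > ? AND name <= ?',
                (deleted, lower, upper))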
Normally, the proxy object controller would be adding these, but when
encrypted, there won't be any headers in the x-object-meta-* namespace.
Closes-Bug: #1868045
Change-Id: I8e708a60ee63f679056300fc9d68227e46d605e8
X-Backend-Allow-Method was used in some iteration, but not the version
of the patch that finally landed.
Change-Id: Id637253bb68bc839f5444a74c91588d753ef4379
Related-Change: Ia13ee5da3d1b5c536eccaadc7a6fdcd997374443
Add a "use_replication" field to the node dict, and a helper function
that sets the use_replication value on a copy of a node dict by looking
up the x-backend-use-replication-network header value.
Change-Id: Ie05af464765dc10cf585be851f462033fc6bdec7
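
A minimal sketch of such a helper (the function name is hypothetical; config_true_value is swift's existing truthy-string parser):

    # Sketch: copy a node dict and set use_replication from the header.
    from swift.common.utils import config_true_value

    def node_for_headers(node, headers):
        return dict(node, use_replication=config_true_value(
            headers.get('x-backend-use-replication-network', 'false')))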
Change-Id: Ifccedbe7662925db55d0d8cd9e2e66a03126f661
Closes-Bug: #1816181
Previously, we would track the failure in the failure_devs_info set, but
not actually use it to update failure stats unless some other exception
occurred.
Change-Id: Ib28196191275022fcb74d2365910240cc7c61c3a
The SQLite in-memory databases have been great for testing, but as the
swift DatabaseBrokers have become more complex, the limitations of
in-memory databases are being reached, mostly due to the introduction
of container sharding, where a broker sometimes needs to make multiple
connections to the same database at the same time.
Rather than rework the real broker logic to better support in-memory
testing, it's actually easier to just remove the in-memory broker tests
and use a "real" broker in a tempdir. This allows us to better test how
brokers behave in real life, pending files and all.
This patch replaces all the :memory: brokers in the tests with real
ones placed in a tempdir. To achieve this, a new base unittest class,
TestDBBase, has been added that creates, cleans up and provides some
helper methods to manage the db path and location.
Further, all references to :memory: in the database brokers have been
removed.
Change-Id: I5983132f776b84db634fef39c833d5cfdce11980
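
A minimal sketch of what such a base class could look like (TestDBBase is named above; this body is illustrative, using only stdlib helpers):

    import os
    import shutil
    import tempfile
    import unittest

    class TestDBBase(unittest.TestCase):
        # Sketch: give each test a real db path in a throwaway tempdir.
        def setUp(self):
            self.tempdir = tempfile.mkdtemp()
            self.db_path = os.path.join(self.tempdir, 'test.db')

        def tearDown(self):
            shutil.rmtree(self.tempdir, ignore_errors=True)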
In the container backend we have a delete_meta_whitelist which
whitelists specific metadata from being cleared when we delete a
broker. Currently the sharding root and quoted root are in there:
we now process deleted shards, and we need to know what a shard's root
is. But since that time we've found an edge-case bug where old sharding
brokers get stuck because they're deleted (a deleted shard will clear
the sysmeta-sharding truth value) but have also lost their shard
ranges. This makes them fail the sharding_enabled check, so they never
get a chance to pull new shard ranges from their root and are therefore
stuck.
A deleted shard won't be recreated, so it doesn't hurt to keep this
sharding sysmeta value, and keeping it is an extra line of defence
against similar stuck-container issues in the future.
Change-Id: I0bef534eca71b9ce2b29927021b1977463ffbe74
setuptools seems to be in the process of deprecating pkg_resources.
Change-Id: I64f1434a5acab99057beb4f397adca85bdcc4ab6
Once a shard container has been created as part of the sharder cycle,
it pulls the shard's own_shard_range, updates the object_count and
bytes_used, and pushes this to the root container. The root container
can use these to display the current container stats.
However, it is not until a shard gets to the CLEAVED state that it
holds enough information for its namespace, so before this the numbers
it returns are incorrect. Further, when we find and create a shard, it
starts out with the number of objects that, at the time, are expected
to go into it. This is a better answer than, say, nothing.
So it's better for the shard to send its current own_shard_range but
not update the stats until it can be authoritative about that answer.
This patch adds a new SHARD_UPDATE_STAT_STATES that tracks which
ShardRange states a shard needs to be in to be responsible; the current
definition is:
SHARD_UPDATE_STAT_STATES = [ShardRange.CLEAVED, ShardRange.ACTIVE,
                            ShardRange.SHARDING, ShardRange.SHARDED,
                            ShardRange.SHRINKING, ShardRange.SHRUNK]
As we don't want to update the OSR stats and the meta_timestamp,
tombstone updates are also moved to only happen when in a
SHARD_UPDATE_STAT_STATES state.
Change-Id: I838dbba3c791fffa6a36ffdcf73eceeaff718373
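
A hedged sketch of the guard this implies (update_meta is an existing ShardRange method; the surrounding variables are illustrative):

    # Sketch: only push fresh stats once the shard is authoritative.
    if own_shard_range.state in SHARD_UPDATE_STAT_STATES:
        own_shard_range.update_meta(object_count, bytes_used)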
The function to get update shards is only related to the
ObjectController, so move it to obj.py, similar to the existing
function to get listing shards for the ContainerController in
container.py.
Change-Id: Ie20fbf9b46db20f2928198a16305d7509af833db
This patch will add more granularity to shard operation cache or
backend metrics, and then remove some of existing and duplicated
metrics.
Before this patch, related metrics are:
1.shard_<op>.cache.[hit|miss|skip]
2.shard_<op>.backend.<status_int>
where op is 'listing' or 'updating'.
With this patch, they are going to become:
1.shard_<op>.infocache.hit
cache hits with infocache.
2.shard_<op>.cache.hit
cache hits with memcache.
3.shard_<op>.cache.[miss|bypass|skip|force_skip|disabled|error]
.<status_int>
These are operations made to the backend for the reasons below:
miss: cache misses.
bypass: metadata didn't support a cache lookup
skip: the selective skips per skip percentage config.
force_skip: the request with 'x-newest' header.
disabled: memcache is disabled.
error: memcache connection error.
For each kind of operation metric, the <status_int> suffix counts
operations by response status; the sum of all status sub-metrics gives
the total for that operation.
UpgradeImpact
=============
Metrics dashboards will need updates to display these changed metrics
correctly; the infocache metrics are newly added. Please see the
message above for all changes needed.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: Ib8be30d3969b4b4808664c43e94db53d10e6ef4c
pytest still complains about some 20k warnings, but the vast majority
are actually because of eventlet, and a lot of those will get cleaned up
when upper-constraints picks up v0.33.2.
Change-Id: If48cda4ae206266bb41a4065cd90c17cbac84b7f
Previously, clients could use XML external entities (XXEs) to read
arbitrary files from proxy-servers and inject the content into the
request. Since many S3 APIs reflect request content back to the user,
this could be used to extract any secrets that the swift user could
read, such as tempauth credentials, keymaster secrets, etc.
Now, disable entity resolution -- any unknown entities will be replaced
with an empty string. Without resolving the entities, the request is
still processed.
[CVE-2022-47950]
Closes-Bug: #1998625
Co-Authored-By: Romain de Joux <romain.de-joux@ovhcloud.com>
Change-Id: I84494123cfc85e234098c554ecd3e77981f8a096
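
For illustration, a minimal sketch of the general technique with lxml (which s3api uses for XML parsing); this is not the exact patch:

    # Sketch: parse untrusted XML without resolving external entities,
    # so a SYSTEM entity can no longer read local files.
    from lxml import etree

    xml_bytes = (b'<?xml version="1.0"?>'
                 b'<!DOCTYPE r [<!ENTITY e SYSTEM "file:///etc/passwd">]>'
                 b'<r>&e;</r>')
    parser = etree.XMLParser(resolve_entities=False, no_network=True)
    doc = etree.fromstring(xml_bytes, parser)  # entity left unexpanded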
We've seen S3 clients expecting to be able to send request lines like
GET https://cluster.domain/bucket/key HTTP/1.1
instead of the expected
GET /bucket/key HTTP/1.1
Testing against other, independent servers with something like
( echo -n $'GET https://www.google.com/ HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n' ; sleep 1 ) | openssl s_client -connect www.google.com:443
suggests that it may be reasonable to accept them; the RFC even goes so
far as to say
> To allow for transition to the absolute-form for all requests in some
> future version of HTTP, a server MUST accept the absolute-form in
> requests, even though HTTP/1.1 clients will only send them in
> requests to proxies.
(See https://datatracker.ietf.org/doc/html/rfc7230#section-5.3.2)
Fix it at the protocol level, so everywhere else we can mostly continue
to assume that PATH_INFO starts with a / like we always have.
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I04012e523f01e910f41d5a41cdd86d3d2a1b9c59
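
As a hedged sketch of the idea (not swift's actual protocol code), reducing an absolute-form request target to origin-form looks like:

    # Sketch: accept absolute-form request targets but hand the rest of
    # the stack an origin-form path starting with '/'.
    from urllib.parse import urlsplit

    def to_origin_form(target):
        if target.startswith(('http://', 'https://')):
            return urlsplit(target).path or '/'
        return target

    assert to_origin_form('https://cluster.domain/bucket/key') == '/bucket/key'
    assert to_origin_form('/bucket/key') == '/bucket/key'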
|
| | | | | |
| | | | | |
| | | | | |
| | | | | | |
Change-Id: If6af519440fb444539e2526ea4dcca0ec0636388
No sooner do we fix the gate than tox has a new release that breaks it
again. Let's give them a bit to settle down; in the meantime, stick
with 3.x.
See https://github.com/tox-dev/tox/issues/2811
Also simplify our warning suppressions. The message filter is a regex,
so any prefix of the message will suffice. This allows us to also drop a
new message seen on CentOS 8:
CryptographyDeprecationWarning: Python 3.6 is no longer
supported by the Python core team. Therefore, support for
it is deprecated in cryptography. The next release of
cryptography (40.0) will be the last to support Python 3.6.
As we've previously seen with cryptography warnings, this can slow down
our probe tests to the point that they time out.
Change-Id: I316170442c67c1b4a5b87f9a1168cc04ca2417b8
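
To illustrate the prefix point, a hedged sketch with the stdlib warnings module (the project's actual suppressions live in its test config):

    # Sketch: the 'message' filter is a regex matched at the start of
    # the warning text, so a prefix of the message is sufficient.
    import warnings

    warnings.filterwarnings(
        'ignore', message='Python 3.6 is no longer supported')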
Applied deltas:
- Fix http.client references
- Inline HTTPStatus codes
- Address request line splitting (https://bugs.python.org/issue33973)
- Special-case py2 header-parsing
- Address multiple leading slashes in request path
(https://github.com/python/cpython/issues/99220)
Change-Id: Iae28097668213aa0734837ff21aef83251167d19