Change-Id: Ib864c7dc6c8c7bb849f4f97a1239eb5cc04c424c
Change-Id: I9fdc74d26fd830f463c077c912cdcf00eaab1dfa
As it was, it would hide issues in the logging or ratelimiter
implementations.
Change-Id: I9e557442401ef17b753f45b9e1cb181e71784ccf
Change-Id: I7ab605d48972e8dc06e630d160c745baeea91355
Reseller admins can set new headers on accounts like
X-Account-Quota-Bytes-Policy-<policy-name>: <quota>
This may be done to limit consumption of a faster, all-flash policy, for
example.
This is independent of the existing X-Account-Meta-Quota-Bytes header, which
continues to limit the total storage for an account across all policies.
Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
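As a rough illustration (not part of this change), a reseller admin could set
such a per-policy quota with an authenticated POST to the account; the URL,
token and policy name below are placeholders:

    # Hypothetical example: cap the "gold" policy at 10 GiB for one account.
    import requests

    account_url = 'http://proxy.example.com:8080/v1/AUTH_test'  # placeholder
    headers = {
        'X-Auth-Token': '<reseller-admin-token>',                # placeholder
        'X-Account-Quota-Bytes-Policy-gold': str(10 * 1024 ** 3),
    }
    requests.post(account_url, headers=headers).raise_for_status()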
Change-Id: I1ab7376b5e68a3deaad5aca113ad55bde00b2238
Normally, the proxy object controller would be adding these, but when
encrypted, there won't be any headers in the x-object-meta-* namespace.
Closes-Bug: #1868045
Change-Id: I8e708a60ee63f679056300fc9d68227e46d605e8
Change-Id: Ifccedbe7662925db55d0d8cd9e2e66a03126f661
Closes-Bug: #1816181
setuptools seems to be in the process of deprecating pkg_resources.
Change-Id: I64f1434a5acab99057beb4f397adca85bdcc4ab6
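For context, the usual shape of that migration is to swap pkg_resources
lookups for importlib.metadata from the standard library; a minimal sketch
(the distribution name is just an example):

    # Old: import pkg_resources; pkg_resources.get_distribution('swift').version
    from importlib.metadata import PackageNotFoundError, version

    try:
        swift_version = version('swift')  # distribution name is illustrative
    except PackageNotFoundError:
        swift_version = 'unknown'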
Previously, clients could use XML external entities (XXEs) to read
arbitrary files from proxy-servers and inject the content into the
request. Since many S3 APIs reflect request content back to the user,
this could be used to extract any secrets that the swift user could
read, such as tempauth credentials, keymaster secrets, etc.
Now, disable entity resolution -- any unknown entities will be replaced
with an empty string. Without resolving the entities, the request is
still processed.
[CVE-2022-47950]
Closes-Bug: #1998625
Co-Authored-By: Romain de Joux <romain.de-joux@ovhcloud.com>
Change-Id: I84494123cfc85e234098c554ecd3e77981f8a096
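A minimal sketch of the general mitigation, assuming an lxml-based parser
(this is not the exact s3api code): build the parser with entity resolution
disabled so external entities are never fetched.

    from lxml import etree

    # With resolve_entities=False, a DOCTYPE such as
    #   <!ENTITY xxe SYSTEM "file:///etc/swift/proxy-server.conf">
    # is left unresolved instead of pulling file contents into the document.
    parser = etree.XMLParser(resolve_entities=False)

    def parse_request_body(body):
        return etree.fromstring(body, parser)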
We need to support the aforementioned headers in our S3 APIs
and raise an InvalidArgumentError if an S3 client makes a request
Change-Id: I2c5b18e52da7f33b31ba386cdbd042f90b69ef97
Closes-Bug: #1883172
Change-Id: Ie44288976ac5a507c27bd175c5f56c9b0bd04fe0
We've had this option for a year now, and it seems to help. Let's enable
it for everyone. Note that Swift clients still need to opt into the
async delete via a query param, while S3 clients get it for free.
Change-Id: Ib4164f877908b855ce354cc722d9cb0be8be9921
The loading and creation of the Memcache ring in the middleware is
rather involved: it not only reads the config file, but may also
look for a `/etc/swift/memcache.conf`. Further, we are now looking
at using the MemcacheRing client in more places.
So this patch moves the config reading out of the middleware and
into a `load_memcache` helper in swift/common/memcached.py.
Drive-by: clean up unused code in the middleware test module
Change-Id: I028722facfbe3ff8092b6bdcc931887a169cc49a
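A rough sketch of what such a helper might look like; the option names,
fallback handling and MemcacheRing arguments here are illustrative rather
than the exact code:

    import configparser

    def load_memcache(conf):
        """Build a MemcacheRing from middleware conf, falling back to
        /etc/swift/memcache.conf for the server list (sketch only)."""
        servers = conf.get('memcache_servers')
        if not servers:
            cp = configparser.ConfigParser()
            if cp.read('/etc/swift/memcache.conf'):
                try:
                    servers = cp.get('memcache', 'memcache_servers')
                except (configparser.NoSectionError,
                        configparser.NoOptionError):
                    pass
        servers = servers or '127.0.0.1:11211'
        from swift.common.memcached import MemcacheRing
        # Other tuning options are omitted in this sketch.
        return MemcacheRing(
            [s.strip() for s in servers.split(',') if s.strip()])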
The formpost middleware turns POST requests into object PUT subrequests,
which then pass through the other middlewares in the pipeline.
Even when an object is uploaded via formpost, the user should be able to
see the body of any error response produced by those other middlewares.
Previously, formpost used only the status returned by _perform_subrequest(),
so the error response body was replaced with a generic one (e.g. `400 Bad
Request`) and the user could not tell why the request failed.
Now _perform_subrequest() also returns the response, and any error response
body is delivered to the user without alteration.
Closes-Bug: #1990942
Change-Id: I4020e90dfaf7370a7941d123e9bea920c09b1aa0
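A toy illustration of the idea (names and flow simplified, not the real
formpost internals): the subrequest helper returns the whole response, so a
descriptive error body from another middleware can be passed straight through.

    from swift.common import swob

    def other_middleware(env, start_response):
        # Stand-in for some middleware that rejects the PUT with a reason.
        return swob.HTTPBadRequest(body=b'missing x-delete-at value')(
            env, start_response)

    def perform_subrequest(app, path):
        sub_req = swob.Request.blank(path, environ={'REQUEST_METHOD': 'PUT'})
        sub_resp = sub_req.get_response(app)
        # Return the response object too, not just the status line.
        return sub_resp.status, sub_resp

    status, resp = perform_subrequest(other_middleware, '/v1/AUTH_test/c/o')
    assert status == '400 Bad Request'
    assert b'missing x-delete-at value' in resp.body  # delivered unaltered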
If you've got thousands of requests per second for objects in a single
container, you basically NEVER want that container's info to ever fall
out of memcache. If it *does*, all those clients are almost certainly
going to overload the container.
Avoid this by allowing some small fraction of requests to bypass and
refresh the cache, pushing out the TTL as long as there continue to be
requests to the container. The likelihood of skipping the cache is
configurable, similar to what we did for shard range sets.
Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
Closes-Bug: #1883324
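Conceptually, the cache-skip logic looks something like this sketch; the
option name, default rate and cache interface are illustrative:

    import random

    CONTAINER_INFO_SKIP_RATE = 0.001  # ~0.1% of hits refresh the cache

    def get_container_info(memcache, key, fetch_from_backend, ttl=60):
        info = memcache.get(key)
        if info is not None and random.random() >= CONTAINER_INFO_SKIP_RATE:
            return info  # common case: served from cache
        # Miss, or one of the rare "skip" requests: hit the backend and push
        # the TTL out again so hot containers never expire from cache.
        info = fetch_from_backend()
        memcache.set(key, info, time=ttl)
        return info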
This is a fairly blunt tool: ratelimiting is per device and
applied independently in each worker, but this at least provides
some limit to disk IO on backend servers.
GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be
rate-limited.
Only requests with a path starting '<device>/<partition>', where
<partition> can be cast to an integer, will be rate-limited. Other
requests, including, for example, recon requests with paths such as
'recon/version', are unconditionally forwarded to the next app in the
pipeline.
OPTIONS and SSYNC methods are not rate-limited. Note that
SSYNC sub-requests are passed directly to the object server app
and will not pass through this middleware.
Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
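A simplified sketch of the request filtering described above (not the real
middleware code):

    RATE_LIMITED_METHODS = {'GET', 'HEAD', 'PUT', 'POST', 'DELETE',
                            'UPDATE', 'REPLICATE'}

    def should_ratelimit(method, path):
        if method not in RATE_LIMITED_METHODS:   # e.g. OPTIONS, SSYNC
            return False
        parts = path.lstrip('/').split('/', 2)
        if len(parts) < 2:
            return False
        try:
            int(parts[1])                        # <partition> must be numeric
        except ValueError:
            return False                         # e.g. 'recon/version'
        return True

    assert should_ratelimit('GET', '/sda1/12345/AUTH_test/c/o')
    assert not should_ratelimit('GET', '/recon/version')
    assert not should_ratelimit('SSYNC', '/sda1/12345')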
... and reword some mpu listing logic
Related-Change-Id: I923033e863b2faf3826a0f5ba84307addc34f986
Change-Id: If1909bb7210622908f2ecc5e06d53cd48250572a
* Get rid of a bunch of accidental blockquote formatting
* Always declare a lexer to use for ``.. code::`` blocks
Change-Id: I8940e75b094843e542e815dde6b6be4740751813
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I7837a2ec7dee9a657e36147c208c524b5a01671d
When clients issue a ?multipart-manifest=delete request to non-SLOs, we
try to fetch the manifest, then drain and close the response upon seeing
it wasn't actually an SLO manifest. Previously, this could mean
transferring (and discarding) several gigabytes of data.
Now, add two extra headers to the request:
* Range: bytes=-1
* X-Backend-Ignore-Range-If-Metadata-Present: X-Static-Large-Object
The first limits how much data we'll be discarding, while the second tells
object servers to ignore the range header if it's an SLO manifest. Note
that object-servers may still need to return more than one byte to the
proxy -- an EC policy will require that we get a full fragment's worth
from each server -- but at least we've got a better cap on our downside.
Why one byte? Because range requests weren't designed to be able to
return no data. Why the last byte (as opposed to the first)? Because
bytes=0-0 will 416 on a zero-byte object, while bytes=-1 will 200.
Note that the backend header was introduced in Swift 2.24.0 -- if we get
a response from an older object-server, it may respect the Range header
even though it's returning an SLO manifest. In that case, retry without
either header.
Related-Bug: #1980954
Co-Authored-By: Romain de Joux <romain.de-joux@ovhcloud.com>
Change-Id: If3861e5b9c4f17ab3b82ea16673ddb29d07820a1
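A hedged sketch of the header handling this describes (the real logic lives
in the SLO middleware and differs in detail): add the two headers to the
manifest GET, and retry without them if an old object server honored the
Range header on what turned out to be a manifest.

    from swift.common import swob

    def fetch_manifest(app, obj_path):
        req = swob.Request.blank(obj_path, headers={
            'Range': 'bytes=-1',
            'X-Backend-Ignore-Range-If-Metadata-Present':
                'X-Static-Large-Object',
        })
        resp = req.get_response(app)
        if resp.headers.get('X-Static-Large-Object') and \
                resp.status_int == 206:
            # Pre-2.24.0 object server: it honored the Range header even
            # though this is a manifest, so retry without either header.
            resp = swob.Request.blank(obj_path).get_response(app)
        return resp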
We've known this would eventually be necessary for a while [1], and
way back in 2017 we started seeing SHA-1 collisions [2].
This patch follows the approach used for the soft deprecation of SHA-1 in
tempurl. It's still a default digest, but we start by warning when the
middleware is loaded and by exposing any deprecated digests
(if they're still allowed) in /info.
Further, because formpost and tempurl share a lot of code, this patch also
refactors the shared code out into swift.common.digest.
Now that we have a digest module, we also move the digest-related code
into it:
- get_hmac
- extract_digest_and_algorithm
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
Change-Id: I581cadd6bc79e623f1dae071025e4d375254c1d9
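For reference, a minimal sketch of what an HMAC helper of this kind does;
this is a simplified form, not necessarily the exact signature of
swift.common.digest.get_hmac:

    import hashlib
    import hmac

    def get_hmac(method, path, expires, key, digest='sha256'):
        """Compute a tempurl/formpost-style HMAC over method, expiry and
        path (simplified illustration)."""
        message = ('%s\n%s\n%s' % (method, expires, path)).encode('utf8')
        if isinstance(key, str):
            key = key.encode('utf8')
        return hmac.new(key, message, getattr(hashlib, digest)).hexdigest()

    # e.g. get_hmac('GET', '/v1/AUTH_test/c/o', 1700000000, 'mykey')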
In get_slo_segments, a GET subrequest is made to fetch the SLO manifest,
but if the object was not an SLO, the response was not drained and closed.
Closes-Bug: 1980954
Change-Id: I7862c8ef153416c00c8ca7d6bf2f3556a1776d8c
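In WSGI terms, "drained and closed" means consuming the subrequest's
app_iter and calling its close() hook, roughly like this generic sketch
(not the exact helper Swift uses):

    def drain_and_close(resp_iter):
        """Consume and close a WSGI response iterator so the backend
        connection can be reused (generic sketch)."""
        try:
            for _chunk in resp_iter:
                pass  # discard the body
        finally:
            close = getattr(resp_iter, 'close', None)
            if close is not None:
                close()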
Go back to allowing sha1 by default, but still warn that it is deprecated,
that it will soon be removed from the defaults, and that all support will
be removed after that.
Change-Id: I4ebd92ff9358ca0679716a4af085333dde1f726a
Without this, it's really hard for operators to feel comfortable
removing support for now-deprecated digests.
This patch adds new statsd metrics of the form:
formpost.digests.<digest>
tempurl.digests.<digest>
Something like:
formpost.digests.sha1
tempurl.digests.sha512
Change-Id: I203607a0576582330241172d05bf8fd223bbbb9d
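A toy illustration of emitting such counters, using the generic statsd
client library as a stand-in for Swift's internal statsd logger:

    import statsd  # pip install statsd

    client = statsd.StatsClient('localhost', 8125)

    def note_digest_used(middleware, digest):
        # e.g. formpost.digests.sha256 or tempurl.digests.sha512
        client.incr('%s.digests.%s' % (middleware, digest))

    note_digest_used('formpost', 'sha256')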
Change-Id: I269a59559a943fbf2781224d6962b25f6e07d30c
Related-Change: Iadb0a40092b8347eb5c04785cc14d1324cc9396f
Change-Id: Id441688aac1088041e243b8ee70710d9c5d7911b
This keeps us more compatible with AWS S3.
Change-Id: Icf6da9e9abba4abb825a5b109ff978e586319fbb
SHA-1 has been deprecated for a while now, so allow the formpost
middleware to also use SHA-256 and SHA-512. Follow the tempurl model and
accept signatures of the form:
<hex-encoded signature>
or
sha1:<base64-encoded signature>
sha256:<base64-encoded signature>
sha512:<base64-encoded signature>
where the base64-encoding can be either standard or URL-safe, and the
trailing '=' chars may be stripped off.
As part of this, pull the signature-parsing out to a new function, and
add detection for hex-encoded sha512 signatures to tempurl.
Change-Id: Iaba3725551bd47d75067a634a7571485b9afa2de
Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Closes-Bug: #1794601
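A hedged sketch of the signature parsing this describes, similar in spirit
to extract_digest_and_algorithm but not the exact implementation:

    import base64
    import binascii

    HEX_LEN_TO_ALGO = {40: 'sha1', 64: 'sha256', 128: 'sha512'}

    def parse_signature(value):
        """Return (algorithm, hex_signature) for the accepted formats."""
        if ':' in value:
            algo, b64sig = value.split(':', 1)
            # accept standard or URL-safe base64, '=' padding optional
            b64sig = b64sig.replace('-', '+').replace('_', '/')
            b64sig += '=' * (-len(b64sig) % 4)
            raw = base64.b64decode(b64sig)
            return algo, binascii.hexlify(raw).decode('ascii')
        # bare hex signature: infer the algorithm from its length
        return HEX_LEN_TO_ALGO.get(len(value), 'unknown'), value.lower()

    print(parse_signature('da39a3ee5e6b4b0d3255bfef95601890afd80709'))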
In s3api's request object we copy backend headers into the request
environ for logging after we call get_response. The problem with s3api
copy is that we make a pre-flight HEAD request to the source object
using the same request object, so the backend headers from that first
response pollute the request and the proxy won't override the backend
header with the correct storage policy.
As a possible fix we simply remove the problematic header from the
request object after the pre-flight HEAD request finishes.
Change-Id: I40b252446b3a1294a5ca8b531f224ce9c16f9aba
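Conceptually the fix looks something like the sketch below; note that the
specific header shown (X-Backend-Storage-Policy-Index) is an assumption
here, since the message only refers to "the problematic header":

    def copy_with_preflight_head(req, do_head, do_put):
        """Illustrative flow: pre-flight HEAD of the source, then scrub the
        backend header it left behind before the real PUT (header name
        assumed, not confirmed by this change)."""
        do_head(req)
        req.headers.pop('X-Backend-Storage-Policy-Index', None)
        return do_put(req)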
Using pure Swift ACLs, there is no real difference between private,
bucket-owner-full-control, and bucket-owner-read.
Drive-By: Return NotImplemented on log-delivery-write, just like we do
for authenticated-read.
Change-Id: I79761b0f1f5f90f2602005e3e0428d201b5c813e
We've known this would eventually be necessary for a while [1], and
way back in 2017 we started seeing SHA-1 collisions [2].
[1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html
[2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html
UpgradeImpact:
==============
"sha1" has been removed from the default set of `allowed_digests` in the
tempurl middleware config. If your cluster still has clients requiring
the use of SHA-1,
- explicitly configure `allowed_digests` to include "sha1" and
- encourage your clients to move to more-secure algorithms.
Depends-On: https://review.opendev.org/c/openstack/tempest/+/832771
Change-Id: I6e6fa76671c860191a2ce921cb6caddc859b1066
Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046
Closes-Bug: #1733634
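For operators who do still need SHA-1, the override is a proxy-server.conf
setting along these lines (section shown as in the standard pipeline; adjust
to your deployment):

    [filter:tempurl]
    use = egg:swift#tempurl
    # sha1 is deprecated; keep it only while legacy clients require it
    allowed_digests = sha1 sha256 sha512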