Commit message log

We need to support the aforementioned headers in our S3 APIs
and raise an InvalidArgumentError if an S3 client makes a request using them.
Change-Id: I2c5b18e52da7f33b31ba386cdbd042f90b69ef97
Change-Id: I269a59559a943fbf2781224d6962b25f6e07d30c
Related-Change: Iadb0a40092b8347eb5c04785cc14d1324cc9396f
In s3api's request object we copy backend headers into the request
environ for logging after we call get_response. The problem with s3api
copy is that we make a pre-flight HEAD request to the source object
using the same request object, so the first response's backend headers
pollute the request and the proxy won't override the backend header
with the correct storage policy.
As a possible fix, we simply remove the problematic header from the
request object after the pre-flight HEAD request finishes.
Change-Id: I40b252446b3a1294a5ca8b531f224ce9c16f9aba
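
A minimal sketch of that shape of fix, assuming the polluting header is
the backend storage-policy header (the name below follows Swift's
backend-header convention; the dict-based request API is simplified):

    # Hedged sketch: after the pre-flight HEAD, drop the backend header the
    # first response copied into the shared request, so the proxy resolves
    # the storage policy fresh for the actual copy.
    def scrub_after_preflight(request_headers):
        request_headers.pop('X-Backend-Storage-Policy-Index', None)
        return request_headers

    headers = {'X-Backend-Storage-Policy-Index': '1', 'Content-Length': '0'}
    assert 'X-Backend-Storage-Policy-Index' not in scrub_after_preflight(headers)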
Change-Id: Ibe514a7ab22d475517b1efc50de676f47d741a4c
The current implementation of s3 signature calculation
relies on WSGI URL encoding, which is discouraged by AWS:
https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-header-based-auth.html.
This leads to rejecting requests with valid signatures.
This update encodes all characters specified by AWS except
'A'-'Z', 'a'-'z', '0'-'9', '-', '.', '_', and '~', to comply with
AWS signature calculation.
Fixes LP Bug #1961841
Change-Id: Ifa8f94544224c3379e7f2805f6f86d0b0a47279a
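
As a rough sketch of the encoding rule (a from-scratch helper, not the
function from this patch):

    # Percent-encode every byte except the unreserved characters listed
    # above, per the AWS SigV4 canonical-request rules. '/' is left alone
    # in object paths but encoded inside query values.
    import urllib.parse

    UNRESERVED = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                  'abcdefghijklmnopqrstuvwxyz0123456789-._~')

    def aws_uri_encode(value, encode_slash=True):
        safe = UNRESERVED if encode_slash else UNRESERVED + '/'
        return urllib.parse.quote(value, safe=safe)

    assert aws_uri_encode('a b/c', encode_slash=False) == 'a%20b/c'
    assert aws_uri_encode('a b/c') == 'a%20b%2Fc'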
The *_swift_info functions use in-module global dicts to provide a
registry mechanism for registering and getting swift info.
This is an abnormal pattern and doesn't quite fit into utils. Further,
we're looking at following this pattern for sensitive info to trim in
the future.
So this patch does some house cleaning and moves this registry to a new
module, swift.common.registry, and updates all the references to it.
For backwards compat we still import the *_swift_info methods into utils
for any 3rd party tools or middleware.
Change-Id: I71fd7f50d1aafc001d6905438f42de4e58af8421
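
A hedged sketch of the registry pattern being relocated (the real module
also handles admin info and sensitive-header registration; this is just
the shape):

    # Module-level dicts plus register/get helpers -- the pattern that now
    # lives in swift.common.registry instead of utils.
    _swift_info = {}
    _swift_admin_info = {}

    def register_swift_info(name='swift', admin=False, **kwargs):
        registry = _swift_admin_info if admin else _swift_info
        registry.setdefault(name, {}).update(kwargs)

    def get_swift_info(admin=False):
        info = dict(_swift_info)
        if admin:
            info['admin'] = dict(_swift_admin_info)
        return info

    register_swift_info('s3api', max_bucket_listing=1000)
    assert get_swift_info()['s3api']['max_bucket_listing'] == 1000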
Previous problems included:
- returning wsgi strings quoted assuming UTF-8 on py3 when initiating
or completing multipart uploads
- trying to str() some unicode on py2 when listing parts, leading to
UnicodeEncodeErrors
Change-Id: Ibc1d42c8deffe41c557350a574ae80751e9bd565
Sometimes a cluster might be accessible via more than one set
of domain names. Allow operators to configure them such that
virtual-host style requests work with all names.
Change-Id: I83b2fded44000bf04f558e2deb6553565d54fd4a
Trying to copy an object with non-ASCII characters in its name results
in, depending on the pipeline:
- an error code 412 because of a badly urlencoded path
- an error code 500 "TypeError: Expected a WSGI string"
This commit fixes the problem by calling str_to_wsgi on the object name
after it has been urldecoded. We do not need to call this on the
container name because it is supposed to contain only ASCII characters.
Change-Id: If837d4e55735b10a783c85d91f37fbea5e3baf1d
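
A sketch of the decode step, with swob's str_to_wsgi conversion inlined
for self-containment (the copy-source parsing here is assumed):

    # Unquote the percent-encoded copy source, then re-encode the object
    # name as a WSGI string (UTF-8 bytes read as latin-1), as str_to_wsgi
    # does on py3.
    import urllib.parse

    def str_to_wsgi(native_str):
        return native_str.encode('utf-8').decode('latin-1')

    def parse_copy_source(copy_source):
        bucket, obj = copy_source.lstrip('/').split('/', 1)
        return bucket, str_to_wsgi(urllib.parse.unquote(obj))

    assert parse_copy_source('/bucket/%C3%A9t%C3%A9') == \
        ('bucket', '\xc3\xa9t\xc3\xa9')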
Change-Id: Ie91a90fbb3488af63a51dcd18fa2c60ad00e234d
Mostly when I disconnect during upload with s3api clients I see the
proxy log a traceback for EPIPE, but if I set my client_timeout low
and the proxy initiates the disconnect s3api will get surprised by the
499 response and return 500.
Now s3api will handle it the same as a RequestTimeout, which looks like
a 400 on the wire if anyone is still there.
Change-Id: I08be94fc5cf16679f41a2fd08ce1d52ce6300871
Change-Id: If04c083ccc9f63696b1f53ac13edc932740a0654
Some middlewares (notably staticweb) use the absence of a REMOTE_USER to
determine that a request was unauthenticated and as a result should be
handled differently. This could cause problems for S3 requests that
were authenticated via s3api's custom auth logic, including
* server errors when a container listing request gets handled by
staticweb or
* losing storage policy information because staticweb copied the request
environment.
Change-Id: Idf29c6866fec7b413c4369dce13c4788666c0934
Closes-Bug: #1833287
Related-Change: I5fe5ab31d6b2d9f7b6ecb3bfa246433a78e54808
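
A hedged sketch of the remedy's shape (the value swift actually stores
for REMOTE_USER may differ; the point is the key's presence):

    # After s3api's own auth succeeds, mark the request authenticated so
    # middlewares keying off REMOTE_USER (like staticweb) leave it alone.
    def mark_authenticated(environ, account):
        environ.setdefault('REMOTE_USER', account)
        return environ

    env = {'PATH_INFO': '/v1/AUTH_test/bucket'}
    mark_authenticated(env, 'AUTH_test')
    assert env['REMOTE_USER'] == 'AUTH_test'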
Turns out, there's at least one project out there that wants to subclass
S3Request (though I still don't think that's advisable).
Change-Id: Id504fa3379bc440fb08b2bb2423f87a407d3c6af
Related-Change: I4a65f50828b4e90ff6be2c3b343b295e442cc59e
Change-Id: Ibf934d5a859ca61f928452e05b8a57bdc69a3350
While we're at it, make the default match AWS's 15 minute limit (instead
of our old 5 minute limit).
UpgradeImpact
=============
This (somewhat) weakens some security protections for requests over the
S3 API; operators may want to preserve the prior behavior by setting
allowable_clock_skew = 300
in the [filter:s3api] section of their proxy-server.conf
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I0da777fcccf056e537b48af4d3277835b265d5c9
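
A sketch of the check the option controls (simplified; the real
validation parses the Date/X-Amz-Date headers):

    import time

    # Reject requests timestamped too far from the proxy's clock. 900s is
    # the new AWS-aligned default; 300 restores the old behavior.
    def within_allowable_skew(request_epoch, allowable_clock_skew=900):
        return abs(time.time() - request_epoch) <= allowable_clock_skew

    # A request from ten minutes ago passes the 15-minute default but
    # fails the stricter 300-second setting:
    ten_min_ago = time.time() - 600
    assert within_allowable_skew(ten_min_ago)
    assert not within_allowable_skew(ten_min_ago, allowable_clock_skew=300)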
That way we don't have to plumb in some half-dozen options one-by-one.
Also, increase test coverage for s3request.py
Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
Change-Id: I4a65f50828b4e90ff6be2c3b343b295e442cc59e
Previously, we unconditionally lower-cased the client-provided
X-Amz-Content-SHA256 header, which led to SignatureDoesNotMatch errors
since the client and server didn't agree on the canonical request.
Now, only lower-case the value when making comparisons; leave it alone
for signature-calculation purposes.
Change-Id: I746d8e641c884ccd7838082ff07f958ee101de18
Related-Change: I3d6e2e4542a5ed03a6d31ec0ef4837d1de30a045
Closes-Bug: #1910827
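
A sketch of the case handling (illustrative helper, not the patch's
code):

    import hashlib

    # Keep the client's header byte-for-byte for signing; only lower-case
    # it when checking it against the actual body hash.
    def body_sha256_matches(header_value, body):
        return hashlib.sha256(body).hexdigest() == header_value.lower()

    hdr = hashlib.sha256(b'payload').hexdigest().upper()
    assert body_sha256_matches(hdr, b'payload')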
Also, allow upper-cased (and mixed-case) SHAs.
Change-Id: I3d6e2e4542a5ed03a6d31ec0ef4837d1de30a045
...instead of raising AttributeError.
Change-Id: I6515c36cb7f5b98f715bc8c33f1f822b1cfad668
Closes-Bug: #1908412
md5 is not an approved algorithm in FIPS mode, and trying to
instantiate a hashlib.md5() will fail when the system is running in
FIPS mode.
md5 is allowed when in a non-security context. There is a plan to
add a keyword parameter (usedforsecurity) to hashlib.md5() to annotate
whether or not the instance is being used in a security context.
In the case where it is not, the instantiation of md5 will be allowed.
See https://bugs.python.org/issue9216 for more details.
Some downstream python versions already support this parameter. To
support these versions, a new encapsulation of md5() is added to
swift/common/utils.py. This encapsulation is identical to the one being
added to oslo.utils, but is recreated here to avoid adding a dependency.
This patch is to replace the instances of hashlib.md5() with this new
encapsulation, adding an annotation indicating whether the usage is
a security context or not.
While this patch seems large, it is really just the same change over and
over again. Reviewers need to pay particular attention as to whether the
keyword parameter (usedforsecurity) is set correctly. Right now, none
of them appear to be used in a security context.
Now that all the instances have been converted, we can update the bandit
run to look for these instances and ensure that new invocations do not
creep in.
With this latest patch, the functional and unit tests all pass
on a FIPS enabled system.
Co-Authored-By: Pete Zaitcev
Change-Id: Ibb4917da4c083e1e094156d748708b87387f2d87
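
A hedged sketch of the encapsulation's shape (swift's actual wrapper in
swift/common/utils.py differs in detail):

    import hashlib

    def md5(string=b'', usedforsecurity=True):
        try:
            # py3.9+ (and some downstream pythons) accept the annotation
            return hashlib.md5(string, usedforsecurity=usedforsecurity)
        except TypeError:
            # older hashlib: no keyword, and presumably no FIPS enforcement
            return hashlib.md5(string)

    # ETag computation is not a security context:
    etag = md5(b'object body', usedforsecurity=False).hexdigest()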
Add a new config option to SLO, allow_async_delete, to allow operators
to opt-in to this new behavior. If their expirer queues get out of hand,
they can always turn it back off.
If the option is disabled, handle the delete inline; this matches the
behavior of old Swift.
Only allow an async delete if all segments are in the same container and
none are nested SLOs, that way we only have two auth checks to make.
Have s3api try to use this new mode if the data seems to have been
uploaded via S3 (since it should be safe to assume that the above
criteria are met).
Drive-by: Allow the expirer queue and swift-container-deleter to use
high-precision timestamps.
Change-Id: I0bbe1ccd06776ef3e23438b40d8fb9a7c2de8921
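
A sketch of that safety criteria over an SLO manifest (the JSON shape
with 'name'/'sub_slo' entries is the usual one; the helper is
illustrative):

    # Async delete only when every segment shares one container and none
    # is itself a nested SLO -- i.e. at most two auth checks needed.
    def can_async_delete(manifest):
        containers = set()
        for seg in manifest:
            if seg.get('sub_slo'):
                return False
            containers.add(seg['name'].lstrip('/').split('/', 1)[0])
        return len(containers) == 1

    manifest = [{'name': '/segs/part1'}, {'name': '/segs/part2'}]
    assert can_async_delete(manifest)
    assert not can_async_delete(manifest + [{'name': '/other/part3'}])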
Change-Id: I5fe5ab31d6b2d9f7b6ecb3bfa246433a78e54808
Change-Id: Ia8db40227343e9c4555267c62072a1c9bfc28c66
Closes-Bug: #1893811
... and to determine {account}, {container}, and {object} template
values, as well as statsd metric names.
UpgradeImpact:
--------------
Be aware that this will cause an increase in the proxy-logging statsd
metrics emitted for s3api responses. However, this will more accurately
reflect the state of the system.
Change-Id: Idbea6fadefb2061f83eed735ef198b88ba7aaf69
When I call the S3 API using the AWS .NET SDK, I get the following error.
An error occurred (AuthorizationHeaderMalformed) when calling the
ListBuckets operation: The authorization header is malformed;
the region 'regionone' is wrong; expecting 'RegionOne'
The reason is that the AWS .NET SDK generates a signature by changing
the region name to lowercase. (AWS region names are all lowercase.)
The default region name of OpenStack is RegionOne, and custom region
names with capital letters can also be used.
If you set the location of the S3 API to a region name containing
uppercase letters, the AWS .NET SDK cannot be used.
There are two ways to solve this problem.
1. Force the location item of the S3 API middleware settings to be
   set to lower case.
2. If the request contains credential parameters with the lowercase
   region name, modify the region name of string_to_sign to lowercase
   to generate a valid signature.
I think the second way is more compatible.
Closes-Bug: #1888444
Change-Id: Ifb58b854b93725ed2a1e3bbd87f447f2ab40ea91
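
A sketch of option 2's shape (names are illustrative):

    # If the credential scope carries a lowercased form of the configured
    # region, sign with that form so client and server agree.
    def region_for_signing(configured_region, credential_region):
        if credential_region == configured_region.lower():
            return credential_region
        return configured_region

    assert region_for_signing('RegionOne', 'regionone') == 'regionone'
    assert region_for_signing('RegionOne', 'us-east-1') == 'RegionOne'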
Change-Id: If0fc8ec4d8056afb741bf74b82598a26683dfcd7
https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-s3-copy-metadata
AWS CLI version 2 improves Amazon S3 handling of file properties
and tags when performing multipart copies. We still don't support
object tagging, hence the aws s3 cp command fails for multipart
copies with default options.
This way a GET tagging request will receive an empty TagSet in
response and multipart copies will work fine.
Change-Id: I1f031b05025cafac00e86966c240aa5f7258d0bf
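
A sketch of the empty response body involved (shape per the S3 Tagging
document):

    import xml.etree.ElementTree as ET

    # An empty TagSet tells AWS CLI v2 there are simply no tags to copy.
    EMPTY_TAGGING = (b'<?xml version="1.0" encoding="UTF-8"?>'
                     b'<Tagging><TagSet></TagSet></Tagging>')

    assert ET.fromstring(EMPTY_TAGGING).find('TagSet') is not None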
Use swift.backend_path entry in wsgi environment to propagate
backend PATH_INFO.
Needed by ceilometermiddleware to extract account/container info
from PATH_INFO, patch: https://review.opendev.org/#/c/718085/
Change-Id: Ifb3c6c30835d912c5ba4b2e03f2e0b5cb392671a
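
A sketch of a downstream consumer, assuming the key holds the proxy's
backend path:

    # Prefer the backend path swift recorded over the client-facing
    # PATH_INFO when splitting out account/container.
    def account_container(environ):
        path = environ.get('swift.backend_path', environ['PATH_INFO'])
        parts = path.lstrip('/').split('/')
        return parts[1], parts[2] if len(parts) > 2 else None

    env = {'PATH_INFO': '/bucket/obj',
           'swift.backend_path': '/v1/AUTH_test/bucket/obj'}
    assert account_container(env) == ('AUTH_test', 'bucket')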
Change-Id: I331b6871e5b62f61809338a1abddafe1263e7f02
Translate AWS S3 Object Versioning API requests to native Swift Object
Versioning API, specifically:
* bucket versioning status
* bucket versioned objects listing params
* object GETorHEAD & DELETE versionId
* multi_delete versionId
Change-Id: I8296681b61996e073b3ba12ad46f99042dc15c37
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Currently, they just 500 as an unexpected response status. Much better
would be S3's '503 Slow Down' response.
Of course, that's all dependent on where you place ratelimit in your
pipeline -- and we haven't really given clear guidance on that. I'm not
actually sure you *want* ratelimit to be after s3api and auth... but if
you *do*, let's at least handle it gracefully.
Change-Id: I36f0954fd9949d7d1404a0c381b917d1cfb17ec5
Related-Bug: 1669888
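
A sketch of the mapping onto S3's SlowDown error (the XML shape follows
the S3 error document; status handling is illustrative):

    SLOW_DOWN = (b'<?xml version="1.0" encoding="UTF-8"?>'
                 b'<Error><Code>SlowDown</Code>'
                 b'<Message>Please reduce your request rate.</Message></Error>')

    # Ratelimit answers 498 (or 429); surface either as 503 Slow Down so
    # SDK retry/backoff logic kicks in instead of treating it as a bug.
    def translate_status(swift_status):
        if swift_status in (498, 429):
            return 503, SLOW_DOWN
        return swift_status, b''

    assert translate_status(498)[0] == 503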
Change-Id: I89f3a4b5a3a8c160afb298aad726acce09c65265
Since the change in s3_token_middleware to retrieve the auth info
from keystone directly, we no longer need any tokens provided by
keystone in the request header as X-Auth-Token.
Note that this makes the pipeline ordering change documented in the
related changes mandatory, even when working with a v2 Keystone server.
Change-Id: I7c251a758dfc1fedb3fb61e351de305b431afa79
Related-Change: I21e38884a2aefbb94b76c76deccd815f01db7362
Related-Change: Ic9af387b9192f285f0f486e7171eefb23968007e
...instead of logging tracebacks about unexpected status codes.
Change-Id: Iadb0a40092b8347eb5c04785cc14d1324cc9396f
(Some versions of?) awscli/boto3 will do v4 signatures but send a
Content-MD5 for end-to-end validation. Since an X-Amz-Content-SHA256
is still required to calculate signatures, it uses UNSIGNED-PAYLOAD
similar to how signatures work for pre-signed URLs.
Look for UNSIGNED-PAYLOAD and skip SHA256 validation if set.
Change-Id: I571c16c196dae4e4f8fb41904c8850d0054b1fe9
Related-Change: I61eb12455c37376be4d739eee55a5f439216f0e9
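
A sketch of the skip (the helper is illustrative):

    import hashlib

    # When the signed header says UNSIGNED-PAYLOAD, there is nothing to
    # validate; otherwise the body must hash to the advertised SHA256.
    def payload_ok(content_sha256, body):
        if content_sha256 == 'UNSIGNED-PAYLOAD':
            return True
        return hashlib.sha256(body).hexdigest() == content_sha256.lower()

    assert payload_ok('UNSIGNED-PAYLOAD', b'anything')
    assert payload_ok(hashlib.sha256(b'x').hexdigest(), b'x')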
S3 supports two metadata operations on object copy: COPY and REPLACE.
When using REPLACE, the Content-Type should be set to the one supplied
by the caller. When using COPY, the existing object's Content-Type value
is used.
Change-Id: Ic7c6278dedef308c9219eb45751abfa5655f144f
Closes-Bug: #1828907
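
A sketch of the directive handling (COPY is AWS's documented default;
helper names are illustrative):

    # x-amz-metadata-directive chooses between the source object's
    # Content-Type (COPY) and the caller-supplied one (REPLACE).
    def copied_content_type(req_headers, source_content_type):
        if req_headers.get('x-amz-metadata-directive', 'COPY') == 'REPLACE':
            return req_headers.get('content-type', source_content_type)
        return source_content_type

    assert copied_content_type({}, 'image/png') == 'image/png'
    assert copied_content_type(
        {'x-amz-metadata-directive': 'REPLACE', 'content-type': 'text/plain'},
        'image/png') == 'text/plain'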
Drive-by: When passing a list or tuple to swob.Response as an app_iter,
check that it's full of byte strings.
Change-Id: Ifc35aacb2e45004f74c871f08ff3c52bc57c1463
When get_container_info is called and not authenticated, it will
make a HEAD subrequest and get info by passing resp.sw_headers to
headers_to_container_info. This will lose all sysmeta stored in
resp.sysmeta_headers.
The patch fixes this by passing both sw_headers and
sysmeta_headers to headers_to_container_info.
Change-Id: I6e538ed7a748b60bdb9db7e894eaedc9d72559c1
Closes-Bug: #1765679
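
A sketch of the fix's shape (headers_to_container_info is stubbed here;
the merge is the point):

    # Merge both header groups before building the info dict so container
    # sysmeta survives the unauthenticated HEAD path.
    def headers_to_container_info(headers):
        return {k.lower(): v for k, v in headers.items()}

    def build_info(sw_headers, sysmeta_headers):
        merged = dict(sw_headers)
        merged.update(sysmeta_headers)
        return headers_to_container_info(merged)

    info = build_info({'X-Container-Object-Count': '3'},
                      {'X-Container-Sysmeta-Versions-Enabled': 'true'})
    assert 'x-container-sysmeta-versions-enabled' in info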