Previously, we would use the X-Amz-Content-SHA256 value when calculating
signatures, but wouldn't actually check the content that was sent. This
would allow a malicious third party that managed to capture the headers
for an object upload to overwrite the object with arbitrary content,
provided they could do so within the 5-minute clock-skew window.

Now, we wrap the wsgi.input that's sent on to the proxy-server app so it
hashes the content as it's read and raises an error if there's a
mismatch (see the sketch after this message). Note that clients using
presigned URLs to upload have no defense against a similar replay
attack.

Notwithstanding the above security consideration, this *also* provides
better assurance that the client's payload was received correctly. Note
that this *does not* attempt to send an etag in footers, however, so the
proxy-to-object-server connection is not guarded against bit-flips.

In the future, Swift will hopefully grow a way to perform SHA256
verification on the object-server. This would offer two main benefits:

- End-to-end message integrity checking.
- Moving the CPU load of calculating the hash from the proxy (which is
  somewhat CPU-bound) to the object-server (which tends to have CPU to
  spare).

Change-Id: I61eb12455c37376be4d739eee55a5f439216f0e9
Closes-Bug: 1765834
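A minimal sketch of that wrapping approach, for illustration only (the
class and exception names here are invented, not the middleware's
actual ones):

    import hashlib

    class BadDigest(Exception):
        """The streamed body did not match X-Amz-Content-SHA256."""

    class HashingInput(object):
        """Wrap wsgi.input, hashing the body as the proxy app reads it."""

        def __init__(self, wsgi_input, content_length, expected_sha256):
            self._input = wsgi_input
            self._to_read = content_length
            self._expected = expected_sha256.lower()
            self._hasher = hashlib.sha256()

        def read(self, size=None):
            chunk = self._input.read() if size is None \
                else self._input.read(size)
            self._hasher.update(chunk)
            self._to_read -= len(chunk)
            if self._to_read <= 0 or not chunk:
                # Whole body consumed; verify before handing the final
                # bytes on to the rest of the pipeline.
                if self._hasher.hexdigest() != self._expected:
                    raise BadDigest(
                        'X-Amz-Content-SHA256 does not match request body')
            return chunk

The real middleware reports a mismatch as a proper S3 error response
rather than a bare exception, but hashing-as-you-read is the core idea.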
Change-Id: I7dda8a25c9e13b0d81293f0a966c34713c93f6ad
Related-Bug: 1810026
This hasn't been necessary since https://github.com/openstack/swift3/commit/cd094ee
Change-Id: I08a8303c67e3192f506fb49f6f89a88219b1d93c
S3 docs say:

> Processing of a Complete Multipart Upload request could
> take several minutes to complete. After Amazon S3 begins
> processing the request, it sends an HTTP response header
> that specifies a 200 OK response. While processing is in
> progress, Amazon S3 periodically sends whitespace
> characters to keep the connection from timing out. Because
> a request could fail after the initial 200 OK response has
> been sent, it is important that you check the response
> body to determine whether the request succeeded.

Let's do that, too!

Change-Id: Iaf420983c41256ee9a4c43cfd74025d2ca069ae6
Closes-Bug: 1718811
Related-Change: I65cee5f629c87364e188aa05a06d563c3849c8f3
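On the client side, this is why the body of a 200 response must still
be inspected. A sketch (the function name and error handling are
illustrative):

    import xml.etree.ElementTree as ET

    def check_complete_mpu_response(status, body):
        """Raise if a CompleteMultipartUpload actually failed."""
        if status != 200:
            raise RuntimeError('request failed outright: %d' % status)
        # The server may have streamed whitespace while it worked,
        # so strip before parsing.
        root = ET.fromstring(body.strip())
        if root.tag.rsplit('}', 1)[-1] == 'Error':  # drop any namespace
            raise RuntimeError('upload failed after 200 OK: %s (%s)' % (
                root.findtext('Code'), root.findtext('Message')))
        return root  # the CompleteMultipartUploadResult document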
...if and only if encryption is enabled. A few things to note about
server-side encryption:

- We register whether encryption is present and enabled when the proxy
  server starts up (see the sketch after this message).
- This is generally considered an operator feature, not a user-facing
  one. S3 API users can now learn more about how your cluster is set up
  than they previously could.
- If encryption is enabled but there are no keymasters in the pipeline,
  all writes will fail with "Unable to retrieve encryption keys."
- There's still a 'swift.crypto.override' env key that keymasters can
  set to skip encryption, so this isn't a full guarantee that things
  will be encrypted. On the other hand, none of the keymasters in Swift
  ever sets that override.

Note that this *does not* start including x-amz-server-side-encryption
headers in the response, during either PUT or GET. We should only send
that header when we know for sure that the data on disk was encrypted.

Change-Id: I4c20bca7fedb839628f1b2f8611807631b8bf430
Related-Bug: 1607116
Related-Change: Icf28dc57e589f9be20937947095800d7ce57b2f7
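The start-up registration in the first bullet uses Swift's capability
registry. Roughly, paraphrasing the encryption filter (details may
differ between releases; newer Swift moved register_swift_info to
swift.common.registry):

    from swift.common.utils import config_true_value, register_swift_info

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)
        enabled = not config_true_value(
            conf.get('disable_encryption', 'false'))
        # Recorded once at proxy start-up; s3api can then consult the
        # registry to learn whether encryption is present and enabled.
        register_swift_info('encryption', admin=True, enabled=enabled)

        def encryption_filter(app):
            return app  # the real factory builds the crypto middleware here
        return encryption_filter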
This is more likely to be the default region that a client would try for
v4 signatures.

UpgradeImpact:
==============
Deployers with clusters that relied on the old implicit default
location of US should explicitly set
location = US
in the [filter:s3api] section of proxy-server.conf before upgrading.

Change-Id: Ib6659a7ad2bd58d711002125e7820f6e86383be8
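For example, the relevant stanza in proxy-server.conf might look like
this (the 'use' line is the standard s3api entry point; other options
elided):

    [filter:s3api]
    use = egg:swift#s3api
    location = US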
Change-Id: Ieedb54074d9d3494843597395e325a39d59976ad
Previously, when a bucket already existed, PUT returned a
BucketAlreadyExists error regardless of who owned it. AWS S3 returns a
BucketAlreadyOwnedByYou error when the requester is the bucket's owner,
so this changes the error returned by swift3 to match.

When sending a PUT request to a bucket that already exists but is not
owned by the requester, a 409 Conflict error, BucketAlreadyExists, is
still returned.

Change-Id: I32a0a9add57ca0e4d667b5eb538dc6ea53359944
Closes-Bug: #1498231
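In pseudocode terms the decision is just an ownership check; both cases
map to HTTP 409 Conflict, only the S3 error code differs (names here
are illustrative):

    def bucket_put_conflict_error(bucket_owner, requester):
        if bucket_owner == requester:
            return 'BucketAlreadyOwnedByYou'
        return 'BucketAlreadyExists'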
Previously, the 'x-amz-metadata-directive' header was ignored except to
ensure that it had a valid value if present. In all cases, any metadata
specified was applied to the copied object, while non-conflicting
metadata remained. This patch fixes that behaviour.

Now, if the 'x-amz-metadata-directive' header is set to 'REPLACE'
during a copy operation, the s3api middleware sets the
'x-fresh-metadata' header to 'True' so that the metadata supplied on
the request replaces the source object's. If the
'x-amz-metadata-directive' header is set to 'COPY', or is omitted
during a copy operation, the s3api middleware removes all metadata
(custom or not) from the request to prevent it from being changed.

Content-Type can never be set on an S3 copy operation, even if the
metadata directive is set to 'REPLACE', so it is specifically filtered
out on copy (see the sketch after this message).

Change-Id: I333e46758bd2b7a29f672c098af267849232c911
Closes-Bug: #1433875
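A sketch of the translation described above, reduced to header
manipulation (names and structure are illustrative, not the
middleware's actual code):

    def apply_metadata_directive(headers):
        # Content-Type can never be changed by an S3 copy, whatever the
        # directive says, so always drop it from the outgoing request.
        headers.pop('Content-Type', None)
        if headers.get('X-Amz-Metadata-Directive', 'COPY') == 'REPLACE':
            # The metadata supplied on this request replaces the source
            # object's.
            headers['X-Fresh-Metadata'] = 'True'
        else:
            # COPY (the default): strip metadata from the request so the
            # source object's metadata carries over unchanged.
            for key in [k for k in headers
                        if k.lower().startswith('x-amz-meta-')]:
                del headers[key]
        return headers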
This lets clients know when they used the wrong region,
for example.
Change-Id: Id3ac41e994705d3befe32df60ff4241c334a78b7
Closes-Bug: #1674842
This is comparable to what AWS returns, and should greatly simplify
debugging when diagnosing 403s.
Change-Id: Iabfcbaae919598e22f39b2dfddac36b75653fc10
Otherwise, users can create buckets in accounts they don't own.
Change-Id: I13d557c32b12529ef1087c52f7af302a33d33acb
We don't support it yet, so return 501 Not Implemented.
Change-Id: Ie2f4bd1bfdb1bcbdf1a0f0db9d542b6057e9d2ec
We don't support it yet, so return 501 Not Implemented. Previously, we'd
store the aws-chunked content (!) and most clients would see it as data
corruption.

See https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html
for more information.

Change-Id: I697962039667980ef89212bc480f8b1d3fbd718c
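Detecting such uploads is straightforward: per the AWS docs linked
above, SigV4 streaming (aws-chunked) requests flag themselves with a
sentinel payload-hash value. A sketch (assuming headers are accessed
case-insensitively):

    def is_aws_chunked(headers):
        # Streaming SigV4 uploads carry this sentinel instead of a
        # real SHA256 of the payload.
        return (headers.get('X-Amz-Content-Sha256') ==
                'STREAMING-AWS4-HMAC-SHA256-PAYLOAD')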
This imports the openstack/swift3 package into the swift upstream
repository and namespace. It is mostly a straightforward port, except
for the following items:

1. Rename the swift3 namespace to swift.common.middleware.s3api
   1.1 Also rename some conflicting class names (e.g. Request/Response)
2. Port the unit tests to the test/unit/s3api dir so they can run on
   the gate.
3. Port the functional tests to test/functional/s3api and set up
   in-process testing.
4. Port the docs to the doc dir, then address the namespace change.
5. Use get_logger() instead of a global logger instance.
6. Avoid a global conf instance.

Plus fixes for various minor issues along the way (e.g. packages,
dependencies, deprecated things).

The details and patch references for the work on feature/s3api are
listed at https://trello.com/b/ZloaZ23t/s3api (completed board).

Note that, because this is just a port, no new features have been
developed since the last swift3 release. In future work, Swift upstream
may continue to address the remaining items for further improvements
and the best possible compatibility with Amazon S3. Please read the new
docs for your deployment and keep track of what will change in future
releases.

Change-Id: Ib803ea89cfee9a53c429606149159dd136c036fd
Co-Authored-By: Thiago da Silva <thiago@redhat.com>
Co-Authored-By: Tim Burke <tim.burke@gmail.com>