| Commit message | Author | Age | Files | Lines |
We need to support the aforementioned headers in our s3 APIs
and raise an InvalidArgumentError if an s3 client makes a request
Change-Id: I2c5b18e52da7f33b31ba386cdbd042f90b69ef97
Closes-Bug: #1883172
Change-Id: Ie44288976ac5a507c27bd175c5f56c9b0bd04fe0
... and reword some mpu listing logic
Related-Change-Id: I923033e863b2faf3826a0f5ba84307addc34f986
Change-Id: If1909bb7210622908f2ecc5e06d53cd48250572a
Co-Authored-By: Jianjian Huo <jhuo@nvidia.com>
Change-Id: I7837a2ec7dee9a657e36147c208c524b5a01671d
s3api bucket listing elements currently have LastModified values with
millisecond precision. This is inconsistent with the value of the
Last-Modified header returned with an object GET or HEAD response
which has second precision. This patch reduces the precision to
seconds in bucket listings and upload part listings. This is also
consistent with observed AWS listing responses.
The last modified values in the swift native listing are rounded *up*
to the nearest second to be consistent with the seconds-precision
Last-Modified time header that is returned with an object GET or HEAD.
However, we continue to include millisecond digits set to 0 in the
last-modified string, e.g.: '2014-06-10T22:47:32.000Z'.
Also, fix the last modified time returned in an object copy response
to be consistent with the last modified time of the object that was
created. Previously it was rounded down, but it should be rounded up.
Change-Id: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3
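The rounding and formatting described above can be sketched as follows; the helper name is hypothetical and the real s3api code differs, but the behavior matches the message: round the timestamp *up* to whole seconds, then keep zeroed millisecond digits in the string.

```python
import math
from datetime import datetime, timezone

def listing_last_modified(ts_float):
    # Round the object's timestamp *up* to whole seconds, then format with
    # zeroed millisecond digits, matching the seconds-precision
    # Last-Modified header returned on GET/HEAD.
    whole = math.ceil(ts_float)
    dt = datetime.fromtimestamp(whole, tz=timezone.utc)
    return dt.strftime('%Y-%m-%dT%H:%M:%S.000Z')

print(listing_last_modified(1402440451.123))  # → 2014-06-10T22:47:32.000Z
```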
When versioning is enabled (or suspended), AWS specifies
in the error message that all versions should be deleted.
Change-Id: I3da9469a5cfed031a2cee85e1dfcd78bbe54695a
Closes-Bug: 1966396
Change-Id: I253d8e3e8678fad3fde43259ed3225df4048a458
When deleting multiple objects, S3 allows enabling
a quiet mode with the 'Quiet' key.
At AWS S3, the value of this key is case-insensitive.
- Quiet mode is enabled if the value is 'true'
(regardless of case).
- Otherwise, in all other cases (even a non-boolean value),
this mode will be disabled.
Also, some tools (like Minio's python API) send the value 'True'
(and not 'true').
Change-Id: Id9d1da2017b8d13242ae1f410347febb013e9ce1
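The case-insensitive parsing described above can be sketched with a hypothetical helper: only a case-insensitive 'true' enables quiet mode, and any other value, including non-strings, leaves it disabled.

```python
def quiet_mode_enabled(value):
    # Only (case-insensitive) 'true' enables quiet mode; anything else,
    # even a non-boolean value, leaves it disabled.
    return isinstance(value, str) and value.lower() == 'true'

print(quiet_mode_enabled('True'))  # → True (what Minio's python API sends)
```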
Co-Authored-By: Florent Vennetier <florent.vennetier@ovhcloud.com>
Change-Id: I635bc91faa7709f9df9cdf3aec157a21c08923ca
Change-Id: Idcda76f7a880a18c3bac699e0fb2435e4a54abbd
Specifically, parameters that may contain non-ASCII characters,
such as the prefix and marker to list current uploads.
Change-Id: Icfae68825f94ddf2412c0274c3d500e265117e8e
When listing current multipart uploads, we have to keep listing the
container until we either list all the entries or there are enough MPUs
to return to the caller. Otherwise, it is impossible to list all of the
multipart uploads when some of them have > 1000 parts.
Change-Id: I923033e863b2faf3826a0f5ba84307addc34f986
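A rough sketch of the paging loop described above, with hypothetical helper names: keep requesting listing pages until we have enough upload markers or the listing is exhausted, since a single upload with more than 1000 parts can fill an entire page with part entries.

```python
def list_current_uploads(list_page, limit=1000):
    # `list_page(marker)` is an assumed callable returning the next page of
    # sorted container entries after `marker`; upload markers are entries
    # without a '/', part entries look like '<upload>/<part>'.
    uploads, marker = [], ''
    while len(uploads) < limit:
        page = list_page(marker)
        if not page:
            break  # listing exhausted
        uploads.extend(name for name in page if '/' not in name)
        marker = page[-1]
    return uploads[:limit]
```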
The *_swift_info functions use in-module global dicts to provide a
registry mechanism for registering and getting swift info.
This is an abnormal pattern and doesn't quite fit into utils. Further,
we are looking at following this pattern for sensitive info to trim in
the future.
So this patch does some house cleaning and moves this registry to a new
module, swift.common.registry, and updates all the references to it.
For backwards compat we still import the *_swift_info methods into utils
for any 3rd party tools or middleware.
Change-Id: I71fd7f50d1aafc001d6905438f42de4e58af8421
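The registry pattern described above can be sketched as a pair of module-level dicts with small register/get helpers; the real module is swift.common.registry, and the shapes here are simplified assumptions.

```python
_swift_info = {}
_swift_admin_info = {}

def register_swift_info(name='swift', admin=False, **kwargs):
    # Record capability info under `name`, in the admin or public registry.
    registry = _swift_admin_info if admin else _swift_info
    registry.setdefault(name, {}).update(kwargs)

def get_swift_info(admin=False):
    # Public callers see the public registry; admins also get admin info.
    if admin:
        return {'info': dict(_swift_info), 'admin': dict(_swift_admin_info)}
    return dict(_swift_info)
```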
Previous problems included:
- returning wsgi strings quoted assuming UTF-8 on py3 when initiating
or completing multipart uploads
- trying to str() some unicode on py2 when listing parts, leading to
UnicodeEncodeErrors
Change-Id: Ibc1d42c8deffe41c557350a574ae80751e9bd565
Bucket ACLs:
The contents of the container are unnecessarily listed.
Object ACLs:
The content of the object is unnecessarily fetched.
Additionally, because the data is skipped, a 499 error is returned on a subrequest.
Change-Id: I1e6ccc8ec4a54375b5817498c4ac7f995656a794
s3api returns multipart parts listings out of order and possibly
missing. For example, if there are 2000 parts, the first 12 parts
returned by s3api currently will be: 1, 10-19, 100. Then after part
199, the following part is 1000, and so on.
The change fixes this behavior by internally listing all of the parts
(with default settings, this should be 1 listing request, as the 10000
parts limit matches the Swift listing limit). After that, the parts are
sorted and delimited/marker settings are applied to craft the response
for the client.
Change-Id: I150cf53b07e7d2d8de1d6e8c1fb08c07b9afe842
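The sort-then-paginate fix can be sketched with a hypothetical helper: sort the part entries numerically (a raw container listing yields 1, 10, 100, ... in lexicographic order), then apply the marker and max-parts settings to build the client's page.

```python
def list_parts(part_names, part_number_marker=0, max_parts=1000):
    # Sort numerically to undo the lexicographic container ordering, then
    # apply part-number-marker and max-parts to craft the response page.
    numbers = sorted(int(name) for name in part_names)
    return [n for n in numbers if n > part_number_marker][:max_parts]

print(list_parts(['1', '10', '2', '100'], part_number_marker=1, max_parts=2))
# → [2, 10]
```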
Change-Id: I2347a73ff23c5c7d415f23d864fc29147e4a1754
Change-Id: I8ce73e2e21e9216484130ba3bd1e77b45eb1d77c
We've occasionally seen errors here where the body is empty. Hopefully
knowing more about the response will shed some light on what happened.
Change-Id: I69c748ebf721579a5fae85333ce3d4e999b9eb2a
And stop sending WSGI strings on py3.
Change-Id: I9b769e496aa7c8ed5862c2d7310f643838328084
Closes-Bug: #1853654
Real AWS only includes an empty delimiter element when doing a
version-aware listing.
Change-Id: Id246a157c576eac93375be084ada3740f1e09793
Closes-Bug: #1853663
Related-Bug: 1888444
Change-Id: I5188b277e8d7fb2c9835e63b951fb944782b4819
Previously, s3api would try to write a WSGI string to the manifest
(which would always fail to validate).
Change-Id: Idd8846dec2251d55ca74ddc793794d798f7d27f6
Closes-Bug: 1906289
md5 is not an approved algorithm in FIPS mode, and trying to
instantiate a hashlib.md5() will fail when the system is running in
FIPS mode.
md5 is allowed when in a non-security context. There is a plan to
add a keyword parameter (usedforsecurity) to hashlib.md5() to annotate
whether or not the instance is being used in a security context.
In the case where it is not, the instantiation of md5 will be allowed.
See https://bugs.python.org/issue9216 for more details.
Some downstream python versions already support this parameter. To
support these versions, a new encapsulation of md5() is added to
swift/common/utils.py. This encapsulation is identical to the one being
added to oslo.utils, but is recreated here to avoid adding a dependency.
This patch is to replace the instances of hashlib.md5() with this new
encapsulation, adding an annotation indicating whether the usage is
a security context or not.
While this patch seems large, it is really just the same change over and
over again. Reviewers need to pay particular attention as to whether the
keyword parameter (usedforsecurity) is set correctly. Right now, all
of them appear to be not used in a security context.
Now that all the instances have been converted, we can update the bandit
run to look for these instances and ensure that new invocations do not
creep in.
With this latest patch, the functional and unit tests all pass
on a FIPS enabled system.
Co-Authored-By: Pete Zaitcev
Change-Id: Ibb4917da4c083e1e094156d748708b87387f2d87
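The encapsulation described above can be sketched like this, modeled on the wrapper added to swift/common/utils.py (the fallback logic here is a simplified assumption): pass usedforsecurity through when the platform's hashlib accepts it, and fall back to plain md5 otherwise.

```python
import hashlib

def md5(data=b'', usedforsecurity=True):
    # Forward the annotation when supported (Python 3.9+ and some downstream
    # builds); on a FIPS-enabled system, usedforsecurity=False is what
    # permits instantiation in non-security contexts.
    try:
        return hashlib.md5(data, usedforsecurity=usedforsecurity)
    except TypeError:
        # Older hashlib without the keyword; fine outside FIPS mode.
        return hashlib.md5(data)
```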
Change-Id: I3d23d5ff4b3f76db15e84ed37b1cb8503eb58dd5
When completing a multipart-upload, include the upload-id in sysmeta.
If we can't find the upload marker, check the final object name; if it
has an upload-id in sysmeta and it matches the upload-id that we're
trying to complete, allow the complete to continue.
Also add an early return if the already-completed upload's ETag matches
the computed ETag for the user's request. This should help clients that
can't take advantage of how we dribble out whitespace to try to keep the
connection alive: The client times out, retries, and if the upload
actually completed, it gets a fast 200 response.
Change-Id: I38958839be5b250c9d268ec7c50a56cdb56c2fa2
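The "computed ETag" compared in that early return follows the conventional S3 multipart ETag scheme; this sketch (helper name hypothetical) shows the computation: MD5 over the concatenated binary part MD5s, suffixed with the part count.

```python
import hashlib

def multipart_etag(part_etags):
    # Concatenate the raw (binary) MD5 of each part in order, hash that,
    # and append '-<number of parts>' as real S3 does for MPUs.
    concatenated = b''.join(bytes.fromhex(etag) for etag in part_etags)
    return '%s-%d' % (hashlib.md5(concatenated).hexdigest(), len(part_etags))
```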
https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-s3-copy-metadata
AWS CLI version 2 improves Amazon S3 handling of file properties
and tags when performing multipart copies. We still don't support
object tagging, hence the aws s3 cp command fails for multipart
copies with default options.
This way the get-tagging request will receive an empty tagset in
response and multipart copies will work fine.
Change-Id: I1f031b05025cafac00e86966c240aa5f7258d0bf
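The empty tagset response described above would look roughly like this; the constant name is hypothetical, but the body shape matches the S3 GetObjectTagging schema.

```python
# Body a GET ?tagging response could return so that CLI v2 multipart
# copies (which fetch tags by default) no longer fail on unsupported tagging.
EMPTY_TAGGING_BODY = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<Tagging xmlns="http://s3.amazonaws.com/doc/2006-03-01/">'
    '<TagSet></TagSet>'
    '</Tagging>'
)
```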
Previously, attempting to GET, HEAD, or DELETE an object with a non-null
version-id would cause 500s, with logs complaining about how
version-aware operations require that the container is versioned.
Now, we'll early-return with a 404 (on GET or HEAD) or 204 (on DELETE).
Change-Id: I46bfd4ae7d49657a94734962c087f350e758fead
Closes-Bug: 1874295
The repo is using both Python 2 and 3 now, so update hacking to
version 2.0, which supports Python 2 and 3. Note that the latest
hacking release, 3.0, only supports Python 3.
Fix problems found.
Remove hacking and friends from lower-constraints, they are not needed
for installation.
Change-Id: I9bd913ee1b32ba1566c420973723296766d1812f
Translate AWS S3 Object Versioning API requests to the native Swift
Object Versioning API, specifically:
* bucket versioning status
* bucket versioned objects listing params
* object GETorHEAD & DELETE versionId
* multi_delete versionId
Change-Id: I8296681b61996e073b3ba12ad46f99042dc15c37
Co-Authored-By: Tim Burke <tim.burke@gmail.com>
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
... and drive-by an import rename
Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
Change-Id: I1eaf075ff9855cfa03e7991bdf33375b0e4397e6
When users upload an MPU object, s3api will automatically
create a segment container if one doesn't already exist.
Currently, s3api will create the segment bucket using the
cluster's default storage policy. This patch changes that
behavior to use the same storage policy as the primary bucket.
Change-Id: Ib64a06868bd3670a1d4a1860ac29122e1ede7c39
Closes-Bug: 1832390
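The policy carry-over could be sketched with a hypothetical helper: when auto-creating the segment container, copy the primary bucket's X-Storage-Policy header instead of relying on the cluster default.

```python
def segment_container_headers(bucket_headers):
    # Reuse the primary bucket's storage policy for the segment container;
    # with no policy header, fall through to the cluster default.
    policy = bucket_headers.get('X-Storage-Policy')
    return {'X-Storage-Policy': policy} if policy else {}
```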
Even when your cluster's configured funny, like your
container_listing_limit is too low, or your max_manifest_segments and
max_upload_part_num are too high, an abort should (attempt to) clean up
*all* segments.
Change-Id: I5a57f919cc74ddb08bbb35a7d852fbc1457185e8
Related-Change-Id: I179ea6180d31146bb947061c69b1807c59529ac8
Related-Change-Id: I056edc68aee8c0db2a2c4a5b9e3d242a895975b3
Change-Id: I84bd29ae48ff1b0826794a8fdf9aa87670ad4aa4
Change-Id: I1152c47c52f6482ec877142c96845b00bf6dcc5b
Related-Change: I130ba5014b7eff458d87ab29eb42fe45607c9a12
A PUT request on an existing container will trigger an update on the
container db. When the disks where the container db landed are under
heavy load, the update may fail due to a LockTimeout.
Hence, we first check existence; if it's not there, we PUT.
Change-Id: Ic61153948e35f1c09b05bfc97dfac3fb487b0898
Closes-Bug: 1780204
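The check-then-PUT flow can be sketched as follows, with hypothetical callables standing in for the HEAD and PUT subrequests: only PUT when the container is missing, so bucket PUTs against existing containers never force a container-db update.

```python
def ensure_container(container_exists, create_container, name):
    # HEAD first; only issue the PUT (which updates the container db and
    # can hit a LockTimeout under load) when the container is missing.
    if container_exists(name):
        return 'noop'
    create_container(name)
    return 'created'
```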