path: root/doc
Commit message | Author | Age | Files | Lines
* Merge "docs: Remove references to out-dated install guides"Zuul2023-05-173-17/+1
|\
| * docs: Remove references to out-dated install guidesTim Burke2023-05-163-17/+1
| | | | | | | | Change-Id: Idbff951506ee2f3b288eda00217c902314393877
* | docs: Update versions in Getting Started docTim Burke2023-05-161-2/+2
|/ | | | Change-Id: Ibed9dc0afbdb922d06f7798bdac01db7c55b19f1
* docs: Fix broken paste/pastedeploy linksTim Burke2023-04-279-10/+10
| | | | | Closes-Bug: #2016463 Change-Id: Id500a2429b7412823970a06e3e82b1d1646c70b8
* docs: Clean up cross-domain doc formatting; call out CWE-942Tim Burke2023-04-191-8/+22
| | | | Change-Id: I7ab605d48972e8dc06e630d160c745baeea91355
* Update urlWei LingFei2023-03-243-3/+3
| | | | | | | | The OpenStack project is currently maintained on opendev.org, with github.com serving as a mirror repository. Replace the source code repository address for the python-swiftclient project from github.com to opendev.org. Change-Id: I650a80cb45febc457c42360061faf3a9799e6131
* quotas: Add account-level per-policy quotas  (Tim Burke, 2023-03-21, 1 file, -0/+2)
      Reseller admins can set new headers on accounts like

          X-Account-Quota-Bytes-Policy-<policy-name>: <quota>

      This may be done to limit consumption of a faster, all-flash policy,
      for example. This is independent of the existing
      X-Account-Meta-Quota-Bytes header, which continues to limit the total
      storage for an account across all policies.

      Change-Id: Ib25c2f667e5b81301f8c67375644981a13487cfe
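      For illustration, a reseller admin could cap one account's use of an
      all-flash "gold" policy at roughly 10 GB with a POST along these
      lines (the endpoint, account, token, policy name and value are all
      made up):

          curl -i -X POST \
              -H "X-Auth-Token: $RESELLER_ADMIN_TOKEN" \
              -H "X-Account-Quota-Bytes-Policy-gold: 10000000000" \
              https://swift.example.com/v1/AUTH_test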
* Merge "docs: Add memcache.conf config doc"Zuul2023-02-283-1/+78
|\
| * docs: Add memcache.conf config docMatthew Oliver2023-02-223-1/+78
| | | | | | | | Change-Id: I29d00e939a3842bd064382575955fa3e255242eb
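      The newly documented file is small; a minimal /etc/swift/memcache.conf
      sketch might look like this (the server address is illustrative):

          [memcache]
          memcache_servers = 127.0.0.1:11211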
* | Present `pytest` steps in development guidelines  (Alexander Fadeev, 2023-02-25, 1 file, -1/+32)
|/
      Explain how to prepare a venv with `tox devenv`.

      Closes-Bug: #2003984
      Change-Id: Idc536034a36646de9c1880c8d0bc0a387b130ac2
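      The documented workflow is roughly the following sketch (the tox
      environment name and the test module chosen are illustrative):

          tox devenv -e py3 .env
          source .env/bin/activate
          pytest test/unit/common/test_utils.py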
* Switch to pytest  (Tim Burke, 2022-12-09, 1 file, -1/+1)
      nose has not seen active development for many years now. With py310,
      we can no longer use it due to import errors.

      Also update lower constraints.

      Closes-Bug: #1993531
      Change-Id: I215ba0d4654c9c637c3b97953d8659ac80892db8
* Merge "slo: Default allow_async_delete to true"  (Zuul, 2022-12-01, 1 file, -1/+0)
|\
| * slo: Default allow_async_delete to true  (Tim Burke, 2021-12-21, 1 file, -1/+0)
      We've had this option for a year now, and it seems to help. Let's
      enable it for everyone.

      Note that Swift clients still need to opt into the async delete via a
      query param, while S3 clients get it for free.

      Change-Id: Ib4164f877908b855ce354cc722d9cb0be8be9921
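      For Swift clients, the opt-in is a query parameter on the manifest
      DELETE; a sketch, assuming the parameter is named `async` and accepts
      any config-true value:

          curl -X DELETE -H "X-Auth-Token: $TOKEN" \
              "https://swift.example.com/v1/AUTH_test/cont/manifest?multipart-manifest=delete&async=on"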
* | proxy: Add a chance to skip memcache for get_*_info calls  (Tim Burke, 2022-08-30, 1 file, -200/+228)
      If you've got thousands of requests per second for objects in a
      single container, you basically NEVER want that container's info to
      ever fall out of memcache. If it *does*, all those clients are almost
      certainly going to overload the container.

      Avoid this by allowing some small fraction of requests to bypass and
      refresh the cache, pushing out the TTL as long as there continue to
      be requests to the container. The likelihood of skipping the cache is
      configurable, similar to what we did for shard range sets.

      Change-Id: If9249a42b30e2a2e7c4b0b91f947f24bf891b86f
      Closes-Bug: #1883324
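      In proxy-server.conf this surfaces as small skip percentages; the
      option names below are assumptions modelled on the earlier
      shard-range skip-cache settings, so check the sample config before
      relying on them:

          [app:proxy-server]
          use = egg:swift#proxy
          # percentage of get_*_info requests that skip memcache and
          # refresh it (assumed option names)
          account_existence_skip_cache_pct = 0.1
          container_existence_skip_cache_pct = 0.1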
* | Merge "Add backend rate limiting middleware"Zuul2022-08-301-0/+7
|\ \
| * | Add backend rate limiting middlewareAlistair Coles2022-05-201-0/+7
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This is a fairly blunt tool: ratelimiting is per device and applied independently in each worker, but this at least provides some limit to disk IO on backend servers. GET, HEAD, PUT, POST, DELETE, UPDATE and REPLICATE methods may be rate-limited. Only requests with a path starting '<device>/<partition>', where <partition> can be cast to an integer, will be rate-limited. Other requests, including, for example, recon requests with paths such as 'recon/version', are unconditionally forwarded to the next app in the pipeline. OPTIONS and SSYNC methods are not rate-limited. Note that SSYNC sub-requests are passed directly to the object server app and will not pass though this middleware. Change-Id: I78b59a081698a6bff0d74cbac7525e28f7b5d7c1
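      On an object server this might be wired up roughly as follows; the
      filter name and option are assumptions, so confirm them against
      object-server.conf-sample:

          [pipeline:main]
          pipeline = healthcheck recon backend_ratelimit object-server

          [filter:backend_ratelimit]
          use = egg:swift#backend_ratelimit
          requests_per_device_per_second = 100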
* | | Merge "Various doc formatting cleanups"Zuul2022-08-1519-639/+666
|\ \ \
| * | | Various doc formatting cleanupsTim Burke2022-08-0219-639/+666
| | | | | | | | | | | | | | | | | | | | | | | | | | | | * Get rid of a bunch of accidental blockquote formatting * Always declare a lexer to use for ``.. code::`` blocks Change-Id: I8940e75b094843e542e815dde6b6be4740751813
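      In reStructuredText terms, the second point means preferring an
      explicit lexer on every literal block, for example:

          .. code:: python

              print('explicitly highlighted as Python')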
* | | | Merge "Update "Getting Started" requirements"Zuul2022-08-101-7/+8
|\ \ \ \ | |/ / /
| * | | Update "Getting Started" requirementsTim Burke2022-08-021-7/+8
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | * Call out liberasurecode as a requirement * Include more py3 versions * Use anonymous links Change-Id: Ib1f8ef5e36825b9c241d2a4d838ea01b3df70da0
* | | | Stop using unicode literals in docs conf.pyjiaqi072022-08-031-4/+4
|/ / / | | | | | | | | | Change-Id: I8ce6749c3d634c68e5d4a15d812a046514cc35f5
* | | Merge "formpost: deprecate sha1 signatures"Zuul2022-07-261-0/+11
|\ \ \
| * | | formpost: deprecate sha1 signaturesMatthew Oliver2022-07-261-0/+11
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We've known this would eventually be necessary for a while [1], and way back in 2017 we started seeing SHA-1 collisions [2]. This patch follows the approach of soft deprecation of SHA1 in tempurl. It's still a default digest, but we'll start with warning as the middleware is loaded and exposing any deprecated digests (if they're still allowed) in /info. Further, because there is much shared code between formpost and tempurl, this patch also goes and refactors shared code out into swift.common.digest. Now that we have a digest, we also move digest related code: - get_hmac - extract_digest_and_algorithm [1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html [2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html Change-Id: I581cadd6bc79e623f1dae071025e4d375254c1d9
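      Operators who want to drop SHA-1 ahead of the deprecation can
      restrict the digests explicitly; a proxy-server.conf sketch, assuming
      formpost accepts the same `allowed_digests` option as tempurl:

          [filter:formpost]
          use = egg:swift#formpost
          allowed_digests = sha256 sha512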
* | | | Merge "DB Replicator: Add handoff_delete option"Zuul2022-07-222-6/+44
|\ \ \ \ | |/ / / |/| | |
| * | | DB Replicator: Add handoff_delete optionMatthew Oliver2022-07-212-6/+44
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently the object-replicator has an option called `handoff_delete` which allows us to define the the number of replicas which are ensured in swift. Once a handoff node ensures that many successful responses it can go ahead and delete the handoff partition. By default it's 'auto' or rather the number of primary nodes. But this can be reduced. It's useful in draining full disks, but has to be used carefully. This patch adds the same option to the DB replicator and works the same way. But instead of deleting a partition it's done at the per DB level. Because it's done in the DB Replicator level it means the option is now available to both the Account and Container replicators. Change-Id: Ide739a6d805bda20071c7977f5083574a5345a33
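      For example, on a three-replica cluster an operator draining full
      disks might allow handoff DBs to be deleted after two good responses;
      a sketch for container-server.conf (the account replicator takes the
      same option, and the value shown is illustrative):

          [container-replicator]
          handoff_delete = 2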
* | | | proxy-logging: Allow to add domain in log messages  (Aymeric Ducroquetz, 2022-06-22, 1 file, -0/+1)
      Change-Id: Id441688aac1088041e243b8ee70710d9c5d7911b
* | | | Merge "s3api tests: allow AWS credential file loading"  (Zuul, 2022-06-02, 1 file, -0/+20)
|\ \ \ \
| * | | | s3api tests: allow AWS credential file loading  (Alistair Coles, 2022-06-01, 1 file, -0/+20)
      When switching the s3api cross-compatibility tests' target between a
      Swift endpoint and an S3 endpoint, allow specifying an AWS CLI style
      credentials file as an alternative to editing the swift 'test.conf'
      file.

      Change-Id: I5bebca91821552d7df1bc7fa479b6593ff433925
* | | | | Merge "tempurl: Deprecate sha1 signatures"Zuul2022-06-011-15/+22
|\ \ \ \ \ | |/ / / / |/| | | |
| * | | | tempurl: Deprecate sha1 signaturesTim Burke2022-04-221-15/+22
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We've known this would eventually be necessary for a while [1], and way back in 2017 we started seeing SHA-1 collisions [2]. [1] https://www.schneier.com/blog/archives/2012/10/when_will_we_se.html [2] https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html UpgradeImpact: ============== "sha1" has been removed from the default set of `allowed_digests` in the tempurl middleware config. If your cluster still has clients requiring the use of SHA-1, - explicitly configure `allowed_digests` to include "sha1" and - encourage your clients to move to more-secure algorithms. Depends-On: https://review.opendev.org/c/openstack/tempest/+/832771 Change-Id: I6e6fa76671c860191a2ce921cb6caddc859b1066 Related-Change: Ia9dd1a91cc3c9c946f5f029cdefc9e66bcf01046 Closes-Bug: #1733634
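      A sketch of keeping SHA-1 working during a client transition period
      (the stronger digests listed are assumed to match the new defaults):

          [filter:tempurl]
          use = egg:swift#tempurl
          allowed_digests = sha1 sha256 sha512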
* | | | | ceph-tests: Remove known-failure  (Tim Burke, 2022-05-18, 1 file, -1/+0)
| |_|/ /
|/| | |
      Apparently we fixed that recently without realizing it.

      Change-Id: I2f623ffc1400f018c203e930a7b78dfdb9d6e61c
      Related-Change: I8c98791a920eeedfc79e8a9d83e5032c07ae86d3
* | | | Rip out pickle support in our memcached client  (Tim Burke, 2022-04-27, 1 file, -15/+0)
|/ / /
      We said this would be going away back in 1.7.0 -- let's actually
      remove it.

      Change-Id: I9742dd907abea86da9259740d913924bb1ce73e7
      Related-Change: Id7d6d547b103b4f23ebf5be98b88f09ec6027ce4
* | | Doc: Update links in associated projects  (Takashi Kajinami, 2022-04-19, 1 file, -22/+9)
      Replace github by opendev because currently opendev is the source and
      github is its mirror. Also, update links for repositories managed by
      the SwiftStack organization. Unfortunately some repositories are no
      longer available, so they are removed from the list.

      Change-Id: Ic223650eaf7a1934f489c8b713c6d8da1239f3c5
* | | Swauth is retired  (Takashi Kajinami, 2022-04-19, 2 files, -3/+1)
      The swauth project is already retired [1]. The documentation is
      updated to reflect the status of the project. Also, this change
      removes references to this middleware in unit tests.

      [1] https://opendev.org/x/swauth/

      Change-Id: I3d8e46d85ccd965f9b51006c330e391dcdc24a34
* | | doc: also add reverse option to pagination doc  (Matthew Oliver, 2022-04-08, 1 file, -0/+34)
      Change-Id: I4ee5a52ec9fb5f1920cd6869f6b1245c3787391c
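      Reverse listings are just the normal container GET with reverse=true
      added, e.g. (account and container names are illustrative):

          curl -H "X-Auth-Token: $TOKEN" \
              "https://swift.example.com/v1/AUTH_test/cont?limit=100&reverse=true"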
* | | CI: Run ceph and rolling upgrade tests under py3  (Tim Burke, 2022-04-04, 1 file, -12/+105)
      As part of that, the ceph test runner needed up-rev'ing to run under
      py3. As a result, the known-failures shifted.

      Trim the on-demand rolling upgrade jobs list -- now that it's running
      py3, we only expect it to pass for train and beyond.

      Also, pin smmap version on py2 -- otherwise, the remaining
      experimental jobs running on centos-7 fail.

      Change-Id: Ibe46aecf0f4461be59eb206bfe9063cc1bfff706
* | | s3api: Make the 'Quiet' key value case insensitive  (Aymeric Ducroquetz, 2022-03-24, 1 file, -1/+1)
      When deleting multiple objects, S3 allows enabling a quiet mode with
      the 'Quiet' key. At AWS S3, the value of this key is case-insensitive:

      - Quiet mode is enabled if the value is 'true' (regardless of case).
      - Otherwise, in all other cases (even a non-boolean value), this mode
        will be disabled.

      Also, some tools (like Minio's python API) send the value 'True'
      (and not 'true').

      Change-Id: Id9d1da2017b8d13242ae1f410347febb013e9ce1
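      The 'Quiet' key lives in the multi-object delete (POST ?delete)
      request body; a typical payload looks like this (object keys are
      illustrative):

          <Delete>
              <Quiet>True</Quiet>
              <Object><Key>photos/2022/img1.jpg</Key></Object>
              <Object><Key>photos/2022/img2.jpg</Key></Object>
          </Delete>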
* | | Add docs for registry module  (Tim Burke, 2022-02-10, 1 file, -0/+10)
      Drive-By: make the register_sensitive_header() implementation more
      obviously case-insensitive.

      Change-Id: I5b299bc0adb526c468c6364a5706eb86809533e5
* | | Trim sensitive information in the logs (CVE-2017-8761)  (Matthew Oliver, 2022-02-09, 1 file, -1/+13)
|/ /
      Several headers and query params were previously revealed in logs but
      are now redacted:

      * X-Auth-Token header (previously redacted in the {auth_token} field,
        but not the {headers} field)
      * temp_url_sig query param (used by tempurl middleware)
      * Authorization header and X-Amz-Signature and Signature query
        parameters (used by s3api middleware)

      This patch adds some new middleware helper methods to track headers
      and query parameters that should be redacted by proxy-logging. While
      instantiating the middleware, authors can call either:

          register_sensitive_header('case-insensitive-header-name')
          register_sensitive_param('case-sensitive-query-param-name')

      to add items that should be redacted. The redaction uses
      proxy-logging's existing reveal_sensitive_prefix config option to
      determine how much to reveal.

      Note that query params will still be logged in their entirety if
      eventlet_debug is enabled.

      UpgradeImpact
      =============
      The reveal_sensitive_prefix config option now applies to more items;
      operators should review their currently-configured value to ensure it
      is appropriate for these new contexts. In particular, operators
      should consider reducing the value if it is more than 20 or so, even
      if that previously offered sufficient protection for auth tokens.

      Co-Authored-By: Tim Burke <tim.burke@gmail.com>
      Closes-Bug: #1685798
      Change-Id: I88b8cfd30292325e0870029058da6fb38026ae1a
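      A middleware author might use these helpers roughly like this; the
      import path is assumed from the later "registry module" docs commit
      above, and the header/param names are made up:

          from swift.common.registry import (register_sensitive_header,
                                             register_sensitive_param)

          def filter_factory(global_conf, **local_conf):
              # Ask proxy-logging to redact our custom secret header and
              # signature query parameter.
              register_sensitive_header('x-example-secret')
              register_sensitive_param('example_sig')

              def example_filter(app):
                  # A real middleware would wrap `app` here.
                  return app
              return example_filter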
* | Deprecate LogAdapter.set_statsd_prefix  (Alistair Coles, 2022-02-07, 1 file, -8/+0)
|/
      Previously, the set_statsd_prefix method was used to mutate a
      logger's StatsdClient tail prefix after a logger was instantiated.
      This pattern had led to unexpected mutations (see Related-Change).

      The tail_prefix can now be passed as an argument to get_logger(), and
      is then forwarded to the StatsdClient constructor, for a more
      explicit assignment pattern.

      The set_statsd_prefix method is left in place for backwards
      compatibility. A DeprecationWarning will be raised if it is used to
      mutate the StatsdClient tail prefix.

      Change-Id: I7692860e3b741e1bc10626e26bb7b27399c325ab
      Related-Change: I0522b1953722ca96021a0002cf93432b973ce626
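      The before/after looks roughly like the sketch below; the keyword
      argument name is an assumption based on the description above:

          from swift.common.utils import get_logger

          # deprecated: mutate the prefix after construction
          logger = get_logger(conf, log_route='object-replicator')
          logger.set_statsd_prefix('object-replicator')

          # preferred: pass the tail prefix explicitly at construction
          logger = get_logger(conf, log_route='object-replicator',
                              statsd_tail_prefix='object-replicator')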
* reconstructor: restrict max objects per revert job  (Alistair Coles, 2021-12-03, 1 file, -0/+57)
      Previously the ssync Sender would attempt to revert all objects in a
      partition within a single SSYNC request. With this change the
      reconstructor daemon option max_objects_per_revert can be used to
      limit the number of objects reverted inside a single SSYNC request
      for revert type jobs, i.e. when reverting handoff partitions.

      If more than max_objects_per_revert are available, the remaining
      objects will remain in the sender partition and will not be reverted
      until the next call to ssync.Sender, which would currently be the
      next time the reconstructor visits that handoff partition.

      Note that the option only applies to handoff revert jobs, not to sync
      jobs.

      Change-Id: If81760c80a4692212e3774e73af5ce37c02e8aff
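      A sketch of the corresponding object-server.conf setting (the value
      is illustrative; by default reverts are assumed to be unlimited):

          [object-reconstructor]
          max_objects_per_revert = 10000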
* Merge "Make SAIO reconciler multiprocess"Zuul2021-11-205-5/+169
|\
| * Make SAIO reconciler multiprocessTim Burke2021-10-225-5/+169
| | | | | | | | Change-Id: Iadaf898743a76e345264f1506af5318530bed0e0
* | sharidng: update doc to only mention auto_shard experimentalMatthew Oliver2021-10-191-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | There are been members of the community running sharding in production and it's awesome. It's just the auto-sharding swift of that remains experimental. This patch removes the big sharding warning from the top of the sharding overview page and better emphasises that it's the audo_shard option that isn't ready for production use. Change-Id: Id2c842cffad58fb6fd5e1d12619c46ffcb38f8a5
* | Add and pipe reconstructor stats through recon  (Matthew Oliver, 2021-08-20, 2 files, -1/+4)
      This patch plumbs the object-reconstructor stats that are dropped
      into recon cache out through the middleware and swift-recon tool.

      This adds a '/recon/reconstruction/object' endpoint to the
      middleware. As such, the swift-recon tool has grown a '-R' or
      '--reconstruction' option to access this data from each node.

      Plus some tests and documentation updates.

      Change-Id: I98582732ca5ccb2e7d2369b53abf9aa8c0ede00c
* | Fix the sysctl parameter used to tune connections  (Luciano Lo Giudice, 2021-07-21, 1 file, -1/+1)
      The documentation currently uses the sysctl parameter
      'net.ipv4.netfilter.ip_conntrack_max', but it's been deprecated for a
      long time. This patch switches it to 'net.netfilter.nf_conntrack_max',
      which is the modern equivalent.

      Change-Id: I3fd5d4060840092bca53af7da7dbaaa600e936a3
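      On a current kernel the tuning therefore looks something like this
      (the limit shown is illustrative):

          # /etc/sysctl.conf
          net.netfilter.nf_conntrack_max = 262144

          # or apply immediately:
          sysctl -w net.netfilter.nf_conntrack_max=262144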
* | diskfile: don't remove recently written non-durables  (Alistair Coles, 2021-07-19, 2 files, -0/+16)
|/
      DiskFileManager will remove any stale files during
      cleanup_ondisk_files(): these include tombstones and non-durable EC
      data fragments whose timestamps are older than reclaim_age. It can
      usually be safely assumed that a non-durable data fragment older than
      reclaim_age is not going to become durable. However, if an agent PUTs
      objects with specified older X-Timestamps (for example the reconciler
      or container-sync) then there is a window of time during which the
      object server has written an old non-durable data file but has not
      yet committed it to make it durable.

      Previously, if another process (for example the reconstructor) called
      cleanup_ondisk_files during this window then the non-durable data
      file would be removed. The subsequent attempt to commit the data file
      would then result in a traceback due to there no longer being a data
      file to rename, and of course the data file is lost.

      This patch modifies cleanup_ondisk_files to not remove old, otherwise
      stale, non-durable data files that were only written to disk in the
      preceding 'commit_window' seconds. 'commit_window' is configurable
      for the object server and defaults to 60.0 seconds.

      Closes-Bug: #1936508
      Related-Change: I0d519ebaaade35249fb7b17bd5f419ffdaa616c0
      Change-Id: I5f3318a44af64b77a63713e6ff8d0fd3b6144f13
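      A sketch of the knob in object-server.conf (the default comes from
      the commit message; the section placement is an assumption):

          [DEFAULT]
          # protect non-durable EC data files written in the last N seconds
          # from cleanup before they can be committed
          commit_window = 60.0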
* Merge "sharder: avoid small tail shards"Zuul2021-07-081-0/+12
|\
| * sharder: avoid small tail shardsAlistair Coles2021-07-071-0/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | A container is typically sharded when it has grown to have an object count of shard_container_threshold + N, where N << shard_container_threshold. If sharded using the default rows_per_shard of shard_container_threshold / 2 then this would previously result in 3 shards: the tail shard would typically be small, having only N rows. This behaviour caused more shards to be generated than desirable. This patch adds a minimum-shard-size option to swift-manage-shard-ranges, and a corresponding option in the sharder config, which can be used to avoid small tail shards. If set to greater than one then the final shard range may be extended to more than rows_per_shard in order to avoid a further shard range with less than minimum-shard-size rows. In the example given, if minimum-shard-size is set to M > N then the container would shard into two shards having rows_per_shard rows and rows_per_shard + N respectively. The default value for minimum-shard-size is rows_per_shard // 5. If all options have their default values this results in minimum-shard-size being 100000. Closes-Bug: #1928370 Co-Authored-By: Matthew Oliver <matt@oliver.net.au> Change-Id: I3baa278c6eaf488e3f390a936eebbec13f2c3e55
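      As a sketch, the option shows up in two places; the exact CLI
      spelling is an assumption, and the values are illustrative:

          # one-off sharding of a container DB
          swift-manage-shard-ranges <container-db> find_and_replace 500000 \
              --minimum-shard-size 100000 --enable

          # or for the sharder daemon, in container-server.conf
          [container-sharder]
          rows_per_shard = 500000
          minimum_shard_size = 100000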
* | Merge "sharder: support rows_per_shard in config file"Zuul2021-07-071-7/+9
|\ \ | |/