path: root/test
Commit message (author, date; files changed, lines -removed/+added)
* Merge "Add cap_length helper" [HEAD, master] (Zuul, 2023-05-17; 1 file, -0/+9)
  * Add cap_length helper (Tim Burke, 2023-04-05; 1 file, -0/+9)
      Change-Id: Ib864c7dc6c8c7bb849f4f97a1239eb5cc04c424c

* Merge "memcached: log user provided keys in exception error logging." (Zuul, 2023-05-17; 1 file, -26/+61)
  * memcached: log user provided keys in exception error logging. (Jianjian Huo, 2023-05-01; 1 file, -26/+61)
      User-provided keys are needed to debug the tracebacks/timeouts that
      occur when clients talk to memcached, in order to associate those
      failures with specific memcache usages within swift services.

      Change-Id: I07491bb4ebc3baa13cf09f64a04a61011d561409

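A minimal sketch of the idea, with generic names (not Swift's actual MemcacheRing code): include the user-provided cache key in the error log so that a timeout or traceback can be tied back to the specific memcache usage.

```python
import logging

logger = logging.getLogger(__name__)


def memcache_get(client, key):
    """Fetch ``key``, logging the key itself if the call fails."""
    try:
        return client.get(key)
    except Exception:
        # Logging the key (not just the server address) lets operators map
        # a timeout back to the swift code path that issued the request.
        logger.exception('Error talking to memcached: get(%r)', key)
        raise
```
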
* Merge "Unit test for keepalive timeout" (Zuul, 2023-05-17; 3 files, -5/+102)
  * Unit test for keepalive timeout (Shreeya Deshpande, 2023-05-10; 3 files, -5/+102)
      Create a unit test to verify client timeout for multiple requests.

      Change-Id: I974e01cd2cb18f4ea87c3966dbf4b06bff22ed39

* Merge "testing xattr metadata with py3.8" (Zuul, 2023-05-15; 1 file, -0/+33)
  * testing xattr metadata with py3.8 (Clay Gerrard, 2023-05-02; 1 file, -0/+33)
      Change-Id: Ib3c9b274bbd2e643f3febbdf54a8a43f4775944b

* Merge "Properly read py2 object metadata on py3" (Zuul, 2023-05-15; 2 files, -28/+235)
  * Properly read py2 object metadata on py3 (Tim Burke, 2023-05-02; 2 files, -28/+235)
      Replicated, unencrypted metadata is written down differently on py2 vs
      py3, and has been since we started supporting py3. Fortunately, we can
      inspect the raw xattr bytes to determine whether the pickle was written
      using py2 or py3, so we can properly read legacy py2 meta under py3
      rather than hitting a unicode error.

      Closes-Bug: #2012531
      Change-Id: I5876e3b88f0bb1224299b57541788f590f64ddd4

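As a rough illustration of how the raw pickle bytes give the writer away (this shows standard-library behaviour, not the actual check in Swift's diskfile code): a py2 pickler emits string opcodes for native str, while a py3 pickler emits unicode opcodes.

```python
import pickle
import pickletools


def looks_like_py2_pickle(raw):
    """Heuristic: py2-written pickles contain (SHORT_)BINSTRING opcodes."""
    return any(op.name in ('STRING', 'BINSTRING', 'SHORT_BINSTRING')
               for op, arg, pos in pickletools.genops(raw))


# Metadata pickled on py3 uses unicode opcodes for its str values ...
py3_blob = pickle.dumps({'name': '/a/c/o'}, protocol=2)
print(looks_like_py2_pickle(py3_blob))  # False

# ... whereas the same dict pickled on py2 would use string opcodes, and
# loading it on py3 then needs something like
# pickle.loads(raw, encoding='latin1') to avoid a unicode error.
```
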
* Merge "Sharder: add timing metrics for individual steps and total time spent." (Zuul, 2023-05-10; 1 file, -0/+84)
  * Sharder: add timing metrics for individual steps and total time spent. (Jianjian Huo, 2023-05-03; 1 file, -0/+84)
      Change-Id: Ie2a8e4eced6688e5a98aa37c3c7b0c13fd2ddeee

* Merge "wsgi: Add keepalive_timeout option" (Zuul, 2023-05-09; 1 file, -0/+5)
  * wsgi: Add keepalive_timeout option (Tim Burke, 2023-04-18; 1 file, -0/+5)
      Clients sometimes hold open connections "just in case" they might later
      pipeline requests. This can cause issues for proxies, especially if
      operators restrict max_clients in an effort to improve response times
      for the requests that *do* get serviced.

      Add a new keepalive_timeout option to give proxies a way to drop these
      established-but-idle connections without impacting active connections
      (as may happen when reducing client_timeout).

      Note that this requires eventlet 0.33.4 or later.

      Change-Id: Ib5bb84fa3f8a4b9c062d58c8d3689e7030d9feb3

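The option itself is set in the proxy server's WSGI config; below is only a conceptual Python sketch of what such an idle timeout does (hypothetical helper names, not swift.common.wsgi's implementation): between requests on a keep-alive connection, wait at most keepalive_timeout seconds for the next request before closing the idle socket.

```python
import eventlet

keepalive_timeout = 15  # illustrative value, in seconds


def serve_connection(sock, handle_request):
    """Serve keep-alive requests, dropping the connection once it idles."""
    while True:
        try:
            with eventlet.Timeout(keepalive_timeout):
                data = sock.recv(8192)  # wait for the next pipelined request
        except eventlet.Timeout:
            sock.close()  # established but idle: drop it
            return
        if not data:
            return  # client closed the connection
        handle_request(data, sock)  # hypothetical request handler
```
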
* Merge "Log (Watchdog's) Timeouts with duration" (Zuul, 2023-05-01; 2 files, -7/+14)
  * Log (Watchdog's) Timeouts with duration (Clay Gerrard, 2023-04-28; 2 files, -7/+14)
      ... and clean up WatchDog start a little. If this pattern proves useful
      we could consider extending it.

      Change-Id: Ia85f9321b69bc4114a60c32a7ad082cae7da72b3

* Merge "Don't monkey patch logging on import" (Zuul, 2023-04-28; 3 files, -6/+83)
  * Don't monkey patch logging on import (Chetan Mishra, 2023-04-28; 3 files, -6/+83)
      Previously swift.common.utils monkey patched logging.thread,
      logging.threading, and logging._lock upon import with eventlet
      threading modules, but that is no longer reasonable or necessary.

      With py3, the existing logging._lock is not patched by eventlet unless
      the logging module is reloaded. The existing lock is not tracked by the
      gc, so it would not be found by eventlet's green_existing_locks().

      Instead we group all monkey patching into a utils function and apply
      patching consistently across daemons and WSGI servers.

      Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
      Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
      Closes-Bug: #1380815
      Change-Id: I6f35ad41414898fb7dc5da422f524eb52ff2940f

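A minimal sketch of the consolidated approach, assuming a helper named monkey_patch() (the real function in swift.common.utils may differ): do all eventlet patching in one explicit call at daemon/WSGI-server startup, instead of patching the logging module as an import side effect.

```python
import eventlet


def monkey_patch():
    """Apply all eventlet monkey patching in one place, exactly once."""
    eventlet.monkey_patch()  # socket, thread, time, ... patched together


if __name__ == '__main__':
    monkey_patch()  # called explicitly at startup, never on import
```
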
* Merge "Proxy: restructure cached listing shard ranges" (Zuul, 2023-04-28; 3 files, -88/+103)
  * Proxy: restructure cached listing shard ranges (Jianjian Huo, 2023-04-17; 3 files, -88/+103)
      The updating shard range cache has already been restructured and
      upgraded to v2, which only persists the essential attributes in
      memcache (see Related-Change). This follow-up patch restructures the
      listing shard range cache used for object listing in the same way.

      UpgradeImpact
      =============
      The cache key for listing shard ranges in memcached is renamed from
      'shard-listing/<account>/<container>' to
      'shard-listing-v2/<account>/<container>', and the cache data is changed
      to be a list of [lower bound, name]. As a result, this will invalidate
      all existing listing shard ranges stored in the memcache cluster.

      Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
      Related-Change: If98af569f99aa1ac79b9485ce9028fdd8d22576b
      Change-Id: I54a32fd16e3d02b00c18b769c6f675bae3ba8e01

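The key rename and the slimmer v2 payload can be pictured as below; the account, container, and shard names are made up, and JSON serialization is shown only for illustration.

```python
import json

old_key = 'shard-listing/AUTH_test/c'
new_key = 'shard-listing-v2/AUTH_test/c'

# v2 cache data: just the essentials, as a list of [lower bound, name] pairs
cached_shard_ranges = [
    ['', '.shards_AUTH_test/c-0'],
    ['m', '.shards_AUTH_test/c-1'],
]
payload = json.dumps(cached_shard_ranges)
# e.g. memcache_client.set(new_key, payload, time=...)  (illustrative call)
```
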
* Merge "proxy controller: always pass x-backend-* headers to backend" (Zuul, 2023-04-28; 1 file, -4/+62)
  * proxy controller: always pass x-backend-* headers to backend (Alistair Coles, 2023-04-19; 1 file, -4/+62)
      X-Backend-* headers were previously passed to the backend server with
      only a subset of all request types:

        * all object requests
        * container GET, HEAD
        * account GET, HEAD

      In these cases, X-Backend-* headers were transferred to backend
      requests implicitly as a consequence of *all* the headers in the
      request that the proxy is handling being copied to the backend request.

      With this change, X-Backend-* headers are explicitly copied from the
      request that the proxy is handling to the backend request, for every
      request type.

      Note: X-Backend-* headers are typically added to a request by the proxy
      app or middleware, prior to creating a backend request. X-Backend-*
      headers are removed from client requests by the gatekeeper middleware,
      so clients cannot send X-Backend-* headers to backend servers. An
      exception is an InternalClient that does not have gatekeeper
      middleware, deliberately so that internal daemons such as the sharder
      can send X-Backend-* headers to the backend servers.

      Also, BaseController.generate_request_headers() is fixed to prevent
      accessing a None type when transfer is True but the orig_req is None.

      Change-Id: I05fb9a3e1c98d96bbe01da2ee28474e0f57297e6

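A minimal sketch of the explicit copy (not the proxy's actual helper; the header names are only examples):

```python
def copy_backend_headers(client_headers, backend_headers):
    """Carry x-backend-* headers from the handled request to the backend one."""
    for name, value in client_headers.items():
        if name.lower().startswith('x-backend-'):
            backend_headers[name] = value
    return backend_headers


print(copy_backend_headers(
    {'X-Backend-Storage-Policy-Index': '1', 'X-Trans-Id': 'tx123'},
    {'User-Agent': 'proxy-server'}))
# {'User-Agent': 'proxy-server', 'X-Backend-Storage-Policy-Index': '1'}
```
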
* Merge "ECFragGetter: assume policy.fragment_size is non-zero" (Zuul, 2023-04-28; 1 file, -18/+3)
  * ECFragGetter: assume policy.fragment_size is non-zero (Alistair Coles, 2023-04-26; 1 file, -18/+3)
      Simplify ECFragGetter by removing code that guards against the policy
      fragment_size being None or zero. Policy fragment_size must be > 0: the
      fragment_size is based on the ec_segment_size, which is verified as > 0
      when constructing an EC policy. This is asserted by
      test_parse_storage_policies in
      test.unit.common.test_storage_policy.TestStoragePolicies.

      Also, rename client_chunk_size to fragment_size for clarity.

      Change-Id: Ie1efaab3bd0510275d534b5c023cb73c98bec90d

* Merge "Make all config parsing case-sensitive" (Zuul, 2023-04-28; 2 files, -5/+19)
  * Make all config parsing case-sensitive (Clay Gerrard, 2023-04-28; 2 files, -5/+19)
      This affects both daemon config parsing and paste-deploy config parsing
      when using conf.d. When the WSGI servers were loaded from a flat file,
      parsing has always been case-sensitive. This difference was surprising
      (who wants anything case-insensitive?) and potentially dangerous for
      values like RECLAIM_AGE.

      UpgradeImpact:
      Previously the option keys in swift's configuration .ini files were
      sometimes parsed in a case-insensitive manner, so you could use
      CLIENT_TIMEOUT and the daemons would recognize you meant
      client_timeout. Now upper-case or mixed-case option names, such as
      CLIENT_TIMEOUT or Client_Timeout, will be ignored.

      Change-Id: Idd8e552d9fe98b84d7cee1adfa431ea3ae93345d

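The standard-library behaviour underneath (whether swift flips optionxform or does something equivalent is an assumption; this just demonstrates the difference the commit message describes):

```python
import configparser

conf_text = "[DEFAULT]\nCLIENT_TIMEOUT = 5\n"

folded = configparser.ConfigParser()
folded.read_string(conf_text)
print(dict(folded.defaults()))  # {'client_timeout': '5'} - silently folded

strict = configparser.ConfigParser()
strict.optionxform = str        # preserve case; CLIENT_TIMEOUT stays as-is
strict.read_string(conf_text)
print(dict(strict.defaults()))  # {'CLIENT_TIMEOUT': '5'} - now just ignored
```
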
* Merge "ring: Centralize device normalization" (Zuul, 2023-04-28; 1 file, -6/+10)
  * ring: Centralize device normalization (Tim Burke, 2023-04-26; 1 file, -6/+10)
      This puts replication info in antique rings loaded with
      metadata_only=True.

      Closes-Bug: #1696837
      Change-Id: Idf263a7f7a984a1307bd74040ac8f8bb1651bc79

* Merge "tests: Fix config numbers in test_versioning_with_metadata_replication" (Zuul, 2023-04-28; 1 file, -4/+4)
  * tests: Fix config numbers in test_versioning_with_metadata_replication (Tim Burke, 2023-04-27; 1 file, -4/+4)
      Closes-Bug: #2017021
      Change-Id: If422f99a77245b35ab755857f9816c1e401a4e22

* Merge "Error logs changed for ChunkWriteTimeout" (Zuul, 2023-04-28; 1 file, -1/+1)
  * Error logs changed for ChunkWriteTimeout (Shreeya Deshpande, 2023-04-27; 1 file, -1/+1)
      The log message phrase 'ChunkWriteTimeout fetching fragments' implies
      that the timeout occurred while getting a fragment (from the backend
      object server), when in fact the timeout occurred while waiting to
      yield the fragment to the app iter. Hence, the message is changed to
      'ChunkWriteTimeout feeding fragments'.

      Change-Id: Ic0813e6a9844da1130091d27e3dbe272ea871d11

* Merge "tests for wsgi/daemon config parsing" (Zuul, 2023-04-27; 3 files, -6/+223)
  * tests for wsgi/daemon config parsing (Clay Gerrard, 2023-04-14; 3 files, -6/+223)
      Change-Id: Ibb82555830b88962cc765fc88281ca42a9ce9d9c

* ECFragGetter: simplify iter_bytes_from_response_part (Alistair Coles, 2023-04-26; 1 file, -2/+67)
      Refactor and add some targeted unit tests. No behavioral change.

      Change-Id: I153528b8a1709f3756c261cf3eb2acfd5de10f9c

* Assert ChunkWriteTimeout errors are logged (Alistair Coles, 2023-04-25; 1 file, -0/+1)
      The test claimed to assert that ChunkWriteTimeouts are logged, but the
      test would in fact pass if the timeouts were not logged.

      Change-Id: Ic9d119858397e8aeccaf7f89487f9e62f16ee453

* tests: Fix test_cleanup_ondisk_files_commit_window (Tim Burke, 2023-04-20; 1 file, -1/+1)
      `much_older` has to be much older than `older`, or the test gets flaky.
      See
        - test_cleanup_ondisk_files_reclaim_non_data_files,
        - test_cleanup_ondisk_files_reclaim_with_data_files, and
        - test_cleanup_ondisk_files_reclaim_with_data_files_legacy_durable
      for a more standard definition of "much_older".

      Closes-Bug: #2017024
      Change-Id: I1eaa501827f4475ddc0c20d82cf0a6d4a5e98f75

* Merge "ssync: Round-trip offsets in meta/ctype Timestamps" (Zuul, 2023-04-18; 3 files, -5/+80)
  * ssync: Round-trip offsets in meta/ctype Timestamps (Tim Burke, 2023-04-17; 3 files, -5/+80)
      Use a double underscore as the separator to ensure that old code blows
      up rather than misinterpreting encoded offsets.

      Change-Id: Idf9b5118e9b64843e0c4dd7088b498b165f33db4

* Merge "tests: Fix PriorityQueue import" (Zuul, 2023-04-17; 1 file, -2/+1)
  * tests: Fix PriorityQueue import (Tim Burke, 2023-04-13; 1 file, -2/+1)
      Not sure how we didn't catch this before; py2 gate jobs still seem to
      run these tests and they'd pass??

      Change-Id: I24a5680d19af609b92588249610e4a1f128bdad3

* Merge "ssync: fix decoding of ts_meta when ts_data has offset" (Zuul, 2023-04-14; 3 files, -1/+108)
  * ssync: fix decoding of ts_meta when ts_data has offset (Alistair Coles, 2023-02-27; 3 files, -1/+108)
      The SsyncSender encodes object file timestamps in a compact form and
      the SsyncReceiver decodes the timestamps and compares them to its
      object file set.

      The encoding represents the meta file timestamp as a delta from the
      data file timestamp, NOT INCLUDING the data file timestamp offset.
      Previously, the decoding was erroneously calculating the meta file
      timestamp as the sum of the delta plus the data file timestamp
      INCLUDING the offset.

      For example, if the SsyncSender has object file timestamps:

        ts_data = t0_1.data
        ts_meta = t1.data

      then the receiver would erroneously perceive that the sender has:

        ts_data = t0_1.data
        ts_meta = t1_1.data

      As described in the referenced bug report, this erroneous decoding
      could cause the SsyncReceiver to request that the SsyncSender sync an
      object that is already in sync, which results in a 409 Conflict at the
      receiver. The 409 causes the ssync session to terminate, and the same
      process repeats on the next attempt.

      Closes-Bug: #2007643
      Co-Authored-By: Clay Gerrard <clay.gerrard@gmail.com>
      Change-Id: I74a0aac0ac29577026743f87f4b654d85e8fcc80

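The decoding bug can be pictured with purely illustrative numbers (this is not the real ssync wire format), treating a timestamp as a (seconds, offset) pair so that t0_1 is (1000.0, 1):

```python
ts_data = (1000.0, 1)   # t0_1: data file timestamp carrying offset 1
ts_meta = (1001.0, 0)   # t1: meta file timestamp, no offset

# The sender encodes the meta time as a delta from the bare data timestamp:
delta = ts_meta[0] - ts_data[0]

correct = (ts_data[0] + delta, 0)         # (1001.0, 0) == t1
buggy = (ts_data[0] + delta, ts_data[1])  # (1001.0, 1) == t1_1, offset leaked
assert correct == ts_meta and buggy != ts_meta
```
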
* Merge "InternalClient: error if allow_modify_pipeline is True" (Zuul, 2023-04-14; 1 file, -0/+21)
  * InternalClient: error if allow_modify_pipeline is True (Alistair Coles, 2023-04-14; 1 file, -0/+21)
      allow_modify_pipeline is no longer supported, but if a caller is still
      setting it to True then raise ValueError, because the InternalClient
      instance will no longer behave in the way the caller previously
      expected.

      Change-Id: I24015b8becc7289a7d72f9a5863d201e27bcc955

* Merge "internal_client: Remove allow_modify_pipeline option" (Zuul, 2023-04-14; 3 files, -4/+67)
  * internal_client: Remove allow_modify_pipeline option (Matthew Oliver, 2023-04-14; 3 files, -4/+67)
      The internal client is supposed to be internal to the cluster, and as
      such we rely on it not to remove any headers we decide to send.
      However, if the allow_modify_pipeline option is set, the gatekeeper
      middleware is added to the internal client's proxy pipeline.

      So firstly, this patch removes the allow_modify_pipeline option from
      the internal client constructor, and loadapp is now always called with
      allow_modify_pipeline=False.

      Further, an operator could directly put the gatekeeper middleware into
      the internal client config. The internal client constructor will now
      check the pipeline and raise a ValueError if one has been placed in the
      pipeline. To do this, there is now a check_gatekeeper_loaded
      staticmethod that walks the pipeline and is called from the
      InternalClient.__init__ method. To enable walking the pipeline, we now
      stash the wsgi pipeline in each filter so that we don't have to rely on
      'app' naming conventions to iterate the pipeline.

      Co-Authored-By: Alistair Coles <alistairncoles@gmail.com>
      Change-Id: Idcca7ac0796935c8883de9084d612d64159d9f92

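A simplified sketch of the pipeline check; the stand-in middleware class and the list-shaped pipeline are assumptions, whereas the real staticmethod walks the wsgi pipeline stashed on each filter.

```python
class FakeGatekeeper(object):
    """Stand-in for the real gatekeeper filter class."""


def check_gatekeeper_loaded(pipeline):
    """Refuse to build an InternalClient whose pipeline strips headers."""
    for filter_app in pipeline:
        if isinstance(filter_app, FakeGatekeeper):
            raise ValueError('gatekeeper middleware is not allowed in the '
                             'InternalClient proxy pipeline')


check_gatekeeper_loaded([object(), object()])  # fine: no gatekeeper configured
```
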
* Pull libc-related functions out to a separate module (Tim Burke, 2023-04-12; 3 files, -561/+601)
      Partial-Bug: #2015274
      Change-Id: I3e26f8d4e5de0835212ebc2314cac713950c85d7

* Merge "Pull timestamp-related functions out to a separate module" (Zuul, 2023-04-12; 3 files, -849/+882)
  * Pull timestamp-related functions out to a separate module (Tim Burke, 2023-04-05; 3 files, -849/+882)
      Partial-Bug: #2015274
      Change-Id: I5b7ab3b2c150ec1513b3e6ebc4b27808d5df042c