| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
Let users have the option to revert to the previous behavior. They may have
some odd load balancer setup, or a custom API implementation where repeated
_bulk_get attempts may cause unexpected issues.
|
|
|
|
|
| |
We're not using couchdbci-debian:arm64v8-buster-erlang containers any longer
and instead using multiarch images with buildx.
|
|
|
|
| |
As per comment in https://github.com/apache/couchdb/pull/4164#discussion_r965330220
|
|
|
|
|
|
|
|
| |
Go with the 2-indent mode. In Emacs it would be:
```
'(groovy-indent-offset 2)
```
|
| |
|
|\
| |
| | |
Integrate docs into the main repo
|
| | |
|
| | |
|
|/
|
|
|
|
|
|
| |
If docs are changed, then the docs "check" stage is run. If only docs
changed and no other files, then only the docs are built and the other
stages are "fast-forwarded".
Also, remove docs from gitignore and from rebar.config.
|
| |
|
| |
Currently, when POSTing to `/_session` with a Content-Type header
other than either `application/x-www-form-urlencoded` or
`application/json`, the error response can be surprising.
This changes the response to 415 `bad_content_type` when it's not one
of the above.
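The check described above can be sketched as follows. This is a hypothetical Python stand-in (the real handler is Erlang code in CouchDB's HTTP layer, and `session_post_status` is a made-up name), but it mirrors the described behavior:

```python
# Hypothetical sketch of the /_session Content-Type check described above.
# The real logic lives in CouchDB's Erlang HTTP layer; names are made up.

ACCEPTED = {"application/x-www-form-urlencoded", "application/json"}

def session_post_status(content_type):
    """Return (HTTP status, error tag) for a POST to /_session."""
    # Drop any parameters such as "; charset=utf-8" before comparing.
    base = content_type.split(";", 1)[0].strip().lower()
    if base in ACCEPTED:
        return 200, "ok"
    # Anything else is rejected up front with 415 instead of failing in
    # a surprising way further down the request pipeline.
    return 415, "bad_content_type"

print(session_post_status("application/json"))  # (200, 'ok')
print(session_post_status("text/plain"))        # (415, 'bad_content_type')
```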
|
| |
|
|\
| |
| | |
return a nice error if non-object passed to _bulk_get
|
|/ |
|
| |
By now most of the CouchDB implementations support `_bulk_get`, so
let's update the replicator to take advantage of that.
To be backwards compatible, assume some endpoints will not support
`_bulk_get` and may return either a 500 or 400 error. In that case the
replicator will fall back to fetching individual document revisions
like it did previously. For additional backward compatibility, and to
keep things simple, support only the `application/json` `_bulk_get`
response format. (Ideally, we'd send multiple Accept headers with
various `q` preference parameters for `json` and `multipart/related`
content, then do the right thing based on the response, however, none
of the recent Apache CouchDB implementations support that scheme
properly).
Since fetching attachments with an `application/json` response is not
optimal, attachments are fetched individually. This means there are
two main reasons for the replicator to fall back to fetching
individual revisions: 1) when the `_bulk_get` endpoint is not
supported and 2) when the document revisions contain attachments.
To avoid wasting resources repeatedly attempting to use `_bulk_get` and
then falling back to individual doc fetches, maintain some historical
stats about the rate of failure, and if it crosses a threshold, skip
calling `_bulk_get` altogether. This is implemented with a moving
exponential average, along with periodic probing to see if `_bulk_get`
usage becomes viable again.
To give the users some indication about how successful `_bulk_get`
usage is, introduce two replication statistics parameters:
* `bulk_get_attempts`: _bulk_get document revisions attempts made.
* `bulk_get_docs` : `_bulk_get` document revisions successfully retrieved.
These are persisted in the replication checkpoints along with the rest
of the job statistics and visible in `_scheduler/jobs` and
`_active_tasks` output.
Since we updated the replication job statistics, perform some minor
cleanups in that area:
- Stop using the process dictionary for the reporting timestamp. Use
a regular record state field instead.
- Use casts instead of calls when possible. We still rely on
report_seq_done calls as a synchronization point to make sure we
don't overrun the message queues for the replication worker and
scheduler job process.
- Add stats update API functions instead of relying on naked
`gen_server` calls and casts. The functions make it clear which
process is being updated: the replication worker or the main
replication scheduler job process.
For testing, rely on the variety of existing replication tests running
and passing. The recently merged replication test overhaul switched
the tests from using the node-local (back-end) API to chttpd (the
cluster API), which actually implements `_bulk_get`. In this way, the
majority of replication tests should exercise the `_bulk_get` API usage
alongside whatever else they are testing. There is also a new test
checking that the `_bulk_get` fallback works and testing the
characteristics of the new statistics parameters.
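The fallback heuristic above can be sketched with a small example. The class name, the smoothing factor, and the threshold below are all assumptions for illustration; the actual implementation is Erlang code in the replicator, and the periodic probing is omitted:

```python
# Illustrative sketch of the moving exponential average fallback
# described above. Alpha and the threshold are assumed values, not the
# replicator's actual parameters.

class BulkGetStats:
    def __init__(self, alpha=0.25, threshold=0.5):
        self.alpha = alpha          # smoothing factor for the moving average
        self.threshold = threshold  # failure rate above which _bulk_get is skipped
        self.failure_rate = 0.0     # exponentially weighted failure rate

    def record(self, success):
        # Fold each attempt into the exponential moving average.
        sample = 0.0 if success else 1.0
        self.failure_rate += self.alpha * (sample - self.failure_rate)

    def use_bulk_get(self):
        # Skip _bulk_get once the observed failure rate crosses the
        # threshold; periodic probing (not shown) lets it recover.
        return self.failure_rate < self.threshold

stats = BulkGetStats()
for _ in range(5):
    stats.record(success=False)  # endpoint keeps returning 400/500
print(stats.use_bulk_get())      # False: fall back to individual doc fetches
```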
|
|\
| |
| | |
Allow and evaluate nested json claim roles
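The idea of a nested claim lookup can be sketched as follows. This is a simplified, hypothetical illustration (the function name and dotted-path convention are made up; the real evaluation is in CouchDB's Erlang JWT auth code):

```python
# Hypothetical sketch of resolving a nested roles claim such as
# "couchdb.roles" from a decoded JWT claims object.

def nested_claim(claims, path, default=None):
    """Walk a dotted path through nested JSON objects (dicts)."""
    node = claims
    for key in path.split("."):
        if not isinstance(node, dict) or key not in node:
            return default
        node = node[key]
    return node

claims = {"sub": "alice", "couchdb": {"roles": ["_admin", "dev"]}}
print(nested_claim(claims, "couchdb.roles"))  # ['_admin', 'dev']
```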
|
|/ |
|
| |
OTP 25 generates warnings like the following:
src/chttpd/test/eunit/chttpd_util_test.erl:33:48: Warning: variable '_Persist' is already bound. If you mean to ignore this value, use '_' or a different underscore-prefixed name
Create an explicit `Persist` variable set to `false` to suppress those warnings.
|
|
|
|
|
|
| |
The test didn't check whether the hash algorithm is supported by the
Erlang VM. The check for supported hash algorithms was only missing
in the test itself and not in CouchDB.
Refactor the test and verify hash names during test runs.
|
|
| |
Introduce a new config setting "hash_algorithms".
The value of the new config parameter is a comma-separated list of Erlang hash algorithms.
An example:
hash_algorithms = sha256, sha, md5
With this line, new cookies are generated with the sha256 hash algorithm, while cookies signed with any of the given hash algorithms (sha256, sha and md5) are accepted and verified.
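The generate-with-first / accept-with-any scheme described above can be sketched in Python using `hmac`/`hashlib` as a stand-in for the Erlang crypto calls. The function names and secret are made up, and Erlang's `sha` maps to Python's `"sha1"`:

```python
# Sketch of the cookie hashing scheme described above; names are
# hypothetical stand-ins for CouchDB's Erlang implementation.
import hmac

# From "hash_algorithms = sha256, sha, md5" (Erlang "sha" is SHA-1).
HASH_ALGORITHMS = ["sha256", "sha1", "md5"]
SECRET = b"server-secret"

def sign(value):
    # New cookies are always generated with the first (preferred) algorithm.
    return hmac.new(SECRET, value, HASH_ALGORITHMS[0]).digest()

def verify(value, mac):
    # Incoming cookies are accepted if they match under ANY configured
    # algorithm, so older sha1/md5 cookies stay valid during a migration.
    return any(
        hmac.compare_digest(hmac.new(SECRET, value, alg).digest(), mac)
        for alg in HASH_ALGORITHMS
    )

legacy_mac = hmac.new(SECRET, b"user:tok", "md5").digest()  # old-style cookie
print(verify(b"user:tok", legacy_mac))       # True: legacy cookie accepted
print(verify(b"user:tok", sign(b"user:tok")))  # True: new sha256 cookie
```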
|
| |
|
| |
|
| |
Occasionally, this test fails with the following stack trace:
cpse_gather: make_test_fun (cpse_incref_decref)...*failed*
in function cpse_test_ref_counting:'-cpse_incref_decref/1-fun-0-'/2 (src/cpse_test_ref_counting.erl, line 44)
in call from cpse_test_ref_counting:cpse_incref_decref/1 (src/cpse_test_ref_counting.erl, line 44)
in call from eunit_test:run_testfun/1 (eunit_test.erl, line 71)
in call from eunit_proc:run_test/1 (eunit_proc.erl, line 522)
in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 347)
in call from eunit_proc:handle_test/2 (eunit_proc.erl, line 505)
in call from eunit_proc:tests_inorder/3 (eunit_proc.erl, line 447)
in call from eunit_proc:with_timeout/3 (eunit_proc.erl, line 337)
**error:{assert,[{module,cpse_test_ref_counting},
{line,44},
{expression,"lists : member ( Pid , Pids1 )"},
{expected,true},
{value,false}]}
output:<<"">>
Wrap the former assertion in a `test_util:wait` call to account for
the apparent race between client readiness and
`couch_db_engine:monitored_by/1`.
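The shape of such a wait helper can be sketched as follows; this is a generic polling loop in the spirit of `test_util:wait`, not its actual Erlang implementation:

```python
# A minimal wait-until helper, analogous in spirit to test_util:wait:
# poll a condition instead of asserting it once, to absorb races like
# the one between client readiness and couch_db_engine:monitored_by/1.
import time

def wait_until(check, timeout=5.0, interval=0.05):
    """Poll check() until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        result = check()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(interval)

# Example: a condition that only becomes true after a short delay.
start = time.monotonic()
print(wait_until(lambda: time.monotonic() - start > 0.1))  # True
```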
|
| |
|
| |
|
| |
|
| |
Use the clustered version of source and target endpoints.
Also, use common setup and teardown functions. The test was previously quite
clever in trying to save an extra few lines by parameterizing the "use
checkpoints" vs "don't use checkpoints" scenarios with a foreachx and a custom
notifier function. Instead, opt for more clarity: use the usual TDEF_FE
macro and just set up two separate tests, one which uses checkpoints and one
which doesn't.
Another major win is using the utility db comparison function instead of a
duplicate copy of it.
|
|
|
|
|
|
| |
Use the clustered version of the source and target endpoints and switch to
using common test setup and teardown functions. Overall it added up to quite
a few lines saved.
|
|
|
|
|
|
|
| |
Use the clustered versions of endpoints for the test.
Also, use the common setup and teardown helpers and remove the foreachx
silliness.
|
|
|
|
|
|
| |
Switch the test to use the clustered endpoints.
Use the common test setup and teardown functions as well as the TDEF_FE macros.
|
|
|
|
| |
Use the TDEF_FE macro and cleanup ?_test(begin...end) instances.
|
|
|
|
| |
Use the TDEF_FE macro and remove the ugly ?_test(begin...end) construct.
|
|
|
|
|
| |
Use common setup and teardown helpers, TDEF_FE macros and remove all the
foreachx nonsense.
|
|
| |
Use common setup and teardown functions and the TDEF_FE macros.
Also, remove quite a bit of foreachx and remote boilerplate which is not needed
any longer.
Most of the changes, however, consisted in updating all the db operations to
use fabric instead of couch. Luckily, most of those have fabric equivalents,
and fabric calls are even shorter as they don't need open, re-open and close
operations.
|
|
|
|
|
|
|
| |
Use common setup functions and the TDEF_FE macro.
Removing the foreachx and the remote vs local junk really trimmed down the
size. The test content was tiny compared to the clunky EUnit setup logic.
|
|
|
|
|
|
| |
Use common setup and teardown helpers along with some local replicate/2 and db_url/2 functions.
Remove foreachx goop and use TDEF_FE for consistency with other tests.
|
|
|
|
| |
Use the TDEF_FE macro and remove the ?_test(begin...end) construct.
|
|
|
|
|
|
|
| |
Take advantage of the helper setup and teardown functions.
Switching to a simpler TDEF_FE macro instead of foreachx and inorder setup
cruft also saves some lines of code.
|
|
|
|
| |
Use the TDEF_FE macro and clean up ?_test(begin...end) cruft.
|
| |
Start using the common setup and teardown functions from the test helper.
Also update the test definitions to use the TDEF_FE macro.
Since the setup function already creates a target endpoint database, and the
test is itself in charge of creating the test database, we just remove the
target db before the replication jobs start.
|
|
|
|
|
| |
The main changes are just using the TDEF_FE macros and removing the
?_test(begin...end) silliness.
|
| |
Compactor tests are the only tests which continue using the local ports, since
they deal with triggering and managing low-level compaction processes.
However, it was still possible to improve the tests somewhat by using the
TDEF_FE macros and removing some left-over foreachx cruft.
|
|
|
|
|
| |
Switch the test module to use the clustered endpoints and TDEF_FE test
macros.
|
|
| |
In preparation for using chttpd (fabric) endpoints, add some common utility
functions to the replication test helper module.
Since `couch_db:fold_docs/4` doesn't exist for fabric, use the changes feed to
get all the revision leaves. That is used when comparing database endpoints.
It turns out the majority of replication tests can use the exact same setup,
teardown and db_url functions, so make sure those are also available in the
helper module.
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Don't rely on the default gen_server 5 second timeout.
Use `infinity` as that's also effectively used for doc updates.
Fixes https://github.com/apache/couchdb/issues/4142
|
|
|
|
| |
After reverting #4094, bringing this back as a separate fix.
|
|\
| |
| | |
Add editors magic lines
|
|/ |
|