| Commit message | Author | Age | Files | Lines |
Closes #3362
This patch introduces a macro and inserts it everywhere we catch errors
and then generate a stacktrace.
So far the only thing that is a little bit ugly is that in two places,
I had to add a header include dependency on couch_db.erl where those
modules didn’t have any ties to couchdb/* before, alas. I’d be willing
to duplicate the macros in those modules, if we don’t want the include
dependency.
Fair share replication scheduler allows configuring job priorities
per-replicator db.
Previously jobs from all the replication dbs would be added to the scheduler
and run in a round-robin order. This update makes it possible to specify the
relative priority of jobs from different databases. For example, there could be
low, high and default priority _replicator dbs.
The original algorithm comes from the [A Fair Share
Scheduler](https://proteusmaster.urcf.drexel.edu/urcfwiki/images/KayLauderFairShare.pdf
"Fair Share Scheduler") paper by Judy Kay and Piers Lauder. A summary of how
the algorithm works is included in the top level comment in the
couch_replicator_share module.
There is minimal modification to the main scheduler logic. Aside from the
share accounting performed each cycle, the other changes are:
* Running and stopping candidates are now picked based on the priority first,
and then on their last_started timestamp.
* When jobs finish executing mid-cycle, their charges are accounted for. That
holds for jobs which terminate normally, are removed by the user, or crash.
Other interesting aspects are the interaction with the error back-off mechanism
and how one-shot replications are treated:
* The exponential error back-off mechanism is unaltered and takes precedence
over the priority values. That means unhealthy jobs are rejected and
"penalized" before the priority value is even looked at.
* One-shot replications, once started, are not stopped during each scheduling
cycle unless the operator manually adjusts the `max_jobs` parameter. That
behavior is necessary to preserve the "snapshot" semantics and is retained in
this update.
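The share accounting above can be sketched roughly as follows. This is an illustrative Python sketch loosely after the Kay and Lauder paper, not the actual couch_replicator_share code; the names (`shares`, `usage`, `num_jobs`) and the decay factor are assumptions.

```python
# Minimal fair-share priority accounting sketch (illustrative, not CouchDB's API).
# Lower priority values are better: recent run-time charges and many jobs raise
# the value; a larger configured share count lowers it.
class FairShare:
    def __init__(self, decay=0.75):
        self.shares = {}    # db -> configured shares
        self.usage = {}     # db -> accumulated run-time charges
        self.num_jobs = {}  # db -> number of jobs from that db
        self.decay = decay

    def charge(self, db, seconds):
        # Called when a job stops, finishes, or crashes mid-cycle.
        self.usage[db] = self.usage.get(db, 0.0) + seconds

    def priority(self, db):
        s = self.shares.get(db, 100)
        return self.usage.get(db, 0.0) * self.num_jobs.get(db, 0) / (s * s)

    def decay_usage(self):
        # Applied each scheduling cycle so old charges fade out.
        for db in self.usage:
            self.usage[db] *= self.decay
```

A db with more shares ends up with a lower (better) priority value for the same amount of charged run time, which is what lets a "high" _replicator db outrun a "low" one.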
This is needed to prepare for the Fair Share scheduler feature since
both the scheduler and the fair share module will end up referencing
the #job record.
Use stored peer when available in json_req_obj
Previously, if a JWT claim was present, it was validated regardless of
whether it was required.
However, according to the spec [1]:
"all claims that are not understood by implementations MUST be ignored"
which we interpret to mean that we should not attempt to validate
claims we don't require.
With this change, only claims listed in required checks are validated.
[1] https://tools.ietf.org/html/rfc7519#section-4
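A minimal sketch of this interpretation (the function name and the predicate-per-claim shape are illustrative, not the actual jwtf code):

```python
import time

def validate_claims(claims, required_checks):
    """Validate only the claims listed in required_checks.

    Claims present in `claims` but absent from `required_checks` are
    ignored, following RFC 7519 section 4.
    """
    for name, check in required_checks.items():
        if name not in claims:
            raise ValueError("missing required claim: %s" % name)
        if not check(claims[name]):
            raise ValueError("invalid claim: %s" % name)

# Require only `exp`; an unknown `foo` claim is simply ignored.
checks = {"exp": lambda v: v > time.time()}
validate_claims({"exp": time.time() + 60, "foo": "ignored"}, checks)
```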
This defines a configuration file which specifies sections and fields
for config values that are redacted from logs. Specifically, all
values from the "admins" section and the value of "password" in the
"replicator" section are redacted.
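The lookup such a configuration drives could look roughly like this. This is a hypothetical sketch of the redaction logic only; the table structure and the `redact` helper are assumptions, with the section and field names taken from the commit message.

```python
# Hypothetical redaction table: None means "redact every key in the section",
# a set means "redact only these keys".
REDACT = {
    "admins": None,
    "replicator": {"password"},
}

def redact(section, key, value):
    # Return a placeholder instead of the real value for redacted entries,
    # e.g. before writing config values to the log.
    keys = REDACT.get(section)
    if section in REDACT and (keys is None or key in keys):
        return "****"
    return value
```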
Remove outdated couch_httpd_external module
Read and validate JSON payload on POST to _changes
The config application depends on couch_log, so include it when
setting up and tearing down tests.
Add "couch_server" aggregate to _system output
This helps ease transition from singleton couch_server to
multiple. The "couch_server" message queue is simply the sum of the
couch_server_X message queues.
Preserve max_dbs_open division during config change
Also prevent max_dbs_open from going below 1.
Couch server sharding
Show process status in active_tasks
This allows users to verify that compaction processes are suspended
outside of any configured strict_window.
Resolves #2858
chunked (#3340)
Transfer-Encoding: chunked caused the server to wait indefinitely and then issue a 500 error once the client finally hung up, when PUTing a multipart/related document with attachments.
This commit fixes that issue by adding proper handling for chunked multipart/related requests.
The current default for max_attachment_size is infinity.
This commit changes that to 1 gibibyte.
* Simplify and speedup dev node startup
This patch introduces an escript that generates an Erlang .boot script
to start CouchDB using the in-place .beam files produced by the compile
phase of the build. This allows us to radically simplify the boot
process as Erlang computes the optimal order for loading the necessary
modules.
In addition to the simplification this approach offers a significant
speedup when working inside a container environment. In my test with
the stock .devcontainer it reduces startup time from about 75 seconds
down to under 5 seconds.
* Rename boot_node to monitor_parent
* Add formatting suggestions from python-black
Co-authored-by: Paul J. Davis <paul.joseph.davis@gmail.com>
This PR adds a Dockerfile and associated configuration to enable
developers to quickly provision an environment with all dependencies
installed to work on CouchDB 3.x.
The container configuration also installs the Erlang Language Server
extension. That extension needs a minimal configuration file in the root
of the project in order to find the include files, so I've added that as
well. We could likely iterate and enhance that configuration file
further with linter and dialyzer configurations, etc.
Finally, it allows a developer to set the SpiderMonkey version in an
$SM_VSN environment variable so that we can do a better job of
preserving the simplicity of `./configure; make` inside the container.
Previously, if an error was thrown in a `with_ddoc_proc/2` callback, the
process was still returned to the process pool in the `after` clause. However,
in some cases, for example when processing a _list response, the process might
end up stuck in a bad state, such that it could not be re-used anymore.
In such a case, a subsequent user of that couch_js process would end up
throwing an error and crashing.
Fixes #2962
All endpoints except _session accept gzip-encoded request bodies, and there is no practical reason for that exception.
This commit enables gzip decoding of compressed requests to _session.
Enable a build-time configurable monitor for custodian and remove the custom
Sensu events.
Closes #2906
* Added a suffix to the first line of couchjs with the (static) version number compiled
* Update rebar.config.script
* In couchjs -h, replaced the link to Jira with a link to GitHub
Co-authored-by: simon.klassen <simon.klassen>
Co-authored-by: Jan Lehnardt <jan@apache.org>
Co-authored-by: Simon Klassen <6997477+sklassen@users.noreply.github.com>
Merge custodian
Remove Cloudant references
Report detailed missing shard ranges
Previously we relied on finding how many possible (max) rings could be obtained
from the whole range.
The current approach is to apply some heuristics to report details across the
individual ranges. The algorithm is roughly as follows:
* Find the maximum number of full rings that can be obtained (MaxN).
* Assign a count of MaxN to all the ranges participating in those rings.
* Add individual ranges for leftover shards. These are live shards that are not
  part of the MaxN rings. They might form partial rings and, if extra shards
  come alive again, full rings.
* Report shards which are missing completely and mark those as having a count
  of 0. These are shard ranges that are in the map but for which no live
  copies were encountered. If any of these were to come back alive, they might
  complete one or more of the partial rings from the previous step or form new
  rings.
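The ring-peeling step can be sketched as below. This is a simplified, greedy illustration, not the mem3 implementation: a greedy chain can miss rings a smarter search would find, and the 32-bit range boundaries are only the conventional CouchDB shard-range shape.

```python
from collections import Counter

FULL_BEGIN, FULL_END = 0, 0xFFFFFFFF

def ring_from(copies, start):
    # Build a chain of ranges start -> ... -> FULL_END, greedily picking any
    # available range that begins where the previous one ended (+1).
    path, pos = [], start
    while pos <= FULL_END:
        nxt = next((r for r in copies if r[0] == pos), None)
        if nxt is None:
            return None
        path.append(nxt)
        if nxt[1] == FULL_END:
            return path
        pos = nxt[1] + 1
    return None

def count_rings(live_shards):
    # live_shards: list of (begin, end) ranges, one entry per live shard copy.
    # Peel off complete rings covering [FULL_BEGIN, FULL_END]; whatever copies
    # remain are the leftover (partial-ring) shards.
    copies = Counter(live_shards)
    rings = 0
    while True:
        ring = ring_from(copies, FULL_BEGIN)
        if ring is None:
            return rings, copies
        for r in ring:
            copies[r] -= 1
            if copies[r] == 0:
                del copies[r]
        rings += 1
```

With three unsplit full-range copies this reports three rings and no leftovers; a lone half-range copy yields zero rings with that copy reported as a leftover, matching the "partial ring" case above.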
Add split shard handling
In case of split shards the range-based shard count matching doesn't work
anymore. Instead, use the new `mem3_util:calculate_max_n/1` function to check
the maximum effective N for a given set (live, safe) of db shards.
This commit works only with the shard split branch of CouchDB.