Commit message | Author | Age | Files | Lines
* fix badmatch when fetching latest revfix/3362/latest-conflictsJan Lehnardt2021-03-141-1/+7
| | | | Closes #3362
* chore: update dependency pointers to release tagsJan Lehnardt2021-03-142-5/+2
|
* feat: somewhat hacky version detectionJan Lehnardt2021-03-146-17/+24
|
* feat: update deps to support otp23 in a full buildJan Lehnardt2021-03-132-3/+7
|
* feat: work around get_stacktrace deprecation/removalJan Lehnardt2021-03-1315-58/+75
| | | | | | | | | | | This patch introduces a macro and inserts it everywhere we catch errors and then generate a stacktrace. So far the only thing that is a little bit ugly is that in two places I had to add a header include dependency on couch_db.erl where those modules didn’t have any ties to couchdb/* before, alas. I’d be willing to duplicate the macros in those modules if we don’t want the include dependency.
* feat(couchjs): add support for SpiderMonkey 86feat/3.x/sm-86Jan Lehnardt2021-03-136-4/+827
|
* [fixup] remove extra blank lineNick Vatamaniuc2021-03-111-1/+0
|
* [fixup] Use =< when clearing 0 entries from priority and usage tablesNick Vatamaniuc2021-03-111-4/+1
|
* Fair Share Replication Scheduler ImplementationNick Vatamaniuc2021-03-114-77/+1006
| Fair share replication scheduler allows configuring job priorities per-replicator db.
|
| Previously, jobs from all the replication dbs would be added to the scheduler and run in round-robin order. This update makes it possible to specify the relative priority of jobs from different databases. For example, there could be low, high and default priority _replicator dbs.
|
| The original algorithm comes from the [A Fair Share Scheduler](https://proteusmaster.urcf.drexel.edu/urcfwiki/images/KayLauderFairShare.pdf "Fair Share Scheduler") paper by Judy Kay and Piers Lauder. A summary of how the algorithm works is included in the top-level comment in the couch_replicator_share module.
|
| There is minimal modification to the main scheduler logic. Besides the per-cycle share accounting logic, the other changes are:
|
| * Running and stopping candidates are now picked based on the priority first, and then on their last_started timestamp.
| * When jobs finish executing mid-cycle, their charges are accounted for. That holds for jobs which terminate normally, are removed by the user, or crash.
|
| Other interesting aspects are the interaction with the error back-off mechanism and how one-shot replications are treated:
|
| * The exponential error back-off mechanism is unaltered and takes precedence over the priority values. That means unhealthy jobs are rejected and "penalized" before the priority value is even looked at.
| * One-shot replications, once started, are not stopped during each scheduling cycle unless the operator manually adjusts the `max_jobs` parameter. That behavior is necessary to preserve the "snapshot" semantics and is retained in this update.
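The selection rule described in the commit (priority derived from per-db usage and shares, ties broken by last_started) can be sketched outside Erlang. This is a deliberately simplified toy, not couch_replicator_share's API; the names `charge`, `priority`, `pick_to_run`, and the default of 100 shares are assumptions for illustration only.

```python
class FairShare:
    """Toy sketch of per-db fair share job selection (illustrative only)."""

    DEFAULT_SHARES = 100  # assumed default, not taken from the commit

    def __init__(self, shares=None):
        self.shares = dict(shares or {})  # db name -> configured shares
        self.usage = {}                   # db name -> charges this cycle

    def charge(self, db, seconds):
        # Jobs that ran (or crashed) mid-cycle are charged for their runtime
        self.usage[db] = self.usage.get(db, 0.0) + seconds

    def priority(self, db):
        # Higher usage relative to configured shares -> larger (worse) value
        return self.usage.get(db, 0.0) / self.shares.get(db, self.DEFAULT_SHARES)

    def pick_to_run(self, jobs, n):
        # jobs: list of (db, last_started); best priority first,
        # ties broken by the oldest last_started timestamp
        ranked = sorted(jobs, key=lambda j: (self.priority(j[0]), j[1]))
        return ranked[:n]
```

With equal charges, a db holding more shares sorts ahead of one holding fewer, which is the behavior the commit describes for low/high/default priority _replicator dbs.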
* Move replicator #job record to couch_replicator.hrlNick Vatamaniuc2021-03-113-16/+16
| | | | | | This is needed to prepare for the Fair Share scheduler feature since both the scheduler and the fair share module will end up referencing the #job record.
* Merge pull request #3409 from apache/fix-get-peer-in-chttpd_externalEric Avdey2021-03-092-1/+122
|\ | | | | Use stored peer when available in json_req_obj
| * Use stored peer when available in json_req_objfix-get-peer-in-chttpd_externalEric Avdey2021-03-092-1/+122
|/
* Ignore unchecked JWT claimsJay Doane2021-03-011-10/+26
| | | | | | | | | | | | | | | | Previously, if a JWT claim was present, it was validated regardless of whether it was required. However, according to the spec [1]: "all claims that are not understood by implementations MUST be ignored" which we interpret to mean that we should not attempt to validate claims we don't require. With this change, only claims listed in required checks are validated. [1] https://tools.ietf.org/html/rfc7519#section-4
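The behavior change — validate only what is required and ignore everything else — can be illustrated with a small sketch. The `required_checks` mapping of claim name to predicate is a hypothetical shape, not jwtf's actual API.

```python
def validate_claims(claims, required_checks):
    """Validate only the claims listed in required_checks.

    Any other claim present in the token is ignored, following
    RFC 7519 section 4 ("claims that are not understood ...
    MUST be ignored").
    """
    for name, check in required_checks.items():
        if name not in claims:
            raise ValueError("missing required claim: %s" % name)
        if not check(claims[name]):
            raise ValueError("invalid claim: %s" % name)
    return True
```

Under the old behavior a malformed but unrequired claim (say, a garbage `exp`) would have been validated and rejected; here it is simply ignored.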
* Configure sensitive config values for redactionJay Doane2021-02-231-0/+4
| | | | | | | This defines a configuration file which specifies sections and fields for config values that are redacted from logs. Specifically, all values from the "admins" section and the value of "password" in the "replicator" section are redacted.
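A sketch of what such redaction might look like. The data shape below is assumed for illustration; the commit's actual configuration file format is not reproduced here.

```python
# section -> set of redacted keys; None means every key in the section
SENSITIVE = {
    "admins": None,            # all values in [admins] are redacted
    "replicator": {"password"},  # only "password" in [replicator]
}

def redact(section, key, value):
    """Return a placeholder for values configured as sensitive."""
    if section in SENSITIVE:
        keys = SENSITIVE[section]
        if keys is None or key in keys:
            return "****"
    return value
```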
* Merge pull request #3374 from apache/remove-couch_httpd_externalEric Avdey2021-02-172-147/+1
|\ | | | | Remove outdated couch_httpd_external module
| * Remove outdated couch_httpd_external moduleremove-couch_httpd_externalEric Avdey2021-02-172-147/+1
|/
* Merge pull request #3373 from apache/3087-read-body-on-post-to-changesEric Avdey2021-02-173-2/+66
|\ | | | | Read and validate JSON payload on POST to _changes
| * Read and validate JSON payload on POST to _changes3087-read-body-on-post-to-changesEric Avdey2021-02-172-1/+65
| |
| * Use updated json_req_obj function in changes custom filterEric Avdey2021-02-171-1/+1
|/
* Include necessary dependency in jwtf keystore test setup & teardownJay Doane2021-02-161-2/+2
| | | | | The config application depends on couch_log, so include it when setting up and tearing down tests.
* Merge pull request #3370 from apache/couch_server_system_aggregateRobert Newson2021-02-152-2/+12
|\ | | | | Add "couch_server" aggregate to _system output
| * Add "couch_server" aggregate to _system outputcouch_server_system_aggregateRobert Newson2021-02-152-2/+12
|/ | | | | | This helps ease transition from singleton couch_server to multiple. The "couch_server" message queue is simply the sum of the couch_server_X message queues.
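The aggregation itself is a simple sum; a sketch of the idea (function and key names are illustrative, not the _system endpoint's implementation):

```python
def add_couch_server_aggregate(message_queues):
    """Sum the per-shard couch_server_N message queue lengths into one
    'couch_server' entry, mimicking the old singleton metric."""
    out = dict(message_queues)
    out["couch_server"] = sum(
        length for name, length in message_queues.items()
        if name.startswith("couch_server_")
    )
    return out
```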
* Merge pull request #3368 from apache/couch_server_config_changeRobert Newson2021-02-151-4/+10
|\ | | | | Preserve max_dbs_open division during config change
| * Preserve max_dbs_open division during config changecouch_server_config_changeRobert Newson2021-02-151-4/+10
|/ | | | And prevent max_dbs_open going below 1.
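The invariant can be sketched as follows (names assumed): the global limit is divided across the couch_server shards, clamped so that a config change can never push the per-shard limit below 1.

```python
def per_server_max_dbs_open(max_dbs_open, num_couch_servers):
    # Divide the global limit across the sharded couch_servers; never
    # drop below 1, even if the operator sets max_dbs_open smaller
    # than the number of shards.
    return max(1, max_dbs_open // num_couch_servers)
```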
* Merge pull request #3366 from apache/couch_server_shardingRobert Newson2021-02-1212-117/+201
|\ | | | | Couch server sharding
| * Shard couch_server for performanceRobert Newson2021-02-1210-109/+188
| |
| * encapsulate db_updated call in a functionRobert Newson2021-02-123-9/+14
|/
* Merge pull request #3361 from apache/active-tasks-process-statusRobert Newson2021-02-091-1/+10
|\ | | | | Show process status in active_tasks
| * Show process status in active_tasksactive-tasks-process-statusRobert Newson2021-02-091-1/+10
|/ | | | | This allows users to verify that compaction processes are suspended outside of any configured strict_window.
* fix: finish_cluster failure due to missing uuidSteven Tang2021-02-041-0/+3
| | | | Resolves #2858
* Fix PUT of multipart/related attachments support for Transfer-Encoding: chunked (#3340)Bessenyei Balázs Donát2021-02-023-2/+22
| | | | | | | Transfer-Encoding: chunked causes the server to wait indefinitely and then issue a 500 error when the client finally hangs up, when PUTting a multipart/related document with attachments. This commit fixes that issue by adding proper handling for chunked multipart/related requests.
* Set a finite default for max_attachment_size (#3347)Bessenyei Balázs Donát2021-02-013-4/+26
| | | | The current default for max_attachment_size is infinity. This commit changes that to 1 gibibyte.
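As a quick check of what the new finite default means (the helper below is hypothetical; CouchDB's actual config handling is not shown):

```python
ONE_GIBIBYTE = 1024 ** 3  # 1,073,741,824 bytes, the new default

def attachment_too_large(size_bytes, max_size=ONE_GIBIBYTE):
    """With the old default of "infinity" nothing was ever rejected;
    with a finite default, oversized attachments are refused."""
    if max_size == "infinity":
        return False
    return size_bytes > max_size
```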
* Simplify and speedup dev node startup (#3337)Adam Kocoloski2021-01-216-154/+68
| * Simplify and speedup dev node startup
|
|   This patch introduces an escript that generates an Erlang .boot script to start CouchDB using the in-place .beam files produced by the compile phase of the build. This allows us to radically simplify the boot process, as Erlang computes the optimal order for loading the necessary modules.
|
|   In addition to the simplification, this approach offers a significant speedup when working inside a container environment. In my test with the stock .devcontainer it reduces startup time from about 75 seconds down to under 5 seconds.
|
| * Rename boot_node to monitor_parent
| * Add formatting suggestions from python-black
|
| Co-authored-by: Paul J. Davis <paul.joseph.davis@gmail.com>
* Add a .devcontainer configuration for 3.x (#3336)Adam Kocoloski2021-01-195-1/+59
| This PR adds a Dockerfile and associated configuration to enable developers to quickly provision an environment with all dependencies installed to work on CouchDB 3.x.
|
| The container configuration also installs the Erlang Language Server extension. That extension needs a minimal configuration file in the root of the project in order to find the include files, so I've added that as well. We could likely iterate and enhance that configuration file further with linters, dialyzer configurations, etc.
|
| Finally, it allows a developer to set the SpiderMonkey version in an $SM_VSN environment variable so that we can do a better job of preserving the simplicity of `./configure; make` inside the container.
* Do not return broken processes to the query process poolNick Vatamaniuc2021-01-152-4/+69
| | | | | | | | | | | Previously, if an error was thrown in a `with_ddoc_proc/2` callback, the process was still returned to the process pool in the `after` clause. However, in some cases, for example when processing a _list response, the process might end up stuck in a bad state, such that it could not be re-used anymore. In such a case, a subsequent user of that couch_js process would end up throwing an error and crashing. Fixes #2962
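The shape of the fix, sketched generically: return a process to the pool only on success, and destroy it on error instead of recycling it. The pool API here is invented for illustration; the real change is in CouchDB's Erlang query server code.

```python
def with_proc(pool, fun):
    """Run fun with a pooled process; only healthy processes go back."""
    proc = pool.checkout()
    try:
        result = fun(proc)
    except Exception:
        # The process may be wedged mid-protocol (e.g. a partial _list
        # response); destroy it rather than handing it to the next caller.
        pool.destroy(proc)
        raise
    pool.checkin(proc)
    return result
```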
* Allow gzipped requests to _session (#3322)Bessenyei Balázs Donát2021-01-153-3/+14
| | | | | All endpoints but _session support gzip encoding, and there is no practical reason for that exception. This commit enables gzip decoding of compressed requests to _session.
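Accepting a gzipped request body is a one-step decode; a generic sketch (not chttpd's actual code path):

```python
import gzip

def decode_body(headers, body):
    """Decompress the request body when Content-Encoding: gzip is set."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        return gzip.decompress(body)
    return body
```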
* Add to credo ignores and gitignore new file_system dependencyAlessio Biancalana2021-01-042-0/+2
|
* Upgrade Credo to 1.5.4Alessio Biancalana2021-01-042-3/+4
|
* Goodbye 2020. Hello 2021. YES. (#3317)Joan Touzet2021-01-021-1/+1
|
* treat 408 as a retryable error condition (#3303) (#3307)Robert Newson2020-12-211-18/+18
|
* Support pluggable custodian monitorJay Doane2020-12-156-45/+148
| | | | | Enable build time configurable monitor for custodian and remove custom sensu events.
* 2906 couchjs sm version (#2911) (#3297)Joan Touzet2020-12-144-4/+4
| Closes #2906
|
| * Added a suffix to the first line of couchjs with the (static) version number compiled
| * Update rebar.config.script
| * In couchjs -h replaced the link to jira with a link to github
|
| Co-authored-by: simon.klassen <simon.klassen>
| Co-authored-by: Jan Lehnardt <jan@apache.org>
| Co-authored-by: Simon Klassen <6997477+sklassen@users.noreply.github.com>
* Merge pull request #3296 from apache/custodian-mergeJay Doane2020-12-1411-0/+858
|\ | | | | Merge custodian
| * Build custodian and include in releasescustodian-mergeJay Doane2020-12-132-0/+3
| |
| * Update license and READMEJay Doane2020-12-139-9/+89
| | | | | | | | Remove Cloudant references
| * Merge remote-tracking branch 'custodian/master' into custodian-mergeJay Doane2020-12-139-0/+775
| |\ |/ /
| * Merge pull request #26 from cloudant/more-detailed-ranges-reportNick Vatamaniuc2019-04-111-7/+118
| |\ | | | | | | Report detailed missing shard ranges
| | * Report detailed missing shard rangesNick Vatamaniuc2019-04-091-7/+118
| |/ Previously we relied on finding how many possible (max) rings could be obtained from the whole range. The current approach is to apply some heuristics to report details across the individual ranges. The algorithm is roughly as follows:
| |
| | * Find out the max number of rings that can be obtained (MaxN)
| | * Assign MaxN to all those ranges
| | * Add individual ranges for leftover shards. These are alive shards that are not part of the MaxN rings. These might form partial rings and, if extra shards were to come alive again, could form full rings.
| | * Report shards which are missing completely and mark those as having a count of 0. These are shard ranges that are in the map but for which no live copies were encountered. If any of these were to come back alive, they might complete one or more of the partial rings from the previous step or form new rings.
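A much-simplified sketch of the reporting classification. Real ring construction over shard ranges is considerably more involved; here the live copy count per range is assumed to be already known, and the function only shows the three reporting buckets the commit describes.

```python
def report_ranges(expected_ranges, live_copies, max_n):
    """Classify each expected shard range by its live copy count.

    - ranges with >= max_n copies belong to the complete rings
    - ranges with fewer copies are leftovers / partial rings
    - ranges with no live copies at all are reported with a count of 0
    """
    report = {}
    for rng in expected_ranges:
        n = live_copies.get(rng, 0)
        report[rng] = max_n if n >= max_n else n
    return report
```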
| * Merge pull request #25 from cloudant/shard-splitNick Vatamaniuc2019-04-031-22/+13
| |\ | | | | | | Add split shard handling
| | * Add split shard handlingNick Vatamaniuc2019-04-031-22/+13
| |/ | | | | | | | | | | | | In case of split shards the range-based shard count matching doesn't work anymore. Instead, use the new `mem3_util:calculate_max_n/1` function to check the maximum effective N for a given set (live, safe) of db shards. This commit works only with the shard split branch of CouchDB.
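The idea behind an effective-N calculation can be sketched greedily: count how many complete rings can be assembled from the live ranges, so that split (narrower) shards still chain together into full coverage. This is a toy approximation on a tiny 0..15 keyspace, not `mem3_util:calculate_max_n/1` itself, and the greedy choice can undercount in adversarial layouts.

```python
def calculate_max_n(ranges, ring_start=0, ring_end=15):
    """Greedy toy version: how many disjoint full rings [ring_start,
    ring_end] can be assembled from the given (begin, end) live shard
    ranges?  Split shards count because adjacent narrower ranges chain
    together into a full covering."""
    avail = list(ranges)
    rings = 0
    while True:
        pos, used = ring_start, []
        while pos <= ring_end:
            # find a live range starting exactly where the last one ended
            step = next((r for r in avail if r[0] == pos), None)
            if step is None:
                return rings  # cannot complete another ring
            used.append(step)
            pos = step[1] + 1
        for r in used:
            avail.remove(r)  # consume this ring's shards
        rings += 1
```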