| Commit message | Author | Age | Files | Lines |
This enables the container environment to automatically default to
SpiderMonkey 60, which is the version we want to use for the
Debian Buster OS on which the container image is based.
Also a drive-by fix to note that Elixir is now required.
It'd be nice if there were a way for Erlang LS to merge config files, as
some of the settings are likely user-specific, while others (like this one)
we want to provide as defaults out of the box.
This container is sufficient to build CouchDB 3.x from source using
./configure --dev --spidermonkey-version 60
and run the full test suite to completion.
Previously, if an error was thrown in a `with_ddoc_proc/2` callback, the
process was still returned to the process pool in the `after` clause. However,
in some cases, for example when processing a _list response, the process could
end up stuck in a bad state, such that it could not be reused anymore.
A subsequent user of that couch_js process would then throw an error and crash.
Fixes #2962
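The pattern behind this fix can be sketched in Python (a toy stand-in, not the actual Erlang couch_js pool; all names here are illustrative): a worker that sees an error is discarded rather than returned to the pool.

```python
import contextlib


class ProcPool:
    """Toy pool of reusable workers (hypothetical stand-in for the
    couch_js OS-process pool)."""

    def __init__(self, workers):
        self.idle = list(workers)

    @contextlib.contextmanager
    def with_proc(self):
        proc = self.idle.pop()
        try:
            yield proc
        except Exception:
            # The worker may be stuck in a bad state: drop it instead of
            # returning it to the pool, so later users never see it.
            raise
        else:
            # Only workers that finished cleanly are recycled.
            self.idle.append(proc)


pool = ProcPool(["w1", "w2"])
try:
    with pool.with_proc() as p:
        raise RuntimeError("callback failed")
except RuntimeError:
    pass
assert pool.idle == ["w1"]  # the failed worker was not returned
```

The key design point is that recycling happens in the `else` branch, never unconditionally in a `finally`/`after` clause.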
All endpoints except _session support gzip-encoded requests, and there is no
practical reason for that exception.
This commit enables gzip decoding of compressed requests to _session.
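The server-side behavior can be illustrated with a small Python sketch (the helper name `read_body` is hypothetical; CouchDB's actual handling lives in its Erlang HTTP layer):

```python
import gzip

def read_body(headers, raw):
    """Decompress a request body when Content-Encoding: gzip is set,
    sketching what the server now also does for _session requests."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        return gzip.decompress(raw)
    return raw

# A client posts gzip-compressed session credentials:
compressed = gzip.compress(b'{"name":"admin","password":"secret"}')
body = read_body({"Content-Encoding": "gzip"}, compressed)
assert body == b'{"name":"admin","password":"secret"}'
```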
Enable a build-time configurable monitor for custodian and remove custom
Sensu events.
Closes #2906
* Added a suffix with the (static) compiled-in version number to the first line of couchjs output
* Update rebar.config.script
* In couchjs -h, replaced the link to Jira with a link to GitHub
Co-authored-by: simon.klassen <simon.klassen>
Co-authored-by: Jan Lehnardt <jan@apache.org>
Co-authored-by: Simon Klassen <6997477+sklassen@users.noreply.github.com>
Merge custodian
Remove Cloudant references
Report detailed missing shard ranges
Previously we relied on finding how many possible (max) rings could be obtained
from the whole range.
The current approach is to apply some heuristics to report details across the
individual ranges. The algorithm is roughly as follows:
* Find the maximum number of rings that can be obtained (MaxN)
* Assign MaxN to all the ranges participating in those rings
* Add individual ranges for leftover shards. These are live shards that are not
  part of the MaxN rings. They might form partial rings and, if extra shards
  were to come alive again, full rings.
* Report shards which are missing completely and mark them as having a count
  of 0. These are shard ranges that are in the map but for which no live
  copies were encountered. If any of these were to come back alive, they might
  complete one or more of the partial rings from the previous step or form new
  rings.
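The steps above can be sketched in Python (a simplified model, not custodian's Erlang code; it assumes a ring is one live copy of every range in the map, and all names are illustrative):

```python
from collections import Counter

def report(range_map, live_shards):
    """Sketch of the per-range reporting heuristic: compute MaxN, then
    report leftover copies and completely missing ranges."""
    counts = Counter(live_shards)
    # MaxN: the number of complete rings is limited by the
    # least-replicated range in the map.
    max_n = min((counts.get(r, 0) for r in range_map), default=0)
    result = {}
    for r in range_map:
        live = counts.get(r, 0)
        if live == 0:
            result[r] = 0      # missing completely: no live copies found
        else:
            # MaxN rings, plus any leftover copies beyond them
            result[r] = max(live, max_n)
    return max_n, result

ranges = ["00-7f", "80-ff"]
live = ["00-7f", "00-7f", "80-ff"]   # one full ring plus a leftover copy
max_n, per_range = report(ranges, live)
assert max_n == 1
assert per_range == {"00-7f": 2, "80-ff": 1}
```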
Add split shard handling
|
| |/
| |
| |
| |
| |
| |
| |
| | |
In case of split shards, the range-based shard count matching doesn't work
anymore. Instead, use the new `mem3_util:calculate_max_n/1` function to check
the maximum effective N for a given set (live, safe) of db shards.
This commit works only with the shard split branch of CouchDB.
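A rough Python model of what a max-effective-N check does (illustrative only, not mem3_util's actual implementation; real CouchDB ranges span 0..2^32-1, shrunk here to 0..15 for readability): the effective N is the smallest number of shard copies covering any point of the key space, which handles split shards that plain range counting cannot.

```python
def calculate_max_n(shards, space=(0, 15)):
    """Sketch: maximum effective N = minimum, over every point of the
    key space, of the number of shard copies covering that point."""
    if not shards:
        return 0
    lo, hi = space
    return min(
        sum(1 for b, e in shards if b <= p <= e)
        for p in range(lo, hi + 1)
    )

# Two full copies of 0-15 plus one copy split into 0-7 and 8-15:
# every point is covered three times, so the effective N is 3,
# even though no single range appears three times.
shards = [(0, 15), (0, 15), (0, 7), (8, 15)]
assert calculate_max_n(shards) == 3
```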
Update handle_config_terminate API
COUCHDB-3102
Update to use pluggable storage engine APIs
COUCHDB-3287
Fix 'handle_config_terminate/3'
45855 dbnext
BugzID: 45855
BugzID: 45855
Use warning level for non-critical cases
n=2 or n>N cases are not "critical"; that is, they don't require immediate
operator intervention. Custodian should send alerts that reflect the true
urgency of the situation in order to reduce alert fatigue.
BugzID: 31759
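The severity policy can be sketched as a small Python helper (hypothetical names and thresholds; the source only states that n=2 and n>N are downgraded from critical, so the treatment of the remaining cases is an assumption):

```python
import logging

def alert_level(n, expected_n):
    """Sketch of the alert-severity policy: only genuinely urgent
    under-replication is critical; n == 2 or n > N is just a warning.
    (Assumption: n < 2 remains critical, and n == expected_n is healthy.)"""
    if n < 2:
        return logging.CRITICAL   # at most one live copy: act now
    if n == 2 or n > expected_n:
        return logging.WARNING    # degraded or over-replicated, not urgent
    return logging.INFO           # healthy

assert alert_level(1, 3) == logging.CRITICAL
assert alert_level(2, 3) == logging.WARNING
assert alert_level(4, 3) == logging.WARNING
assert alert_level(3, 3) == logging.INFO
```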
Remove bacon from the list of system databases to check
The bacon db is only installed on multitenant clusters, so
continuing to check for its existence on all clusters only
clutters the alerts. Bacon's performance on MT clusters is
monitored by Sensu.
BugzID: 28630
Account for true maintenance mode
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|
| | | |
|