Commit message | Author | Age | Files | Lines

This ensures that admin password hashes are the same on all nodes when
passwords are set directly on each node rather than through the
coordinator node.

The underlying clustered _all_docs call can cause significant extra load
during compaction.

This adds an API call for looking up a single design doc regardless of
whether the database is clustered or not.

This fixes the inability to set keys containing regex symbols.

This enables backwards compatibility with nodes still running the old
version of fabric_rpc when a cluster is upgraded to master. This has no
effect once all nodes are upgraded to the latest version.

Closes #1053

The Makefile target builds a python3 venv at .venv and installs black
if possible. Since black supports only Python 3.6 and up, the check is
skipped on systems with an older Python 3.x.

This commit introduces a new option `snooze_period_ms` (measured in
milliseconds) and deprecates `snooze_period`, which is still supported
for backwards compatibility.

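The deprecation fallback described above can be sketched as follows. This is a toy illustration, not the actual Erlang implementation: the helper name, the plain dict standing in for the config layer, the assumption that the old `snooze_period` was expressed in seconds, and the default value are all hypothetical.

```python
def snooze_period_ms(config):
    """Resolve the snooze period in milliseconds, preferring the new key.

    `config` is a plain dict standing in for the real config section.
    """
    if "snooze_period_ms" in config:
        return int(config["snooze_period_ms"])
    if "snooze_period" in config:
        # Deprecated key, assumed to have been expressed in seconds.
        return int(config["snooze_period"]) * 1000
    return 3000  # hypothetical default
```

Both spellings resolve to the same unit, so existing configurations keep working unchanged.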
This restricts _purge and _purged_infos_limit to server admins, raising
the security level required to run them.
Fixes #1799

It has a fix to revert the user socket buffer size to 8192 and also
allows setting these buffer values directly (not necessarily
via {recbuf, ...}).
Fixes #1810

Warning: 2.19.0 blacklists a series of OTP releases: 21.2, 21.2.1,
21.2.2. This is done via a runtime check of the ssl application version.
The blacklist seems valid as there is a bug which prevents data from
being delivered on TLS sockets. That could affect either the CouchDB
server side (chttpd) or the replication client side (ibrowse).

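As a hedged illustration of setting the buffer value directly, the option could be passed through the server options in the ini config along these lines (the exact section and option placement may vary by release, so treat this as a sketch rather than the documented syntax):

```ini
; hypothetical sketch: set the receive buffer explicitly instead of
; relying on the old hard-coded value
[chttpd]
server_options = [{recbuf, 8192}]
```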
This server admin-only endpoint forces an n-way sync of all shards
across all nodes on which they are hosted.
This can be useful for an administrator adding a new node to the
cluster, after updating _dbs so that the new node hosts an existing db
with content, to force the new node to sync all of that db's shards.
Users may want to bump their `[mem3] sync_concurrency` value to a
larger figure for the duration of the shard sync.
Closes #1807

COUCHDB-3226

There was a subtle bug when opening specific revisions in
fabric_doc_open_revs due to a race condition between updates being
applied across a cluster.

The underlying cause was stemming after a document had been updated
more than revs_limit times, combined with concurrent reads to a node
that had not yet applied the update. To illustrate, let's consider a
document A which has a revision history from `{N, RevN}` to
`{N+1000, RevN+1000}` (assuming revs_limit is the default 1000). From a
single node's perspective, when an update comes in we add the new
revision and stem the oldest one, so the revisions on the node become
`{N+1, RevN+1}` to `{N+1001, RevN+1001}`.

The bug appears when we attempt to open revisions on a different node
that has yet to apply the new update. In this case fabric_doc_open_revs
could be called with `{N+1000, RevN+1000}`. This results in a response
from fabric_doc_open_revs that includes two different `{ok, Doc}`
results instead of the expected one. The reason is that the node that
has applied the update returns revisions `{N+1, RevN+1}` to
`{N+1000, RevN+1000}`, while the node without the update responds with
revisions `{N, RevN}` to `{N+1000, RevN+1000}`.

To rephrase: a node that has applied an update can end up returning a
revision path that contains `revs_limit - 1` revisions, while a node
without the update returns all `revs_limit` revisions. This slight
change in the path prevented the responses from being properly combined
into a single response.

This bug has existed for many years. However, read repair effectively
prevents it from being a significant issue by immediately fixing the
revision history discrepancy. It was discovered due to the recent bug
in read repair during a mixed cluster upgrade to a release including
clustered purge. In that situation we end up crashing the design
document cache, which then leads to all design document requests being
direct reads, which can cause cluster nodes to OOM and die. The
conditions require a significant number of design document edits
coupled with already significant load on those modified design
documents. The most direct example observed was a cluster that had a
significant number of filtered replications in and out of the cluster.

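The path mismatch described above can be sketched with a toy model. This is not the actual fix in fabric_doc_open_revs; the tiny revs_limit, the string revision ids, and the `same_doc` helper are all hypothetical, chosen only to show why comparing whole paths splits one document into two answers while prefix-matching merges them.

```python
REVS_LIMIT = 5  # small stand-in for the default revs_limit of 1000

def rev(i):
    # Hypothetical revision identifiers: revision i is just "r<i>".
    return f"r{i}"

# Node A has applied update 6 and stemmed its oldest revision, so asking
# it for revision 5 yields only REVS_LIMIT - 1 ancestors: r5..r2.
path_updated = [rev(i) for i in range(5, 1, -1)]
# Node B has not applied the update yet: revision 5 still carries the
# full REVS_LIMIT-deep path r5..r1.
path_stale = [rev(i) for i in range(5, 0, -1)]

# Combining responses by comparing whole paths treats these as two
# different {ok, Doc} results:
assert path_updated != path_stale

def same_doc(a, b):
    # Merging works if the shorter (stemmed) path is recognized as a
    # leaf-first prefix of the longer one.
    shorter, longer = sorted((a, b), key=len)
    return longer[:len(shorter)] == shorter

assert same_doc(path_updated, path_stale)
```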
Previously `end_time` was generated by converting the start_time to
universal time, then passing that to `httpd_util:rfc1123_date/1`.
However, `rfc1123_date/1` also translates its argument from local to
UTC time, i.e. it expects its input to be in local time.
Fixes #1841

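The double-conversion bug generalizes to any formatter that does its own local-to-UTC translation. A minimal arithmetic sketch, using a hypothetical UTC+2 local zone and hour values in place of real timestamps:

```python
LOCAL_UTC_OFFSET = 2  # hours; assume a UTC+2 local zone for illustration

def to_universal(local_hour):
    # Convert a local-time hour to UTC.
    return local_hour - LOCAL_UTC_OFFSET

def rfc1123_like(local_hour):
    # Stand-in for httpd_util:rfc1123_date/1: it converts its
    # *local-time* argument to UTC itself before formatting.
    return to_universal(local_hour)

start_local = 12  # noon local time, i.e. 10:00 UTC

# Buggy: converting first, then formatting, shifts the time twice.
assert rfc1123_like(to_universal(start_local)) == 8   # 08:00, wrong
# Correct: pass the local time straight through.
assert rfc1123_like(start_local) == 10                # 10:00 UTC
```

The fix is therefore to hand `rfc1123_date/1` the local time directly and let it do the single conversion.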
Fix a function_clause error on invalid DB security objects when the
request body of the PUT /db/_security endpoint is not valid JSON.
Closes #1384

This avoids needlessly making cross-cluster fabric:update_docs(Db, [], Opts)
calls.

Bump fauxton, docs, version to 2.3.0

The configuration of query servers is done via environment variables.
Calling os:getenv every time we need a new query process is expensive.
Instead we extract all configured query servers from the environment on
`couch_proc_manager` startup and cache them in an ets table.
Fixes #1772

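The scan-once-then-cache pattern can be sketched as below. This is an analogy, not the Erlang/ets implementation; the `COUCHDB_QUERY_SERVER_*` variable naming follows CouchDB's convention, but the helper names and module-level cache are hypothetical.

```python
import os

_QUERY_SERVERS = None  # populated once, on first use

def load_query_servers():
    # Scan the environment a single time for query server definitions,
    # e.g. COUCHDB_QUERY_SERVER_JAVASCRIPT=/path/to/couchjs ...
    servers = {}
    prefix = "COUCHDB_QUERY_SERVER_"
    for key, val in os.environ.items():
        if key.startswith(prefix):
            servers[key[len(prefix):].lower()] = val
    return servers

def get_query_server(lang):
    # Later lookups hit the cache instead of re-reading the environment.
    global _QUERY_SERVERS
    if _QUERY_SERVERS is None:
        _QUERY_SERVERS = load_query_servers()
    return _QUERY_SERVERS.get(lang)
```

The cost of the environment scan is paid once at startup rather than on every new query process.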
Previously the compaction daemon looked for design docs in each shard file.
This worked well for versions < 2.x; however, for clustered databases design
documents will only be found in their respective shards, based on the
document id hashing algorithm. This meant that in a default setup of Q=8
only the views of one shard range, where the _design document lives, would
be compacted. The fix is to use fabric to retrieve all the design documents
for a clustered database.
Issue #1579

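Why a per-shard scan misses most design docs can be sketched with a toy placement function. CouchDB's real algorithm hashes document ids into a ring of shard ranges; the `crc32`-modulo stand-in here is a hypothetical simplification.

```python
from zlib import crc32

Q = 8  # default shard count

def shard_index(doc_id, q=Q):
    # Illustrative stand-in for CouchDB's doc-id hashing: each doc,
    # including a _design doc, lands in exactly one of the q shards.
    return crc32(doc_id.encode()) % q

# A handful of design docs occupy at most that many shard ranges, so
# scanning any single shard file sees few (often none) of them.
ddocs = [f"_design/app{i}" for i in range(3)]
shards_with_ddocs = {shard_index(d) for d in ddocs}
assert len(shards_with_ddocs) <= len(ddocs) < Q
```

Hence the fix: ask fabric for the clustered view of all design docs instead of inspecting shard files individually.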
Check code format before running elixir test suite

* Switch scripts to python3
* Update mango test harness to use venv

Previously the venv setup in the Makefile was ignored, so update the test
runner to use it instead of pestering developers to install those
dependencies by hand.
Issue #1632

Ran 2to3 and fixed a few deprecation warnings.
Issue #1632

Force the couch_replicator_auth_session plugin to refresh the session
periodically. Normally this is not needed, as the session would be
refreshed when requests start failing with a 401 (authentication) or 403
(authorization) error. In some cases, when anonymous writes are allowed
to the database and a VDU function is used to forbid writes based on the
authenticated username, requests with an expired session cookie will not
fail with a 401 and the session will not be refreshed.

The issue is fixed using two approaches:

1. Use the cookie's max-age expiry time to schedule a refresh. To ensure
that time is provided in the cookie, switch the option to be enabled by
default. This handles the issue for endpoints which are updated with
this commit.

2. For endpoints which do not put a max-age time in the cookie, use a
value that's less than CouchDB's default auth timeout. If users changed
their auth timeout value, use VDUs in the pattern described above, and
don't update their endpoints to a version which sends max-age by
default, they can adjust `[replicator] session_refresh_interval_sec` to
their auth timeout minus some small delay.

Of course, refresh based on auth/authz failures still works as before.
Fixes #1607

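The two approaches above amount to a simple scheduling rule, sketched here as a hypothetical helper. The function name, the refresh margin, and the 600-second default timeout are assumptions for illustration, not the plugin's actual code.

```python
DEFAULT_AUTH_TIMEOUT = 600   # seconds; assumed server-side auth timeout
REFRESH_MARGIN = 60          # refresh this long before the cookie expires

def next_refresh_in(max_age=None, session_refresh_interval=None):
    """Seconds until the next proactive session refresh.

    Approach 1: prefer the cookie's Max-Age when the endpoint sends one.
    Approach 2: otherwise fall back to a configured interval kept below
    the (assumed) default auth timeout.
    """
    if max_age is not None:
        return max(max_age - REFRESH_MARGIN, 1)
    if session_refresh_interval is not None:
        return session_refresh_interval
    return DEFAULT_AUTH_TIMEOUT - REFRESH_MARGIN
```

Operators with a non-default auth timeout would tune the fallback interval (the `session_refresh_interval` argument here) to their timeout minus a small delay, mirroring the `[replicator] session_refresh_interval_sec` setting.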
Fix couch_epi typespec for data provider

There were a few problems:
- `module` was renamed to `static_module`
  - https://github.com/apache/couchdb/commit/0fefc859eb9c18120064317da61a30adaeac5f92#diff-d9e3e3c91d4866fe966666619bda7991
- `callback_module` was added
  - https://github.com/apache/couchdb/commit/cf65280466499d652cff1171a2039af49c5677e8#diff-d9e3e3c91d4866fe966666619bda7991
- the data provider specification can include options
  - https://github.com/apache/couchdb/blob/master/src/couch_epi/src/couch_epi_plugin.erl#L143

Do not use [] in feature_flags configuration