| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
|
|
|
|
|
|
|
| |
To make this work, I had to change the default -name from the old
couchdb@localhost to couchdb@127.0.0.1. This matches the advice
we already had in vm.args to use FQDN or IP address, anyway.
Once this merges I'll look at doing a Windows version, if possible.
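For illustration, the resulting `vm.args` entry would look like this (a sketch of just the changed setting, not the full file):
```
-name couchdb@127.0.0.1
```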
|
|
|
|
| |
We fix syntax issues that made the tests incompatible with Python 3,
while also ensuring that they still run under Python 2.
|
|
|
|
|
|
|
|
|
|
| |
Previously attachment uploading from a PSE to a non-PSE node would
fail because the attachment streaming API changed between versions.
This commit handles downgrading attachment streams from PSE nodes so that
non-PSE nodes can write them.
COUCHDB-3288
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This change is to account for differences in the #db record when a
cluster is operating in a mixed version state (i.e., when running a
rolling reboot to upgrade).
There are only a few operations that are valid on #db records that are
shared between nodes, so rather than attempt to map the entire API
between the old and new records we're limiting it to just the required
API calls.
COUCHDB-3288
|
|
|
|
|
|
|
|
| |
A mixed cluster (i.e., during a rolling reboot) will want to include
this commit in a release before deploying PSE code to avoid spurious
errors during the upgrade.
COUCHDB-3288
|
|
|
|
|
|
|
|
| |
This completes the removal of public access to the db record from the
couch application. The large majority of this change removes direct
access to the #db.name, #db.main_pid, and #db.update_seq fields.
COUCHDB-3288
|
|
|
|
| |
COUCHDB-3288
|
|
|
|
| |
COUCHDB-3288
|
|
|
|
|
|
|
|
|
| |
This removes introspection of the #db record by couch_server. While it's
required for the pluggable storage engine upgrade, it's also nice to
remove the hacky overloading of #db record fields for couch_server
logic.
COUCHDB-3288
|
|
|
|
|
|
|
|
|
| |
These functions were originally implemented in fabric_rpc.erl where they
really didn't belong. Moving them to couch_db.erl allows us to keep the
unit tests intact rather than just removing them now that the #db record
is being made private.
COUCHDB-3288
|
|
|
|
|
|
|
|
| |
Since we're getting ready to add API functions to couch_db.erl, now is a
good time to clean up the exports list so that changes are more easily
tracked.
COUCHDB-3288
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously an individual failed request would be tried 10 times in a row with
an exponential backoff starting at 0.25 seconds. So the intervals in seconds
would be:
`0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128`
for a total of about 250 seconds (or about 4 minutes). This made sense before
the scheduling replicator, because if a replication job had crashed in the
startup phase enough times it would not be retried anymore. With a scheduling
replicator, it makes more sense to stop the whole task and let the scheduling
replicator retry later. `retries_per_request` then becomes something used
mainly for short intermittent network issues.
The new retry schedule is
`0.25, 0.5, 1, 2, 4`
or about 8 seconds in total.
An additional benefit is that when the job is stopped sooner, the user can
find out about the problem earlier from the _scheduler/docs and
_scheduler/jobs status endpoints and can rectify it. Otherwise a single
request retrying for 4 minutes would be shown there as if the job were
healthy and running.
Fixes #810
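The old and new schedules above can be reproduced with a short sketch (plain Python for illustration, not the replicator's actual Erlang code):

```python
def backoff_intervals(base, retries):
    """Exponential backoff: base, base*2, base*4, ... for `retries` attempts."""
    return [base * 2 ** n for n in range(retries)]

# Old behavior: retries_per_request effectively retried 10 times.
old = backoff_intervals(0.25, 10)
assert sum(old) == 255.75          # ~250 seconds, about 4 minutes

# New behavior: 5 attempts, suited to short intermittent network issues.
new = backoff_intervals(0.25, 5)
print(new)                          # [0.25, 0.5, 1.0, 2.0, 4.0]
assert sum(new) == 7.75             # about 8 seconds
```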
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead, wait 15 seconds after the last cluster configuration change; if
there were no more changes to the cluster, stop rexi buffers and servers for
nodes which are no longer connected.
Extract the cluster stability check from `couch_replicator_clustering` and
move it to the `mem3_cluster` module, so both the replicator and rexi can
reuse it. Users of `mem3_cluster` implement a behavior callback API and then
spawn_link the cluster monitor with their specific period values.
This also simplifies the logic in rexi_server_mon as it no longer needs to
handle `{nodeup, _}` and `{nodedown, _}` messages. On any cluster membership
change it will get a `cluster_unstable` message. It then immediately spawns
new servers and buffers if needed. Only when the cluster has stabilized will
it stop servers and buffers for disconnected nodes. The idea is to allow for
short periods of disconnects between nodes before throwing away all the
buffered messages.
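The stability check amounts to a debounce on membership changes. A minimal sketch of that idea in Python (the real implementation is the Erlang `mem3_cluster` behavior; the class and method names here are illustrative assumptions):

```python
import time

class ClusterMonitor:
    """Debounce sketch: membership changes mark the cluster unstable;
    it is considered stable only after a quiet period (e.g. 15 seconds)."""

    def __init__(self, period=15.0):
        self.period = period
        self.last_change = time.monotonic()

    def membership_changed(self):
        # Any nodeup/nodedown resets the quiet-period timer.
        self.last_change = time.monotonic()

    def is_stable(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_change >= self.period

mon = ClusterMonitor(period=15.0)
mon.membership_changed()
assert not mon.is_stable()                      # a change just happened
assert mon.is_stable(now=mon.last_change + 16)  # quiet for more than 15s
```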
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Use snappy's `uncompressed_length` and the external binary format's binary
spec to get the uncompressed size:
http://erlang.org/doc/apps/erts/erl_ext_dist.html
`erlang:external_size` is a function provided since R16B3, so use it without
the `try ... catch` fallback. Also make sure to use `[{minor_version, 1}]`
to match what the `?term_to_bin` macro does.
Fixes #835
|
|
|
|
|
|
|
|
|
| |
To make it easier to distinguish between a selector in _find and a
selector in _index, rename the selector in _index to
partialfilterselector. The new name also gives a bit more of an
explanation of what this selector does.
|
|
|
|
|
| |
* Run mango tests with make check
* Update README-DEV.rst
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
JSON index selection in Mango previously deemed an
index to be usable if a range scan on the first component
of its compound key could be generated from the query selector.
For instance, given an index:
[A, B]
is_usable would return true for the selector:
{"A": "foo"}
This is incorrect because JSON indexes only index documents that contain all
the fields in the index; null values are ok, but the field must be present.
That means that for the above selector, the index would implicitly include
only documents where B exists, missing documents where {"A": "foo"} matched
but field B was not present.
This commit changes is_usable so that it only returns true if all the keys
in the index are required to exist by the selector. This means that in the
worst case (e.g., none of the predicates can be used to generate a range
query) we should end up performing a full index scan, but this is still
more efficient than a full database scan.
We leave the generation of the optimal range for a given index as a separate
exercise - currently this happens after index selection.
Potentially we'd want to score indexes during index selection based on their
ability to restrict the result set, etc.
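The new rule can be sketched in a few lines of Python (a simplification for illustration, not Mango's actual Erlang code — in particular, real selector analysis also walks `$and`, `$exists`, and other operators rather than just top-level keys):

```python
def fields_required(selector):
    """Simplified: treat every top-level key in the selector as a field
    the document must contain (real Mango walks nested operators too)."""
    return set(selector)

def is_usable(index_fields, selector):
    # Usable only if the selector constrains every field of the compound
    # key, since a JSON index skips documents missing any indexed field.
    return set(index_fields) <= fields_required(selector)

index = ["A", "B"]
assert not is_usable(index, {"A": "foo"})   # old behavior wrongly said True
assert is_usable(index, {"A": "foo", "B": {"$exists": True}})
```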
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously users had to URL-encode replication IDs when using the
`_scheduler/jobs/<job_id>` endpoint because Mochiweb incorrectly decoded the
`+` character in the URL path. So users were forced to encode it so that the
replicator would correctly receive a `+` after Mochiweb parsing.
`+` is decoded as ` ` (space) probably because in query strings that's a
valid application/x-www-form-urlencoded encoding, but that decoding is not
meant for URL paths, only query strings.
Notice RFC 3986 https://tools.ietf.org/html/rfc3986#section-2.2
`+` is a `sub-delim` (term from RFC) and in the path component it can be used
unquoted as a delimiter.
https://tools.ietf.org/html/rfc3986#section-3.3
Indeed, the replication ID is a compound ID and `+` is a valid delimiter
which separates the base part from the extensions.
For more details see also:
https://github.com/perwendel/spark/issues/490
https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.1
Fixes #825
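Python's standard library draws exactly this distinction between path decoding and form decoding, which makes for a handy analogy (this is illustrative, not the Mochiweb code):

```python
from urllib.parse import unquote, unquote_plus

# A compound replication ID using `+` as the base/extension delimiter.
path_segment = "base+ext"

# Correct for URL paths: `+` is a sub-delim and passes through unchanged.
assert unquote(path_segment) == "base+ext"

# Only valid for application/x-www-form-urlencoded query strings, where
# `+` encodes a space -- the bug was applying this decoding to paths.
assert unquote_plus(path_segment) == "base ext"
```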
|
|\
| |
| | |
Don't crash on invalid inline attachments
|
|/ |
|
|
|
|
|
|
|
|
|
|
| |
* Add selector support for json indexes
Adds selector support to json indexes. The selector can be used to
filter what documents are added to the index. When executing a query
the index will only be used if the index is specified in the use_index
field.
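A sketch of what such a filtered index definition and a matching query might look like. Note the field names and their placement below are assumptions for illustration, not the exact API shape:

```python
# Hypothetical _index request body: only documents matching the index's
# selector are added to the index.
index_def = {
    "index": {
        "fields": ["timestamp"],
        "selector": {"type": "order"},   # filters which docs get indexed
    },
    "ddoc": "orders-idx",
    "name": "by-timestamp",
    "type": "json",
}

# Because the index covers only a subset of documents, queries must opt
# in explicitly via use_index:
query = {
    "selector": {"type": "order", "timestamp": {"$gt": "2017-01-01"}},
    "use_index": "orders-idx",
}
```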
|
|
|
|
|
|
|
|
|
|
| |
When a JS test requested a server restart we would wipe the current log
file. This made it hard to debug failing tests when they happen just after
a restart. This change instead opens log files in read/write mode when a
test requests a server restart.
The current behavior for interactive use of `dev/run` will continue to
truncate log files on startup.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
max_document_size currently checks document sizes based on Erlang's external
term size of the jiffy-decoded document body. This makes sense because
that's what's used to store the data on disk and it's what the CouchDB
internals manipulate.
However, the Erlang term size is not always a good approximation of the size
of the JSON-encoded data. Sometimes it can be way off (I've seen 30% off)
and it's hard for users to estimate or check the external term size
beforehand. So, for example, if max_document_size is 1MB, CouchDB might
reject a user's 600KB JSON document because Erlang's external term size of
that document is greater than 1MB.
To fix the issue, provide a module which calculates the encoded size of a
JSON document. The size calculation is also approximate, since there is no
canonical JSON size; it depends on the encoder used.
Issue #659
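The idea can be sketched in Python: measure the size of the JSON encoding itself rather than the in-memory term representation (the actual module is Erlang; this is only an illustration of the approach):

```python
import json

def encoded_json_size(doc):
    """Approximate on-the-wire JSON size in bytes. Still an approximation:
    encoders differ in whitespace, key order, and escaping."""
    return len(json.dumps(doc, separators=(",", ":")).encode("utf-8"))

doc = {"_id": "doc1", "value": "x" * 100}
limit = 1024 * 1024  # e.g. max_document_size = 1MB

# The check now reflects what the user actually uploaded, not the
# (potentially much larger) Erlang external term size.
assert encoded_json_size(doc) < limit
```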
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If the compaction daemon cannot calculate the free space
for a volume, do not crash CouchDB. Instead, log a warning
that free space could not be calculated and continue.
Compaction of the database is not necessarily prevented -
just that the disk space for this specific volume
won't be taken into account when deciding whether
to automatically compact or not.
This is primarily to cope with edge cases arising from
ERL-343, whereby disksup:get_disk_data() returns invalid
paths for volumes containing whitespace.
Fixes #732
|
| |
|
|
|
|
|
|
| |
Fixes a regression where a 500 status code was returned when
no index is available to service a _find query because the
sort order does not match any available indexes.
|
|
|
|
|
|
| |
The assertion functions inherited from unittest
provide clearer errors when tests fail - use these
in preference to plain assert.
|
|
|
|
|
|
| |
Replace use of native assert with unittest.assertX.
This ensures we return descriptive errors when assertions
fail.
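The difference is easy to see directly (a standalone Python sketch, not one of the actual test files):

```python
import unittest

class _Case(unittest.TestCase):
    def runTest(self):  # minimal TestCase so assertX methods are callable
        pass

case = _Case()

# unittest's assertEqual embeds both values in the failure message ...
try:
    case.assertEqual(200, 404)
except AssertionError as err:
    rich_msg = str(err)
assert "200" in rich_msg and "404" in rich_msg  # e.g. "200 != 404"

# ... while a bare assert raises with no information at all.
try:
    assert 200 == 404
except AssertionError as err:
    plain_msg = str(err)
assert plain_msg == ""
```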
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, index selection for a given query
was run twice for each request - once to add
a warning in case a full database scan would be
performed and then again when the query was executed.
This moves the warning generation so that it occurs
at the end of the query processing and we can use
the existing index context to decide whether to
add a warning or not.
Whilst only a minor optimisation (which also assumes
we don't have cached query plans etc), it
at least moves index selection to where you'd expect
it to happen (query planning).
|
|
|
|
|
|
|
|
|
|
|
|
| |
* add operator tests for text indexes
* add operator tests for _all_docs
* add tests for null and range handling
Tests consistent behaviour for handling null values
and range queries between different index types
(_all_docs, json indexes and text indexes).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Currently CouchDB has configurable single-document body size limits, as well
as http request body limits; this commit implements an attachment size
limit. The maximum attachment size can be configured with:
```
[couchdb]
max_attachment_size = Bytes | infinity
```
`infinity` (i.e. no maximum) is the default value, which also preserves the
current behavior.
Fixes #769
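The semantics of the setting can be sketched as follows (a Python illustration of the check; the function name is hypothetical and the real enforcement lives in the Erlang attachment code):

```python
def within_attachment_limit(att_len, max_attachment_size="infinity"):
    """`infinity` (the default) accepts any size, preserving the old
    behavior; otherwise compare against the configured byte count."""
    if max_attachment_size == "infinity":
        return True
    return att_len <= int(max_attachment_size)

assert within_attachment_limit(10 ** 9)           # default: no limit
assert within_attachment_limit(512, "1024")       # under a 1KB limit
assert not within_attachment_limit(2048, "1024")  # over a 1KB limit
```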
|
|
|
|
|
|
|
|
|
| |
Previously only `views` sections could have a `lib` object, but some users
might choose to have a library for filters, for example.
This makes it agree with this section of the wiki:
https://wiki.apache.org/couchdb/CommonJS_Modules
|
|
|
|
| |
Clarify behaviour for null / missing fields. Convert tests
to unittest assertions for clearer errors.
|
| |
|
|\
| |
| | |
include mrview options in _explain result
|
|/
|
|
|
|
|
| |
_explain previously returned the options passed in by the user but
not those modified at execution time by Mango. Now we include
index-specific options (mrargs for map/reduce indexes) in the output,
allowing us to see e.g. when include_docs was used.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Accept an "execution_stats" parameter to _find. If present, return
a new "execution_stats" object in the response which contains
information about the query executed. Currently, this is only
implemented for json/all_docs indexes and contains:
- total keys examined (currently always 0 for json indexes)
- total documents examined (when include_docs=true used)
- total quorum documents examined (when fabric doc lookups used)
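Putting the above together, the request and response shapes look roughly like this (a sketch based on the description above; the exact key names inside `execution_stats` are assumptions for illustration):

```python
# Opting in via the new parameter on a _find request body:
request = {
    "selector": {"type": "order"},
    "execution_stats": True,
}

# The response then carries a stats object alongside the results:
response = {
    "docs": [],
    "execution_stats": {
        "total_keys_examined": 0,         # currently always 0 for json indexes
        "total_docs_examined": 42,        # when include_docs=true is used
        "total_quorum_docs_examined": 0,  # when fabric doc lookups are used
    },
}

assert request["execution_stats"] is True
assert "execution_stats" in response
```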
|
|\
| |
| | |
Add debugging utilities for listing processes
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|