| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The Mango test suite previously assumed that the target
CouchDB for testing was at http://127.0.0.1:15984 with
username "testuser" and password "testpass".
It's helpful to be able to override these defaults
so we can test against other environments, including those
that do not support basic authentication. This commit
adds support for overriding the defaults using environment
variables.
Now that we enable tests to be run against remote
clusters, default to n=1 at database creation time
to prevent assertion failures due to eventual
consistency between replicas.
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Split out text index selection tests
* Skip operator tests that do not apply to text indexes
* Only run array length tests against text indexes
* Fix index CRUD tests when text indexes are available
* Use environment variable to switch on text tests
* Fix incorrect text sort assertion in test
* Always use -test in fixture filename
* Fix index selection test compatibility with #816.
* Improve test README
|
|
|
|
| |
Add a test to show the partial_filter_selector functionality will work
with design docs that don't have a selector defined in them by default
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Mango previously constrained range queries against JSON indexes
(map/reduce views) to startkey=[]&endkey=[{}]. In Mango, JSON
index keys are always compound (i.e. always arrays), but this
restriction resulted in Mango failing to match documents where
the indexed value was an object.
For example, an index with keys:
[1],
[2],
[{"foo": 3}]
would be restricted such that only [1] and [2] were returned
if a range query was issued.
On its own, this behaviour isn't necessarily unintuitive, but
it is different from the behaviour of a non-indexed Mango
query, so the query results would change in the presence of an
index.
Additionally, it prevented operators or selectors which explicitly
depend on a full index scan (such as $exists) from returning a
complete result set.
This commit changes the maximum range boundary from {} to a
value that collates higher than any JSON object, so all
array/compound keys will be included.
Note that this uses an invalid UTF-8 character, so we depend
on the view engine not barfing when this is passed as a
parameter. In addition, we can't represent the value in JSON
so we need to substitute it when returning a query plan
in the _explain endpoint.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It seems safe to assume that if a user specifies that
results should be sorted by a field, that field needs to exist
(but could be null) in the results returned.
In #816 we can only use an index if all its columns are
required to exist in the selector, so this improves
compatibility with the old behaviour by allowing sort
fields to be included in the index coverage check for
JSON indexes.
|
|\
| |
| | |
no need to make it look like one needs to create an issue on top of a PR
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
| |
As it turns out, I made a bit of a mistake: I forgot that the old
ddoc_cache implementation had an ets_lru process registered as
ddoc_cache_lru, and these cast messages were causing that process to crash.
If a cluster had enough design document activity and enough nodes, this
would cause nodes with the old ddoc_cache implementation to reboot the
entire VM. This was a cascading failure due to the ets_lru process
restarting frequently enough that it brought down the entire ddoc_cache
application.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
`now/0` is deprecated since Erlang 18.0, and a set of new time-related
functions is available.
Usually `now/0` can be replaced with `os:timestamp/0`, however in some
instances it was used effectively to produce monotonically incrementing values
rather than timestamps, so a new `couch_util:unique_monotonic_integer/0` was
added.
Most functional changes are in the couch_uuid module, where `now/0` was used
both as a timestamp and for uniqueness. To emulate the previous behavior, a
local incrementing clock sequence is used. If `os:timestamp/0` has not advanced
since the last call, the local clock is advanced by 1 microsecond and that is
used to generate the next V1 UUID. As soon as `os:timestamp/0` catches up, the
local sequence is reset to that latest value.
Also, the exported function `utc_random/0` was not used; after updating the
function it is no longer exported.
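For illustration, a minimal sketch of what such a helper could look like on
OTP 18+ (an assumption, not the actual couch_util implementation; older
releases would need a fallback):
```
-module(unique_int_sketch).
-export([unique_monotonic_integer/0]).

%% Sketch only: a strictly increasing positive integer without using now/0,
%% for call sites that needed monotonicity rather than a timestamp.
unique_monotonic_integer() ->
    erlang:unique_integer([monotonic, positive]).
```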
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Make couch_peruser a proper Erlang app
* Start and stop couch_peruser in the test suite
* feat: mango test runner: do not rely on timeout for CouchDB start alone
On slow build nodes, 10 seconds might not be enough of a wait.
* Ensure user creation is handled on one node only
This patch makes use of the mechanism that ensures that replications
are only run on one node.
When the cluster has nodes added/removed all changes listeners are
restarted.
* track cluster state in gen_server state and get notified from mem3 directly
* move couch_replication_clustering:owner/3 to mem3.erl
* remove reliance on couch_replicator_clustering, handle cluster state internally
* make sure peruser listeners are only initialised once per node
* add type specs
* fix tests
* simplify couch_peruser.app definition
* add registered modules
* remove leftover code from the old notification system
* s/clusterState/state/ && s/state/changes_state/
* s,init/0,init_state/0,
* move function declaration around for internal consistency
* whitespace
* update README
* document ini entries
* unlink changes listeners before exiting them so we survive
* fix state call
* fix style
* fix state
* whitespace and more state fixes
* 80 cols
Closes #749
|
|\
| |
| | |
Return reduce overflow errors to the client
|
|/
|
|
|
|
| |
This changes the reduce overflow error to return an error to the client
rather than blowing up the view build. This allows views that have a
single bad reduce to build without exhausting the server's RAM.
|
| |
|
|
|
|
|
|
| |
Previously, gzip compression assumed that only one final result chunk would be
emitted during finalization. But in general, and specifically on Erlang/OTP
20.0, that's not true.
|
|
|
|
| |
To fix a random compatibility issue
|
|
|
|
|
|
|
|
|
| |
Use the Erlang release version to decide whether the newer `rand` module is
present.
`erlang:function_exported(rand, uniform, 0)` could not be used here as it
returns false when the module isn't loaded, even if the module and function
are both available.
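A hedged sketch of the release-based check described above (module and function
names here are illustrative, not the actual CouchDB code):
```
-module(rand_compat_sketch).
-export([rand_module/0]).

%% Pick the RNG module from the OTP release string instead of relying on
%% erlang:function_exported/3, which is false until the module is loaded.
rand_module() ->
    try list_to_integer(erlang:system_info(otp_release)) of
        Release when Release >= 18 -> rand;
        _ -> random
    catch
        error:badarg -> random  % pre-18 releases report e.g. "R16B03"
    end.
```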
|
|
|
|
|
|
| |
Mango execution stats previously incremented the result count
at a point where the final result might be discarded. Instead,
increment the count when we know the result is being included
in the response.
|
|\
| |
| |
| |
| | |
Whitelist system DB names as valid _dbs docids
Closes #858
|
|/
|
|
|
|
|
|
|
| |
Currently, it is impossible to PUT/POST modified shard maps to any
`_dbs/_*` document because the document _ids are reserved. This change
permits these specific db/docid combinations as valid, so PUT/POST
operations can succeed. The specific list comes from SYSTEM_DATABASES.
Unit tests have been added.
|
|
|
|
| |
Replaced with crypto:strong_rand_bytes
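For reference, a minimal usage sketch of the replacement call (the original
call sites are not shown in this log; the byte count is arbitrary):
```
%% Returns 16 cryptographically strong random bytes.
Bytes = crypto:strong_rand_bytes(16).
```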
|
|\
| |
| | |
Update missing dependencies in README-DEV
|
| | |
|
|/ |
|
|\
| |
| | |
Merge pull request #827 from almightyju/master
|
| |\
| |/
|/| |
|
| |
| |
| |
| | |
Folsom also depended on 0.8.2, so we had to update folsom and bump its tag.
|
|\ \
| | |
| | |
| | | |
Remove bashisms in remsh script
Also fix bug introduced in refactoring
|
| | | |
|
|/ / |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Use a shorter and more informative single line string:
```
Starting replication f9a503bf456a4779fd07901a6dbdb501+continuous+create_target (http://adm:*****@127.0.0.1:15984/a/ -> http://adm:*****@127.0.0.1:15984/bar/) from doc _replicator:my_rep2 worker_procesess:4 worker_batch_size:500 session_id:b4df2a53e33fb6441d82a584a8888f85
```
For replications from the _replicate endpoint, doc info is skipped and it is
clearly indicated as a `_replicate` replication:
```
Starting replication aa0aa3244d7886842189980108178651+continuous+create_target (http://adm:*****@localhost:15984/a/ -> http://adm:*****@localhost:15984/t/) from _replicate endpoint worker_procesess:4 worker_batch_size:500 session_id:6fee11dafc3d8efa6497c67ecadac35d
```
Also remove redundant `starting new replication...` log.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Previously the replicator was unnecessarily verbose during crashes. This commit
reduces the verbosity and makes the error messages more helpful.
Most replication failures happen in the startup phase, when both target
and source are opened. That's a good place to handle common errors, and a few
were already handled (db not found, lack of authorization). This commit
adds another common one: inability to resolve endpoint host names. This
covers cases where the user mistypes the host name or there is a DNS issue.
Also during the startup phase, if an error occurred, a stacktrace was logged in
addition to the whole state of the #rep{} record. Most of the rep record and
the stack are not that useful compared to how much noise they generate. So
instead, log only a few relevant fields from #rep{} and only the top 2 stack
frames. Combined with the DNS lookup failure handling, this change results in
almost a 4x (2KB vs 500B) reduction in log noise while providing better
debugging information.
One last source of excessive log noise was the dumping of the full replicator
job state during crashes. This included both the #rep and the #rep_state
records. Those have a lot of redundant information, and since they are dumped
as tuples, it was hard to find the values of individual fields. In this case
`format_status/2` was improved to dump only a selected set of fields along with
their names. This results in another 3x reduction in log noise.
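A hedged sketch of the `format_status/2` approach (record and field names are
illustrative, not the actual replicator state definition):
```
-module(rep_format_status_sketch).
-export([format_status/2]).

%% Illustrative record; the real replicator state has many more fields.
-record(rep_state, {id, source, target, session_id}).

%% Report a small, named subset of the state instead of letting the whole
%% record be dumped as a tuple in crash logs and sys output.
format_status(_Opt, [_PDict, #rep_state{} = State]) ->
    [{data, [{"State", [
        {id, State#rep_state.id},
        {source, State#rep_state.source},
        {target, State#rep_state.target},
        {session_id, State#rep_state.session_id}
    ]}]}].
```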
|
| |
| |
| |
| |
| |
| |
| |
| | |
To make this work, I had to change the default -name from the old
couchdb@localhost to couchdb@127.0.0.1. This matches the advice
we already had in vm.args to use FQDN or IP address, anyway.
Once this merges I'll look at doing a Windows version, if possible.
|
| |
| |
| |
| | |
We fix syntax issues that make the tests incompatible with python3,
while ensuring that they still run under python2.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Previously, uploading an attachment from a PSE node to a non-PSE node would
fail because the attachment streaming API changed between versions.
This commit handles downgrading attachment streams from PSE nodes so that
non-PSE nodes can write them.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This change is to account for differences in the #db record when a
cluster is operating in a mixed version state (i.e., when running a
rolling reboot to upgrade).
There are only a few operations that are valid on #db records shared
between nodes, so rather than attempt to map the entire API between the
old and new records, we're limiting ourselves to just the required API
calls.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| | |
A mixed cluster (i.e., during a rolling reboot) will want to include
this commit in a release before deploying PSE code to avoid spurious
errors during the upgrade.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| | |
This completes the removal of public access to the db record from the
couch application. The large majority of this is removing direct access
to the #db.name, #db.main_pid, and #db.update_seq fields.
COUCHDB-3288
|
| |
| |
| |
| | |
COUCHDB-3288
|
| |
| |
| |
| | |
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
This removes introspection of the #db record by couch_server. While it's
required for the pluggable storage engine upgrade, it's also nice to
remove the hacky overloading of #db record fields for couch_server
logic.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
These functions were originally implemented in fabric_rpc.erl where they
really didn't belong. Moving them to couch_db.erl allows us to keep the
unit tests intact rather than just removing them now that the #db record
is being made private.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| | |
Since we're getting ready to add API functions to couch_db.erl now is a
good time to clean up the exports list so that changes are more easily
tracked.
COUCHDB-3288
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Previously an individual failed request would be tried 10 times in a row with
an exponential backoff starting at 0.25 seconds. So the intervals in seconds
would be:
`0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128`
For a total of about 250 seconds (or about 4 minutes). This made sense before
the scheduling replicator because if a replication job had crashed in the
startup phase enough times it would not be retried anymore. With a scheduling
replicator, it makes more sense to stop the whole task, and let the scheduling
replicator retry later. `retries_per_request` then becomes something used
mainly for short intermittent network issues.
The new retry schedule is
`0.25, 0.5, 1, 2, 4`
Or about 8 seconds.
An additional benefit is that when the job is stopped sooner, the user can
find out about the problem from the _scheduler/docs and _scheduler/jobs status
endpoints and rectify it. Otherwise a single request retrying for 4 minutes
would show up there as a healthy, running job.
Fixes #810
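A small sketch of the retry arithmetic (not the replicator code itself),
showing where the two totals above come from:
```
-module(retry_backoff_sketch).
-export([total_wait/1]).

%% Total wait in seconds for N retries with exponential backoff starting
%% at 0.25 seconds.
total_wait(Retries) ->
    lists:sum([0.25 * math:pow(2, N) || N <- lists:seq(0, Retries - 1)]).

%% total_wait(10) -> 255.75  (about 4 minutes, the old schedule)
%% total_wait(5)  -> 7.75    (about 8 seconds, the new schedule)
```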
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead, wait 15 seconds after the last cluster configuration change; if there
were no more changes to the cluster, stop rexi buffers and servers for nodes
which are no longer connected.
Extract the cluster stability check from `couch_replicator_clustering` and move
it to the `mem3_cluster` module, so both the replicator and rexi can reuse it.
Users of `mem3_cluster` implement a behaviour callback API and then spawn_link
the cluster monitor with their specific period values.
This also simplifies the logic in rexi_server_mon as it no longer needs to
handle `{nodeup, _}` and `{nodedown, _}` messages. On any cluster membership
change it will get a `cluster_unstable` message. It then immediately spawns new
servers and buffers if needed. Only when the cluster has stabilized will it stop
servers and buffers for disconnected nodes. The idea is to allow for short
periods of disconnects between nodes before throwing away all the buffered
messages.
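A hedged sketch of what a `mem3_cluster` user could look like (the callback
names, start_link signature, and period values are assumptions based on the
description above, not the verbatim API):
```
-module(rexi_cluster_sketch).
-export([start_link/0, cluster_stable/1, cluster_unstable/1]).

start_link() ->
    %% Hypothetical period values: start period of 5s, quiet period of 15s.
    mem3_cluster:start_link(?MODULE, self(), 5, 15).

cluster_unstable(State) ->
    %% Membership changed: immediately spawn servers/buffers if needed.
    State.

cluster_stable(State) ->
    %% No changes for the quiet period: stop servers/buffers for nodes
    %% which are no longer connected.
    State.
```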
|