| Commit message (Collapse) | Author | Age | Files | Lines |
Implement a configurable delay before retrying a document fetch in the replicator.
missing_doc exceptions usually happen when a continuous replication is set up
and the source is updated. The change might appear in the changes feed, but when
a worker tries to fetch the document's revisions it may talk to a node where
internal replication hasn't caught up, so it throws an exception.
Previously the delay was hard-coded at 0 (that is, retries were immediate). The
replication would still make progress, but only after crashing and retrying,
generating a lot of unnecessary log noise. Since updating a source while
continuous replication is running is a common scenario, it's worth optimizing
for it, avoiding wasted resources and log spam.
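The retry behaviour described above can be sketched in Python (the replicator itself is Erlang; `fetch` and the use of `KeyError` as a stand-in for the missing_doc exception are hypothetical):

```python
import time

def fetch_with_retry(fetch, retries=5, delay_sec=0.25):
    """Retry a document fetch while internal replication catches up.

    delay_sec models the new configurable delay; the previous behavior
    corresponds to delay_sec=0 (immediate retry).
    """
    for attempt in range(retries):
        try:
            return fetch()
        except KeyError:  # stand-in for the missing_doc exception
            if attempt == retries - 1:
                raise
            time.sleep(delay_sec)
```

With a non-zero delay, transient missing_doc failures are absorbed quietly instead of crashing and restarting the worker each time.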
Going to http://localhost:5984/_utils/verify_install.html returns `Not found.`, while browsing to `http://localhost:5984/_utils/#/verifyinstall` works. The first URL may be outdated.
Mango tests are failing due to flaky index deletion issues. We change
the value of w to 1 since n=1.
Closes #824
This changes the couchjs --no-eval flag to --eval and disables
eval() and Function() constructors by default in couchjs.
The Mango test suite previously assumed that the target
CouchDB for testing was at http://127.0.0.1:15984 with
username "testuser" and password "testpass".
It's helpful to be able to override these defaults
so we can test against other environments, including those
that do not support basic authentication. This commit
adds support for overriding the defaults using environment
variables.
Now that we enable tests to be run against remote
clusters, default to n=1 at database creation time
to prevent assertion failures due to eventual
consistency between replicas.
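The override pattern can be sketched in Python (the variable names `COUCH_HOST`, `COUCH_USER`, and `COUCH_PASSWORD` are hypothetical here; the actual names are defined by the Mango test suite):

```python
import os

# Hypothetical variable names; the real ones are defined by the Mango
# test suite. Defaults match the values described above.
DEFAULTS = {
    "COUCH_HOST": "http://127.0.0.1:15984",
    "COUCH_USER": "testuser",
    "COUCH_PASSWORD": "testpass",
}

def test_config(env=None):
    """Resolve settings from the environment, falling back to defaults."""
    env = os.environ if env is None else env
    return {key: env.get(key, default) for key, default in DEFAULTS.items()}
```

Unset variables fall back to the documented defaults, so existing local runs are unaffected while remote clusters can be targeted by exporting the variables.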
* Split out text index selection tests
* Skip operator tests that do not apply to text indexes
* Only run array length tests against text indexes
* Fix index crud tests when text indexes available
* Use environment variable to switch on text tests
* Fix incorrect text sort assertion in test
* Always use -test in fixture filename
* Fix index selection test compatibility with #816.
* Improve test README
Add a test to show that the partial_filter_selector functionality works with
design docs that don't have a selector defined in them by default.
Mango previously constrained range queries against JSON indexes
(map/reduce views) to startkey=[]&endkey=[{}]. In Mango, JSON
index keys are always compound (i.e. always arrays), but this
restriction resulted in Mango failing to match documents where
the indexed value was an object.
For example, an index with keys:
[1],
[2],
[{"foo": 3}]
would be restricted such that only [1] and [2] were returned
if a range query was issued.
On its own, this behaviour isn't necessarily unintuitive, but
it is different from the behaviour of a non-indexed Mango
query, so the query results would change in the presence of an
index.
Additionally, it prevented operators or selectors which explicitly
depend on a full index scan (such as $exists) from returning a
complete result set.
This commit changes the maximum range boundary from {} to a
value that collates higher than any JSON object, so all
array/compound keys will be included.
Note that this uses an invalid UTF-8 character, so we depend
on the view engine not barfing when it is passed as a
parameter. In addition, we can't represent the value in JSON,
so we need to substitute it when returning a query plan
from the _explain endpoint.
It seems safe to assume that if a user specifies that
results should be sorted by a field, that field needs to exist
(but could be null) in the results returned.
In #816 we can only use an index if all its columns are
required to exist in the selector, so this improves
compatibility with the old behaviour by allowing sort
fields to be included in the index coverage check for
JSON indexes.
As it turns out I made a bit of a mistake when I forgot that the old
ddoc_cache implementation had an ets_lru process registered as
ddoc_cache_lru. These cast messages were causing that process to crash.
If a cluster had enough design document activity and enough nodes this
would cause nodes with the old ddoc_cache implementation to reboot the
entire VM. This was a cascading failure due to the ets_lru process
restarting frequently enough that it brought down the entire ddoc_cache
application.
`now/0` has been deprecated since Erlang 18.0, and a set of new time-related
functions is available.
Usually `now/0` can be replaced with `os:timestamp/0`; however, in some
instances it was used effectively to produce monotonically increasing values
rather than timestamps, so a new `couch_util:unique_monotonic_integer/0` was added.
Most functional changes are in the couch_uuid module, where `now/0` was used both
as a timestamp and for uniqueness. To emulate the previous behavior, a local
incrementing clock sequence is used: if `os:timestamp/0` has not advanced since
the last call, the local clock is advanced by 1 microsecond and used to
generate the next V1 UUID. As soon as `os:timestamp/0` catches up, the local
sequence resets to that latest value.
The exported function `utc_random/0` was unused; after the update it is no
longer exported.
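The clock-sequence behaviour described above can be modeled in Python (a simplified sketch, not the actual couch_uuid implementation): if the OS timestamp has not advanced since the last call, bump the local value by one microsecond; once the real clock catches up, reset to it.

```python
import time

class MonotonicClockSeq:
    """Simplified model of the local clock sequence described above."""

    def __init__(self, now_us=None):
        # now_us is injectable so a fake clock can be used for testing.
        self._now_us = now_us or (lambda: int(time.time() * 1_000_000))
        self._last = 0

    def next_us(self):
        ts = self._now_us()
        if ts > self._last:
            # The OS clock advanced: reset the local sequence to it.
            self._last = ts
        else:
            # The OS clock did not advance: bump by one microsecond.
            self._last += 1
        return self._last
```

Successive calls always return strictly increasing values, which is what V1 UUID generation needs for uniqueness.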
* Make couch_peruser a proper Erlang app
* Start and stop couch_peruser in the test suite
* feat: mango test runner: do not rely on timeout for CouchDB start alone
On slow build nodes, 10 seconds might not be enough of a wait.
* Ensure a user creation is handled on one node only
This patch makes use of the mechanism that ensures that replications
are only run on one node.
When the cluster has nodes added/removed all changes listeners are
restarted.
* track cluster state in gen_server state and get notified by mem3 directly
* move couch_replication_clustering:owner/3 to mem3.erl
* remove reliance on couch_replicator_clustering, handle cluster state internally
* make sure peruser listeners are only initialised once per node
* add type specs
* fix tests
* simplify couch_peruser.app definition
* add registered modules
* remove leftover code from the old notification system
* s/clusterState/state/ && s/state/changes_state/
* s,init/0,init_state/0,
* move function declaration around for internal consistency
* whitespace
* update README
* document ini entries
* unlink changes listeners before exiting them so we survive
* fix state call
* fix style
* fix state
* whitespace and more state fixes
* 80 cols
Closes #749
This changes the reduce overflow error to return an error to the client
rather than blowing up the view build. This allows views that have a
single bad reduce to build while not crushing the server's RAM usage.
Previously gzip compression assumed that only one final result chunk would be
emitted during finalization. But in general, and specifically in 2.0, that's
not true.
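The fix amounts to not flushing until every final chunk has been consumed. A minimal Python model of that idea, using zlib's gzip container rather than CouchDB's actual Erlang code:

```python
import zlib

def finalize_gzip(comp, final_chunks):
    """Compress the trailing chunks without assuming there is only one.

    comp is a zlib compressobj created with wbits=31 (gzip framing).
    """
    out = []
    for chunk in final_chunks:  # there may be more than one final chunk
        out.append(comp.compress(chunk))
    out.append(comp.flush())    # flush exactly once, after all final chunks
    return b"".join(out)
```

Flushing after the first chunk instead would truncate the stream, which is the class of bug the commit describes.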
To fix a random compatibility issue.
Use the Erlang release version to decide whether the newer `rand` module is
present.
`erlang:function_exported(rand, uniform, 0)` could not be used here, as it
returns false when the function isn't loaded, even if the module and function
are both available.
Mango execution stats previously incremented the result count
at a point where the final result might be discarded. Instead,
increment the count when we know the result is being included
in the response.
Currently, it is impossible to PUT/POST modified shard maps to any
`_dbs/_*` document because the document _ids are reserved. This change
permits these specific db/docid combinations as valid, so PUT/POST
operations can succeed. The specific list comes from SYSTEM_DATABASES.
Unit tests have been added.
Replaced with crypto:strong_rand_bytes
Folsom depended on 0.8.2 as well, so we had to update folsom and bump its tag.
Use a shorter and more informative single line string:
```
Starting replication f9a503bf456a4779fd07901a6dbdb501+continuous+create_target (http://adm:*****@127.0.0.1:15984/a/ -> http://adm:*****@127.0.0.1:15984/bar/) from doc _replicator:my_rep2 worker_procesess:4 worker_batch_size:500 session_id:b4df2a53e33fb6441d82a584a8888f85
```
For replication from the _replicate endpoint, doc info is skipped and it is
clearly indicated as a `_replicate` replication:
```
Starting replication aa0aa3244d7886842189980108178651+continuous+create_target (http://adm:*****@localhost:15984/a/ -> http://adm:*****@localhost:15984/t/) from _replicate endpoint worker_procesess:4 worker_batch_size:500 session_id:6fee11dafc3d8efa6497c67ecadac35d
```
Also remove the redundant `starting new replication...` log.
Previously the replicator was unnecessarily verbose during crashes. This commit
reduces the verbosity and makes the error messages more helpful.
Most replication failures happen in the startup phase, when both target
and source are opened. That's a good place to handle common errors, and a few
were already handled (db not found, lack of authorization). This commit
adds another common one: inability to resolve endpoint host names. This
covers cases where the user mistypes the host name or there is a DNS issue.
Also, during the startup phase, if an error occurred, a stacktrace was logged in
addition to the whole state of the #rep{} record. Most of the rep record and
the stack are not that useful compared to how much noise they generate. So
instead, log only a few relevant fields from #rep{} and only the top 2 stack
frames. Combined with the DNS lookup failure handling, this change results in
almost a 4x (2KB vs 500B) reduction in log noise while providing better
debugging information.
One last source of excessive log noise was the dumping of the full replicator
job state during crashes. This included both the #rep and the #rep_state
records. Those have a lot of redundant information, and since they are dumped
as tuples, it was hard to find the values of individual fields. In this case
`format_status/2` was improved to dump only a selected set of fields along with
their names. This results in another 3x reduction in log noise.