Replication cancellation doesn't immediately update active tasks. Instead, use
the new `waitReplicationTaskStop(rep_id)` function to properly wait for the
task status.
Issue #634
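
The shape of such a wait helper can be sketched as follows. This is a hypothetical synchronous JavaScript sketch in the style of the old test suite, not the actual implementation; `lookupTask` stands in for querying `_active_tasks` by replication id:

```javascript
// Hypothetical sketch of a waitReplicationTaskStop-style helper: poll a
// task-lookup function until the replication task disappears from
// _active_tasks, instead of asserting immediately after cancellation.
function waitReplicationTaskStop(lookupTask, timeoutMs) {
  var deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (lookupTask() === null) {
      return true; // task is gone: cancellation has completed
    }
  }
  throw new Error("replication task still active after " + timeoutMs + " ms");
}
```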
The previous version of this test relied on trying to bump into the
all_dbs_active error from the couch_server LRU. This proved rather difficult
to make reliable assertions about. In hindsight, all we really care about is
that the compactor holds a monitor against the database, and then we can trust
that couch_server will not evict anything that is actively monitored.
Fixes #680
This was used from a test only and it wasn't reliable. Because the replicator
job delays initialization, the `State` would be either #rep_state{} or #rep{}.
If the replication job hasn't finished initializing, then the state would be
#rep{}, and a call like get_details, which matches the state with #rep_state{},
would fail with a badmatch error.
As seen in issue #686.
So remove the `get_details` call and let the test rely on task polling as all
other tests do.
Use test_util:stop_config in mem3_util_test
config:stop is asynchronous, which causes test failures with errors like the
following:
{error,{already_started,<0.32662.3>}}
3367 fix test case
We should use random names for databases. Otherwise the test fails with a
"database already exists" error. This commit uses a random name for the users
db and corrects the section name for the `authentication_db` setting.
COUCHDB-3367
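
The approach can be sketched as a small helper. `uniqueDbName` is a hypothetical name, not the helper actually used in the suite; the idea is just to suffix a fixed prefix with a timestamp and a random component:

```javascript
// Hypothetical sketch of the "random database name" approach: repeated
// test runs get distinct names, so they never collide with a database
// left over from a previous run.
function uniqueDbName(prefix) {
  var stamp = Date.now().toString(36);
  var rand = Math.floor(Math.random() * 0x100000000).toString(36);
  return prefix + "_" + stamp + "_" + rand;
}
```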
couch_server is responsible for calling hash_admin_passwords whenever the
"admin" section of the config changes. However, as you can see
[here](https://github.com/apache/couchdb/blob/master/src/couch/src/couch_server.erl#L219),
the call is asynchronous. This means that our test cases might fail when we
try to use an admin user while the admin password is not yet hashed.
COUCHDB-3367
COUCHDB-3367
We weren't stopping the correct set of applications, and we forgot to unload
meck. I've also changed the test generators so that they execute all of the
provided assertions.
Fix #687
This is a second attempt to fix the _local_docs endpoint. The previous one
didn't work on a big enough btree_local, because the local btree doesn't have a
reduction fun, so reusing couch_db_updater:btree_by_id_reduce/2 was crashing on
a badmatch as soon as btree_local got a kp_node. Also, using a full fold to
calculate the total_rows value turned out to be resource-expensive when a
database has a significant number of local documents.
This fix avoids calculating total_rows and offset, always setting them to null
instead, and also sets update_seq to null when requested, since it has no
meaning in the context of local documents.
The fabric module fabric_view_all_docs.erl was copied and modified as
fabric_view_local_docs.erl, because reusing it for processing both types of
documents was getting rather convoluted.
Jira: COUCHDB-3337
Allow keep_sending_changes to use hot code upgrade
Add stable and update support to Mango
This brings Mango in line with views by supporting the new options
`stable` and `update`.
Fixes #621
chore: whitespace
feat: add stale option to Mango
fix: opts parsing
If dbs do not exist, we catch the error for mem3_sync_security
so that it can continue for databases that do exist.
https://github.com/apache/couchdb/pull/538
COUCHDB-3423
https://github.com/apache/couchdb-peruser/pull/3
use crypto:strong_rand_bytes
Remove couch_crypto
The crypto:{hash,hash_init,hash_update,hash_final} functions exist
in all the versions of Erlang supported by CouchDB.
Previously, we were calculating the ExternalSize for views by summing
up all the nodes in the btree. Furthermore, this was the compressed
size. Now we modify the reduce function to return an ExternalSize for
uncompressed values in the KVList.
PR: https://github.com/apache/couchdb/pull/608
COUCHDB-3430
(#606)""
This reverts commit c8ee29505c718ed6bd9687a664dae11d984d89a7.
PR is here: https://github.com/apache/couchdb/pull/606
This reverts commit dce6e34686329e711e1a6c50aae00761ecb3262e.
| | | |
Use ejson body instead of compressed body for external size
In two places where we calculate the ExternalSize of the document body,
we use the Summary which is a compressed version of the doc body. We
change this to use the actual ejson body. In copy_docs we don't have
access to the #doc record so we can't access the meta where we store
the ejson body. Unfortunately, this means we have to decompress the
document body after reading it from disk.
COUCHDB-3429
The "should die" test was close to the edge of timing out. The daemon started
up, slept for 1 second, then died. However, max_retries is 3, so the whole
thing happened 3 times in a row. The total wait was 4 seconds, but on slow
machines 1 extra second was not enough to cover the overhead of forking the 3
processes and other setup work.
Set the number of restarts to 2. Hopefully 4 seconds is enough overhead for 2
restarts.
Also adjust the sleep time for the "die quickly" test: 1 second there might not
be enough for both restarts, so it was made 2 just to be safe.
Issue #675
The test was flaky for a variety of reasons:
* waitForSeq only waited for 3 seconds, and on failure it never explicitly
indicated an error, just waiting for the comparison below to fail. So it now
waits for 30 seconds and also throws an exception right away if it fails.
* The last waitForSeq was used after the task was canceled, so it just wasted
time waiting until timeout as the task was null. So a function was created to
wait for the task to be null.
* waitForSeq spun in a tight do/while loop querying _active_tasks. In some test
environments with minimal CPU resources that's not the greatest thing to do.
So it now waits 0.5 seconds between retries.
* waitForSeq waited for the replication task's through_seq value to match the
source update sequence from the source db info. Those don't necessarily match.
Instead, waitForSeq now uses the changes feed's last sequence, since that's
what the replication task uses to update through_seq.
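
The hardened wait loop described above can be sketched as follows. This is a hypothetical JavaScript sketch, not the actual test code; the 30 s / 0.5 s figures come from the commit message, and the busy-wait sleep mirrors the old synchronous test suite:

```javascript
// Hypothetical sketch: retry a predicate with a fixed delay between
// attempts (rather than a tight do/while loop), and throw explicitly on
// timeout instead of letting a later comparison fail silently.
function waitFor(predicate, timeoutMs, intervalMs) {
  var deadline = Date.now() + timeoutMs;
  while (true) {
    if (predicate()) {
      return true;
    }
    if (Date.now() >= deadline) {
      throw new Error("waitFor timed out after " + timeoutMs + " ms");
    }
    var pauseUntil = Date.now() + intervalMs;
    while (Date.now() < pauseUntil) { /* crude synchronous sleep */ }
  }
}
```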
Previously it returned a 500 error.
Closes #664
Also includes pointers to couchdb-pkg daemon scripts.
I believe the race here was that the query could return before the actual
index updating process exited. Then it was just a race to create a database
before the monitor was released.
If we do end up creating the database before the monitor is released, then the
database we want to have closed ends up being ignored as it's not idle. The
two databases created afterwards then don't force the first database out of
couch_server's LRU, which leads to the timeout waiting for DatabaseMonRef to
fire.
Fixes #655
This was encountered during the test suite runs on Travis. It turns out that
when we restart the indexer it's possible to already have the 'EXIT' message
in our mailbox. When we do, we'll then crash with an unknown_info error since
our updater pid was changed during the restart.
This change simply filters any 'EXIT' message from the old updater out of the
mailbox before restarting the new index updater.
Fixes #649
Looking into #649 I realized there's a pretty terrible race condition if an
index is compacted quickly followed by an index update. Since we don't check
the index updater message, it would be possible for us to swap out a
compaction change, followed by immediately resetting to the new state from the
index updater. This would be bad, as we'd possibly end up in a situation where
our long-lived index would be operating on a file that no longer existed on
disk.
There's a theory that the low memory limits on our CI instances are
causing the tests spawning JS processes to fail. Given that we don't
need them here, we can trivially exclude that as a cause of the test
failures.
Fixes #631