| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There were a couple of issues with the previous ddoc_cache implementation
that made it possible to tip over the ddoc_cache_opener process. First,
there were a lot of messages flowing through a single gen_server. Second,
the cache relied on periodically evicting entries to ensure an entry was
not cached forever after it had changed on disk.
The new version makes two important changes. First, entries now have an
associated process that manages the cache entry. This process
periodically refreshes the entry, and if the entry has changed or no
longer exists it removes the entry from the cache.
The second major change is that the cache entry process directly mutates
the related ets table entries, so that our performance is not dependent
on the speed of ets table mutations. Using a custom entry that does no
work, the cache can now sustain roughly one million operations a second
with twenty thousand clients fighting over a cache limited to one
thousand items. In production this means that cache performance will
likely be rate limited by other factors, such as loading design
documents from disk.
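The per-entry refresher scheme described above can be sketched in Python (a
stand-in for the Erlang per-entry process; `SelfRefreshingCache` and all other
names here are hypothetical, not the actual ddoc_cache API): each cached entry
owns a background worker that periodically re-reads the source and evicts the
entry from the shared table when it has changed or disappeared.

```python
import threading
import time

class SelfRefreshingCache:
    """Sketch: each entry has a refresher that evicts it on change."""

    def __init__(self, load, interval=0.05):
        self._load = load          # load(key) -> value, raises KeyError if gone
        self._interval = interval
        self._table = {}           # shared table, mutated by refreshers
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            if key in self._table:
                return self._table[key]   # fast path: no central server
        value = self._load(key)           # cache miss: load from source
        with self._lock:
            self._table[key] = value
        # Spawn the per-entry refresher, analogous to the entry process.
        threading.Thread(target=self._refresh, args=(key,), daemon=True).start()
        return value

    def _refresh(self, key):
        while True:
            time.sleep(self._interval)
            try:
                fresh = self._load(key)
            except KeyError:
                with self._lock:          # source gone: drop the entry
                    self._table.pop(key, None)
                return
            with self._lock:
                if self._table.get(key) != fresh:
                    self._table.pop(key, None)  # changed on disk: evict
                    return
```

A subsequent `get` after eviction simply reloads the entry and starts a fresh
refresher, so stale values never outlive one refresh interval.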
|
|
|
|
|
| |
This is an old merge artifact that caused event notifications to be
emitted twice per design document update.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
The previous default timeout of 5 seconds was not enough when running in
an environment where disk access is severely throttled.
To add a timeout, the test function was changed into a test generator.
That also made the `with` construct unnecessary.
Fixes #695
|
| |
|
|\ |
|
| | |
|
|/
|
|
| |
https://github.com/apache/couchdb-config/pull/16
|
|
|
|
|
|
| |
Looks like an oversight in commit 789f75d.
Closes #703
|
| |
|
|
|
|
|
|
|
|
| |
The test was repeatedly creating/deleting the exact same DB
name, which is a recipe for disaster. Changed to use unique
DB names.
Closes #705.
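The unique-name fix above is the standard remedy for this class of flakiness;
a minimal sketch of such a helper (the name `unique_db_name` is hypothetical,
not from the CouchDB test suite):

```python
import uuid

def unique_db_name(prefix="test_db"):
    # Append a random suffix so repeated create/delete cycles in a test
    # run never race on the same database name.
    return "%s-%s" % (prefix, uuid.uuid4().hex)
```

Each test then creates its own throwaway database instead of contending with
asynchronous deletion of a shared name.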
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, we could attempt to restart couch, immediately check whether
couch had restarted, and fail if the server wasn't there (pre- or
post-restart).
This change wraps all attempts to contact couch in restartServer()
with try blocks and simplifies the check-if-restarted logic.
Closes #669. May or may not help with #673.
|
|
|
|
|
|
| |
LEGAL-303
Closes #697
|
| |
|
|
|
|
|
|
|
|
| |
Issue #633 could be reproduced by limiting disk throughput in a VBox
VM instance to about 5KB. Increase the timeouts to let the tests
tolerate such apparent slowdowns.
Fixed #633
|
| |
|
|
|
|
|
|
|
|
| |
Replication cancellation doesn't immediately update active tasks.
Instead, use the new `waitReplicationTaskStop(rep_id)` function to
properly wait for the task status.
Issue #634
|
|
|
|
|
|
|
|
|
|
|
| |
The previous version of this test relied on trying to bump into the
all_dbs_active error from the couch_server LRU. That proved rather
difficult to make reliable assertions about. In hindsight, all we
really care about is that the compactor holds a monitor against the
database; we can then trust that couch_server will not evict anything
that is actively monitored.
Fixes #680
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was used from a test only, and it wasn't reliable. Because the
replicator job delays initialization, the `State` would be either
#rep_state{} or #rep{}. If the replication job hadn't finished
initializing, the state would be #rep{}, and a call like get_details,
which matches the state against #rep_state{}, would fail with a
badmatch error.
As seen in issue #686.
So remove the `get_details` call and let the test rely on task polling,
as all other tests do.
|
|\
| |
| | |
Use test_util:stop_config in mem3_util_test
|
|/
|
|
|
|
|
| |
config:stop is asynchronous, which causes test failures with errors
like the following:
{error,{already_started,<0.32662.3>}}
|
|\
| |
| | |
3367 fix test case
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| | |
We should use random names for databases; otherwise the test fails with
a "database already exists" error. This commit uses a random name for
the users db and corrects the section name for the `authentication_db`
setting.
COUCHDB-3367
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
couch_server is responsible for calling hash_admin_passwords whenever
the "admin" section of the config changes. However, as can be seen
[here](https://github.com/apache/couchdb/blob/master/src/couch/src/couch_server.erl#L219),
the call is asynchronous. This means that our test cases might fail
when we try to use the admin user before the admin password has been
hashed.
COUCHDB-3367
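The usual way a test copes with an asynchronous side effect like this is to
poll until the effect becomes observable. A minimal sketch of such a helper
(the name `wait_until` is hypothetical, not a CouchDB test utility):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll until predicate() is true or the timeout elapses.

    Returns True if the condition was observed, False on timeout.
    A test would pass a predicate such as "authenticating as admin
    succeeds", i.e. the password hash has been applied.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

This turns a race ("is the password hashed yet?") into a bounded wait, at the
cost of a configurable worst-case delay.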
|
| |
| |
| |
| | |
COUCHDB-3367
|
| |
| |
| |
| |
| |
| |
| |
| | |
We weren't stopping the correct set of applications, and we were
forgetting to unload meck. I've also changed the test generators so
that they execute all of the provided assertions.
Fix #687
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is a second attempt to fix the _local_docs endpoint. The previous
one didn't work on a big enough btree_local, because the local btree
doesn't have a reduce fun, so reusing
couch_db_updater:btree_by_id_reduce/2 crashed on a badmatch as soon as
btree_local got a kp_node. Also, using a full fold to calculate the
total_rows value turned out to be expensive when a database has a
significant number of local documents.
This fix avoids calculating total_rows and offset, always setting them
to null, and also sets update_seq to null when requested, since it has
no meaning in the context of local documents.
The fabric module fabric_view_all_docs.erl was copied and modified as
fabric_view_local_docs.erl, because reusing it to process both types of
documents was getting rather convoluted.
Jira: COUCHDB-3337
|
|\
| |
| | |
Allow keep_sending_changes to use hot code upgrade
|
| | |
|
| | |
|
|\ \
| | |
| | | |
Add stable and update support to Mango
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
This brings Mango in line with views by supporting the new options
`stable` and `update`.
Fixes #621
chore: whitespace
feat: add stale option to Mango
fix: opts parsing
|
|\ \ \ |
|
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | | |
If dbs do not exist, we catch the error for mem3_sync_security
so that it can continue for databases that do exist.
https://github.com/apache/couchdb/pull/538
COUCHDB-3423
|
| | | |
| | | |
| | | |
| | | | |
https://github.com/apache/couchdb-peruser/pull/3
|
| |/ /
|/| | |
|
|\ \ \
| | | |
| | | | |
use crypto:strong_rand_bytes
|
|/ / / |
|
|\ \ \
| | | |
| | | | |
Remove couch_crypto
|
| | | |
| | | |
| | | |
| | | |
| | | | |
The crypto:{hash,hash_init,hash_update,hash_final} functions exist
in all versions of Erlang supported by CouchDB.
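For readers unfamiliar with the Erlang API named above, Python's `hashlib`
offers a close analogue of the same one-shot versus streaming split (this is
an illustrative comparison only, not CouchDB code):

```python
import hashlib

# Streaming: ~ crypto:hash_init/1, crypto:hash_update/2, crypto:hash_final/1.
h = hashlib.sha256()          # ~ State0 = crypto:hash_init(sha256)
h.update(b"couch")            # ~ State1 = crypto:hash_update(State0, <<"couch">>)
h.update(b"db")               # ~ State2 = crypto:hash_update(State1, <<"db">>)
streamed = h.digest()         # ~ crypto:hash_final(State2)

# One-shot: ~ crypto:hash(sha256, <<"couchdb">>).
one_shot = hashlib.sha256(b"couchdb").digest()

assert streamed == one_shot   # chunked and one-shot hashing agree
```

Because both entry points are available in every supported Erlang release,
a compatibility wrapper such as couch_crypto adds no value.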
|
|\ \ \ \ |
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Previously, we were calculating the ExternalSize for views by summing
up all the nodes in the btree. Furthermore, this was the compressed
size. Now we modify the reduce function to return an ExternalSize for
uncompressed values in the KVList.
PR: https://github.com/apache/couchdb/pull/608
COUCHDB-3430
|
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
(#606)""
This reverts commit c8ee29505c718ed6bd9687a664dae11d984d89a7.
PR is here: https://github.com/apache/couchdb/pull/606
|
|/ / / /
| | | |
| | | |
| | | | |
This reverts commit dce6e34686329e711e1a6c50aae00761ecb3262e.
|
|/ / /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Use the ejson body instead of the compressed body for the external size.
In two places where we calculate the ExternalSize of the document body,
we used the Summary, which is a compressed version of the doc body. We
change this to use the actual ejson body. In copy_docs we don't have
access to the #doc record, so we can't access the meta where we store
the ejson body. Unfortunately, this means we have to decompress the
document body after reading it from disk.
COUCHDB-3429
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
The "should die" test was close to the edge of timing out. The daemon
started up, slept for 1 second, then died. However, max_retries is 3,
so the whole thing happened 3 times in a row. The total wait was 4
seconds, but on slow machines 1 extra second was not enough to cover
the overhead of forking the 3 processes and other setup.
Set the restart count to 2. Hopefully 4 seconds is enough overhead for
2 restarts.
Also adjust the sleep time for the "die quickly" test: 1 second there
might not be enough for both restarts, so make it 2 just to be safe.
Issue #675
|
| | | |
|