| Commit message | Author | Age | Files | Lines |
COUCHDB-3326
Turns out we weren't properly handling a document id being repeated in a
single purge batch. This fixes that. Squerge into the "implement the
APIs" bit
I've taken some liberty cleaning up the tests for purges in this module
by using a few helper functions and some minor reformatting. Currently
the repeated-docid test is failing because there's a bug in the
purge_docs logic in couch_db_updater, where it's not accounting for the
fact that a user may have specified a doc id multiple times.
As with doc updates, we'll have to apply purges starting from the first
purge infos and return responses that correspond to which requests
actually took effect.
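A minimal Python sketch (illustrative only, not CouchDB's actual Erlang code) of the behavior these two commits describe: purge requests are applied in order, a repeated doc id sees the effect of earlier requests in the same batch, and every request gets a response recording which revisions it actually removed, possibly none.

```python
def apply_purge_batch(docs, requests):
    """Apply purge requests in order within one batch.

    docs:     {doc_id: set_of_live_revs} (mutated in place)
    requests: [(uuid, doc_id, [revs_to_purge])]
    Returns one (uuid, purged_revs) response per request, so a doc id
    repeated in the batch only purges what is still live when its
    request is reached.
    """
    responses = []
    for uuid, doc_id, revs in requests:
        live = docs.get(doc_id, set())
        purged = [r for r in revs if r in live]
        live -= set(purged)
        docs[doc_id] = live
        # Record every purge info request, even if it removed nothing.
        responses.append((uuid, purged))
    return responses
```

With two requests naming the same doc, the second one only reports the revision the first one left behind.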
Silly me. Squerge to couch_db_engine API change
Messed up the name and the return value. Squerge to implement APIs
Unless we're replicating purge infos, it's an error to repeat a UUID.
Squerge to implement APIs
We need to record all purge info requests even if they don't actually
remove any revisions. Squerge into the "implement the APIs" commit
COUCHDB-3326
COUCHDB-3326
COUCHDB-3326
Rewrite purge logic to use the new couch_db_engine purge APIs. This work
lays the groundwork for the new purge behaviors that enable clustered
purge.
Previously there were two separate database references and it was not
clear which was used where. This reduces them to a single instance so
that the logic is simpler.
It turns out that if any storage engine has to open itself during a
callback, it ends up violating the single-writer guarantee. This changes
the test suite to use couch_server so that storage engines are now free
to reopen themselves as they see fit.
This fixes a minor race by opening the database before closing it. This
was never found to be an issue in production and was just caught while
contemplating the PSE test suite.
There's a race where if a database is opened with default_security set,
crashes before its first compaction, and is then reopened after the
default_security option has changed, it will pick up the second
security option. This change closes that relatively obscure bug, which
was only found during testing.
This was introduced in:
https://github.com/apache/couchdb/commit/083239353e919e897b97e8a96ee07cb42ca4eccd
Issue #1286
The changes listener started in the setup of the mem3_shards test was
crashing when it tried to register with an unstarted couch_event
server, so the test either was fast enough to make its assertions
before that happened, or failed on a dead listener process.
This change removes the dependency on mocking and uses test_util's
standard start and stop of couch. The module start was moved into the
test body to avoid masking a potential failure in setup.
Also, the tests mem3_sync_security_test and mem3_util_test have been
modified to avoid setup and teardown side effects.
Adapt fake_db to PSE changes
With db headers moved into the engine's state, any fake_db call that
tries to set up sequences for tests (e.g. in mem3_shards) crashes with
a context setup failure.
It's not trivial to compose a proper `engine` field outside of the
couch app, so instead this fix makes fake_db set the engine
transparently unless one was provided in the payload.
Replication jobs are backed off based on the number of consecutive
crashes: we count the number of crashes in a row and then penalize jobs
with an exponential wait based on that number. After a job runs without
crashing for 2 minutes, we consider it healthy and stop going back
through its history looking for crashes.
Previously a job's state was set to `crashing` only if there were
consecutive errors. So a job could have run for 3 minutes, then the
user deletes the source database, and the job crashes and stops. Until
it ran again, its state would have been shown as `pending`. For
internal accounting purposes that's correct, but it is confusing for
the user because the last event in the job's history is a crash.
This commit makes sure that if the last event in a job's history is a
crash, the user will see the job as `crashing` with the respective
crash reason. The scheduling algorithm didn't change.
Fixes #1276
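A simplified Python model of the scheduling rule described above. The 2-minute health window comes from the message itself; the event layout, base delay, and cap are made-up illustrative values, not the replicator's actual code.

```python
def consecutive_crashes(history, health_threshold=120):
    """Count the crash streak from the newest event backwards.

    history: newest-first list of (event, uptime_sec) pairs, where
    uptime_sec is how long that run lasted before the event.
    A crash that follows a run of >= health_threshold seconds ends the
    streak: that run was healthy, so we stop looking further back.
    """
    count = 0
    for event, uptime in history:
        if event != "crashed" or uptime >= health_threshold:
            break
        count += 1
    return count

def backoff_wait_sec(crashes, base=5, cap=28800):
    """Exponential penalty keyed off the consecutive crash count
    (base and cap are invented numbers for illustration)."""
    return min(cap, base * 2 ** crashes)

def display_state(history):
    """The fix: report `crashing` whenever the newest event is a crash,
    even when the crash streak is zero for scheduling purposes."""
    if history and history[0][0] == "crashed":
        return "crashing"
    return "pending"
```

Note how a job that ran 3 minutes and then crashed has a streak of zero (so scheduling is unaffected), yet `display_state` now surfaces `crashing` to the user.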
call commit_data where needed
Regression since introduction of PSE
In the replicator, after a client auth plugin updates the request
headers it may also update its private context. Make sure to pass the
updated httpdb record along to the response processing code.
For example, the session plugin updates the epoch number in its
context, and it needs that epoch number later in response processing to
decide whether or not to refresh the cookie.
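A hypothetical Python sketch of the shape of this fix. The names `session_prepare`, `process_response`, and the dict-based httpdb record are all invented for illustration; they stand in for the replicator's Erlang records and plugin hooks.

```python
def session_prepare(httpdb, req):
    """Stand-in for an auth plugin hook: besides rewriting headers,
    it bumps its private epoch counter in the returned record."""
    return dict(httpdb, epoch=httpdb["epoch"] + 1)

def process_response(httpdb, resp):
    """Uses the epoch in the record to decide whether the cookie
    needs refreshing."""
    return {"refresh_cookie": resp["epoch_seen"] < httpdb["epoch"]}

def send_request(httpdb, req):
    updated = session_prepare(httpdb, req)
    resp = {"epoch_seen": req.get("epoch_seen", 0)}  # fake response
    # The fix: hand the *updated* record to response processing, not
    # the stale `httpdb`, so it sees the new epoch.
    return process_response(updated, resp)
```

Passing the stale record instead would compare against the old epoch and skip the cookie refresh, which is the bug this commit addresses.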
The attachment receiver process is started with a plain spawn. If the
middleman process dies, the receiver would hang forever waiting on a
receive. After a long enough time, quite a few of these receiver
processes could accumulate on a server.
Fixes #1264
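An illustrative Python analogue of the hang and one way to avoid it. CouchDB's code is Erlang, where the idiomatic fix is to monitor the parent process; here a thread polls its inbox with a timeout instead of blocking forever, so it can notice the middleman is gone and exit rather than accumulate.

```python
import queue
import threading

def attachment_receiver(inbox, middleman_alive, chunks):
    """Drain attachment chunks from `inbox`.

    Instead of a bare blocking receive (which hangs forever if the
    middleman dies), poll with a timeout and exit once the middleman
    is gone, so orphaned receivers cannot pile up.
    """
    while True:
        try:
            chunk = inbox.get(timeout=0.1)
        except queue.Empty:
            if not middleman_alive():
                return  # middleman died: stop instead of hanging
            continue
        if chunk is None:  # end-of-attachment marker
            return
        chunks.append(chunk)
```

A receiver whose `middleman_alive` check fails terminates on its own, while a live middleman can still stream chunks and finish with the `None` marker.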