| Commit message | Author | Age | Files | Lines |
|
|
|
| |
Bugzid: 104903
|
|
|
|
|
|
|
|
| |
This was introduced in:
https://github.com/apache/couchdb/commit/083239353e919e897b97e8a96ee07cb42ca4eccd
Issue #1286
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The changes listener started in the setup of the mem3_shards test
was crashing when it tried to register on an unstarted couch_event
server, so the test either was fast enough to run its assertions
before that happened or failed on a dead listener process.
This change removes the dependency on mocking and uses the
standard test_util start and stop of couch. The module start was
moved into the test body to avoid masking a potential failure
in the setup.
The tests mem3_sync_security_test and mem3_util_test have also
been modified to avoid setup and teardown side effects.
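A minimal sketch of the fixture shape this describes, assuming test_util:start_couch/0 and test_util:stop_couch/1; the mem3-specific assertions are illustrative guesses, not the actual diff:
```
-module(mem3_shards_fixture_sketch).
-include_lib("eunit/include/eunit.hrl").

setup() ->
    %% start a real couch via test_util instead of mocking couch_event
    test_util:start_couch().

teardown(Ctx) ->
    test_util:stop_couch(Ctx).

shards_test_() ->
    {setup, fun setup/0, fun teardown/1, [fun() ->
        %% starting the app inside the test body keeps a start failure
        %% visible as a test failure rather than a masked setup error
        ok = application:ensure_started(mem3),
        ?assert(is_pid(whereis(mem3_shards)))
    end]}.
```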
|
| |
|
|\
| |
| | |
Adapt fake_db to PSE changes
|
|/
|
|
|
|
|
|
|
|
| |
With db headers moved into the engine's state, any fake_db call
that tries to set up sequences for tests (e.g. in mem3_shards)
crashes with a "context setup failed" error.
It's not trivial to compose a proper `engine` field outside of the
couch app, so instead this fix makes fake_db set the engine
transparently, unless it was provided in the payload.
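A hypothetical illustration of that behaviour; the map-based payload and the default engine term below are placeholders, not couch's real records:
```
-module(fake_db_sketch).
-export([fake_db/1]).

%% Fill in a default `engine` entry unless the caller already provided
%% one in the payload; everything below is illustrative only.
fake_db(Fields) ->
    WithEngine =
        case lists:keymember(engine, 1, Fields) of
            true  -> Fields;                                  % caller supplied it
            false -> [{engine, default_engine()} | Fields]
        end,
    maps:from_list(WithEngine).

default_engine() ->
    {couch_bt_engine, placeholder_state}.                     % assumption, not the real term
```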
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replication jobs are backed off based on the number of consecutive crashes,
that is, we count the number of crashes in a row and then penalize jobs with an
exponential wait based on that number. After a job runs without crashing for 2
minutes, we consider it healthy and stop going back in its history looking for
crashes.
Previously a job's state was set to `crashing` only if there were consecutive
errors. So a job could have run for 3 minutes, then the user deletes the source
database, and the job crashes and stops. Until it runs again, the state would
have been shown as `pending`. For internal accounting purposes that's correct,
but it is confusing for the user because the last event in its history is a
crash.
This commit makes sure that if the last event in a job's history is a crash, the
user will see the job as `crashing` with the respective crash reason. The
scheduling algorithm didn't change.
Fixes #1276
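A sketch of the state decision as described; the history map and event shapes are assumptions for illustration, not the scheduler's actual code:
```
-module(job_state_sketch).
-export([job_state/1]).

%% Derive the externally visible state from the most recent event in a
%% job's history rather than only from consecutive error counts.
job_state(#{history := [{crashed, Reason, _When} | _]}) ->
    {crashing, Reason};
job_state(#{history := [{started, _When} | _]}) ->
    running;
job_state(_) ->
    pending.
```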
|
|\
| |
| | |
call commit_data where needed
|
|/
|
|
| |
Regression since introduction of PSE
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
In the replicator, after a client auth plugin updates the headers it could also
update its private context. Make sure to pass the updated httpdb record along
to the response processing code.
For example, the session plugin updates the epoch number in its context, and it
needs that epoch number later in response processing to decide whether to
refresh the cookie or not.
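A sketch of the threading described above; every function name below is a hypothetical stand-in, not the real couch_replicator_httpc API:
```
-module(httpdb_threading_sketch).
-export([send_req/2]).

send_req(HttpDb0, Params) ->
    %% the auth plugin may return an updated record (new private context)
    {Headers, HttpDb} = auth_update_headers(HttpDb0),
    Resp = do_send(HttpDb, Headers, Params),
    %% pass HttpDb, not HttpDb0, so response handling sees the new context
    handle_response(Resp, HttpDb, Params).

%% trivial stubs so the sketch compiles
auth_update_headers(HttpDb) -> {[], HttpDb}.
do_send(_HttpDb, _Headers, _Params) -> {ok, 200, [], <<>>}.
handle_response(Resp, _HttpDb, _Params) -> Resp.
```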
|
|
|
|
|
|
|
|
| |
The attachment receiver process is started with a plain spawn. If the middleman
process dies, the receiver would hang forever waiting in a receive. After a long
enough time, quite a few of these receiver processes could accumulate on a server.
Fixes #1264
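One way to avoid such a hang, sketched below; the message shapes are assumptions and not couch's actual attachment protocol:
```
-module(att_receiver_sketch).
-export([start_link/1]).

%% Monitor the middleman so the receiver exits when it dies instead of
%% blocking in receive forever.
start_link(MiddlemanPid) ->
    spawn_link(fun() ->
        MRef = erlang:monitor(process, MiddlemanPid),
        loop(MiddlemanPid, MRef)
    end).

loop(MiddlemanPid, MRef) ->
    receive
        {chunk, Data} ->
            io:format("received ~p bytes~n", [byte_size(Data)]),
            loop(MiddlemanPid, MRef);
        done ->
            ok;
        {'DOWN', MRef, process, MiddlemanPid, _Reason} ->
            ok                          % middleman died; don't hang around
    end.
```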
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The key tree module is a good candidate for property tests as it mostly deals
with manipulating a single data structure and its functions are referentially
transparent, that is, they don't have many side effects such as IO.
The test consists of two main parts: generators and properties.
Generators produce random input, for example revision trees, and properties
check that certain invariants hold, for example that after stemming all the
leaves are still present in the revtree.
To run the test:
make eunit apps=couch suites=couch_key_tree_prop_tests
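The shape of such a test, sketched in PropEr syntax for illustration; the actual couch_key_tree_prop_tests suite may use a different framework and much richer generators:
```
-module(key_tree_prop_sketch).
-include_lib("proper/include/proper.hrl").

%% Generator: a forest of depth-1 revision trees with unique keys.
gen_revtree() ->
    ?LET(Keys, non_empty(list(binary())),
         [{1, {Key, leaf, []}} || Key <- lists:usort(Keys)]).

%% Property: stemming with a generous depth limit keeps every leaf.
prop_stem_keeps_leaves() ->
    ?FORALL(Tree, gen_revtree(),
        leaf_keys(couch_key_tree:stem(Tree, 1000)) =:= leaf_keys(Tree)).

leaf_keys(Tree) ->
    lists:sort([Key || {_Val, {_Pos, [Key | _]}} <- couch_key_tree:get_all_leafs(Tree)]).
```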
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Dialyzer run discovered:
```
Unknown function couch_replicator_httpd_utils:validate_rep_props/1
```
Indeed, the function should be
```
couch_replicator_httpd_util:validate_rep_props/1
```
|
|
|
| |
The deflate_N value is a clearer description and makes it obvious that only N
should be replaced; the same description is used in the documentation.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This bug prevents the proper resumption of compactions that died during
the meta copy phase. The issue is that we were setting the update_seq
but not copying over the id and seq tree states. Thus when compaction
resumed from the bad files we'd end up skipping the part where we copy
docs over and then think everything was finished, completely
clearing a database of its contents.
Luckily this isn't released code and as such should have fairly minimal
impact beyond those who might be running off master.
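A simplified illustration of why that matters; the header shape and the resume check below are my assumptions, not the actual couch_bt_engine_compactor code:
```
-module(compact_resume_sketch).
-export([should_copy_docs/2]).

%% If the compact file's header already reports the source's update_seq,
%% a resumed compaction skips copying docs; without the id/seq tree
%% states having been copied too, the result is an empty database.
should_copy_docs(SourceHeader, CompactHeader) ->
    maps:get(update_seq, CompactHeader, 0) < maps:get(update_seq, SourceHeader, 0).
```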
|
| |
|
|
|
|
|
|
| |
This was a latent bad merge that failed to remove a duplicate receive
statement. This ended up discarding the monitor's 'DOWN' message, which
leads to an infinite loop in couch_os_process:killer/1.
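An illustration of the bug pattern rather than the actual module: a stray duplicate receive consumes the 'DOWN' message, so the real handler never matches and the caller loops forever. The corrected shape is a single receive:
```
-module(down_receive_sketch).
-export([wait_for/1]).

%% Correct version: one receive that handles the monitor message.
wait_for(Pid) ->
    MRef = erlang:monitor(process, Pid),
    receive
        {'DOWN', MRef, process, Pid, _Reason} ->
            ok  % with a duplicated receive earlier, this clause never ran
    end.
```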
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Closes #1238
1. log errors from waitForSuccess
2. log errors in testFun()
3. spinloop replaces arbitrary wait timeout
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix binary optimization warning
* Use proper config delete in couch_peruser_test
* Fix weird spacing
* Use test_util's wait in tests instead of a custom one
* Remove obsolete constant
* Make get_security wait for the proper security object (see the sketch below)
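Roughly how such a wait is used; this is my recollection of test_util's polling helper, in which the fun returns `wait` to retry, so verify the exact signature before relying on it:
```
-module(wait_security_sketch).
-export([wait_for_security/1]).

%% Poll until the security object has propagated to the shards.
wait_for_security(DbName) ->
    test_util:wait(fun() ->
        case fabric:get_security(DbName) of
            {[]} -> wait;           % not there yet, poll again
            SecObj -> SecObj        % any non-`wait` value ends the loop
        end
    end).
```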
|
|\
| |
| | |
Various top-level directory cleanups
|
|/
|
|
|
|
|
| |
* Replace list of committers with a link to the ASF page showing the committer list
* Move introspect escript to build-aux/, update Makefiles to match
* Remove unmaintained Vagrantfile; if a new maintainer steps up we can revisit
* Remove obsolete license.skip file, which was used for auto-header insertion
|
|\
| |
| | |
Allow couch_os_daemons to live in directories with spaces
|
| |\
| |/
|/| |
|
|\ \
| | |
| | | |
Import couchdb-setup application
|
| | | |
|
| |\ \
|/ / / |
|
| |\ \ |
|
| |/ /
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Single node setups want an n=1 setting, but that is the only
time the number of nodes and the number of replicas are linked.
In larger clusters, the values should not be the same. This
patch ensures that for clusters of more than 3 nodes, we do not
have to tell the users to set node_count to 3 in the _cluster_setup
API.
More context for this in https://issues.apache.org/jira/browse/COUCHDB-2594
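A tiny sketch of the rule this implies, as a hypothetical helper rather than the actual setup code: n follows the node count only up to the default of 3.
```
-module(cluster_n_sketch).
-export([n_for_cluster/1]).

%% Single-node setups get n=1; anything with 3 or more nodes keeps the
%% default n=3 rather than tracking the node count.
n_for_cluster(NodeCount) when is_integer(NodeCount), NodeCount > 0 ->
    erlang:min(NodeCount, 3).
```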
|
| |\ \ |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| | | | |
|
| |/ /
| | |
| | |
| | | |
Addresses apache/couchdb:593
|
| |\ \
| | | |
| | | |
| | | |
| | | | |
* asf/salt-distribution:
fix cluster setup: use same admin pq salt on all nodes
|
| | | | |
|
| |\ \ \
| | | | |
| | | | |
| | | | |
| | | | | |
* adrienverge/COUCHDB-3119:
add_node: Don't fail if node name != "couchdb" or "node1"
|
| | | | |
| | | | | |
Adding nodes to a cluster fails if the node name (the `name` part of
`name@hostname` in vm.args) is different from "couchdb".
The code currently infers this name from the port: "node1" if 15984,
"node2" if 25984, "node3" if 35984, and "couchdb" otherwise. No other
possibilities are handled.
This is not suited for a production set-up, where multiple servers could
have different names.
This patch fixes this problem by adding an optional "name" option to the
"add_node" command:
```
POST /_cluster_setup
{
    "action": "add_node",
    "username": "root",
    "password": "******",
    "host": "production-server.com",
    "port": 5984,
    "name": "node5"
}
```
This fixes: COUCHDB-3119
|
| | | | | |
|
| |\ \ \ \
| | |_|/ /
| |/| | |
| | | | |
| | | | |
| | | | |
| | | | | |
* robertkowalski/2594-2598-number-of-nodes:
fix wording
use config:set_integer/3
require nodecount on setup
|
| | | | | |
|
| | | | | |
|