| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
| |
This uses the new `couch_util:set_mqd_off_heap/0` function to set
message queues to off_heap for some of our critical processes that
receive significant message volume.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Erlang VMs starting with version 19.0 have a new process_flag to
store messages off the process heap. This is extremely useful for
processes that can have huge numbers of messages in their mailbox. For
CouchDB this is most often observed when couch_server backs up with a
large message queue, which wedges the entire node.
This utility function sets a process's message_queue_data flag to
off_heap in a way that doesn't break builds of CouchDB on older Erlang
VMs, while automatically enabling the flag on VMs that do support it.
|
|
|
|
|
|
|
|
|
|
| |
The `should_merge_tree_to_itself` and `should_merge_tree_of_odd_length`
tests were both invalid, as merging does not support anything other
than a linear path. This failure was covered up by the fact that
the stem operation will detect and mask any errors from a failed
merge.
Co-Authored-By: Nick Vatamaniuc <vatamane@apache.org>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is two related optimizations for stemming revisions. The first
optimization rewrites the stemming algorithm from an O(N^2) to an O(N)
operation: a depth-first search through the tree tracks which revisions
are within `revs_limit` revs of a leaf and drops any revision that
exceeds that limit.
The second optimization avoids calling stemming more often than
necessary, by switching from `merge/3` to `merge/2` and calling
`stem/2` only when needed.
Co-Authored-By: Nick Vatamaniuc <vatamane@apache.org>
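The single-pass stemming idea above can be sketched as follows. This is an illustrative Python sketch of the algorithm, not CouchDB's actual `couch_key_tree` Erlang code, and the `(rev, children)` tuple encoding is an assumption made for the example:

```python
def stem(tree, revs_limit):
    """Return the set of revisions kept after stemming.

    tree is a (rev, children) nested tuple. A revision is kept when its
    distance to the nearest descendant leaf is below revs_limit, i.e. it
    lies within revs_limit revs of some leaf. A single post-order
    depth-first pass computes this, making the operation O(N).
    """
    keep = set()

    def dfs(node):
        rev, children = node
        # distance from this node down to its nearest descendant leaf
        depth = 0 if not children else 1 + min(dfs(c) for c in children)
        if depth < revs_limit:
            keep.add(rev)
        return depth

    dfs(tree)
    return keep
```

For a linear path 1-2-3-4-5 with a revs_limit of 2, only revisions 4 and 5 survive, matching the per-leaf "last `revs_limit` revisions" behavior.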
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
It was brought to my attention that the active size looked a bit funny
occasionally, as it could be larger than the file size. Given that active
size is defined to be the number of bytes used in a view file, it must be
greater than or equal to zero as well as less than or equal to the
file size. Thus the great hunt had begun!
While I'd love more than anything to regale you, Dear Reader, with the
tales, the joys, the highs, and yes, even the lows of this search, it
turns out that I cannot. I must not. For there were none!
Turns out this was a trivial bug where we were re-using the ExternalSize
calculation instead of applying `couch_btree:size/1` to all of the
btrees in the `#mrview` records. A simple bug comes with a correspondingly
simple fix.
I also noticed that the info tests were broken and not being run, so I
spent a few minutes cleaning those up and fixing their assumptions.
|
|\
| |
| | |
Add compiler command-line options, introduce `ERL_OPTS`, and make `bin_opt_info` optional.
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| | |
The `debug_info` flag is auto-propagated from rebar,
so there is no need to set it manually; doing so would
end up with a double entry in opts.
|
| |
| |
| |
| |
| |
| | |
Allow compiling a specified list of apps.
Fix eunit's OPTS regexp on systems where sed
lacks GNU extensions (e.g. macOS).
|
|/
|
|
| |
This reverts commit b58021e6d9751fa36a4164974664e86248d444fd.
|
| |
|
| |
|
| |
|
|
|
|
| |
What a kooky idea, but I guess we're committed to it.
|
|
|
|
|
|
| |
Rather than packing the stats into an ejson object on each write
we use a more compact tuple format on disk and then turn it into
ejson at query time.
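A minimal Python sketch of the idea follows. The field names match the builtin `_stats` reduce output, but the `pack`/`unpack` helpers are hypothetical illustrations, not the actual CouchDB functions:

```python
# On disk: a compact positional tuple written on each update.
# Query time: the keyed, ejson-style object clients expect.
FIELDS = ("sum", "count", "min", "max", "sumsqr")

def pack(stats):
    # store only the values, in a fixed order
    return tuple(stats[f] for f in FIELDS)

def unpack(packed):
    # rebuild the keyed object only when the view is queried
    return dict(zip(FIELDS, packed))
```

The tuple avoids repeating the key strings in every stored reduction, at the cost of a fixed field order that both sides must agree on.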
|
|
|
|
|
| |
If we're in a cluster, finalization runs at the coordinator. Otherwise,
couch_mrview can run it directly to simplify things for consumers.
|
|
|
|
|
|
| |
Releases and dialyzer checks need app dependencies to work properly
Issue: #1346
|
|
|
|
|
|
|
|
|
|
|
| |
This introduces a new builtin reduce function, which uses a HyperLogLog
algorithm to estimate the number of distinct keys in the view index. The
precision is currently fixed to 2^11 observables and therefore uses
approximately 1.5 KB of memory.
It also introduces a finalize step which can be used to improve the
efficiency of other builtin reduce functions going forward.
Closes COUCHDB-2971
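The estimator can be sketched in Python as below. This is a generic HyperLogLog with 2^11 registers, hashing with md5 for determinism; it only illustrates the algorithm and is not the implementation this commit adds:

```python
import hashlib
import math

P = 11            # precision bits -> 2**11 = 2048 registers (~1.5 KB)
M = 1 << P

def _hash64(key):
    # deterministic 64-bit hash of the key
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

def add(registers, key):
    h = _hash64(key)
    idx = h >> (64 - P)                  # top P bits pick a register
    w = h & ((1 << (64 - P)) - 1)        # remaining 53 bits
    # rank = number of leading zeros in w (within 53 bits) plus one
    rank = (64 - P) - w.bit_length() + 1 if w else (64 - P) + 1
    registers[idx] = max(registers[idx], rank)

def estimate(registers):
    alpha = 0.7213 / (1 + 1.079 / M)
    e = alpha * M * M / sum(2.0 ** -r for r in registers)
    zeros = registers.count(0)
    if e <= 2.5 * M and zeros:
        # small-range correction: fall back to linear counting
        e = M * math.log(M / zeros)
    return e
```

Because only the maximum rank per register is kept, re-adding a key never changes the estimate, which is what makes this usable as a reduce over a view index.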
|
|
|
|
|
|
| |
* Newly provisioned images
* Add ubuntu bionic (18.04) support
* Add our own couch-js/couch-libmozjs185 pkgs to rolling repo build
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Mango requires that a JSON index can only be used to fulfil
a query if the "selector" field covers all of the fields in the
index. For example, if an index contains ["a", "b"] but the
selector only requires field ["a"] to exist in the matching documents,
the index would not be valid for the query (because it only includes
documents containing both "a" and "b").
There is a special case here around built-in fields; _id and _rev
specifically, because they are guaranteed to exist in any matching
documents.
If a user declares an index ["a", "_id"], we can safely exclude "_id"
from the index coverage check, so a selector of {"a": "foo"} should be
able to use this index.
Prior to this commit, a user would have to alter the selector so that
it covered the "_id" field, e.g. {"a": "foo",
"_id": {"$exists": true}}. The commit removes the need to explicitly
cover _id or _rev fields in the query selector.
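The coverage check with the builtin-field exemption can be sketched as follows; the function name and data shapes are hypothetical illustrations, not Mango's actual code:

```python
BUILTIN_FIELDS = {"_id", "_rev"}   # guaranteed to exist in any matching doc

def index_covered(index_fields, selector_fields):
    """A JSON index is usable for a query only when the selector covers
    every indexed field; builtin fields are always present in matching
    documents, so they are excluded from the check."""
    required = set(index_fields) - BUILTIN_FIELDS
    return required <= set(selector_fields)
```

With this rule, an index on ["a", "_id"] is usable for the selector {"a": "foo"}, while an index on ["a", "b"] still is not.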
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
This was introduced in:
https://github.com/apache/couchdb/commit/083239353e919e897b97e8a96ee07cb42ca4eccd
Issue #1286
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The changes listener started in the setup of the mem3_shards test
was crashing when it tried to register on an unstarted couch_event
server, so the test was either fast enough to do its assertions
before that, or failed on a dead listener process.
This change removes the dependency on mocking and uses
test_util's standard start and stop of couch. The module start was
moved into the test body to avoid masking a potential failure
in setup.
Also, the mem3_sync_security_test and mem3_util_test modules have
been modified to avoid setup and teardown side effects.
|
| |
|
|\
| |
| | |
Adapt fake_db to PSE changes
|
|/
|
|
|
|
|
|
|
|
| |
With db headers moved into the engine's state, any fake_db call
that tries to set up sequences for tests (e.g. in mem3_shards)
crashes with a context setup failure.
It's not trivial to compose a proper `engine` field outside of the
couch app, so instead this fix makes fake_db set the engine
transparently, unless one was provided in the payload.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Replication jobs are backed off based on the number of consecutive crashes;
that is, we count the number of crashes in a row and then penalize jobs with an
exponential wait based on that number. After a job runs without crashing for 2
minutes, we consider it healthy and stop going back through its history looking
for crashes.
Previously a job's state was set to `crashing` only if there were
consecutive errors. So a job could have run for 3 minutes, then the user deletes
the source database, and the job crashes and stops. Until it ran again its state
would have been shown as `pending`. For internal accounting purposes that's
correct, but it is confusing for the user because the last event in the job's
history is a crash.
This commit makes sure that if the last event in a job's history is a crash, the
user will see the job as `crashing` with the respective crash reason. The
scheduling algorithm didn't change.
Fixes #1276
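The two rules can be sketched in Python; the constants and event names here are illustrative, not the replicator's actual values:

```python
BASE_WAIT = 5                 # seconds; illustrative, not the real value
MAX_WAIT = 8 * 60 * 60        # cap on the penalty, also illustrative

def backoff_wait(consecutive_crashes):
    # exponential penalty based on the number of crashes in a row
    return min(BASE_WAIT * 2 ** consecutive_crashes, MAX_WAIT)

def job_state(history, running):
    # history holds the most recent event first
    if running:
        return "running"
    # after this change: a trailing crash is reported as `crashing`
    # even when the job had previously run long enough to be healthy
    if history and history[0] == "crashed":
        return "crashing"
    return "pending"
```

The state function only changes what is reported to the user; the backoff computation that drives scheduling is untouched.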
|
|\
| |
| | |
call commit_data where needed
|
|/
|
|
| |
Regression since introduction of PSE
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
In the replicator, after a client auth plugin updates headers it could also
update its private context. Make sure to pass the updated httpdb record along
to the response processing code.
For example, the session plugin updates the epoch number in its context, and it
needs the epoch number later in response processing to decide whether to
refresh the cookie or not.
|
|
|
|
|
|
|
|
| |
The attachment receiver process is started with a plain spawn. If the middleman
process dies, the receiver would hang forever waiting in a receive. After a
long enough time, quite a few of these receiver processes could accumulate on a
server.
Fixes #1264
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The key tree module is a good candidate for property tests, as it mostly deals
with manipulating a single data structure and its functions are referentially
transparent, that is, they have few side-effects like IO.
The test consists of two main parts: generators and properties.
Generators generate random input, for example revision trees, and properties
check that certain invariants hold, for example that after stemming all the
leaves are still present in the revtree.
To run the test:
make eunit apps=couch suites=couch_key_tree_prop_tests
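The generator/property split can be illustrated with a small self-checking Python sketch. The real test uses Erlang property-testing machinery over full revision trees; this simplified version only models a linear revision path and uses hypothetical helper names:

```python
import random

def gen_linear_path(rng, max_len=20):
    # generator: a random linear revision path [1, 2, ..., n]
    return list(range(1, rng.randint(1, max_len) + 1))

def stem_path(path, revs_limit):
    # stemming a linear path keeps only the last revs_limit revisions
    return path[-revs_limit:]

def prop_leaf_survives_stemming(trials=200):
    # property: after stemming, the leaf is still present and the
    # result never exceeds revs_limit
    rng = random.Random(42)
    for _ in range(trials):
        path = gen_linear_path(rng)
        limit = rng.randint(1, len(path))
        stemmed = stem_path(path, limit)
        assert stemmed[-1] == path[-1]
        assert len(stemmed) <= limit
    return True
```

Each trial draws a fresh random input from the generator and asserts the invariant, which is the same generate-then-check loop the eunit property test runs.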
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Dialyzer run discovered:
```
Unknown function couch_replicator_httpd_utils:validate_rep_props/1
```
Indeed, the function should be
```
couch_replicator_httpd_util:validate_rep_props/1
```
|
|
|
| |
The `deflate_N` value is a clearer description and makes it obvious that only N should be replaced; the same description is used in the documentation.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
This bug prevented the proper resumption of compactions that died during
the meta copy phase. The issue is that we were setting the update_seq
but not copying over the id and seq tree states. Thus when compaction
resumed from the bad files we'd end up skipping the part where we copy
docs over, and then think everything was finished, completely clearing a
database of its contents.
Luckily this isn't released code, so it should have fairly minimal
impact other than on those who might be running off master.
|
| |
|
|
|
|
|
|
| |
This was a latent bad merge that failed to remove a duplicate receive
statement. This ended up discarding the monitor's 'DOWN' message, which
leads to an infinite loop in couch_os_process:killer/1.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Closes #1238
1. log errors from waitForSuccess
2. log errors in testFun()
3. spinloop replaces arbitrary wait timeout
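The spinloop idea, polling for a condition instead of sleeping for a fixed arbitrary time, can be sketched in Python (the JavaScript tests use their own helper; the names here are hypothetical):

```python
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll condition() until it returns a truthy value and return it;
    raise if the deadline passes instead of guessing a sleep length."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

Compared to an arbitrary `sleep(N)`, this returns as soon as the condition holds and fails loudly with a clear error when it never does.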
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Fix binary optimization warning
* Use proper config delete in couch_peruser_test
* Fix weird spacing
* Use test_util's wait in tests instead of custom one
* Remove obsolete constant
* Make get_security wait for a proper sec object
|
|\
| |
| | |
Various top-level directory cleanups
|