Previously there was a race in reporting the source update sequence between the workers and the changes readers: each one used its own incrementing timestamp sequence.
In some cases that led to pending changes being stuck. For example, if the changes reader reported the highest sequence with timestamp 10000, and workers later reported it with sequences 5000, 5001, 5002, then all those reports would be ignored and users would see an always-lagging pending changes value reported with timestamp 10000.
The fix is to thread the last_sequence update through the changes queue to the changes manager, so only its timestamp sequence is used. This removes the race condition.
Fix compaction daemon tests
The view compacting process can be just a bit slow to exit after the swap_compacted call. This leads to a test failure, because the compacting process is still holding the db monitor.
With meck unload set during per-suite teardown, all the mocked modules crash with a "not_mocked" error on the attempt to stop couch in the final cleanup. This fix moves loading and unloading of the meck modules into the global setup and cleanup.
Adds X-Frame-Options support to help protect against clickjacking. X-Frame-Options is configurable via the config and allows for DENY, SAMEORIGIN, and ALLOW-FROM.
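A sketch of what the configuration might look like; the section and key names below are illustrative assumptions, so check the shipped default.ini for the exact spelling:

```ini
; illustrative section/key names, not guaranteed to match default.ini
[x_frame_options]
enabled = true
; choose one policy: DENY, SAMEORIGIN, or ALLOW-FROM <origin>
same_origin = true
```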
COUCHDB-3358
When indexing a set of fields for text search, we also create a special field called $fieldnames. It contains values for all the fields that need to be indexed. In order to do that, we need a unique list of the form [[<<"$fieldnames">>, Name, []] | Rest]. The old code would add an element to the list and then check for membership via lists:member/2. This is inefficient: some documents contain a large number of fields, so we now use gb_sets to create a unique set of fields and then extract the field names.
COUCHDB-3358
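The gb_sets approach can be sketched as follows; the module, function name, and input shape are illustrative, not the actual mango indexing code:

```erlang
-module(fieldnames_sketch).
-export([unique_field_names/1]).

%% Build a unique, sorted list of field names from {Name, Value} pairs.
%% gb_sets:add/2 costs O(log N) per insert, whereas the old approach
%% paid an O(N) lists:member/2 check for every field.
unique_field_names(Fields) ->
    Set = lists:foldl(
        fun({Name, _Value}, Acc) -> gb_sets:add(Name, Acc) end,
        gb_sets:new(),
        Fields),
    gb_sets:to_list(Set).
```

For example, `unique_field_names([{<<"age">>, 1}, {<<"name">>, 2}, {<<"age">>, 3}])` yields `[<<"age">>, <<"name">>]`.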
Wait for listener's exit during restart test
To test a config listener's restart we delete the event handler in config_event and then immediately check the event manager. This creates a race: we can be slightly early and catch the old handler yet to be removed, or slightly late and get the new handler that has already been installed. This patch addresses this by waiting for the old listener to quit and then waiting for a new handler to be installed.
Fix race in couchdb_views_tests
There is a race condition in the `restore_backup_db_file` function between couch_server's eviction of an old db updater and a test querying a view on the restored db file. The query in the test can get the old record for the updater and then crash with a `noproc` exception. This change makes `restore_backup_db_file` wait until the new db updater has started.
Currently we return a 500 and something like
{"error":"{not_found,missing}","reason":"{1,<<\"000\">>}"}
when an attempt is made to put an attachment document with a non-existent revision.
This changes the behavior to return a 409 and
{"error":"not_found","reason":"missing_rev"}
Pass error through (usually timeout)
|
|/ / |
|
| |
| |
| |
| | |
Issue #551
Logging is based on an environment variable:
`COUCHDB_IO_LOG_DIR`
If set, logs will go to that directory.
Logs are per `couch_os_process` Erlang process. There are 3 files saved for
each process:
```
<unixtimestamp>_<erlangpid>.in.log : Input, data coming from the process
<unixtimestamp>_<erlangpid>.out.log : Output, data going to the process
<unixtimestamp>_<erlangpid>.meta : Error reason
```
Log files are saved as named (visible) files only if an error occurs. If there is no error, disk space will still be used as long as the process is alive. But as soon as it exits, the file will be unlinked and the space reclaimed.
Issue: #551
The timeout=1 (1ms) parameter would sometimes trigger extra newlines to be included in the response body. The use of `binary:split/2` would then return different portions of the body depending on timing in the cluster. This change adds a helper function to split out all newlines in the response and then return the last non-empty line.
This also removes introspection of the clustered update sequence, since this is an HTTP API behavior test and those values are defined as opaque.
COUCHDB-3415
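The helper described above might look roughly like this; the function name is hypothetical:

```erlang
%% Split a response body on newlines and return the last non-empty line.
last_nonempty_line(Body) when is_binary(Body) ->
    Lines = binary:split(Body, <<"\n">>, [global]),
    hd(lists:reverse([L || L <- Lines, L =/= <<>>])).
```

Given `<<"5-abc\n\n">>` this returns `<<"5-abc">>` regardless of how many trailing newlines the cluster inserted.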
COUCHDB-3360/FB 85485
Even if the cleanup fails, use `kill` to avoid failing the next set of tests.
Issue: #571
As it turns out, the original change in COUCHDB-3298 ends up hurting disk usage when a view emits large amounts of data (i.e., more than half of the btree chunk size). The cause is that instead of writing single-element nodes it would prefer to write kv nodes with three elements. While normally we might prefer this in memory, it turns out that with our append-only storage this causes significantly more garbage on disk.
We can show this with a few trivial examples. Imagine we write KVs a through f. The following two patterns show the nodes as we write each new kv.
Before 3298:
[]
[a]
[a, b]
[a, b]', [c]
[a, b]', [c, d]
[a, b]', [c, d]', [e]
[a, b]', [c, d]', [e, f]
After 3298:
[]
[a]
[a, b]
[a, b, c]
[a, b]', [c, d]
[a, b]', [c, d, e]
[a, b]', [c, d]', [e, f]
The thing to realize here is which of these nodes end up as garbage. In the first example we end up with [a], [a, b], [c], [c, d], and [e] as orphaned nodes, whereas in the second case we end up with [a], [a, b], [a, b, c], [c, d], and [c, d, e] as orphaned nodes. A quick aside: the reason that [a, b] and [c, d] are orphaned is due to how a btree update works. For instance, when adding c, we read [a, b] into memory, append c, and then during our node write we call chunkify, which gives us back [a, b], [c], leading us to write [a, b] a second time.
The main benefit of this patch is realizing when it's possible to reuse a node that already exists on disk. It achieves this by looking at the list of key/values when writing new nodes and comparing it to the old list of key/values for the node read from disk. If the old list exists unchanged in the new list, we can just reuse the old node. Node reuse is limited to cases where the old node is larger than 50% of the chunk threshold, to maintain the B+Tree properties.
The disk usage improvements this gives can be quite dramatic. In the case above, when we have ordered keys with large values (> 50% of the btree chunk size) we find upwards of 50% less disk usage. Random keys benefit as well, though to a lesser extent depending on disk size (as they will often be in the middle of an existing node, which prevents our optimization).
COUCHDB-3298
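The reuse decision can be sketched as below; `maybe_reuse_node/4` and its inputs are hypothetical simplifications, not the real couch_btree code:

```erlang
%% Reuse the old on-disk node only if its kv list appears unchanged at
%% the front of the new list, and the old node exceeds 50% of the chunk
%% threshold so the B+Tree stays balanced.
maybe_reuse_node(OldKVs, NewKVs, OldNodeSize, ChunkThreshold) ->
    case lists:prefix(OldKVs, NewKVs) andalso
         OldNodeSize > ChunkThreshold div 2 of
        true  -> {reuse, OldKVs};
        false -> rewrite
    end.
```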
This reverts commit 8556adbb98e79a09ec254967ee6acf3bef8d1fb6.
Previously idle dbs, especially sys dbs like _replicator, once opened for scanning would stay open forever. In a large cluster with many _replicator shards that can add up to significant overhead, mostly in terms of the number of active processes.
Add a mechanism to close dbs which have an idle db updater. Previously, hibernation was used to limit the memory pressure, however that is often not enough. Some databases are only read periodically, so their updater would time out. To prevent that from happening, keep the last read timestamp in the couch file process dictionary. The idle check then avoids closing dbs which have recently been read from.
(Original idea for using timeouts in gen_server replies belongs to Paul Davis.)
COUCHDB-3323
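The last-read bookkeeping might be sketched like this; the names are illustrative, not the actual couch_file code:

```erlang
%% Stamp the process dictionary on every read.
note_read() ->
    put(last_read_sec, erlang:monotonic_time(second)).

%% The idle checker skips dbs that were read within MaxIdleSec.
is_idle(MaxIdleSec) ->
    case get(last_read_sec) of
        undefined -> true;
        Then -> erlang:monotonic_time(second) - Then > MaxIdleSec
    end.
```

Keeping the timestamp in the process dictionary avoids widening the gen_server state or adding extra messages on the hot read path.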
Avoid using length to detect a non-empty list
|
|/ /
| |
| |
| |
| |
| | |
length(Values) is an O(n) operation, which can get expensive for long lists. Change the code to rely on pattern matching to detect non-empty lists.
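The difference can be illustrated with a minimal before/after sketch; `handle/1` is a placeholder, not a function from this codebase:

```erlang
%% Before: O(n) - length/1 walks the entire list just to test emptiness.
handle_if_nonempty_old(Values) ->
    case length(Values) > 0 of
        true  -> handle(Values);
        false -> ok
    end.

%% After: O(1) - match on the shape of the list instead.
handle_if_nonempty_new([_ | _] = Values) -> handle(Values);
handle_if_nonempty_new([]) -> ok.
```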
Update to latest Jiffy. Not a whole lot new but this removes a huge test
file that caused my source code analyzer to run slowly.
There are two implementations for filtering the _changes feed by doc_ids. One implementation is faster but requires more server resources; the other is cheaper but very slow. This commit allows configuring the threshold at which couch switches to the slower implementation using:
[couchdb]
changes_doc_ids_optimization_threshold = 100
COUCHDB-3425
Fix encoding issues
|
|/ /
| |
| |
| |
| |
| |
| | |
This fixes encoding issues in responses to the following requests:
- PUT: {db}/{design}/_update/{update}/{doc_id}
- PUT: {db}/{doc_id}/{att_id}
There is a race condition in `get_view/4` between acquiring the index's Pid and getting its state, which surfaces when a DDoc with the same signature is rapidly deleted and re-created. This patch addresses this by adding retry logic to get_view_index_state.
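The retry loop might look roughly like this; `acquire_index_state/1` is a hypothetical stand-in for the acquire-Pid-then-get-state sequence:

```erlang
get_view_index_state_retry(_Idx, 0) ->
    error(index_state_unavailable);
get_view_index_state_retry(Idx, Retries) when Retries > 0 ->
    case acquire_index_state(Idx) of
        {ok, State} ->
            {ok, State};
        {error, noproc} ->
            %% The Pid died between lookup and use (ddoc deleted and
            %% re-created); back off briefly and retry.
            timer:sleep(100),
            get_view_index_state_retry(Idx, Retries - 1)
    end.
```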
This test case is failing for the same reason as the failures in #559,
namely a GET on a _show, a PUT to the _show's ddoc to change the _show
function, and a subsequent GET on the same _show that returns a result
that is seemingly outdated. Late ddoc_cache eviction is still the
problem; setting ddoc_cache max_objects to 0 ensures this test always
passes.
Based on the discussion in #559 I am deleting this test as well, direct
on master with approval from @janl and @davisp.
The full discussion is in #559, but here is a summary.
Through instrumentation of ddoc_cache and ets_lru I suspect that this is
caused by cache eviction happening after the second GET. While I didn't
instrument exactly where the GET occurs it's clear that it's fairly
late, certainly after the PUT 201 is returned, and likely after the
subsequent GET actually reads from ddoc_cache.
After applying a change to allow me to completely disable the ddoc_cache
(-ddoc_cache max_objects 0 in vm.args) I ran the test on a loop
overnight, and the test never failed (>1000 executions). Previously the
test would fail every 20-30 executions.
TL;DR: we can't guarantee immediate ddoc_cache eviction on a ddoc
update, even for a single .couch file on a single node. (For obvious
reasons we definitely can't guarantee this in a cluster configuration.)
I will document this as a backwards compatibility change in 2.0 and
forward with a separate checkin to couchdb-documentation.
Thanks to @rnewson @janl and @davisp for helping track this one down!
This checkin also includes an improvement to the output when a JS test
fails a notEquals assertion.
Closes #559
restartServer() is still erroring out sometimes in Travis/Jenkins. This
PR both bumps the timeout to 15s as well as changes the detection
mechanism for restart to look for the uptime in _system to reset to a
low number.
This PR also removes the eclipsed redundant restartServer() definition
in couch_test_runner.js.
Closes #553
Before, when a design doc was updated/deleted, only one couch_index process was notified: the one whose shard contained the design doc. couch_index processes on other shards continued to exist, and indexing activities for those processes were still going on. The patch notifies couch_index processes on all shards.
COUCHDB-3400