| Commit message (Collapse) | Author | Age | Files | Lines |
|
|
|
|
|
When the compactor finds an old compaction file, from before the state was
upgraded to a proplist, the state will be `Root` from `emsort`, which is a
`{BB, Prev}` tuple, not an integer.
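The two shapes the loader has to accept can be sketched like this (a Python stand-in for illustration; the real code is Erlang, and the function name here is invented):

```python
# Hypothetical sketch (Python; the real code is Erlang). A state read from
# an old compaction file is the raw emsort root, a (BB, Prev) pair, while
# a new-style state is a plain integer, so the loader must accept both.

def classify_state(state):
    """Tell an old-style emsort root apart from a new-style integer."""
    if isinstance(state, int):
        return "new"                     # plain integer position
    if isinstance(state, tuple) and len(state) == 2:
        return "old"                     # emsort Root = (BB, Prev)
    raise ValueError("unrecognized compaction state: %r" % (state,))
```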
Before the bypass existed, ioq would call `gen_server:call()` on behalf of
its calling module, with the queueing logic in between. Commit e641a740
introduced a way to bypass any queues, but the delegated
`gen_server:call()` there was added without a timeout parameter, leading
to the default timeout of 5000ms.
A problem manifests when operations sent through ioq take longer than
that 5000ms timeout. In practice, such operations should be very rare,
and the timeout should help on overloaded systems. However, one sure-fire
way to cause an issue on an otherwise idle machine is to raise
`max_document_size` and store unreasonably large documents (think 50MB+
of raw JSON). Not that we recommend this, but folks have run this fine on
2.x before the ioq changes, and it isn't too hard to support here.
By adding an `infinity` timeout to the delegated `gen_server:call()` in
the queue bypass case, this no longer applies.
Thanks to Joan @wohali Touzet, Bob @rnewson Newson and
Paul @davisp Davis for helping to track this down.
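The timeout behavior can be modeled roughly as follows (a Python stand-in for Erlang's `gen_server:call`; `delegated_call` and its signature are hypothetical). A bounded wait fails on slow I/O, while `None` here plays the role of the `infinity` timeout:

```python
# Rough model (Python; the real code uses Erlang's gen_server:call/2,3).
# A call delegated with the default 5000 ms timeout fails on slow I/O;
# timeout_ms=None models passing `infinity` so the caller waits as long
# as the operation takes.
import concurrent.futures

def delegated_call(fun, timeout_ms=5000):
    """Forward a call; raise if it takes longer than timeout_ms."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fun)
        timeout = None if timeout_ms is None else timeout_ms / 1000.0
        return future.result(timeout=timeout)
```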
Previously, when pending jobs were picked in the `ets:foldl` traversal, both
running and non-running jobs were considered and a large number of running jobs
could displace pending jobs in the accumulator. In the worst case, no crashed
jobs would be restarted during rescheduling.
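The fix can be modeled roughly like this (a Python stand-in for the Erlang `ets:foldl` traversal; the field names are invented): running jobs are skipped outright, so they can never displace pending jobs in the bounded accumulator.

```python
# Rough model of the scheduler fix (Python; the real traversal is an
# Erlang ets:foldl). Running jobs are skipped so they cannot displace
# pending jobs in the bounded accumulator. Field names are invented.

def pick_pending(jobs, max_jobs):
    """Return up to max_jobs jobs that are not currently running."""
    acc = []
    for job in jobs:                     # models the ets:foldl traversal
        if job["running"]:
            continue                     # the fix: ignore running jobs
        acc.append(job)
        if len(acc) == max_jobs:
            break
    return acc
```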
Currently, the result of GET `/_session` reports the `authentication_db`
of the obsolete admin port 5986. This updates it to report the actual db
used for authentication, provided one is configured. Otherwise, it omits
`authentication_db` entirely from the session info.
(cherry picked from commit 1e9d0e3c1828d828bb3e8efdbbbd2e348ff518f2)
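The reporting rule might look like this (a Python sketch; the config section and key names are assumptions for illustration, not the handler's actual code):

```python
# Sketch of the new reporting rule (Python; the real handler is Erlang).
# The config section/key names here are assumptions for illustration.

def session_info(config):
    """Build session info, including authentication_db only if set."""
    info = {"ok": True}
    auth_db = config.get("chttpd_auth", {}).get("authentication_db")
    if auth_db:
        info["authentication_db"] = auth_db   # report the actual db
    return info                               # otherwise omit the key
```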
3.x backports verbump
process
Mozilla did this years ago:
https://hg.mozilla.org/mozilla-central/rev/41d9d32ab5a7
We've seen a crash if DbPartitioned is false and ViewPartitioned is
true, which is obviously nonsense. The effect of the `nocase` is the
termination of the couch_index_server gen_server, which is a serious
amplification of a small (user-initiated) oddity.
* Remove unused string conversion functions
* Set UTF-8 encoding when compiling scripts
* Encode JavaScript strings as UTF-8 for printing
* Check that only strings are passed to print
* Use builtin UTF-8 conversions in http.cpp
* Add tests for couchjs UTF-8 support
* Remove custom UTF-8 conversion functions
We're now using 100% built-in functionality of SpiderMonkey to handle
all UTF-8 conversions.
* Report error messages at global scope
Previously we weren't reporting any uncaught exceptions or compilation
errors. This changes that to print any compilation errors or any
uncaught exceptions with stack traces.
The previous implementation of `couch_error` attempted to call
`String.replace` on the `stack` member string of the thrown exception.
This likely never worked, and while attempting to fix it I was unable to
properly invoke the `String.replace` function. This changes the
implementation to use the builtin stack formatting method instead.
* Modernize sources to minimize changes for 68
These are a handful of changes that modernize various aspects of the
couchjs 60 source files. Behaviorally they're all benign but will
shorten the diff required for adding support for SpiderMonkey 68.
Co-authored-by: Joan Touzet <wohali@apache.org>
safer binary_to_term in mango_json_bookmark
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
Co-authored-by: Jan Lehnardt <jan@apache.org>
Previously, in https://github.com/apache/couchdb/pull/1783, the logic
around how certain operators interact with empty arrays was wrong. We
modify this logic so that both:
{"foo":"bar", "bar":{"$in":[]}}
and
{"foo":"bar", "bar":{"$all":[]}}
return 0 results.
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
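The intended semantics can be captured in a tiny model (Python; not Mango's actual evaluator, and the `matches` helper is invented):

```python
# Tiny model of the corrected semantics (Python; not Mango's actual
# evaluator). An empty `$in` or `$all` argument can never match, so a
# selector containing one returns zero results.

def matches(value, op, arg):
    if op == "$in":
        return value in arg              # arg == [] is always False
    if op == "$all":
        if arg == []:
            return False                 # the fix: empty $all matches nothing
        return isinstance(value, list) and all(x in value for x in arg)
    raise ValueError("unsupported operator: " + op)
```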
/etc/vm.args; also parses name from config. (#2738) (#2809)
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
Co-authored-by: Simon Klassen <6997477+sklassen@users.noreply.github.com>
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
Co-authored-by: Will Holley <willholley@apache.org>
Previously, the sort and copy phases when handling document IDs were not
measured in `_active_tasks`. This adds size tracking to give operators a
way to measure progress during those phases.
I'd like to thank Vitaly for the example in #1006 that showed a clean
way for tracking the size info in `couch_emsort`.
Co-Authored-By: Vitaly Goot <vitaly.goot@gmail.com>
This updates couch_db_updater to use the new multi-IO API functions
(append_terms/pread_terms) in couch_file. This optimization benefits us
by no longer requiring the `couch_emsort:merge/1` step to copy
`#full_doc_info{}` records multiple times, while also not being penalized
by significantly increasing the number of calls through couch_file APIs.
This uses the new couch_file:append_terms/2 function to write all chunks
in a single write call.
These functions allow the caller to append multiple terms or binaries to
a file and receive the file position and size for each individual
element. This is to optimize throughput in situations where we want to
write multiple pieces of independent data.
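A rough Python analogue of the API shape (the real functions are `couch_file:append_terms` and `pread_terms` in Erlang; the serialization and signatures here are purely illustrative):

```python
# Rough analogue of the multi-item API (Python; the real implementation
# is couch_file's append_terms/pread_terms in Erlang, and pickle here
# merely stands in for term serialization). One call appends every
# element and reports a (position, size) pair per element, saving a
# round trip per item.
import pickle

def append_terms(fileobj, terms):
    """Append each term; return a (position, size) pair for each."""
    locs = []
    for term in terms:
        data = pickle.dumps(term)
        pos = fileobj.tell()
        fileobj.write(data)
        locs.append((pos, len(data)))
    return locs

def pread_terms(fileobj, locs):
    """Read the terms back from their (position, size) pairs."""
    terms = []
    for pos, size in locs:
        fileobj.seek(pos)
        terms.append(pickle.loads(fileobj.read(size)))
    return terms
```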
This change adds a new `#comp_st{}` record that is used to pass
compaction state through the various compaction steps. There are zero
changes to the existing compaction logic. This merely sets the stage for
adding our docid copy optimization.
Port reduce_false.js and reduce_builtin.js to Elixir
Move "users_db_security_editable" to the correct location in the ini file
Send correct seq values for filtered changes
Set cookie domain when DELETE'ing
Fix create db options on secondary shard creation