Some Elixir test cases don't have an actual module tag. Add tags to
help include or exclude them in CI test runs.

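For reference, an ExUnit module tag looks like the following — a hedged sketch; the module name and the `:couch` tag are assumptions, not necessarily the tags used in CouchDB's suite:

```
# Hedged sketch; module name and :couch tag are assumed for illustration.
defmodule SomeCompactionTest do
  use ExUnit.Case

  # Tags every test in this module, so CI can select or skip the whole
  # module with `mix test --only couch` or `mix test --exclude couch`.
  @moduletag :couch

  test "placeholder" do
    assert true
  end
end
```
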
This fixes a94e693f32672e4613bce0d80d0b9660f85275ea, because a race
condition existed where the 'DOWN' message could be received
before the compactor pid is spawned. Adding a synchronous call to
get the compactor pid guarantees that the couch_db_updater process
has finished handling finish_compaction.

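A minimal sketch of such a synchronous round trip — the function and message names here are assumptions, not the actual CouchDB API:

```
%% Hedged sketch. Because a gen_server handles messages in order, a
%% reply to this call proves the updater has already processed any
%% earlier finish_compaction message.
get_compactor_pid_sync(DbPid) ->
    gen_server:call(DbPid, compactor_pid, infinity).
```
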
We need to call StartFun as it might add headers, etc.

Smoosh monitors the compactor pid to determine when a compaction job
finishes, and uses this for its idea of concurrency. However, this isn't
accurate when a compaction job has to re-spawn to catch up on
intervening changes: the same logical compaction job continues with
another pid, and smoosh is not aware. In such cases, a smoosh channel with
concurrency one can start arbitrarily many additional database compaction jobs.

To solve this problem, we added a check in `start_compact` to see if a
compaction PID already exists for a db. But we need another check, because
that one only applies to shards coming off the queue. So the following can
still occur:

1. Enqueue a bunch of stuff into a channel with concurrency 1
2. Begin the highest priority job, Shard1, in the channel
3. Compaction finishes, discovers the compaction file is behind the main file
4. The smoosh-monitored PID for Shard1 exits, and a new one starts to finish the job
5. Smoosh receives the 'DOWN' message and begins the next highest priority job,
   Shard2
6. Channel concurrency is now 2, not 1

This change adds another check to the 'DOWN' handler so that it checks for
that specific shard. If a compaction PID exists, it means a new process
was spawned, so we monitor that one and add the job back to the queue. The
length of the queue does not change, and therefore we won't spawn new
compaction jobs.

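A hedged sketch of the 'DOWN' clause described above — the helper names (`remove_job`, `start_next_job`, `add_job`) are assumptions, not smoosh's actual code:

```
%% Hedged sketch. On 'DOWN', re-check the shard: if a compactor pid
%% still exists, the same logical job was re-spawned under a new pid,
%% so monitor the new pid and keep the slot occupied instead of
%% starting the next queued job.
handle_info({'DOWN', Ref, process, _Pid, _Reason}, State0) ->
    {DbName, State1} = remove_job(Ref, State0),        % assumed helper
    {ok, Db} = couch_db:open_int(DbName, []),
    case couch_db:get_compactor_pid(Db) of
        nil ->
            %% Compaction really finished; concurrency slot is free.
            {noreply, start_next_job(State1)};         % assumed helper
        NewPid when is_pid(NewPid) ->
            NewRef = erlang:monitor(process, NewPid),
            {noreply, add_job(NewRef, DbName, State1)} % assumed helper
    end.
```
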
Previously an error was thrown, which prevented emitting _scheduler/docs
responses. Instead of throwing, return `null` if the URL cannot be
parsed.

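A hedged sketch of the shape of the fix (the function name is assumed):

```
%% Hedged sketch. Map unparsable URLs to null instead of letting the
%% error abort the whole _scheduler/docs response.
strip_url_creds(Url) ->
    try
        iolist_to_binary(couch_util:url_strip_password(Url))
    catch _:_ ->
        null
    end.
```
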
Add option to delay responses until the end

When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.

The config value can be changed at runtime without impacting any
in-flight responses.

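A hedged sketch of how such a flag might be consulted per request — reading it on every request is what makes it changeable at runtime without touching in-flight responses. The section/key and helper names are assumptions:

```
%% Hedged sketch; "chttpd"/"buffer_response" and both helpers are
%% assumed names. A runtime config change only affects responses
%% started after the change.
respond(Req, GenFun) ->
    case config:get_boolean("chttpd", "buffer_response", false) of
        true  -> send_buffered(Req, GenFun);  % run GenFun fully, send once
        false -> send_chunked(Req, GenFun)    % stream chunks as generated
    end.
```
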
Improve jwtf keystore error handling

Allow search index cleanup to continue even if there is an invalid design document

In situations where a customer-created design document for a search
index is not valid, the _search_cleanup endpoint stops cleaning up,
which leaves some search indexes orphaned. This change allows cleanup
to continue even when an invalid search design document is present.

fix bookmark passing with text indexes

Previously, we passed the unpacked version of the bookmark, with
the cursor inside the options field. This worked fine for _find because
we didn't need to return it to the user. But for _explain, we return
the value back as an unpacked tuple instead of a string, and jiffy:encode/1
complains. Now we correctly extract the bookmark out of options, unpack
it, and pass it separately in its own field. This way options
retains its original string form for the user, so that invalid_ejson
is not thrown.

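A hedged sketch of that separation — the record, field, and helper names are assumptions about mango's internals:

```
-record(cursor, {db, index, selector, opts, bookmark}).

%% Hedged sketch. Keep the packed string form in Opts (so _explain can
%% echo it back as valid JSON) and carry the unpacked tuple in its own
%% field for internal use.
init_cursor(Db, Index, Selector, Opts) ->
    Packed = couch_util:get_value(bookmark, Opts, nil),
    #cursor{db = Db, index = Index, selector = Selector,
            opts = Opts,
            bookmark = unpack_bookmark(Db, Packed)}. % assumed helper
```
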
bypass partition query limit for mango

When partition_query_limit is set for couch_mrview, it limits how many
docs can be scanned when executing partitioned queries. But it also
limits mango's internal doc scans, which can leave documents unscanned
that are needed to fulfill a query. This fixes:
https://github.com/apache/couchdb/issues/2795

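A hedged sketch of such a bypass — the config names, the `from_mango` marker, and the include path are assumptions:

```
-include_lib("couch_mrview/include/couch_mrview.hrl").

%% Hedged sketch. Only clamp the limit for plain partitioned view
%% queries; when the caller is mango, leave the scan budget alone,
%% since mango enforces its own limit over the matching docs.
apply_partition_limit(#mrargs{extra = Extra} = Args) ->
    Max = config:get_integer("query_server_config",
                             "partition_query_limit", 2000),
    case couch_util:get_value(from_mango, Extra, false) of
        true  -> Args;
        false -> Args#mrargs{limit = min(Args#mrargs.limit, Max)}
    end.
```
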
If we don't handle it, an error is thrown when trying to encode the full URL
string, for example:
```
badarg,[
{mochiweb_util,quote_plus,2,[{file,"src/mochiweb_util.erl"},{line,192}]},
{couch_replicator_httpc,query_args_to_string,2,[{file,"src/couch_replicator_httpc.erl"},{line,421}]},
{couch_replicator_httpc,full_url,2,[{file,"src/couch_replicator_httpc.erl"},{line,413}]},
{couch_replicator_api_wrap,open_doc_revs,6,[{file,"src/couch_replicator_api_wrap.erl"},{line,255}]}
]
```
This is similar to what we did for open_revs encoding: https://github.com/apache/couchdb/commit/a2d0c4290dde2015e5fb6184696fec3f89c81a4b

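A hedged sketch of defensive stringification before quoting (the real replicator code differs):

```
%% Hedged sketch. mochiweb_util:quote_plus/1 badargs on terms it does
%% not recognize, so normalize every query-arg value to a flat string
%% before quoting and joining.
query_args_to_string(Args) ->
    Pairs = [couch_util:to_list(K) ++ "=" ++
                 mochiweb_util:quote_plus(couch_util:to_list(V))
             || {K, V} <- Args],
    string:join(Pairs, "&").
```
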
fixup: Build couch_js for Red Hat Linux

When building couch_js on RHEL, an error occurs: "undefined
reference to symbol '_ZTVN10__cxxabiv117__class_type_infoE@@CXXABI_1.3'".
This commit adjusts the libraries we link against to address the issue.

Don't crash couch_index_server if the db isn't known yet

If a ddoc is added immediately after database creation (as happens with
_users and _replicator when CouchDB is used in a multi-tenant fashion),
we can crash couch_index_server in handle_db_event, because
mem3_shards:local throws an error.

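A hedged sketch of the guard (the event shape and the `reset_index` helper are assumed):

```
%% Hedged sketch. If the db isn't known to mem3 yet, treat the event
%% as a no-op instead of letting the throw take the server down.
handle_db_event(DbName, created, St) ->
    Shards = try mem3_shards:local(DbName)
             catch _:_ -> [] end,
    lists:foreach(fun(Shard) -> reset_index(Shard) end, Shards),
    {ok, St}.
```
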
Unlink index pid and swallow EXIT message if present

This should prevent unexpected exit messages from arriving and crashing
couch_index_server.

Patch suggested by davisp.

Closes #3061.

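This is the standard OTP idiom for detaching from a linked process when the caller traps exits; a minimal sketch:

```
%% Unlink, then flush an 'EXIT' that may already have been delivered
%% to the mailbox; 'after 0' makes the flush non-blocking.
unlink_and_flush(Pid) ->
    unlink(Pid),
    receive
        {'EXIT', Pid, _Reason} -> ok
    after 0 ->
        ok
    end.
```
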
Resolves #2858

Before the bypass existed, ioq would call `gen_server:call()`
on behalf of the calling module, with the queueing logic in between.
Commit e641a740 introduced a way to bypass any queues, but the
delegated `gen_server:call()` there was added without a timeout
parameter, leading to the default timeout of 5000ms.

A problem manifests when operations sent through ioq take longer
than that 5000ms timeout. In practice, these operations should be
very rare, and this timeout should be a help on overloaded systems.
However, one sure-fire way to cause an issue on an otherwise idle
machine is to raise max_document_size and store unreasonably large
documents (think 50MB+ of raw JSON). Not that we recommend this, but
folks have run this fine on 2.x before the ioq changes, and it isn't
too hard to support here.

By adding an `infinity` timeout to the delegated `gen_server:call()`
in the queue bypass case, this no longer applies.

Thanks to Joan @woahli Touzet, Bob @rnewson Newson and
Paul @davisp Davis for helping to track this down.
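
A hedged sketch of the one-line nature of the fix (the surrounding function and argument names are assumed):

```
%% gen_server:call/2 defaults to a 5000 ms timeout; the bypass path
%% now passes infinity explicitly, so long-running IO requests are
%% not killed by the proxying call.
bypass(Server, Msg) ->
    gen_server:call(Server, Msg, infinity).
```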