| Commit message | Author | Age | Files | Lines |
It turns out the "application/json" header is added downstream of the
delayed response's StartFun, which is skipped for buffered responses.
This change adds those headers back into the response.
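The idea can be sketched outside of CouchDB: when the code path that normally sets default headers is skipped, merge the defaults back in before sending. A minimal Python sketch; the names `DEFAULT_HEADERS` and `finalize_buffered_headers` are illustrative, not CouchDB's API.

```python
# Hedged sketch: buffered responses skip the step that sets default
# headers, so merge the defaults back in just before the body is sent.
DEFAULT_HEADERS = {"Content-Type": "application/json"}

def finalize_buffered_headers(headers):
    # Defaults go in first, so any explicitly set header still wins.
    merged = dict(DEFAULT_HEADERS)
    merged.update(headers)
    return merged
```

Applying defaults before the user-supplied headers keeps explicit overrides intact.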
Depending on the order of tests, couch_rate:budget/1 can return a batch
size larger than the number of documents. This leaves the test grabbing
a copy of the initial _active_task data that contains `"changes_done": 0`,
which fails the test.
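One way to make such a test deterministic is to clamp the returned budget to the documents actually available. A hedged sketch; `effective_batch_size` is a hypothetical helper, not couch_rate's real API.

```python
def effective_batch_size(budget, doc_count):
    # Clamp the rate limiter's budget so a batch never exceeds the
    # number of documents actually present, regardless of test order.
    return min(budget, doc_count)
```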
Previously there was an error thrown which prevented emitting _scheduler/docs
responses. Instead of throwing an error, return `null` if the URL cannot be
parsed.
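The return-null-instead-of-throw pattern can be sketched in Python (the commit itself is Erlang); `replication_source_host` is an illustrative name, and `urlsplit` stands in for whatever URL parser the real code uses.

```python
from urllib.parse import urlsplit

def replication_source_host(url):
    # Return the host, or None (rendered as JSON null) when the URL
    # cannot be parsed, rather than raising and killing the whole
    # _scheduler/docs response.
    try:
        return urlsplit(url).hostname
    except ValueError:
        return None
```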
Add option to delay responses until the end
When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.
The config value can be changed at runtime without impacting any
in-flight responses.
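The buffered-versus-streamed behavior described above can be sketched like this; the flag name `buffer_response` and the `respond` helper are illustrative, not CouchDB's actual code.

```python
config = {"buffer_response": False}  # hypothetical runtime config flag

def respond(chunks, send):
    # The flag is read once per request, so flipping it at runtime
    # leaves in-flight responses on their original code path.
    if config["buffer_response"]:
        send("".join(chunks))   # one fully generated body
    else:
        for chunk in chunks:
            send(chunk)         # stream chunk by chunk
```

Buffering trades server memory for the guarantee that a client never sees a response cut off mid-stream by a timeout.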
Improve jwtf keystore error handling
Allow search index cleanup to continue even if there is an invalid design document
In some situations where a customer-created design document for a
search index is not valid, the _search_cleanup endpoint stops cleaning
up, leaving some search indexes orphaned. This change allows search
index cleanup to continue even when an invalid design document for
search is present.
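The skip-and-continue behavior can be sketched as a best-effort loop; `index_names` and `delete_index` are hypothetical callables standing in for the real ddoc parsing and index removal.

```python
def cleanup_search_indexes(ddocs, index_names, delete_index):
    # Best-effort pass: a design doc that fails to parse is recorded
    # and skipped instead of aborting the whole cleanup, so invalid
    # ddocs no longer leave orphaned indexes behind.
    cleaned, skipped = [], []
    for ddoc in ddocs:
        try:
            for name in index_names(ddoc):
                delete_index(name)
                cleaned.append(name)
        except ValueError:
            skipped.append(ddoc["_id"])
    return cleaned, skipped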
fix bookmark passing with text indexes
Previously, we passed in the unpacked version of the bookmark with
the cursor inside the options field. This worked fine for _find because
we didn't need to return it to the user. But for _explain, we return
the value back as an unpacked tuple instead of a string, and jiffy:encode/1
complains. Now we correctly extract the bookmark out of options, unpack
it, and then pass it separately in its own field. This way options
retains its original string form for the user so that invalid_ejson
is not thrown.
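The shape of the fix can be sketched in Python: keep the opaque string in the options that go back to the user, and expose the unpacked value in a separate field. `pack`/`unpack`/`explain` are illustrative stand-ins, not mango's real functions.

```python
import json

def pack(cursor):
    # Stand-in for the opaque bookmark string handed to the user.
    return json.dumps(list(cursor))

def unpack(bookmark):
    return tuple(json.loads(bookmark))

def explain(options):
    # Pull the bookmark out of options and unpack it into its own
    # field; options keeps the user-facing string form, so encoding
    # the response cannot fail on a raw tuple.
    opts = dict(options)
    bookmark = opts.get("bookmark")
    return {"opts": opts,
            "bookmark": unpack(bookmark) if bookmark else None}
```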
bypass partition query limit for mango
When partition_query_limit is set for couch_mrview, it limits how many
docs can be scanned when executing partitioned queries. But it also
capped mango's internal doc scans, which meant documents were not
scanned to fulfill a query. This fixes:
https://github.com/apache/couchdb/issues/2795
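The bypass logic amounts to: apply the per-partition scan cap only when the request did not come from mango, since mango enforces its own limit after filtering. A hedged sketch with illustrative names:

```python
def scan_limit(requested, partition_query_limit, from_mango):
    # Mango applies its own limit after filtering, so the per-partition
    # scan cap must not also truncate its internal document scans.
    if from_mango:
        return requested
    return min(requested, partition_query_limit)
```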
If we don't handle it, it throws an error when trying to encode the full URL
string, for example:
```
badarg,[
{mochiweb_util,quote_plus,2,[{file,"src/mochiweb_util.erl"},{line,192}]},
{couch_replicator_httpc,query_args_to_string,2,[{file,"src/couch_replicator_httpc.erl"},{line,421}]},
{couch_replicator_httpc,full_url,2,[{file,"src/couch_replicator_httpc.erl"},{line,413}]},
{couch_replicator_api_wrap,open_doc_revs,6,[{file,"src/couch_replicator_api_wrap.erl"},{line,255}]}
]
```
This is also similar to what we did for open_revs encoding: https://github.com/apache/couchdb/commit/a2d0c4290dde2015e5fb6184696fec3f89c81a4b
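The failure mode is a percent-encoder being handed a non-string value (an Erlang atom such as `true`); the fix is to coerce values to strings first. A Python sketch of that idea, not the replicator's actual code:

```python
from urllib.parse import quote_plus

def query_args_to_string(args):
    # Coerce every value to a string before percent-encoding;
    # booleans/atoms/integers previously reached the encoder
    # unconverted and crashed it (badarg in mochiweb_util:quote_plus/2).
    parts = []
    for key, value in args:
        if isinstance(value, bool):
            value = "true" if value else "false"
        parts.append(f"{quote_plus(str(key))}={quote_plus(str(value))}")
    return "&".join(parts)
```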
fixup: Build couch_js for redhat linux
When building couch_js on RHEL, an error occurs: "undefined reference
to symbol '_ZTVN10__cxxabiv117__class_type_infoE@@CXXABI_1.3'".
This commit adjusts the libraries we link against to address the issue.
Don't crash couch_index_server if the db isn't known yet
If a ddoc is added immediately after database creation (as happens with
_users and _replicator when CouchDB is used in a multi-tenant fashion),
we can crash couch_index_server in handle_db_event, because
mem3_shards:local throws an error.
Unlink index pid and swallow EXIT message if present
This should prevent unexpected exit messages from arriving and
crashing couch_index_server.
Patch suggested by davisp.
Closes #3061.
Resolves #2858
Before the bypass existed, ioq would call `gen_server:call()`
on behalf of its calling module, with the queueing logic in between.
Commit e641a740 introduced a way to bypass any queues, but the
delegated `gen_server:call()` there was added without a timeout
parameter, leading to a default timeout of 5000ms.
A problem manifests when operations sent through ioq take longer
than that 5000ms timeout. In practice, such operations should be very
rare, and this timeout should be a help on overloaded systems.
However, one sure-fire way to cause an issue on an otherwise idle
machine is to raise max_document_size and store unreasonably large
documents (think 50MB+ of raw JSON).
Not that we recommend this, but folks have run this fine on 2.x
before the ioq changes, and it isn't too hard to support here.
By adding an `infinity` timeout to the delegated `gen_server:call()`
in the queue bypass case, this no longer applies.
Thanks to Joan @woahli Touzet, Bob @rnewson Newson and
Paul @davisp Davis for helping to track this down.
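The essence of the fix translates to any request/reply pattern: block without a deadline instead of a fixed 5-second one. A Python sketch using a queue-based reply channel; `call` and `serve_one` are illustrative, not ioq's API.

```python
import queue
import threading

def call(server_queue, request, timeout=None):
    # timeout=None is the analogue of gen_server:call(..., infinity):
    # the caller blocks until the reply arrives, however long the
    # operation (e.g. writing a 50MB+ document) takes.
    reply = queue.Queue(maxsize=1)
    server_queue.put((request, reply))
    return reply.get(timeout=timeout)

def serve_one(server_queue):
    # Toy server: handle a single request and reply.
    request, reply = server_queue.get()
    reply.put(("ok", request))
```

With a finite default (the 5000ms case), a slow-but-healthy operation would raise a timeout in the caller; with no deadline, it simply completes.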
Allow drilldown for search to always be specified as list of lists
To use multiple `drilldown` parameters, users had to define
`drilldown` multiple times to be able to supply them.
This caused interoperability issues, as most languages require
defining query parameters and request bodies as associative
arrays, maps or dictionaries where the keys are unique.
This change enables defining `drilldown` as a list of lists so
that other languages can define multiple drilldown keys and values.
Co-authored-by: Robert Newson <rnewson@apache.org>
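Accepting both shapes can be normalized to one canonical form. A hedged sketch; `normalize_drilldown` is an illustrative helper, not the search service's actual function.

```python
def normalize_drilldown(drilldown):
    # Accept both the legacy single-list form ["dim", "val"] and the
    # new list-of-lists form [["dim1", "v1"], ["dim2", "v2"]],
    # returning the list-of-lists form in either case.
    if drilldown and all(isinstance(item, list) for item in drilldown):
        return drilldown
    return [drilldown]
```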
Co-authored-by: Robert Newson <rnewson@apache.org>
According to https://docs.couchdb.org/en/master/ddocs/search.html there
are parameters for searches that are not allowed for partitioned queries.
Those restrictions were not enforced, thus making the software and docs
inconsistent.
This commit adds them to validation so that the behavior matches the one
described in the docs.
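A validation step of this kind can be sketched as a set intersection against the disallowed parameters. The parameter set below is illustrative (the authoritative list is in the CouchDB search documentation), and `validate_search_args` is a hypothetical name.

```python
DISALLOWED_FOR_PARTITIONED = {"counts", "drilldown", "ranges"}  # illustrative

def validate_search_args(args, partitioned):
    # Enforce what the docs already state: some search parameters are
    # not allowed against partitioned databases.
    if partitioned:
        bad = DISALLOWED_FOR_PARTITIONED & set(args)
        if bad:
            raise ValueError(
                f"not allowed for partitioned queries: {sorted(bad)}")
    return args
```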
Previously, when pending jobs were picked in the `ets:foldl` traversal, both
running and non-running jobs were considered and a large number of running jobs
could displace pending jobs in the accumulator. In the worst case, no crashed
jobs would be restarted during rescheduling.
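The fix amounts to filtering running jobs out before they can occupy slots in the accumulator. A Python sketch of the idea, with illustrative job records rather than the scheduler's real ets rows:

```python
def pick_pending(jobs, max_jobs):
    # Consider only non-running jobs while folding, so running jobs
    # can no longer displace pending ones in the accumulator; oldest
    # (least recently started) pending jobs are picked first.
    pending = [j for j in jobs if not j["running"]]
    pending.sort(key=lambda j: j["last_started"])
    return pending[:max_jobs]
```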
Report if FIPS mode is enabled
This will only report "fips" in the welcome message if FIPS mode
was enabled at boot (i.e., in vm.args).
Closes #2906
* Added a suffix to the first line of couchjs with the (static) version number compiled
* Update rebar.config.script
* In couchjs -h replaced the link to jira with a link to github
Co-authored-by: simon.klassen <simon.klassen>
Co-authored-by: Jan Lehnardt <jan@apache.org>