Commit message | Author | Age | Files | Lines
* Add module tag to elixir test cases [add-tag-to-elixir] (jiangph, 2020-09-29, 13 files, -0/+13)
|   Some elixir test cases don't have an actual module tag. Add tags to help
|   include or exclude them in CI tests.
* port rewrite and rewrite_js tests into elixir (Juanjo Rodriguez, 2020-09-29, 7 files, -116/+691)
|
* Preserve query string rewrite when the request contains a body (Juanjo Rodriguez, 2020-09-24, 2 files, -1/+18)
|
* Drop Jenkins ppc64le builds (for now) (#3151) (Joan Touzet, 2020-09-15, 1 file, -43/+47)
|
* fix race condition (#3150) (Tony Sun, 2020-09-14, 2 files, -1/+10)
|   This fixes a94e693f32672e4613bce0d80d0b9660f85275ea because a race
|   condition existed where the 'DOWN' message could be received before the
|   compactor pid is spawned. Adding a synchronous call to get the compactor
|   pid guarantees that the couch_db_updater process has finished handling
|   finish_compaction.
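The ordering guarantee described above can be sketched in Python (CouchDB's actual code is Erlang; `DbUpdater`, `start_compaction`, and `get_compactor` here are hypothetical names): a synchronous getter taken under the same lock as the spawn cannot return before the compactor exists, so the caller never monitors a missing process.

```python
import threading

class DbUpdater:
    """Toy stand-in for couch_db_updater; an illustrative sketch only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._compactor = None

    def start_compaction(self, work):
        # Spawn the compactor under the lock so a concurrent get_compactor()
        # call cannot observe a half-initialized state.
        with self._lock:
            t = threading.Thread(target=work)
            t.start()
            self._compactor = t

    def get_compactor(self):
        # The synchronous call of the fix: by the time this returns,
        # start_compaction has completed, so the caller can safely wait on
        # the compactor instead of racing a 'DOWN'-style notification.
        with self._lock:
            return self._compactor
```

The point of the design is that a blocking call serializes against the spawn, whereas an asynchronous notification ('DOWN') carries no such ordering guarantee.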
* Port view_conflicts.js, view_errors.js and view_include_docs.js into elixir (Juanjo Rodriguez, 2020-09-11, 7 files, -3/+643)
|
* Fix buffer_response=true (#3145) (Robert Newson, 2020-09-10, 2 files, -10/+14)
|   We need to call StartFun as it might add headers, etc.
* add remonitor code to DOWN message (#3144) (Tony Sun, 2020-09-10, 1 file, -6/+25)
|   Smoosh monitors the compactor pid to determine when the compaction job
|   finishes, and uses this for its idea of concurrency. However, this isn't
|   accurate when the compaction job has to re-spawn to catch up on intervening
|   changes: the same logical compaction job continues with another pid, and
|   smoosh is not aware. In such cases, a smoosh channel with concurrency one
|   can start arbitrarily many additional database compaction jobs.
|
|   To solve this problem, we added a check in `start_compact` to see if a
|   compaction PID already exists for a db. But we need another check, because
|   that one only covers shards that come off the queue. So the following can
|   still occur:
|
|   1. Enqueue a bunch of stuff into a channel with concurrency 1
|   2. Begin the highest priority job, Shard1, in the channel
|   3. Compaction finishes, discovers the compaction file is behind the main file
|   4. The smoosh-monitored PID for Shard1 exits; a new one starts to finish the job
|   5. Smoosh receives the 'DOWN' message and begins the next highest priority job, Shard2
|   6. Channel concurrency is now 2, not 1
|
|   This change adds another check into the 'DOWN' message handler for that
|   specific shard. If a compaction PID exists, it means a new process was
|   spawned, so we monitor that one and add it back to the queue. The length
|   of the queue does not change, and therefore we won't spawn new compaction
|   jobs.
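The accounting problem above can be sketched in Python (a toy analog, not smoosh's Erlang code; `Channel`, `maybe_start`, and `on_down` are hypothetical names): on a 'DOWN' event, first check whether the same shard's compaction re-spawned under a new pid, and if so re-monitor it rather than starting the next queued job.

```python
class Channel:
    """Toy smoosh-like channel: tracks active jobs against a concurrency cap."""

    def __init__(self, concurrency=1):
        self.concurrency = concurrency
        self.queue = []    # shards waiting for compaction
        self.active = {}   # shard -> pid currently being monitored

    def maybe_start(self, live_compactors):
        # Start queued jobs only while below the concurrency cap.
        while self.queue and len(self.active) < self.concurrency:
            shard = self.queue.pop(0)
            self.active[shard] = live_compactors[shard]

    def on_down(self, shard, live_compactors):
        # The fix: before starting the next queued job, check whether this
        # shard's compaction re-spawned under a new pid. If so, it is the
        # same logical job, so re-monitor it instead of counting it done.
        new_pid = live_compactors.get(shard)
        if new_pid is not None:
            self.active[shard] = new_pid
        else:
            del self.active[shard]
            self.maybe_start(live_compactors)
```

With this check, a re-spawned compactor keeps occupying its concurrency slot, so a channel with concurrency 1 never runs two jobs at once.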
* Introduce .asf.yaml file (#3020) (Joan Touzet, 2020-09-10, 1 file, -0/+32)
|
* Handle malformed URLs when stripping URL creds in couch_replicator (Nick Vatamaniuc, 2020-09-09, 1 file, -2/+26)
|   Previously an error was thrown, which prevented emitting _scheduler/docs
|   responses. Instead of throwing an error, return `null` if the URL cannot
|   be parsed.
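The behavior change above can be sketched in Python (an illustrative analog using the standard library, not CouchDB's Erlang code; `strip_url_creds` is a hypothetical name): return `None` (CouchDB's `null`) for unparseable input instead of raising, so a status endpoint keeps working.

```python
from urllib.parse import urlsplit, urlunsplit

def strip_url_creds(url):
    """Return the URL with userinfo removed, or None if it cannot be parsed.

    Returning None instead of raising keeps callers (e.g. an endpoint that
    lists replication jobs) working even when a job has a malformed URL.
    """
    try:
        parts = urlsplit(url)
        if parts.scheme not in ("http", "https") or not parts.hostname:
            return None
        netloc = parts.hostname
        if parts.port:  # accessing .port raises ValueError on bad ports
            netloc += f":{parts.port}"
        return urlunsplit((parts.scheme, netloc, parts.path,
                           parts.query, parts.fragment))
    except ValueError:
        return None
```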
* Merge pull request #3129 from apache/delay_response_until_end (Robert Newson, 2020-09-07, 3 files, -9/+125)
|\
| |   Add option to delay responses until the end
| * Add option to delay responses until the end [delay_response_until_end] (Robert Newson, 2020-09-04, 3 files, -9/+125)
|/
|   When set, every response is sent once fully generated on the server side.
|   This increases memory usage on the nodes but simplifies error handling for
|   the client, as it eliminates the possibility that the response will be
|   deliberately terminated midway through due to a timeout. The config value
|   can be changed at runtime without impacting any in-flight responses.
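The trade-off described above can be sketched in a few lines of Python (a generic buffering pattern, not CouchDB's chttpd code; `buffered_send` is a hypothetical name): materialize the whole body before sending the first byte, so a failure mid-generation surfaces as a clean error rather than a truncated response.

```python
def buffered_send(generate_chunks, send):
    """Fully generate a response before sending any of it.

    If chunk generation raises midway, the client sees a proper error
    instead of a truncated body, at the cost of holding the full response
    in memory on the server.
    """
    body = b"".join(generate_chunks())  # may raise before anything is sent
    send(body)
```

The streaming alternative sends each chunk as produced, which uses constant memory but can leave the client holding half a response when generation fails or times out.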
* Make COPY doc return only one "ok" (Bessenyei Balázs Donát, 2020-09-04, 2 files, -1/+13)
|
* Merge pull request #3125 from apache/improve_jwtf_keystore_error_handling (Robert Newson, 2020-09-03, 2 files, -8/+20)
|\
| |   Improve jwtf keystore error handling
| * return a clean error if pem_decode fails (Robert Newson, 2020-09-03, 2 files, -8/+20)
|/
* Tag elixir tests into meaningful groups (Alessio Biancalana, 2020-09-01, 77 files, -0/+120)
|
* Merge pull request #3118 from apache/dreyfus-cleanup-with-invalid-ddoc (Peng Hui Jiang, 2020-09-01, 2 files, -6/+35)
|\
| |   Allow to continue to cleanup search index even if there is invalid design document
| * Allow to continue to cleanup search index even if there is invalid ddoc [dreyfus-cleanup-with-invalid-ddoc] (jiangph, 2020-09-01, 2 files, -6/+35)
|/
|   In some situations where a customer-created design document for a search
|   index is not valid, the _search_cleanup endpoint stops cleaning up,
|   leaving some search indexes orphaned. This change allows search index
|   cleanup to continue even when an invalid design document is present.
* Merge pull request #3116 from apache/fix-explain-text-indexes (Tony Sun, 2020-08-31, 2 files, -8/+19)
|\
| |   fix bookmark passing with text indexes
| * fix bookmark passing with text indexes [fix-explain-text-indexes] (Tony Sun, 2020-08-31, 2 files, -8/+19)
|/
|   Previously, we passed the unpacked version of the bookmark, with the
|   cursor, inside the options field. This worked fine for _find because we
|   didn't need to return it to the user. But for _explain, we returned the
|   value back as an unpacked tuple instead of a string, and jiffy:encode/1
|   complained. Now we correctly extract the bookmark out of options, unpack
|   it, and then pass it separately in its own field. This way options
|   retains its original string form for the user, so that invalid_ejson is
|   not thrown.
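The fix above can be sketched in Python (an illustrative analog; CouchDB's mango is Erlang, and `pack_bookmark`/`build_explain` are hypothetical names): keep the opaque string form inside the options so they stay JSON-encodable, and carry the unpacked cursor in a separate field.

```python
import base64, json

def pack_bookmark(cursor):
    """Encode a cursor as an opaque string, as handed back to the client."""
    return base64.urlsafe_b64encode(json.dumps(cursor).encode()).decode()

def unpack_bookmark(bookmark):
    """Decode the opaque string back into the cursor it wraps."""
    return json.loads(base64.urlsafe_b64decode(bookmark.encode()))

def build_explain(options):
    # Keep the string form in `options` (so the whole response stays
    # JSON-encodable) and expose the unpacked cursor in its own field.
    return {
        "opts": options,
        "bookmark": unpack_bookmark(options["bookmark"]),
    }
```

Serializing the raw tuple/list cursor inside `opts` is what tripped the encoder in the original bug; separating the two representations avoids it.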
* Merge pull request #3105 from apache/fix-partition-query-limit (Tony Sun, 2020-08-28, 4 files, -5/+68)
|\
| |   bypass partition query limit for mango
| * update dev/run formatting to adhere to python format checks [fix-partition-query-limit] (Tony Sun, 2020-08-27, 1 file, -1/+4)
| |
| * bypass partition query limit for mango (Tony Sun, 2020-08-27, 3 files, -4/+64)
|/
|   When partition_query_limit is set for couch_mrview, it limits how many
|   docs can be scanned when executing partitioned queries. But this also
|   limits mango's internal doc scans, which leads to documents not being
|   scanned to fulfill a query. This fixes:
|   https://github.com/apache/couchdb/issues/2795
* Handle jiffy returning an iolist when encoding atts_since query string (Nick Vatamaniuc, 2020-08-20, 1 file, -1/+1)
|   If we don't handle it, it throws an error when trying to encode the full
|   URL string, for example:
|
|   ```
|   badarg,[
|    {mochiweb_util,quote_plus,2,[{file,"src/mochiweb_util.erl"},{line,192}]},
|    {couch_replicator_httpc,query_args_to_string,2,[{file,"src/couch_replicator_httpc.erl"},{line,421}]},
|    {couch_replicator_httpc,full_url,2,[{file,"src/couch_replicator_httpc.erl"},{line,413}]},
|    {couch_replicator_api_wrap,open_doc_revs,6,[{file,"src/couch_replicator_api_wrap.erl"},{line,255}]}
|   ]
|   ```
|
|   This is also similar to what we did for open_revs encoding:
|   https://github.com/apache/couchdb/commit/a2d0c4290dde2015e5fb6184696fec3f89c81a4b
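An Erlang iolist is an arbitrarily nested list of strings/binaries that must be flattened before a byte-oriented consumer (like a URL quoter) can use it. A minimal Python analog of that flattening step (illustrative only; `flatten_iolist` is a hypothetical name, not the replicator's actual code):

```python
def flatten_iolist(iolist):
    """Flatten an Erlang-style iolist (arbitrarily nested lists of strings)
    into a single string, e.g. before URL-encoding it as a query value."""
    if isinstance(iolist, str):
        return iolist
    return "".join(flatten_iolist(part) for part in iolist)
```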
* Merge pull request #3056 from apache/build-couchjs-for-redhat-linux (Peng Hui Jiang, 2020-08-20, 1 file, -2/+2)
|\
| |   fixup: Build couch_js for redhat linux
| * fixup: Build couch_js for redhat linux (jiangph, 2020-08-20, 1 file, -2/+2)
|/
|   When building couch_js on RHEL, an error occurs: "undefined reference to
|   symbol '_ZTVN10__cxxabiv117__class_type_infoE@@CXXABI_1.3'". This commit
|   adjusts the linked libraries to address the issue.
* Merge pull request #3075 from apache/couch_index_server_crash2 [archive/prototype/layer, prototype/layer, prototype/fdb-laer] (Robert Newson, 2020-08-14, 1 file, -1/+5)
|\
| |   Don't crash couch_index_server if the db isn't known yet
| * Don't crash couch_index_server if the db isn't known yet [couch_index_server_crash2] (Robert Newson, 2020-08-14, 1 file, -1/+5)
|/
|   If a ddoc is added immediately after database creation (_users and
|   _replicator when couchdb is used in a multi-tenant fashion), we can
|   crash couch_index_server in handle_db_event, as mem3_shards:local
|   throws an error.
* Validate shard specific query params on db create request (Eric Avdey, 2020-08-13, 2 files, -9/+165)
|
* Merge pull request #3068 from apache/couch_index_server_crash (Robert Newson, 2020-08-12, 1 file, -2/+8)
|\
| |   Unlink index pid and swallow EXIT message if present
| * Unlink index pid and swallow EXIT message if present (Robert Newson, 2020-08-12, 1 file, -2/+8)
|/
|   This should prevent unexpected exit messages arriving which crash
|   couch_index_server. Patch suggested by davisp.
|
|   Closes #3061.
* Remove wrongly committed file from #2955 (#3070) (Joan Touzet, 2020-08-10, 1 file, -89/+0)
|
* Windows: provide full path to epmd (Joan Touzet, 2020-08-03, 1 file, -0/+1)
|
* added $keyMapMatch Mango operator (Michal Borkowski, 2020-07-27, 2 files, -0/+41)
|
* fix: finish_cluster failure due to missing uuid (Steven Tang, 2020-07-26, 1 file, -0/+3)
|   Resolves #2858
* Port view multi_key tests into elixir (Juanjo Rodriguez, 2020-07-23, 6 files, -3/+513)
|
* port update_documents.js into elixir (Juanjo Rodriguez, 2020-07-22, 3 files, -2/+326)
|
* port view_sandboxing.js into elixir (Juanjo Rodriguez, 2020-07-22, 3 files, -1/+193)
|
* New cname for couchdb-vm2, see INFRA-20435 (#2982) (Joan Touzet, 2020-07-20, 2 files, -6/+6)
|
* Port view_compaction test to elixir (Juanjo Rodriguez, 2020-07-07, 4 files, -2/+109)
|
* Port view_collation_raw.js to elixir (Juanjo Rodriguez, 2020-07-07, 3 files, -1/+161)
|
* fix: set gen_server:call() timeout to infinity on ioq bypass (Jan Lehnardt, 2020-07-03, 1 file, -1/+1)
|   Before the bypass existed, ioq would call `gen_server:call()` on behalf
|   of its calling module, with the queueing logic in between. Commit
|   e641a740 introduced a way to bypass any queues, but the delegated
|   `gen_server:call()` there was added without a timeout parameter, leading
|   to a default timeout of 5000ms.
|
|   A problem manifests here with operations sent through ioq that take
|   longer than that 5000ms timeout. In practice, these operations should be
|   very rare, and this timeout should be a help on overloaded systems.
|   However, one sure-fire way to cause an issue on an otherwise idle
|   machine is to raise max_document_size and store unreasonably large
|   documents (think 50MB+ of raw JSON). Not that we recommend this, but
|   folks have run this fine on 2.x before the ioq changes, and it isn't too
|   hard to support here.
|
|   By adding an `infinity` timeout to the delegated `gen_server:call()` in
|   the queue bypass case, this no longer applies.
|
|   Thanks to Joan @woahli Touzet, Bob @rnewson Newson and Paul @davisp
|   Davis for helping to track this down.
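The timeout semantics can be sketched with Python's standard library (a generic analog of a `gen_server:call`-style round trip, not ioq's Erlang code; `call` and the inbox protocol are hypothetical): `timeout=None` plays the role of Erlang's `infinity`, while a number reproduces the 5000ms-default failure mode.

```python
import queue

def call(server_inbox, request, timeout=None):
    """Synchronous request/reply round trip, gen_server:call style.

    timeout=None waits as long as the request takes (Erlang's `infinity`);
    a number gives up after that many seconds, as a 5000ms default would.
    """
    reply_box = queue.Queue(maxsize=1)
    server_inbox.put((request, reply_box))
    return reply_box.get(timeout=timeout)  # raises queue.Empty on timeout
```

With a finite timeout, a slow operation (a very large document write, say) makes the caller give up even though the server is still working; `None`/`infinity` lets the caller wait it out.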
* Port view_update_seq.js into elixir (Juanjo Rodriguez, 2020-06-30, 3 files, -1/+144)
|
* Port reader_acl test into elixir test suite (Juanjo Rodriguez, 2020-06-29, 3 files, -2/+257)
|
* Skip tests as temporary views are not supported (Juanjo Rodriguez, 2020-06-27, 1 file, -0/+1)
|
* Tests already ported to elixir (Juanjo Rodriguez, 2020-06-27, 2 files, -0/+2)
|
* Merge pull request #2958 from bessbd/allow-drilldown-list-of-lists [archive/prototype/fdn, prototype/fdn] (Robert Newson, 2020-06-22, 2 files, -0/+203)
|\
| |   Allow drilldown for search to always be specified as list of lists
| * Allow drilldown for search to always be specified as list of lists (Bessenyei Balázs Donát, 2020-06-22, 2 files, -0/+203)
|/
|   To use multiple `drilldown` parameters, users had to define `drilldown`
|   multiple times to be able to supply them. This caused interoperability
|   issues, as most languages require defining query parameters and request
|   bodies as associative arrays, maps or dictionaries where the keys are
|   unique. This change enables defining `drilldown` as a list of lists so
|   that other languages can define multiple drilldown keys and values.
|
|   Co-authored-by: Robert Newson <rnewson@apache.org>
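The shape change above can be sketched as a small normalization step (an illustrative Python sketch, not dreyfus's actual parsing code; `normalize_drilldown` is a hypothetical name): accept either a single `[dimension, label]` pair or a list of such pairs, and always work with the list-of-lists form internally.

```python
def normalize_drilldown(drilldown):
    """Accept a single [dimension, label, ...] pair or a list of such
    pairs, returning a list of pairs in both cases."""
    if drilldown and isinstance(drilldown[0], str):
        return [drilldown]  # single pair, e.g. ["genre", "fiction"]
    return drilldown        # already a list of lists
```

Since a JSON request body is a map with unique keys, a single `"drilldown": [[...], [...]]` entry is expressible in any client language, whereas repeating the `drilldown` key is not.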
* Upgrade Credo to 1.4.0 (Alessio Biancalana, 2020-06-18, 2 files, -3/+3)
|
* fix: send CSP header to make Fauxton work fully (Jan Lehnardt, 2020-06-18, 3 files, -2/+91)
|   Co-authored-by: Robert Newson <rnewson@apache.org>