* Properly combine base and extra headers when making replicator requests [sort-headers-before-merging] (Nick Vatamaniuc, 2020-10-09; 1 file, -2/+27)
  Previously we subtly relied on one set of headers being sorted, then sorted the other set and ran `lists:ukeymerge/3`. That function, however, needs both arguments to be sorted in order to work as expected. If one argument wasn't sorted we could easily get duplicate headers, which is what was observed in testing. A better fix than just sorting both sets of keys is to use an actual header-processing library to combine them, so we can account for case insensitivity as well.
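The pitfall above generalizes: a keyed merge like `lists:ukeymerge/3` silently yields duplicates when an input isn't sorted. A minimal Python sketch of the case-insensitive combine the fix describes (hypothetical names, not CouchDB's actual Erlang code):

```python
def merge_headers(base, extra):
    """Combine two header lists; `extra` wins on case-insensitive name clashes.

    Illustrative sketch only -- no sortedness requirement on either input.
    """
    merged = {}
    for name, value in list(base) + list(extra):
        # Later entries override earlier ones, keyed case-insensitively.
        merged[name.lower()] = (name, value)
    return list(merged.values())

base = [("Content-Type", "application/json"), ("X-Auth", "a")]
extra = [("content-type", "text/plain")]
print(merge_headers(base, extra))
# [('content-type', 'text/plain'), ('X-Auth', 'a')]
```

Unlike a sorted-merge, the dict lookup makes ordering of the inputs irrelevant, which is the property the original code silently lacked.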
* Do not shortcut replicator job initialization if replication ID matches (Nick Vatamaniuc, 2020-10-08; 1 file, -4/+0)
  Previously there was an attempt to shortcut some of the job initialization if the replication ID in the job data matched the newly computed one. However, that logic was wrong, as it skipped over the job data state update. The effect was that if a job was in a pending state and was re-initialized, say when a node restarted, its job data would still indicate it as "pending" until the next checkpoint. If the job is continuous and there are no more updates on the source, the state of the job would stay "pending" indefinitely.
* Fixes to CI process for main branch (#3204) (Joan Touzet, 2020-10-07; 2 files, -3/+3)
* Remove JS tests + support for harness (#3197) (Joan Touzet, 2020-10-07; 144 files, -18832/+30)
* Enable merge commits to main (Robert Newson, 2020-10-07; 1 file, -1/+1)
* Remove javascript tests from main build process (Juanjo Rodriguez, 2020-10-07; 2 files, -2/+0)
* port users_db_security tests to elixir (Juanjo Rodriguez, 2020-10-07; 4 files, -7/+540)
* Complete the port of security_validation tests to Elixir (Juanjo Rodriguez, 2020-10-07; 2 files, -132/+118)
* Port show_documents and list_views to Elixir (Juanjo Rodriguez, 2020-10-07; 5 files, -4/+1033)
* Add module tag to elixir test cases (#3178) (Peng Hui Jiang, 2020-10-07; 10 files, -0/+10)
  Some Elixir test cases don't have an actual module tag. Add tags to help include or exclude them in CI tests.
* port rewrite and rewrite_js tests into elixir (Juanjo Rodriguez, 2020-10-07; 7 files, -116/+691)
* Preserve query string rewrite when the request contains a body (Juanjo Rodriguez, 2020-10-07; 2 files, -1/+18)
* Properly initialize `user` in replication job's state (Nick Vatamaniuc, 2020-10-06; 1 file, -2/+4)
  This value is emitted in _active_tasks and previously came through as `null` from the state record's defaults.
* simplify max_document_size comment (Robert Newson, 2020-10-06; 1 file, -4/+2)
* Keep auth properties as a map in replicator's httpdb record (Nick Vatamaniuc, 2020-10-05; 3 files, -3/+6)
  Previously there was an attempt to keep backwards compatibility with 3.x replicator plugins by transforming the auth object into a proplist with `maps:to_list/1`. However, that didn't account for nested properties, so we could have ended up with a top-level proplist whose values were still maps. Instead of making things too complicated by doing a nested transform to proplists, just keep the auth object as a map and let the plugins handle the compatibility issue.
* Add node and pid to _scheduler/jobs output (Nick Vatamaniuc, 2020-09-30; 1 file, -2/+6)
  This is mainly for compatibility with CouchDB 3.x. Ref: https://docs.couchdb.org/en/stable/api/server/common.html#scheduler-jobs
* Fix error reporting when fetching replication filters (Nick Vatamaniuc, 2020-09-30; 3 files, -6/+18)
  Don't unnecessarily unwrap the fetch error, since `error_info/1` can already handle the current shape. Also, make sure to translate the reason to a binary for consistency with the other filter-fetching errors in the `couch_replicator_filters` module. Add a test to ensure we return the `filter_fetch_error` term, as that is explicitly turned into a 404 error in chttpd; this maintains compatibility with CouchDB <= 3.x code.
* Fix transient replication job state wait logic (Nick Vatamaniuc, 2020-09-30; 1 file, -1/+3)
  Make sure to handle both `finished` and `pending` states when waiting for transient jobs. A transient job will go to the `failed` state if it cannot fetch the filter from the source endpoint. For completeness, we also account for the `pending` state there, on the remote chance the job gets rescheduled again.
* Optimizations for reading reduce views (Paul J. Davis, 2020-09-30; 1 file, -1/+6)
  These are a few micro-optimizations to avoid unnecessary work when reading from a single reduce function during a view read.
* Add elixir tests for builtin reduce group levels (Garren Smith, 2020-09-30; 1 file, -0/+549)
* Add test suite for reduce views (Paul J. Davis, 2020-09-30; 1 file, -0/+745)
* Use ebtree for reduce functions (Paul J. Davis, 2020-09-30; 4 files, -38/+327)
* Upgrade legacy views (Paul J. Davis, 2020-09-30; 4 files, -41/+551)
* Reimplement db wide view size tracking (Paul J. Davis, 2020-09-30; 3 files, -529/+357)
* Views on ebtree (Paul J. Davis, 2020-09-30; 16 files, -485/+687)
* Export fabric2_fdb:chunkify_binary/1,2 (Paul J. Davis, 2020-09-30; 1 file, -15/+18)
* Workaround dirty schedulers in run_queue stats (#3168) (Russell Branca, 2020-09-23; 1 file, -2/+15)
* Fix include directive in couch_views_batch_impl (Paul J. Davis, 2020-09-21; 1 file, -1/+1)
  This fixes compilation when CouchDB is used as a dependency.
* Fix bug in ebtree:umerge_members/4 (Paul J. Davis, 2020-09-17; 1 file, -1/+17)
  Caught during Elixir tests. I've added a unit test to `ebtree.erl` to ensure we don't regress in the future.
* Fix flaky couch_replicator_job_server tests (Nick Vatamaniuc, 2020-09-17; 1 file, -6/+18)
  Job exits are asynchronous, so we ensure we wait for exit signals to be handled before checking the state.
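Since exits land asynchronously, tests typically poll for the expected state instead of asserting immediately. A generic wait-loop sketch of that pattern (hypothetical helper, not CouchDB's Erlang test utilities):

```python
import time

def wait_until(predicate, timeout=5.0, interval=0.01):
    """Poll `predicate` until it returns true or `timeout` elapses.

    Re-check loop for asynchronous state transitions: returns True as soon
    as the condition holds, False if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Example: wait for an asynchronously updated flag.
state = {"exited": False}
state["exited"] = True  # in a real test this flips in another process
print(wait_until(lambda: state["exited"]))  # True
```

Polling with a deadline avoids both the flakiness of a fixed `sleep` and an unbounded hang when the transition never happens.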
* Add url validation in replicator creds stripping logic (Nick Vatamaniuc, 2020-09-17; 1 file, -46/+58)
  Previously, in 3.x, we re-parsed the endpoint URLs with `ibrowse_lib:parse_url/1` when stripping credentials, which threw an error if the URL was invalid, so we try to preserve that same logic. Backport some tests from 3.x to make sure URL stripping works when the URL is valid, and also use the nicer ?TDEF and ?TDEF_FE test helpers.
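The behaviour described, validate first and then strip the userinfo, can be sketched with Python's standard `urllib` (function name hypothetical; the replicator does this in Erlang via ibrowse):

```python
from urllib.parse import urlsplit, urlunsplit

def strip_url_creds(url):
    """Return `url` with any user:password removed, or None if unparseable.

    Sketch of the idea only, not the replicator's actual implementation.
    """
    try:
        parts = urlsplit(url)
        if not parts.scheme or not parts.hostname:
            return None  # treat structurally invalid URLs as unparseable
        netloc = parts.hostname
        if parts.port:
            netloc += f":{parts.port}"
        return urlunsplit((parts.scheme, netloc, parts.path,
                           parts.query, parts.fragment))
    except ValueError:  # e.g. a malformed port raises on access
        return None

print(strip_url_creds("http://user:pass@db.example.com:5984/db"))
# http://db.example.com:5984/db
print(strip_url_creds("not a url"))
# None
```

Returning a sentinel instead of throwing is what lets callers (such as status endpoints) keep emitting responses for bad URLs.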
* Merge branch master into prototype/fdb-layer (Paul J. Davis, 2020-09-16; 196 files, -1078/+10004)
* Drop Jenkins ppc64le builds (for now) (#3151) (Joan Touzet, 2020-09-15; 1 file, -43/+47)
* fix race condition (#3150) (Tony Sun, 2020-09-14; 2 files, -1/+10)
  This fixes a94e693f32672e4613bce0d80d0b9660f85275ea, because a race condition existed where the 'DOWN' message could be received before the compactor pid is spawned. Adding a synchronous call to get the compactor pid guarantees that the couch_db_updater process has handled finish_compaction.
* Port view_conflicts.js, view_errors.js and view_include_docs.js into elixir (Juanjo Rodriguez, 2020-09-11; 7 files, -3/+643)
* Fix buffer_response=true (#3145) (Robert Newson, 2020-09-10; 2 files, -10/+14)
  We need to call StartFun, as it might add headers, etc.
* add remonitor code to DOWN message (#3144) (Tony Sun, 2020-09-10; 1 file, -6/+25)
  Smoosh monitors the compactor pid to determine when the compaction job finishes, and uses this for its idea of concurrency. However, this isn't accurate in the case where the compaction job has to re-spawn to catch up on intervening changes: the same logical compaction job continues with another pid, and smoosh is not aware. In such cases, a smoosh channel with concurrency one can start arbitrarily many additional database compaction jobs. To solve this problem, we added a check in `start_compact` to see if a compaction PID exists for a db. But we need another check, because that one only covers shards coming off the queue. So the following can still occur:
  1. Enqueue a bunch of stuff into a channel with concurrency 1
  2. Begin the highest priority job, Shard1, in the channel
  3. Compaction finishes, discovers the compaction file is behind the main file
  4. The smoosh-monitored PID for Shard1 exits, and a new one starts to finish the job
  5. Smoosh receives the 'DOWN' message and begins the next highest priority job, Shard2
  6. Channel concurrency is now 2, not 1
  This change adds another check to the 'DOWN' handler so that it checks for that specific shard. If a compaction PID exists, it means a new process was spawned, so we just monitor that one and add it back to the queue. The length of the queue does not change, and therefore we won't spawn new compaction jobs.
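The fixed 'DOWN' handling amounts to: before starting the next queued job, check whether a successor compactor pid exists for the shard that just went down, and if so re-monitor it instead of freeing a concurrency slot. A toy Python model of that accounting (all names hypothetical; smoosh itself is Erlang):

```python
class Channel:
    """Toy model of a smoosh channel's concurrency accounting."""

    def __init__(self, concurrency):
        self.concurrency = concurrency
        self.active = {}  # shard -> compactor pid currently monitored
        self.queue = []   # shards waiting their turn

    def handle_down(self, shard, get_compactor_pid):
        new_pid = get_compactor_pid(shard)
        if new_pid is not None:
            # The compactor re-spawned to catch up on intervening changes:
            # monitor the successor and keep counting the job as active.
            self.active[shard] = new_pid
            return
        # Job really finished: free the slot and start queued work.
        del self.active[shard]
        while self.queue and len(self.active) < self.concurrency:
            nxt = self.queue.pop(0)
            self.active[nxt] = ("pid", nxt)  # pretend we spawned a compactor

ch = Channel(concurrency=1)
ch.active["shard1"] = ("pid", "shard1")
ch.queue = ["shard2"]
# shard1's pid exits, but a successor exists -> no new job is started.
ch.handle_down("shard1", lambda s: ("pid2", s))
print(len(ch.active), ch.queue)  # 1 ['shard2']
```

Without the successor check, the `handle_down` branch would always free the slot, reproducing the concurrency-2 scenario from the numbered steps above.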
* Introduce .asf.yaml file (#3020) (Joan Touzet, 2020-09-10; 1 file, -0/+32)
* Handle malformed URLs when stripping URL creds in couch_replicator (Nick Vatamaniuc, 2020-09-09; 1 file, -2/+26)
  Previously an error was thrown, which prevented emitting _scheduler/docs responses. Instead of throwing an error, return `null` if the URL cannot be parsed.
* Merge pull request #3129 from apache/delay_response_until_end (Robert Newson, 2020-09-07; 3 files, -9/+125)
  Add option to delay responses until the end
* Add option to delay responses until the end [delay_response_until_end] (Robert Newson, 2020-09-04; 3 files, -9/+125)
  When set, every response is sent once fully generated on the server side. This increases memory usage on the nodes but simplifies error handling for the client, as it eliminates the possibility that the response will be deliberately terminated midway through due to a timeout. The config value can be changed at runtime without impacting any in-flight responses.
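The trade-off described, buffering trades server memory for the guarantee that a client never receives a truncated body, can be sketched as follows (names invented, not the Erlang chttpd code):

```python
def respond(produce_chunks, send, buffer_response=False):
    """Send a generated response either streamed or fully buffered.

    `produce_chunks` returns an iterator of body chunks and may raise
    partway through. With buffering, a mid-generation failure means
    nothing partial was ever sent to the client.
    """
    if buffer_response:
        body = "".join(produce_chunks())  # generate fully before sending
        send(body)
    else:
        for chunk in produce_chunks():
            send(chunk)  # a failure here truncates the response mid-stream

sent = []
respond(lambda: iter(["{", '"ok":true', "}"]), sent.append,
        buffer_response=True)
print(sent)  # ['{"ok":true}']
```

The streamed path starts delivering sooner and uses constant memory; the buffered path holds the whole body but can fail cleanly before any byte reaches the client.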
* Make COPY doc return only one "ok" (Bessenyei Balázs Donát, 2020-09-04; 2 files, -1/+13)
* Merge pull request #3125 from apache/improve_jwtf_keystore_error_handling (Robert Newson, 2020-09-03; 2 files, -8/+20)
  Improve jwtf keystore error handling
* return a clean error if pem_decode fails (Robert Newson, 2020-09-03; 2 files, -8/+20)
* Tag elixir tests into meaningful groups (Alessio Biancalana, 2020-09-01; 77 files, -0/+120)
* Merge pull request #3118 from apache/dreyfus-cleanup-with-invalid-ddoc (Peng Hui Jiang, 2020-09-01; 2 files, -6/+35)
  Allow search index cleanup to continue even if there is an invalid design document
* Allow search index cleanup to continue even if there is an invalid ddoc [dreyfus-cleanup-with-invalid-ddoc] (jiangph, 2020-09-01; 2 files, -6/+35)
  In situations where a customer-created design document for a search index is not valid, the _search_cleanup endpoint stops cleaning up, leaving some search indexes orphaned. This change allows cleanup to continue even when an invalid design document is present.
* Merge pull request #3116 from apache/fix-explain-text-indexes (Tony Sun, 2020-08-31; 2 files, -8/+19)
  fix bookmark passing with text indexes
* fix bookmark passing with text indexes [fix-explain-text-indexes] (Tony Sun, 2020-08-31; 2 files, -8/+19)
  Previously, we passed the unpacked version of the bookmark, with the cursor, inside the options field. This worked fine for _find because we didn't need to return it to the user. But for _explain, we returned the value back as an unpacked tuple instead of a string, and jiffy:encode/1 complains. Now we correctly extract the bookmark out of options, unpack it, and then pass it separately in its own field. This way options retains its original string form for the user, so that invalid_ejson is not thrown.
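The underlying issue, raw cursor tuples not being JSON-encodable, is why bookmarks travel as opaque strings and are only unpacked internally. A rough Python analogue of that pack/unpack split (Mango's real encoding lives in Erlang, so these names and the base64/JSON format are assumptions for illustration):

```python
import base64
import json

def pack_bookmark(cursor):
    """Encode a cursor structure as an opaque, JSON-safe string."""
    raw = json.dumps(cursor).encode()
    return base64.urlsafe_b64encode(raw).decode()

def unpack_bookmark(bookmark):
    """Decode the opaque string back into the cursor structure."""
    raw = base64.urlsafe_b64decode(bookmark.encode())
    return json.loads(raw)

cursor = ["doc-42", 1234]
bm = pack_bookmark(cursor)
print(isinstance(bm, str), unpack_bookmark(bm) == cursor)  # True True
```

Keeping the string form in `options` and the unpacked form in its own field means every consumer that serializes `options` back to the user sees only the encodable string.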
* Merge pull request #3105 from apache/fix-partition-query-limit (Tony Sun, 2020-08-28; 4 files, -5/+68)
  bypass partition query limit for mango