Commit message | Author | Age | Files | Lines
* Temporarily disable FreeBSD builds [temporarily-disable-freebsd-builds]  Adam Kocoloski  2019-11-05  1  -37/+38
|   See #2301.
* Merge pull request #2130 from apache/close-lruPeng Hui Jiang2019-11-052-1/+7
|\ | | | | Close LRU by database path for deleted database/index
| * close LRU by database path [close-lru]  jiangph  2019-11-05  2  -1/+7
|/
* Do not mark replication jobs as failed if doc processor crashesNick Vatamaniuc2019-11-011-4/+25
|   Previously if couch_replicator_doc_processor crashed, the job was marked as
|   "failed". We now ignore that case. It's safe to do that since the supervisor
|   will restart it anyway, and it will rescan all the docs again. Most of all, we
|   want to prevent the job becoming failed permanently and needing a manual
|   intervention to restart it.
* Show source and target proxies in _scheduler/docs outputNick Vatamaniuc2019-10-302-4/+31
|   Since we have separate proxies for source and target
|   (https://github.com/apache/couchdb/commit/053d494e698181ae3b0b0698055f5a24e7995172),
|   show them separately in the _scheduler/docs output as well. Previously we just
|   read the proxy value from the source. When formatting the proxy URL for output,
|   make sure the `socks5` schema is handled by the URL credential-stripping code
|   to avoid exposing user credentials.
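The credential-stripping behaviour described in this commit can be sketched in Python (illustrative only: the real code is Erlang inside couch_replicator, and the function name here is hypothetical). The point is that `socks5` URLs must go through the same redaction path as `http`/`https` ones:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_url_creds(url):
    # Redact the userinfo portion of a proxy URL before display; the socks5
    # scheme gets the same treatment as http/https, so credentials embedded in
    # a socks5 proxy URL are never shown in _scheduler/docs-style output.
    parts = urlsplit(url)
    if parts.username is None and parts.password is None:
        return url  # nothing to hide
    host = parts.hostname or ""
    if parts.port is not None:
        host = "%s:%d" % (host, parts.port)
    netloc = "*****:*****@" + host
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

`urlsplit` parses the netloc for any scheme followed by `//`, which is why the same helper covers `socks5` without special-casing.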
* Merge pull request #2287 from apache/fix-all_docs-timeout-errorEric Avdey2019-10-292-2/+3
|\ | | | | Pass timeout as an error to callback in `fabric_view_all_docs`
| * Pass timeout as an error to callback in fabric_view_all_docsEric Avdey2019-10-292-2/+3
|/
* Include proxy host and port in connection pool keyRobert Newson2019-10-283-27/+56
|   Closes #2271
* Implement separate source and target replication proxiesNick Vatamaniuc2019-10-282-4/+70
|   Previously if a proxy was specified it was used for both source and target
|   traffic. However, as mentioned in #2272, since both source and target are now
|   URLs, instead of one being a "local" database, it makes sense to allow
|   separate proxy settings for the source and target endpoints.
|
|   We still allow the old-style single proxy setting; however, if users set both
|   the old-style proxy and a per-endpoint one, an exception is raised about the
|   settings being mutually exclusive.
|
|   Fixes #2272
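The mutual-exclusivity rule from this commit can be sketched as follows (Python for illustration; `proxy`, `source_proxy` and `target_proxy` are the field names the commit describes, but the helper itself is hypothetical, not the actual Erlang parser):

```python
def parse_proxy_settings(doc):
    # Resolve per-endpoint proxies from a replication doc dict. Old-style
    # `proxy` applies to both endpoints; combining it with the per-endpoint
    # settings is rejected, mirroring the "mutually exclusive" rule above.
    proxy = doc.get("proxy")
    source_proxy = doc.get("source_proxy")
    target_proxy = doc.get("target_proxy")
    if proxy is not None and (source_proxy is not None or target_proxy is not None):
        raise ValueError("`proxy` is mutually exclusive with `source_proxy`/`target_proxy`")
    if proxy is not None:
        return proxy, proxy
    return source_proxy, target_proxy
```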
* Merge pull request #2276 from cloudant/remove-inets-client-remainsiilyak2019-10-282-6/+0
|\ | | | | Remove old clause which is no longer used
| * Remove old clause which is no longer usedILYA Khlopotov2019-10-242-6/+0
|/
|   The history of the `send_error(_Req, {already_sent, Resp, _Error})` clause is below:
|
|   - it was added on [2009/04/18](https://svn.apache.org/viewvc/couchdb/trunk/src/couchdb/couch_httpd.erl?r1=762574&r2=765819&diff_format=h)
|   - we triggered that clause [in couch_httpd:do](https://svn.apache.org/viewvc/couchdb/trunk/src/couchdb/couch_httpd.erl?revision=642432&view=markup#l88)
|   - at that time we were using the inets webserver [see the use of `httpd_response/3`](https://svn.apache.org/viewvc/couchdb/trunk/src/couchdb/couch_httpd.erl?revision=642432&view=markup#l170)
|   - the inets OTP codebase uses `already_sent` messages [here](https://github.com/erlang/otp/blob/50214f02501926fee6ec286efa68a57a47c2e531/lib/inets/src/http_server/httpd_response.erl#L220)
|
|   It should be safe to remove this clause because we are not using inets
|   anymore, and a search for the `already_sent` term across all dependencies
|   doesn't return any results.
* Merge pull request #2270 from bessbd/changes-feed-input-validationiilyak2019-10-232-1/+51
|\ | | | | Make changes feed return bad request for invalid heartbeat values
| * Make changes feed return bad request for invalid heartbeat valuesBessenyei Balázs Donát2019-10-232-1/+51
|/
|   Previously, using a negative heartbeat value did not return a 400 Bad
|   Request; the request just got an empty response with no status code at all.
|   This commit adds extra checks so that negative and non-integer heartbeat
|   values return 400 Bad Request responses.
|
|   This fixes #2234
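The validation rule this commit adds can be sketched as (a hypothetical Python illustration, not the actual chttpd code; the `BadRequest` class stands in for returning a 400):

```python
class BadRequest(Exception):
    pass

def validate_heartbeat(value):
    # "true" (or a bare ?heartbeat with no value) keeps the default interval;
    # a positive integer is taken as milliseconds; anything else becomes a 400,
    # instead of silently producing an empty response.
    if value is None or value == "true":
        return "default"
    try:
        ms = int(value)
    except (TypeError, ValueError):
        raise BadRequest("Invalid heartbeat value")
    if ms <= 0:
        raise BadRequest("The heartbeat value should be a positive integer (in milliseconds).")
    return ms
```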
* Avoid churning replication jobs if there is enough room to run pending jobsNick Vatamaniuc2019-10-221-2/+34
|   When rescheduling jobs, make sure to stop only as many existing jobs as
|   needed to make room for the pending jobs.
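The "stop only as much as needed" arithmetic can be sketched as (a hypothetical helper in Python for illustration; the real scheduler is Erlang and also applies churn limits not shown here):

```python
def num_jobs_to_stop(running, pending, max_jobs):
    # Stop just enough running jobs that the pending ones fit within max_jobs,
    # never a negative count and never more jobs than are actually running.
    free_slots = max_jobs - len(running)
    needed = len(pending) - free_slots
    return max(0, min(needed, len(running)))
```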
* Merge pull request #2266 from apache/ken-1.0.6Robert Newson2019-10-211-1/+1
|\ | | | | update ken to 1.0.6
| * update ken to 1.0.6 [ken-1.0.6]  Robert Newson  2019-10-21  1  -1/+1
|/
|   * Detect dreyfus/hastings correctly
* Merge pull request #2262 from apache/ken-1.0.5Robert Newson2019-10-181-1/+1
|\ | | | | Update ken to 1.0.5
| * Update ken to 1.0.5 [ken-1.0.5]  Robert Newson  2019-10-18  1  -1/+1
|/
|   * Always include 'query' as an allowed language
* Merge pull request #2260 from apache/ken-query-serversRobert Newson2019-10-174-34/+5
|\ | | | | export get_servers_from_env/1 for ken
| * export get_servers_from_env/1 for ken [ken-query-servers]  Robert Newson  2019-10-17  4  -34/+5
|/
|   Also remove the tests that check that background index building didn't
|   happen, because it does now.
* Merge pull request #2257 from apache/fauxton-1.2.2Will Holley2019-10-161-1/+1
|\ | | | | Update Fauxton to 1.2.2
| * Update Fauxton to 1.2.2 [fauxton-1.2.2]  Will Holley  2019-10-15  1  -1/+1
|/
|   Explicitly installs peer dependencies, the lack of which was causing the
|   webpack bundling to fail.
* Stop creating node local _replicator dbNick Vatamaniuc2019-10-104-41/+0
|   We don't support "local" replications in 3.x, so there is no need to waste
|   resources creating this db on every node and then continuously listening for
|   replication doc updates from it.
* Merge pull request #2250 from apache/fauxton-1.2.1Will Holley2019-10-101-1/+1
|\ | | | | Update Fauxton to 1.2.1
| * Update Fauxton to 1.2.1 [fauxton-1.2.1]  Will Holley  2019-10-10  1  -1/+1
|/
|   Fauxton 1.2.0 failed to compile on some platforms. 1.2.1 is a patch release
|   which updates the webpack dependency to address this.
* Merge pull request #2248 from apache/remove-externalsRobert Newson2019-10-095-281/+0
|\ | | | | Remove "externals"
| * Merge branch 'master' into remove-externals [remove-externals]  Adam Kocoloski  2019-10-09  1  -2/+2
| |\ | |/ |/|
* | Update fauxton to version 1.2.0 (#2247)Will Holley2019-10-091-2/+2
| |
| * Remove "externals"Robert Newson2019-10-085-281/+0
|/
|   Remove all the plumbing that enabled `_external/` request handling, leaving
|   only the functions necessary for `list` and `show`.
|
|   Closes https://github.com/apache/couchdb/issues/2166
* Merge pull request #2240 from cloudant/issue/985-continious-feed-blockingiilyak2019-10-072-19/+54
|\ | | | | Return headers from _changes feed when there are no changes
| * Return headers from _changes feed when there are no changesILYA Khlopotov2019-10-072-19/+54
|/
|   Problem
|   -------
|   A request for a continuous _changes feed doesn't return until either:
|
|   - a new change is made to the database
|   - the heartbeat interval is reached
|
|   This causes clients to block on the subscription call.
|
|   Solution
|   --------
|   Introduce a counter to account for the number of chunks sent. Send '\n'
|   exactly once on `waiting_for_updates` when `chunks_sent` is still 0.
|
|   The implementation is suggested by @davisp
|   [here](https://github.com/apache/couchdb/issues/985#issuecomment-537150907).
|   There is only one difference from his proposal, which is:
|
|   ```
|   diff --git a/src/chttpd/src/chttpd_db.erl b/src/chttpd/src/chttpd_db.erl
|   index aba1bd22f..9cd6944d2 100644
|   --- a/src/chttpd/src/chttpd_db.erl
|   +++ b/src/chttpd/src/chttpd_db.erl
|   @@ -215,7 +215,7 @@ changes_callback(waiting_for_updates, #cacc{buffer = []} = Acc) ->
|            true -> {ok, Acc};
|            false ->
|   -            {ok, Resp1} = chttpd:send_delayed_chunk(Resp, []),
|   +            {ok, Resp1} = chttpd:send_delayed_chunk(Resp, <<"\n">>),
|                {ok, Acc#cacc{mochi = Resp1, chunks_sent = 1}}
|        end;
|    changes_callback(waiting_for_updates, Acc) ->
|   ```
* Merge pull request #2228 from apache/update-couchdb-defaultsRobert Newson2019-10-042-3/+3
|\ | | | | Update default config settings
| * Merge branch 'master' into update-couchdb-defaultsRobert Newson2019-10-041-7/+2
| |\ | |/ |/|
* | Merge pull request #2229 from apache/ping-clouseau-directlyRobert Newson2019-10-031-7/+2
|\ \ | | | | | | Ping clouseau directly
| * | Ping clouseau directly [ping-clouseau-directly]  Robert Newson  2019-10-03  1  -7/+2
| | |   This change eliminates IOQ from the test path for clouseau connectivity.
| | * Update default config settings [update-couchdb-defaults]  Robert Newson  2019-10-04  2  -3/+3
| |/
|/|
| |     q=2
| |     max_document_size = 8000000 ; 8 MB.
| |
| |     https://github.com/apache/couchdb/issues/2115
* | Merge pull request #2217 from apache/fauxton-1.1.20Robert Newson2019-10-031-1/+1
|\ \ | |/ | | Update fauxton to version 1.1.20
| * Update fauxton to version 1.1.20 [fauxton-1.1.20]  Robert Newson  2019-10-02  1  -1/+1
|/
* Remove delayed commits optionNick Vatamaniuc2019-09-2634-268/+32
|   This effectively removes a lot of couch_db:ensure_full_commit/1,2 calls.
|
|   Low-level fsync configuration options are also removed, as it might be
|   tempting to start using those instead of delayed commits; however, unlike
|   delayed commits, changing those defaults could lead to data corruption.
|
|   The `/_ensure_full_commit` HTTP API was left as is, since replicators from
|   older versions of CouchDB would call it; it just returns the start time as if
|   the ensure_commit function was called.
|
|   Issue: https://github.com/apache/couchdb/issues/2165
* Include search in the list of advertised features (#2206)Adam Kocoloski2019-09-251-1/+9
|   The reason we don't do this in config:features() directly is because this
|   one is a dynamic check for the presence of a connected clouseau node. Calling
|   `enable_feature` every time we conduct that check seemed too heavyweight, but
|   I didn't see a good opportunity to just call it once and be confident that it
|   would reliably advertise the feature.
|
|   The downside here is that CouchDB will not advertise the "search" feature if
|   Clouseau is disconnected for maintenance or whatever, although technically
|   it's accurate since search requests submitted during that interval would fail.
|
|   Closes #2205
* Remove old multi-query path (#2173)Adam Kocoloski2019-09-241-12/+14
|   Users should send requests with multiple queries to the new endpoint:
|
|       /db/_design/{ddoc}/_view/{view}/queries
|
|   Closes #2168
* Bump to 3.0.0Joan Touzet2019-09-232-3/+3
|
* feat: do not run stats aggregations on an intervalJan Lehnardt2019-09-201-7/+20
|   Similar to 448be7996999a706464d8f7429a56dc9e9c87c3a (hello 0.10.1),
|   `timer:{send,apply}_interval()` will apply functions / send messages for all
|   intervals that match the time that a machine was in the sleep / hibernation
|   mode that is common on desktop systems.
|
|   In a typical office scenario, a laptop that sleeps over a weekend will, when
|   woken up on a Monday, issue thousands of function calls that, together with
|   other, unrelated wake-up activity, make a machine top out its CPU for no good
|   reason.
|
|   This change addresses that: instead of relying on an interval to start a
|   given task, on startup, start the task once after a timeout, and then start a
|   fresh timer after the task is done. Unlike the 0.10-era patch, this one does
|   not account for a system waking up before the timeout. I'm happy to add that
|   behaviour if a reviewer insists on it.
|
|   As a result, no matter how long the sleep period is, we only run the desired
|   function _once_ after we wake up again. In the never-sleep scenario, the
|   existing behaviour is retained.
|
|   This might impact metrics that have a time component, but I think that's a
|   fair compromise, so I didn't investigate that further.
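The re-arm-after-completion pattern this commit describes can be sketched as follows (Python's `threading.Timer` standing in for Erlang timers, purely as an illustration): the next run is scheduled only after the current one finishes, so ticks missed during sleep collapse into a single run on wake-up instead of a burst.

```python
import threading

def schedule_repeating(task, interval_s):
    # Unlike a fixed timer:send_interval-style schedule, each run arms a fresh
    # one-shot timer after the task completes, so there is never a backlog of
    # pending ticks to drain after the machine wakes from sleep.
    def tick():
        task()
        t = threading.Timer(interval_s, tick)
        t.daemon = True
        t.start()
    t = threading.Timer(interval_s, tick)
    t.daemon = True
    t.start()
    return t
```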
* Remove deprecated dbinfo fields (#2163)Adam Kocoloski2019-09-1810-65/+21
|   These fields are all marked as deprecated in the current documentation and
|   they have more specific replacements in the `sizes` object.
* Merge pull request #2189 from jamieluckett/masterPeng Hui Jiang2019-09-171-1/+1
|\ | | | | Fix typo in couch_mrview comment
| * Fix typo in couch_mrview commentJamie Luckett2019-09-161-1/+1
|/
* Improve credential stripping for replication document readsNick Vatamaniuc2019-09-122-2/+16
|   Allow a special field for plugin writers to stash endpoint credentials, which
|   gets the same treatment as headers and user:pass combinations for already
|   existing plugins (session, noop aka basic auth). Instead of complicating the
|   plugin API, use a simple convention of just calling it "auth" for now.
* Merge pull request #2183 from cloudant/add-extra-arguments-to-beamiilyak2019-09-101-1/+15
|\ | | | | Support `--extra_args` parameter in `dev/run`
| * Support `--extra_args` parameter in `dev/run`ILYA Khlopotov2019-09-101-1/+15
|/
|   Sometimes there is a need to specify additional arguments for the beam
|   process we start from dev/run. In particular, the feature is handy for:
|
|   - changing emulator flags
|   - simulating OOM via available RAM restrictions
|   - enabling module loading tracing
|   - configuring the number of schedulers
|   - modifying applications' configuration
|   - running a customization script to add extra development deps (such as
|     automatic code reload)
|
|   Historically developers had to edit dev/run to do it. This PR adds the
|   ability to specify additional arguments via the `--extra_args` argument. In
|   order to run a customization script, create `customization.erl` which exports
|   `start/0` and run it using:
|
|       dev/run --extra_args='-run customization'
* Merge pull request #2178 from apache/fabric-cleanup-view-filesPeng Hui Jiang2019-09-101-1/+6
|\ | | | | do not cleanup ongoing compact files using fabric:cleanup_index_files/1