| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since supporting SpiderMonkey versions > 1.8.5 we compile design
doc functions of the form `function(args) { /* impl */ }` into a
form that is recognised by newer JS engines.
For reduce views, this means a transpilation happens on each
reduce call over the couchjs protocol, which is once for every
level in the b+tree plus one final rereduce across all shards.
This slows down reduce view indexing/querying.
This patch adds caching to the compilation function. This is
implemented by producing a SHA-256 hash of each incoming
JS function and caching the compiled result in a global object
in the memory of a `couchjs` process.
The cache is cleared when an `add_fun` message is received, which
happens before new map functions from a new ddoc are loaded into
`couchjs`. This ensures that only functions from a single view &
security context are ever loaded into the cache.
SHA-256 was chosen because producing collisions that are also
valid JS functions is unlikely.
This specific SHA-256 implementation was chosen because:
- it is favourably licensed (MIT)
- it is taken from the Deno (https://deno.land) project (h/t Martin
  Sonnenholzer for the tip), so we can be reasonably assured it
  has been tested thoroughly.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When shards are moved to new nodes, and the user supplies a change sequence
from the old shard map configuration, attempt to match missing nodes and ranges
by inspecting current shard uuids in order to avoid rewinds.
Previously, if a node and range was missing, we randomly picked a node in the
appropriate range, so 1/3 of the time we might have hit the exact node, but 2/3
of the time we would end up with a complete changes feed rewind to 0.
Unfortunately, this involves a fabric worker scatter-gather operation to all
shard copies. This should only happen when we get an old sequence. We rely on
that happening rarely, mostly right after the shards have moved; after that,
users would get new sequences from the recent shard map.
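The matching step above can be sketched roughly as follows. This is an
illustrative sketch only: the data shapes (`uuidPrefix`, `uuid`, `node`) are
hypothetical and do not reflect fabric's actual Erlang records.

```javascript
// Match an old change-sequence entry to a current shard copy by uuid
// prefix, instead of picking a copy in the range at random.
function matchByUuid(oldEntry, currentShards) {
  // Old sequence entries carry a (possibly truncated) shard uuid; pick
  // the current shard whose uuid has it as a prefix, or null if none.
  return currentShards.find(s => s.uuid.startsWith(oldEntry.uuidPrefix)) || null;
}
```

Only when no uuid matches does the changes feed have to rewind.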
|
|
|
|
|
|
| |
This module was kept around since 2.2.0 only to facilitate cluster upgrades
after we switched the receiver logic to not send closures between nodes:
https://github.com/apache/couchdb/commit/fe53e437ca5ec9d23aa1b55d7934daced157a9e3
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This only applies to databases that have an n > [cluster] n.
Our `middleman()` function that proxies attachment streams from
the incoming HTTP socket on the coordinating node to the target
shard-bearing nodes used the server config to determine whether
it should start dropping chunks from the stream.
If a database was created with a larger `n`, the `middleman()`
function could have started to drop attachment chunks before
all attached nodes had a chance to receive them.
This fix uses a database's concrete `n` value rather than the
server config default value.
Co-Authored-By: James Coglan <jcoglan@gmail.com>
Co-Authored-By: Robert Newson <rnewson@apache.org>
|
|
|
|
|
|
|
|
|
|
|
| |
While including a payload within a DELETE request is not forbidden by RFC 7231,
its presence in a delete attachment request leaves a mochiweb acceptor
in a half-open state, since mochiweb loads request bodies lazily.
This makes the next immediate request to the same acceptor hang
until the previous request's receive timeout fires.
This PR adds a step to explicitly "drain" and discard the entity body of a
delete attachment request to prevent that.
|
|
|
|
|
|
|
|
| |
Rebar's mustache templating engine has a bug when handling the }}} brackets in a
case like {...{{var}}}, so we work around the issue by using a separate
variable.
This is an alternate fix for issue: https://github.com/apache/couchdb/pull/3617
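A sketch of the ambiguity (the template content here is hypothetical, not the
actual CouchDB template): when a literal `}` immediately follows a `{{var}}`
tag, the engine can misread the run of closing braces:

```
{...{{var}}}
```

The trailing `}}}` could close a triple-stache `{{{var}}}` or close `{{var}}`
followed by a literal `}`. Moving the surrounding literal braces into their own
template variable removes the ambiguous brace run from the template entirely.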
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
They used to be disabled before the last major ibrowse upgrade.
On MacOS and FreeBSD the following tests fail periodically:
```
ibrowse_tests: running_server_fixture_test_ (Pipeline too small signals retries)...*failed*
in function ibrowse_tests:'-small_pipeline/0-fun-5-'/1 (test/ibrowse_tests.erl, line 150)
in call from ibrowse_tests:small_pipeline/0 (test/ibrowse_tests.erl, line 150)
**error:{assertEqual,[{module,ibrowse_tests},
{line,150},
{expression,"Counts"},
{expected,"\n\n\n\n\n\n\n\n\n\n"},
{value,"\t\n\n\n\n\t\t\n\n\t"}]}
output:<<"Load Balancer Pid : <0.494.0>
```
But they seem to pass more reliably on Linux for some reason. It would be nice
to run the tests, of course, but having a passing full-platform suite is more
important.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
| |
The endpoint is admin-only.
Closes #3298
|
|
|
|
| |
Closes #3362
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* Add new app couch_prometheus
This adds a new app with a _prometheus endpoint which will
return metrics information that adheres to the format described at
https://prometheus.io/.
Initial implementation of the new _prometheus endpoint. A gen_server
waits for scraping calls while polling couch_stats:fetch and
other system info. The return value is constructed to adhere to the
prometheus format and returned as text/plain. The format code
was originally written by @davisp.
We add an option to spawn a new mochiweb_http server to allow for an
additional port for scraping which does not require authentication.
The default ports are 17986, 27986, 37986 across 3 nodes.
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>
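As an illustration of the Prometheus text exposition format the endpoint
returns, here is a minimal sketch. The metric name and value are hypothetical,
not taken from CouchDB's actual metric set, and the helper is written in
JavaScript purely for illustration (the real code is Erlang).

```javascript
// Render one metric in the Prometheus text exposition format:
// a # HELP line, a # TYPE line, then the sample itself.
function toPrometheus(name, type, help, value) {
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} ${type}`,
    `${name} ${value}`
  ].join('\n');
}
```

A scraper polling the endpoint receives a text/plain body made of stanzas in
this shape, one per exported metric.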
|
|
|
|
|
|
|
|
|
|
|
| |
Bring back ppc64le builds
s390x seems to fail, possibly related to mozjs60, so skip it for now. Also,
FoundationDB doesn't build on either architecture.
https://github.com/apache/couchdb/issues/3660
https://github.com/apache/couchdb/issues/3454#issuecomment-876738187
|
| |
|
|
|
|
|
|
|
|
| |
With the move from using a forked ibrowse to upstream [1], the
ibrowse options for socks5 proxy settings all changed to a `socks5_`
prefix.
[1] https://github.com/apache/couchdb/pull/3551
|
|\
| |
| | |
Normalize some config options
|
|/ |
|
|
|
|
|
|
|
|
|
| |
These system db defaults were left unchanged when this code was
imported from Cloudant. This updates them to CouchDB defaults, by
using existing functions in the appropriate application and module.
h/t @chewbranca for discovering the issue, and also suggesting
a better way to obtain these config values.
|
|
|
|
|
|
|
|
|
|
| |
In their current form, some of these tests rely on configuration props
set with specific values in rel/overlay/etc/default.ini, which makes
them prone to breakage when those values change, or when tests run in
non-default configuration.
This change deletes all config settings in the relevant sections under
test, and then adds those under test back explicitly.
|
|
|
|
|
|
|
|
|
| |
Previously, the 4.4.2-4 ibrowse upstream rebase also included the commit which
unconditionally unquoted userinfo credentials. Since we now have a better way
of handling basic auth creds, bump ibrowse with a rebase which doesn't include
that commit.
This is the 3.x port of https://github.com/apache/couchdb/pull/3612
|
|
|
|
|
|
|
| |
* mochiweb : upgrade crypto functions to support OTP 23+
* ibrowse : update time functions and fix flaky unit test
Backport of https://github.com/apache/couchdb/pull/3610
|
|
|
|
|
|
|
|
|
| |
It doesn't really work, as we have functionality relying on 20.0+
features. One particular instance is in [1].
Issue: https://github.com/apache/couchdb/issues/3571
[1] https://github.com/apache/couchdb/blob/ce596c65d9d7f0bc5d9937bcaf6253b343015690/src/couch/src/couch_emsort.erl#L363-L366
|
|
|
|
| |
This is a backport of https://github.com/apache/couchdb/commit/e349128d21212e9ab9ca35e8a72c581b9b77ebb1 from main.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, there were two ways to pass in basic auth credentials for
endpoints -- using the URL's userinfo part or encoding them in an
`"Authorization": "Basic ..."` header. Neither one is ideal, for these reasons:
* Passwords in userinfo don't allow using ":", "@" and other characters.
However, even after switching to always unquoting them like we did recently
[1], authentication would break for usernames or passwords previously
containing "+" or "%HH" patterns, as "+" might now be decoded to a " ".
* Base64-encoded headers need an extra step to encode them. Also, quite often
these encoded headers are mistaken for being "encrypted" and shared over a
clear channel.
To improve this, revert the recent commit to unquote URL userinfo parts to
restore backwards compatibility, and introduce a way to pass in basic auth
credentials in the "auth" object. The "auth" object was already added a while
back to allow authentication plugins to store their credentials in it. The
format is:
```
"source": {
    "url": "https://host/db",
    "auth": {
        "basic": {
            "username": "myuser",
            "password": "mypassword"
        }
    }
}
```
The {"auth": {"basic": {...}}} object is checked first, and if credentials are
provided, they will be used. If they are not, the userinfo and basic auth
header will be parsed.
Internally, there was a good amount of duplication related to parsing
credentials from userinfo and headers in the replication ID generation logic
and in the auth session plugin. As a cleanup, consolidate that logic in the
`couch_replicator_utils` module.
[1] https://github.com/apache/couchdb/commit/f672b911db19981a81d7fc6ce8ac33b150234fd7
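The precedence described above can be sketched as follows. This is a minimal
illustration only: the function name `pickCredentials` and the returned object
shape are hypothetical, and the real replicator logic lives in Erlang in
`couch_replicator_utils`.

```javascript
// Prefer the "auth" object's basic credentials; otherwise fall back
// to parsing the userinfo part of the source URL.
function pickCredentials(source) {
  const basic = source.auth && source.auth.basic;
  if (basic && basic.username !== undefined) {
    return {user: basic.username, pass: basic.password, from: 'auth_object'};
  }
  // Fall back to the URL userinfo part, e.g. https://user:pass@host/db
  const u = new URL(source.url);
  if (u.username) {
    return {
      user: decodeURIComponent(u.username),
      pass: decodeURIComponent(u.password),
      from: 'userinfo'
    };
  }
  return null;
}
```

With both present, the "auth" object wins, so users with ":" or "@" in their
passwords can avoid userinfo quoting problems entirely.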
|
| |
|
|
|
|
|
|
| |
Upgrade random -> rand
https://github.com/apache/couchdb-hyper/releases/tag/CouchDB-2.2.0-7
|
| |
|
|
|
|
|
|
|
|
|
| |
The main fix is to switch crypto functions to use the new versions for
OTP 22+ while keeping Erlang 20 still working.
```
crypto:hmac(Alg, Key, Message) -> crypto:mac(hmac, Alg, Key, Message)
```
|
|
|
|
|
|
|
| |
Set the `worker_trap_exits = false` setting to ensure our replication worker
pool properly cleans up worker processes.
Ref: https://github.com/apache/couchdb/pull/3208
|
| |
|
| |
|
| |
|
|
|
|
|
| |
`fabric:get_doc_info/3` requires three arguments, but this line was
only passing one.
|
|\
| |
| | |
Import weatherreport
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| | |
|
| |
| |
| |
| |
| | |
Search is disabled by default in CouchDB, so a failure to
connect to Clouseau should only be a warning.
|
| |
| |
| |
| |
| | |
weatherreport previously relied on Cloudant's IOQ implementation.
This adds support for the default IOQ so that it works with either.
|