| Commit message | Author | Age | Files | Lines |
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Key management is _not_ done and so this scheme is not secure.
The AES_MASTER_KEY value should be retrieved interactively and need
not be the same for different databases.
Overwriting the first 40 bytes of the file with any other value
renders the file unreadable.
We use AES in Counter Mode, which ensures we can encrypt and decrypt
any section of the file without padding or alignment. The ciphertext
is the same length as the plaintext. This mode provides
confidentiality but not authentication.
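The counter-mode property described above (same-length ciphertext, random access to any section of the file) can be sketched as follows. This is a toy illustration only: it uses a SHA-256-based keystream as a stand-in for AES, and all names are invented, not taken from the CouchDB code.

```python
import hashlib

def keystream(key: bytes, offset: int, length: int) -> bytes:
    # Toy CTR-style keystream: block i is H(key || counter). Real AES-CTR
    # encrypts the counter block with AES instead; the structure is the same.
    block = 32  # SHA-256 digest size
    first = offset // block
    last = (offset + length - 1) // block
    out = b"".join(
        hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        for ctr in range(first, last + 1)
    )
    start = offset - first * block
    return out[start:start + length]

def crypt(key: bytes, offset: int, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same operation.
    ks = keystream(key, offset, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

plaintext = b"any section of the file"
ct = crypt(b"master-key", 100, plaintext)
assert len(ct) == len(plaintext)                    # ciphertext same length
assert crypt(b"master-key", 100, ct) == plaintext   # decrypting = re-encrypting
# Random access: decrypt bytes 5..12 without touching the rest of the file.
assert crypt(b"master-key", 105, ct[5:12]) == plaintext[5:12]
```

Note that, exactly as the commit message warns, this keystream construction provides confidentiality only; nothing detects tampering with the ciphertext.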
|
|
|
|
|
|
|
|
|
|
|
| |
* fix badargs for timed out responses
Under heavy load, fabric_update_doc workers will return timeout via
rexi. As a result, no responses populate the response dictionary.
This leads to badarg errors when we call dict:fetch/2 for keys that
do not exist. This fix ensures that each document update receives a
response; if any document lacks one, the entire update returns a 500.
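The failure mode and the fix can be sketched in a few lines. This is a hypothetical Python analog (all names invented): Erlang's dict:fetch/2 raises badarg on a missing key just as Python's `dict[...]` raises KeyError, so the fix is to check for a response per document instead of fetching blindly.

```python
def finish_update(doc_ids, responses):
    # A timed-out worker contributes nothing to `responses`, so fetching
    # its key would raise. Guard for missing responses up front instead.
    missing = [d for d in doc_ids if d not in responses]
    if missing:
        return 500, {"error": "timeout", "missing": missing}
    return 201, [responses[d] for d in doc_ids]

ok = finish_update(["a", "b"], {"a": "ok", "b": "ok"})
timed_out = finish_update(["a", "b"], {"a": "ok"})  # worker for "b" timed out
assert ok[0] == 201
assert timed_out[0] == 500
```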
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously we didn't check responses from get_state/2 or await/2 functions when
building indices. If an index updater crashed, and the index never finished
building, the get_state/2 call would simply return an error and the process
would exit normally. Then, the shard splitting job would count that as a
success and continue to make progress.
To fix that, make sure to check the response to all the supported indexing
types and wait until they return an `ok` result.
Additionally, increase the index building resilience to allow for more retries
on failure, and for configurable retries for individual index builders.
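The retry-until-ok behavior described above can be sketched as follows. This is a hypothetical Python analog (names invented, not the actual shard-splitting code): each index builder is retried on failure, and the step only counts as done when every builder returns an `ok` result.

```python
def build_indices(builders, max_retries=5):
    # Check every builder's result; never treat an error exit as success.
    for build in builders:
        for _attempt in range(max_retries):
            if build() == "ok":
                break
        else:
            # Exhausted retries without an `ok` result: fail the whole step
            # instead of silently counting it as progress.
            raise RuntimeError("index build failed after retries")
    return "ok"

attempts = {"n": 0}
def flaky():
    # Succeeds on the third attempt, exercising the retry loop.
    attempts["n"] += 1
    return "ok" if attempts["n"] >= 3 else "error"

assert build_indices([flaky]) == "ok"
assert attempts["n"] == 3
```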
|
| |
|
|
|
|
|
| |
If the cookie is undefined then we should not set it, so that the node
can pick up ~/.erlang.cookie if it is present.
|
| |
|
| |
|
|
|
|
|
| |
Creating an index with "ddoc":"" or "name":"" should return a 400 Bad Request.
This fixes: https://github.com/apache/couchdb/issues/1472
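The validation rule can be sketched as follows, in a hypothetical Python analog (function and field handling invented for illustration, not the actual Mango code): an empty-string `ddoc` or `name` is rejected with a 400 rather than accepted.

```python
def validate_index(body):
    # Reject empty-string ddoc or name with 400 Bad Request instead of
    # silently creating an index with an unusable identifier.
    for field in ("ddoc", "name"):
        if field in body and body[field] == "":
            return 400, f"invalid value for {field}"
    return 200, "ok"

assert validate_index({"ddoc": ""})[0] == 400
assert validate_index({"name": ""})[0] == 400
assert validate_index({"name": "idx"})[0] == 200
```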
|
|\
| |
| | |
Remove CI support for Debian 9 (stretch)
|
| | |
|
|\ \
| |/
| | |
mango_tests: revert hypothesis back for python3.6 compat
|
| | |
|
|\ \
| |/
| | |
Windows test suite difficulties (nose2, timestamps)
|
| |\
| |/
|/| |
|
|\ \
| | |
| | | |
Ensure Object.prototype.toSource is always available
|
|/ / |
|
|\ \
| | |
| | | |
Convert DbName to list before cons
|
|/ / |
|
|\ \
| | |
| | | |
Cause a 400 Bad Request if decoding JWT token fails
|
|/ / |
|
|\ \
| | |
| | | |
Search is available if it was ever available since start
|
|/ /
| |
| |
| |
| | |
Calling connected() every time causes spurious 503s when clouseau
is temporarily unavailable; this is usually masked by retry logic.
|
| |\
| |/
|/| |
|
|\ \
| | |
| | | |
Do not git ignore src/ioq subfolder
|
|/ /
| |
| |
| |
| | |
We currently include ioq in the main repo, so it doesn't make sense to
ignore it.
|
| | |
|
| | |
|
| | |
|
|/ |
|
|
|
|
|
| |
Create new config options in `couchdb` and `smoosh` sections to enable
finer control of compaction logging levels.
|
|
|
|
|
| |
These entries are logged because of exceptions, and so should be
at least warning level.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
For non-existent ddocs, a new ddoc_cache_entry is spawned for each
call to ddoc_cache:open. Multiple calls to open the same non-existent
ddoc will create multiple cache entries with the same Key but
different PIDs. This can result in khash lookups returning `not_found`
as in the error below:
[error] 2021-12-10T02:25:21.622743Z node1@127.0.0.1 <0.18923.9> -------- gen_server ddoc_cache_lru terminated with reason: no match of right hand value not_found at ddoc_cache_lru:remove_key/2(line:308) <= ddoc_cache_lru:handle_info/2(line:219) <= gen_server:try_dispatch/4(line:637) <= gen_server:handle_msg/6(line:711) <= proc_lib:init_p_do_apply/3(line:249)
This fix checks the return value of `khash:lookup` and only proceeds
to delete keys when the result is not `not_found`.
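The guard can be sketched as follows, in a hypothetical Python analog (names invented; a plain dict stands in for khash): a removal request for a key whose entry now belongs to a different PID yields `not_found` and is left alone, instead of crashing the LRU process with a match failure.

```python
def remove_entry(cache, key, pid):
    # Analog of checking khash:lookup/2 before khash:del/2: only delete
    # when the entry exists and is the one (PID) we were asked to remove.
    found = cache.get(key, "not_found")
    if found == "not_found" or found != pid:
        return "not_found"
    del cache[key]
    return "ok"

cache = {"missing_ddoc": "pidA"}
# A second, stale entry for the same key carries a different PID: no crash.
assert remove_entry(cache, "missing_ddoc", "pidB") == "not_found"
assert remove_entry(cache, "missing_ddoc", "pidA") == "ok"
assert "missing_ddoc" not in cache
```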
|
| |
|
|
|
|
|
|
|
|
| |
It's difficult to add rebar.config files per application when that
file name is ignored by git.
Also, ken and smoosh are now core CouchDB applications, so it's no
longer necessary to git ignore them.
|
|
|
|
|
|
| |
src/couch/src/test_util.erl:276:16: Warning: variable 'Timeout' is unused
src/test_util.erl:276:25: Warning: variable 'Delay' is unused
src/couchdb/src/couch/src/test_util.erl:276:32: Warning: variable 'Started' is unused
|
|\
| |
| | |
Fix smoosh enqueueing not found dbs and typo
|
|/ |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
`couch_key_tree:stem/2`, as seen in
https://github.com/apache/couchdb/pull/3963, has a potential to consume quite a
bit of memory. Replacing sets with maps helped in that case; however, since
stemming has a non-tail-recursive section, there is a chance that future
versions of Erlang will exhibit the same behavior again.
As a safeguard, add a memory limit test by stemming a larger conflicted rev
tree while limiting the maximum process heap size. For that, use the nifty
`max_heap_size` process flag, which ensures a process is killed if it starts
using too much memory.
To reduce the flakiness, use a deterministic tree shape by using a hard-coded
seed value and leave a decent margin of error for the memory limit.
Ref: https://github.com/apache/couchdb/pull/3963
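The two ideas in this test, a deterministic tree shape from a hard-coded seed and a memory ceiling with a generous margin, can be sketched in Python. Erlang's `max_heap_size` process flag has no direct Python equivalent, so this toy analog (all names invented) measures peak allocation with `tracemalloc` instead of killing the process.

```python
import random
import tracemalloc

def gen_conflicted_revs(seed, n):
    # Hard-coded seed -> identical "rev tree" shape on every run, so the
    # memory profile is reproducible and the limit check is not flaky.
    random.seed(seed)
    return [random.getrandbits(128) for _ in range(n)]

def stem(revs, depth):
    # Toy stand-in for stemming: keep only the deepest `depth` revisions.
    return revs[-depth:]

tracemalloc.start()
revs = gen_conflicted_revs(42, 10_000)
stemmed = stem(revs, 1_000)
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Generous margin, in the spirit of max_heap_size: fail loudly if stemming
# this tree ever starts using far more memory than expected.
LIMIT = 10 * 1024 * 1024
assert peak < LIMIT
assert len(stemmed) == 1_000
```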
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Due to a compiler bug, sets consume too much memory and are slower on Erlang 22+.
Reproducer: https://gist.github.com/nickva/ddc499e6e05678faf20d344c6e11daaa
With sets:
```
couch_key_tree:gen_and_stem().
{8,6848.812683105469}
```
With maps:
```
couch_key_tree:gen_and_stem().
{0,544.000732421875}
```
|
|\
| |
| | |
Add smoosh queue persistence
|
|/ |
|
| |
|