| Commit message | Author | Age | Files | Lines |
|
The indexer transaction time is decreased in order to allow enough
time for the client to re-use the same GRV to emit doc bodies.
This PR goes along with [1], where emitted doc bodies in view
responses now come from the same database read version as the one used
by the indexer. Since the batcher previously used 4.5 seconds as the
maximum, that left little time to read any doc bodies.
[1]: https://github.com/apache/couchdb/pull/3391
Issue: https://github.com/apache/couchdb/issues/3381
|
Starting with OTP 21 there is a new logging system, and we forgot to
add the legacy error logger handler for it. Without it `couch_log`
cannot emit gen_server, supervisor and other such system events.
Luckily, there is OTP support to enable legacy error_logger behavior and
that's what we're doing here. The `add_report_handler/1` call will
auto-start the `error_logger` app if needed, and it will also add an
`error_logger` handler to the global `logger` system.
We also keep the `gen_event:add_sup_handler/3` call, as that will
ensure we'll find out when `error_logger` dies so that
`couch_log_monitor` can restart everything.
Someday(TM) we'll write a proper log event handler for the new logger
and have nicely formatted structured logs, but it's better to do that
once we don't have to support OTP versions =< 20.
Issue: https://github.com/apache/couchdb/pull/3422
|
Previously, if a JWT claim was present, it was validated regardless of
whether it was required.
However, according to the spec [1]:
"all claims that are not understood by implementations MUST be ignored"
which we interpret to mean that we should not attempt to validate
claims we don't require.
With this change, only claims listed in required checks are validated.
[1] https://tools.ietf.org/html/rfc7519#section-4
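The rule above can be sketched as a small filter. This is a minimal Python sketch, not CouchDB's actual jwtf API; the function and claim names are illustrative. It shows that only claims listed as required are checked, while everything else is ignored.

```python
import time

def validate_claims(claims, required):
    """Minimal sketch (illustrative names, not CouchDB's actual API):
    validate only the claims listed in `required`; per RFC 7519, claims
    that are not understood/required are simply ignored."""
    for name, predicate in required.items():
        # A required claim must be present and pass its check.
        if name not in claims or not predicate(claims[name]):
            return False
    return True

now = int(time.time())
required = {"exp": lambda v: v > now, "iss": lambda v: v == "trusted"}

# The unknown claim "foo" is present but not required, so it is ignored.
assert validate_claims({"exp": now + 60, "iss": "trusted", "foo": "?"}, required)

# A required claim that fails its check rejects the token.
assert not validate_claims({"exp": now - 1, "iss": "trusted"}, required)
```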
|
The config application depends on couch_log, so include it when
setting up and tearing down tests.
|
The errors seen in #3417 seem to indicate the expiration jobs are interfering
with the couch_jobs tests. To prevent that, stop the expiration_db job
gen_server from starting at all.
Fixes #3417
|
It turns out fabric is dependent on couch_jobs because of the db expiration
module. So when couch_jobs was restarted multiple times per test case it could
have brought down fabric. However, since couch_jobs needs fabric for
transactional stuff, that ended up bringing the couch_jobs app down as well.
To fix it:
* Switch to explicitly starting/stopping fabric and couch_jobs together.
* Break apart the bad_messages* tests to individually test each type of message,
  as app restarts in the middle of the tests kept killing fabric and
  intermittently killing couch_jobs as well.
* Make the tests look nicer by re-using the ?TDEF_FE macros from
  `fabric2_test`; this way we can avoid the `?_test(begin ... end).` pattern.
* Remove meck:unload since we don't actually mock anything in the module.
* Don't spend time cleaning out databases, since we only create one and it
  gets cleaned out in its own test.
|
Previously, the view indexer used the default retry limit (100) from the
`fdb_tx_options` config section. However, since the batching algorithm relies
on sensing errors and reacting to them, retrying the batch 100 times before
erroring out was not optimal. So the value is lowered to 5 and is also made
configurable.
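The interplay between a low retry limit and the batching logic can be sketched as a toy retry loop. The names `run_transaction` and `RetryableError` are illustrative, not CouchDB's or erlfdb's API: the point is that a low limit surfaces a persistent error quickly, so the caller can react, e.g. by shrinking the batch.

```python
class RetryableError(Exception):
    """Stands in for a transient FDB error such as a conflict or timeout."""

def run_transaction(tx_fun, retry_limit=5):
    """Toy sketch, not CouchDB's actual API: retry tx_fun up to retry_limit
    times, then surface the error so the batching algorithm can react
    instead of retrying 100 times inside the transaction layer."""
    last = None
    for _ in range(retry_limit):
        try:
            return tx_fun()
        except RetryableError as err:
            last = err          # remember the error, try again
    raise last                  # give up: let the caller adapt its batch

attempts = {"n": 0}
def always_conflicts():
    attempts["n"] += 1
    raise RetryableError("conflict")

try:
    run_transaction(always_conflicts, retry_limit=5)
except RetryableError:
    pass
assert attempts["n"] == 5   # gave up after exactly 5 attempts
```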
|
1) First, as a cleanup, remove DB `Options` from the `init_db/3` call. We always
follow `init_db/3` (sometimes called through
`fabric2_fdb:transactional(DbName, ...)`) with a `create(TxDb, Options)` or
`open(TxDb, Options)` call, where we overrode `Options` anyway. The only time
we didn't follow it up with a `create/2` or `open/2` is when dbs are deleted,
where `Options` wouldn't matter.
2) Add a new `fabric2_fdb:transactional(DbName|Db, TxOptions, Fun)` call which
allows specifying per-transaction TX options in the `TxOptions` arg. The format
of `TxOptions` is `#{option_name_as_atom => integer | binary}`.
|
* Relax isolation level when indexer reads from DB
This patch causes the indexing subsystem to use snapshot isolation when
reading from the database. This reduces commit conflicts and ensures
the index can make progress even in the case of frequently updated docs.
In the pathological case, a document updated in a fast loop can cause
the indexer to stall out entirely when using serializable reads. Each
successful update of the doc will cause the indexer to fail to commit.
The indexer will retry with a new GRV but the same target DbSeq. In the
meantime, our frequently updated document will have advanced beyond
DbSeq and so the indexer will finish without indexing it in that pass.
This process can be repeated ad infinitum and the document will never
actually show up in a view response.
Snapshot reads are safe for this use case precisely because we do have
the _changes feed, and we can always be assured that a concurrent doc
update will show up again later in the feed.
* Bump erlfdb version
Needed to pull in fix for snapshot range reads.
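The stall described above can be demonstrated with a toy model. None of these names are CouchDB's or FoundationDB's; the model only captures conflict detection: with serializable reads, every concurrent write invalidates the batch, while snapshot reads register no conflict ranges and commit on the first try.

```python
class ToyDB:
    """Toy stand-in for the database: just a global commit version."""
    def __init__(self):
        self.version = 0
    def concurrent_write(self):
        self.version += 1       # another client updates a doc

def index_pass(db, snapshot, max_retries=5):
    """Return the attempt number on which one indexing batch commits,
    or None if it stalls out entirely."""
    for attempt in range(1, max_retries + 1):
        read_version = db.version   # GRV at transaction start
        db.concurrent_write()       # a doc is updated mid-transaction
        # Serializable reads conflict whenever something read has moved on;
        # snapshot reads do not register conflict ranges, so they commit.
        conflicted = (not snapshot) and db.version != read_version
        if not conflicted:
            return attempt
    return None                     # never committed: the stall

db = ToyDB()
assert index_pass(db, snapshot=True) == 1      # snapshot commits first try
assert index_pass(db, snapshot=False) is None  # serializable never commits
```

As the commit message notes, this is safe for indexing precisely because the _changes feed guarantees the concurrently updated doc shows up again in a later pass.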
|
Previously, when an erlfdb error occurred and a recursive call to `update/3` was
made, the result of that call was always matched against `{Mrst, State}`.
However, when the call had finalized and returned the
`couch_eval:release_map_context/1` response, the result would be `ok`, which
would blow up with a badmatch error against `{Mrst, State}`.
|
A tidier version of https://github.com/apache/couchdb/pull/3384 that
saves an unnecessary call to collate.
|
use collate in lookup
|
If one of the provided lookup keys doesn't exist in the ebtree, it can
inadvertently prevent a second lookup key from being found if the
first key greater than the missing lookup key is equal to the second
lookup key.
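A toy model of the bug, written as plain Python over sorted (key, value) pairs rather than ebtree's actual Erlang code: the buggy sorted-merge consumes a node member while skipping past a missing lookup key, so a following lookup key equal to that member is never matched.

```python
def lookup_multi_buggy(members, keys):
    """Toy model of the bug (not ebtree's actual code). When a member
    exceeds the current (missing) key, the member is consumed while
    advancing to the next key, so a next key equal to it is lost."""
    found, ki = [], 0
    for k, v in members:
        if ki >= len(keys):
            break
        if k == keys[ki]:
            found.append((k, v))
            ki += 1
        elif k > keys[ki]:
            ki += 1             # skip the missing key, but lose this member
    return found

def lookup_multi_fixed(members, keys):
    """Fixed merge: after skipping missing keys, re-test the same member."""
    found, ki = [], 0
    for k, v in members:
        while ki < len(keys) and k > keys[ki]:
            ki += 1             # skip missing keys without consuming the member
        if ki < len(keys) and k == keys[ki]:
            found.append((k, v))
            ki += 1
    return found

members = [(2, "a"), (5, "b")]   # a sorted KV node
keys = [1, 2]                    # 1 is missing; 2 equals the first member > 1
assert lookup_multi_buggy(members, keys) == []         # key 2 is lost
assert lookup_multi_fixed(members, keys) == [(2, "a")]
```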
|
These two test cases expose the subtle bug in ebtree:lookup_multi/3
where a key that doesn't exist in the tree can prevent a subsequent
lookup key from matching in the same KV node.
|
This allows users to verify that compaction processes are suspended
outside of any configured strict_window.
|
chunked (#3360)
Transfer-Encoding: chunked causes the server to wait indefinitely when PUTting
a multipart/related document with attachments, then issue a 500 error when the
client finally hangs up.
This commit fixes that issue by adding proper handling for chunked
multipart/related requests.
|
1) The caching effort was a bust and has been removed.
2) Chunkify can be done externally with a custom persist_fun.
|
All endpoints except _session support gzip-encoded requests, and there is no
practical reason for that exception.
This commit enables gzip decoding on compressed requests to _session.
|
Add missing default headers to responses
|
This flips the view indexer to grab the database update_seq outside of
the update transaction. Previously we would constantly refresh the
db_seq value on every retry of the transactional loop.
We use a snapshot to get the update_seq so that we don't trigger
spurious read conflicts with any clients that might be updating the
database.
|
This is useful so that read conflicts on the changes feed will
eventually be resolved. Without an end key specified a reader could end
up in an infinite conflict retry loop if there are clients updating
documents in the database.
|
When we call `couch_httpd:json_body/1` we can have `req_body` already set.
In this case we should return the field as is without any attempt to
decompress or decode it. This PR brings the approach we use in `chttpd`
into `couch_httpd`.
|
Any ebtree that uses chunked key encoding will accidentally wipe out any
nodes that have a UUID with more than one leading zero byte.
|
Waiting for the timeout option to be set means we could still sneak in
and grab the old FDB database handle before fabric2_server updated it in
the application environment.
This new approach just waits until the handle has been updated by
watching the value in the application environment directly.
|
Turns out that ebtree caching wasn't quite correct so removing it for
now.
|
The ebtree caching layer does not work correctly in conjunction with
FoundationDB transaction retry semantics. If we incorrectly cache nodes
that are not actually read from FoundationDB, a retried transaction will
rely on incorrectly cached state and corrupt the ebtree persisted in
FoundationDB.
|
Add an "encryption" object to db info
|
The encryption object contains a boolean "enabled"
property. Additional properties might be added by the key manager,
and these will appear in the "key_manager" sub-object.
|
A document with lots of conflicts can blow up couchjs if the user
calls _changes with a javascript filter and with `style=all_docs`, as
this option causes us to fetch all the conflicts.
All leaf revisions of the document are then passed in a single call to
ddoc_prompt, which can fail if there are a lot of them.
In that event, we simply try them sequentially and assemble the
response from each call.
Should be backported to 3.x
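The fallback strategy can be sketched as follows. The function names are illustrative, not the actual CouchDB query-server code: attempt one batched prompt with all leaf revisions, and if it fails, prompt for each revision individually and concatenate the results.

```python
def filter_revs(prompt, revs):
    """Toy sketch (illustrative names): try filtering all leaf revisions in
    one couchjs call; if that blows up (e.g. too many conflicts), fall back
    to prompting for each revision individually and assembling the result."""
    try:
        return prompt(revs)
    except Exception:
        out = []
        for rev in revs:
            out.extend(prompt([rev]))   # one revision per call
        return out

def fragile_prompt(batch):
    """Stand-in for couchjs: fails when given more than one revision."""
    if len(batch) > 1:
        raise MemoryError("too many revisions for one couchjs call")
    return [r.upper() for r in batch]

# The batched call fails, but the sequential fallback still filters all revs.
assert filter_revs(fragile_prompt, ["a", "b", "c"]) == ["A", "B", "C"]
```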
|
Too many parallel attempts to insert the same keys can result in
`{erlfdb_error, 1020}`, which translates to:
"Transaction not committed due to conflict with another transaction"
This attempts to mitigate the problem by using a snapshot to read the
primary key during insertion.
|
Fix specs to eliminate dialyzer warnings.
|