| Commit message | Author | Age | Files | Lines |
Previously we subtly relied on one set of headers already being sorted, sorted the
other set, and ran `lists:ukeymerge/3`. That function, however, needs both
arguments to be sorted in order to work as expected. If one argument wasn't
sorted we could easily get duplicate headers, which is what was observed in
testing.
A better fix than just sorting both sets of keys is to use an actual
header-processing library to combine them, so we can account for case
insensitivity as well.
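The failure mode can be sketched in Python (the real code uses Erlang's `lists:ukeymerge/3`; the `ukeymerge` function below is a simplified stand-in, not the actual implementation):

```python
def ukeymerge(a, b):
    """Merge two (key, value) lists, dropping b's entry on duplicate keys.
    Like Erlang's lists:ukeymerge/3, this is only correct if BOTH inputs
    are already sorted by key."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i][0] < b[j][0]:
            out.append(a[i]); i += 1
        elif a[i][0] > b[j][0]:
            out.append(b[j]); j += 1
        else:  # equal keys: keep the entry from the first list
            out.append(a[i]); i += 1; j += 1
    return out + a[i:] + b[j:]

# Both inputs sorted: one entry per key, as intended.
merged = ukeymerge([("a", 1), ("c", 3)], [("a", 9), ("b", 2)])
assert merged == [("a", 1), ("b", 2), ("c", 3)]

# First input unsorted: key "a" survives twice -- the duplicate-header bug.
dup = ukeymerge([("c", 3), ("a", 1)], [("a", 9), ("b", 2)])
assert [k for k, _ in dup].count("a") == 2
```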
Previously there was an attempt to shortcut some of the job initialization if
the replication ID in the job data matched the newly computed one. However,
that logic was wrong, as it skipped over the job data state update.
The effect was that if a job was in a pending state and was re-initialized, say
when a node restarted, its job data would still indicate "pending" until the
next checkpoint. If the job is continuous and there are no more updates on the
source, the job would stay in the "pending" state indefinitely.
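The shape of the fix can be sketched in Python (a hypothetical simplification; field names like `rep_id` and `state` are illustrative, not the actual record fields):

```python
def init_job(job_data, new_rep_id):
    # Even when the replication ID is unchanged and the expensive
    # re-initialization can be skipped, the job state must still be
    # updated; the bug was returning early here, leaving a "pending"
    # job marked pending until the next checkpoint.
    if job_data["rep_id"] != new_rep_id:
        job_data["rep_id"] = new_rep_id
        # ...full re-initialization would go here...
    # The fix: update the state regardless of the shortcut above.
    job_data["state"] = "running"
    return job_data

job = {"rep_id": "abc", "state": "pending"}
assert init_job(job, "abc")["state"] == "running"
```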
Some Elixir test cases don't have an actual module tag. Add tags to
help include or exclude them in CI tests.
This value is emitted in _active_tasks and previously emitted `null`
values taken from the state record's defaults.
Previously there was an attempt to keep backwards compatibility with 3.x
replicator plugins by transforming the auth into a proplist with
`maps:to_list/1`. However, that didn't account for nested properties, so we
could have ended up with a top level of props with maps for some values.
Instead of making things too complicated and doing a nested transform to
proplists, just keep the auth object as a map and let the plugins handle the
compatibility issue.
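The shallow-conversion problem is the same one Python's `dict.items()` has, which makes for a compact illustration (Erlang's `maps:to_list/1` likewise converts only the top level):

```python
# A nested auth object, roughly what a replicator auth map looks like
# (the exact keys here are illustrative).
auth = {"basic": {"username": "u", "password": "p"}}

# Shallow conversion: only the top level becomes a list of pairs.
proplist = list(auth.items())

# The nested value is still a dict (a map, in Erlang terms), so the
# result is a proplist with map values -- the inconsistent shape the
# commit describes.
assert isinstance(proplist[0][1], dict)
```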
This is mainly for compatibility with CouchDB 3.x
Ref: https://docs.couchdb.org/en/stable/api/server/common.html#scheduler-jobs
Don't unnecessarily unwrap the fetch error, since `error_info/1` can already
handle the current shape. Also, make sure to translate the reason to a binary
for consistency with the other filter fetching errors in the
`couch_replicator_filters` module.
Add a test to ensure we return the `filter_fetch_error` term, as that is
explicitly turned into a 404 error in chttpd, so we try to maintain
compatibility with CouchDB <= 3.x code.
Make sure to handle both `finished` and `pending` states when waiting for a
transient job. A transient job will go to the `failed` state if it cannot
fetch the filter from the source endpoint. For completeness, we also account
for the `pending` state there, in the remote chance that the job gets
rescheduled again.
These are a few micro optimizations to avoid unnecessary work when
reading from a single reduce function during a view read.
This fixes compilation if CouchDB is used as a dependency.
Caught during Elixir tests. I've added a unit test to `ebtree.erl` to
ensure we don't regress in the future.
Job exits are asynchronous, so we make sure to wait for exit signals to be
handled before checking the state.
Previously, in 3.x, we re-parsed the endpoint URLs with
`ibrowse_lib:parse_url/1` when stripping credentials, which threw an error if
the URL was invalid. So we try to preserve that same logic.
Backport some tests from 3.x to make sure URL stripping works when the URL is
valid, and also use the nicer ?TDEF and ?TDEF_FE test helpers.
This fixes a94e693f32672e4613bce0d80d0b9660f85275ea because a race
condition existed where the 'DOWN' message could be received
before the compactor pid was spawned. Adding a synchronous call to
get the compactor pid guarantees that the couch_db_updater process
has finished handling finish_compaction.
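Why a synchronous call closes the race can be sketched in Python with a queue-driven loop standing in for the `couch_db_updater` process (all names here are illustrative; the real code is an Erlang gen_server call):

```python
import queue
import threading

def updater_loop(inbox):
    # Fake updater: by the time it answers a synchronous request, it has
    # already processed every message queued before it, including
    # finish_compaction, so the respawned compactor pid is visible.
    compactor_pid = "new-compactor-123"  # hypothetical pid value
    while True:
        msg = inbox.get()
        if msg is None:
            return
        tag, reply = msg
        if tag == "get_compactor_pid":
            reply.put(compactor_pid)

def get_compactor_pid(inbox):
    # Synchronous call: enqueue a request and block on the reply,
    # rather than racing a 'DOWN' notification against the spawn.
    reply = queue.Queue()
    inbox.put(("get_compactor_pid", reply))
    return reply.get(timeout=5)

inbox = queue.Queue()
threading.Thread(target=updater_loop, args=(inbox,), daemon=True).start()
assert get_compactor_pid(inbox) == "new-compactor-123"
inbox.put(None)  # shut the fake updater down
```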
We need to call StartFun as it might add headers, etc.
Smoosh monitors the compactor pid to determine when a compaction job
finishes, and uses this for its idea of concurrency. However, this isn't
accurate when a compaction job has to re-spawn to catch up on intervening
changes, since the same logical compaction job continues with another pid and
smoosh is not aware. In such cases, a smoosh channel with concurrency one can
start arbitrarily many additional database compaction jobs.
To solve this problem, we added a check in `start_compact` to see if a
compaction PID already exists for a db. But we need to add another check,
because that one only covers shards that come off the queue. So the following
can still occur:
1. Enqueue a bunch of stuff into a channel with concurrency 1
2. Begin the highest priority job, Shard1, in the channel
3. Compaction finishes, discovers the compaction file is behind the main file
4. The smoosh-monitored PID for Shard1 exits, and a new one starts to finish
the job
5. Smoosh receives the 'DOWN' message and begins the next highest priority
job, Shard2
6. Channel concurrency is now 2, not 1
This change adds another check to the 'DOWN' message handler so that it checks
for that specific shard. If a compaction PID exists, it means a new process
was spawned, and we just monitor that one and add it back to the queue. The
length of the queue does not change, and therefore we won't spawn new
compaction jobs.
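The 'DOWN'-handler logic described above can be sketched in Python (a hypothetical simplification of the Erlang channel; the dict-of-lists channel shape and names like `active_pids` are illustrative):

```python
def handle_down(channel, shard):
    # If the shard already has a live compaction pid, the job respawned
    # to catch up on intervening changes: monitor the new pid and put
    # the shard back on the queue instead of starting the next job, so
    # the channel's concurrency limit is respected.
    new_pid = channel["active_pids"].get(shard)
    if new_pid is not None:
        channel["monitored"].add(new_pid)  # keep tracking the same job
        channel["queue"].append(shard)     # queue length is unchanged
    else:
        start_next_job(channel)            # job truly finished

def start_next_job(channel):
    if channel["queue"]:
        channel["running"].append(channel["queue"].pop(0))

# Shard1's first pid exited, but a catch-up pid exists: with the fix,
# Shard2 is NOT started, so concurrency stays at 1.
channel = {"active_pids": {"shard1": 7001}, "monitored": set(),
           "queue": ["shard2"], "running": []}
handle_down(channel, "shard1")
assert channel["running"] == []
assert channel["queue"] == ["shard2", "shard1"]
```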
Previously an error was thrown, which prevented emitting _scheduler/docs
responses. Instead of throwing an error, return `null` if the URL cannot be
parsed.
Add option to delay responses until the end
When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.
The config value can be changed at runtime without impacting any
in-flight responses.
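The difference between streaming and buffering can be sketched in Python (a hypothetical simplification; the real option is a CouchDB config value and the transport is chttpd's chunked HTTP response):

```python
def send_response(generate_chunks, transport, buffer_response=False):
    # With buffer_response on, the whole body is generated before any
    # byte is sent, so a mid-generation failure surfaces as a clean
    # error instead of a response truncated midway through. The cost is
    # holding the full body in memory on the server.
    if buffer_response:
        body = b"".join(generate_chunks())  # may raise before sending
        transport.append(body)
    else:
        for chunk in generate_chunks():     # streams; can be cut midway
            transport.append(chunk)

sent = []
send_response(lambda: iter([b"abc", b"def"]), sent, buffer_response=True)
assert sent == [b"abcdef"]  # one fully generated response
```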
Improve jwtf keystore error handling
Allow search index cleanup to continue even if there is an invalid design document
In some situations where a customer-created design document for a search
index is not valid, the _search_cleanup endpoint will stop cleaning up,
leaving some search indexes orphaned. This change allows search index cleanup
to continue even if there is an invalid design document for search.
fix bookmark passing with text indexes
Previously, we passed in the unpacked version of the bookmark, with the
cursor, inside the options field. This worked fine for _find because we
didn't need to return it to the user. But for _explain, we returned the value
back as an unpacked tuple instead of a string, and jiffy:encode/1 complains.
Now we correctly extract the bookmark out of options, unpack it, and then
pass it separately in its own field. This way options retains its original
string form for the user, so that invalid_ejson is not thrown.
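The fix can be sketched in Python, with `json.dumps` standing in for `jiffy:encode/1` (the `unpack` helper and bookmark format here are hypothetical; the real unpacking lives in mango's cursor code):

```python
import json

def unpack(packed):
    # Hypothetical stand-in for bookmark unpacking: the cursor is a
    # tuple, which a JSON encoder cannot serialize -- the analogue of
    # jiffy:encode/1 raising invalid_ejson on an Erlang tuple.
    return tuple(packed.split(":"))

def explain(options):
    # Extract the bookmark out of options so options stays JSON-safe,
    # unpack it for internal use, and report the original string form
    # back to the user in its own field.
    packed = options.pop("bookmark")
    cursor = unpack(packed)           # used internally, never encoded
    assert isinstance(cursor, tuple)
    return json.dumps({"opts": options, "bookmark": packed})

out = json.loads(explain({"bookmark": "g1:AAAA", "limit": 25}))
assert out["opts"] == {"limit": 25}
assert out["bookmark"] == "g1:AAAA"  # still the original string form
```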
bypass partition query limit for mango