Req body json 3.x
Currently the EPI plugins have no easy way to modify the body of a document
in a before-request hook. There are complicated workarounds, such as
overriding the compression header, because `chttpd:json_body/1` expects a
compressed body. We can rely on the fact that `MochiReq:recv_body/1` always
returns a binary, and allow `req_body` to carry already-parsed JSON terms
(objects and lists).
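With that in place, a before-request hook can stash a parsed term directly.
A minimal sketch (the plugin module and the injected field are hypothetical):

    before_request(#httpd{} = Req) ->
        % Parse the body once here; a later call to json_body/1 will
        % return this term as-is instead of re-reading the socket.
        {Props} = chttpd:json_body_obj(Req),
        Req2 = Req#httpd{req_body = {[{<<"injected">>, true} | Props]}},
        {ok, Req2}.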
When we call `couch_httpd:json_body/1` we can have `req_body` already set.
In this case we should return the field as is without any attempt to
decompress or decode it. This PR brings the approach we use in `chttpd`
into `couch_httpd`.
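A sketch of the guard shared by both modules (gzip handling elided; clause
bodies abbreviated):

    json_body(#httpd{req_body = undefined, mochi_req = MochiReq}) ->
        % No cached body: read the raw binary and decode it as before.
        couch_util:json_decode(MochiReq:recv_body());
    json_body(#httpd{req_body = ReqBody}) ->
        % req_body was already set, possibly to an already-parsed JSON
        % term; return it without decompressing or decoding.
        ReqBody.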
Couch att erroneous md5 mismatch
If an attachment was stored uncompressed but is later replicated internally
to a node that wants to compress it (based on content-type), CouchDB
compares the uncompressed MD5 with the compressed MD5 and fails. This
breaks eventual consistency between replicas.
This PR removes the unnecessary MD5 check that is, in these specific
circumstances, always called with mismatched arguments.
added $keyMapMatch Mango operator
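The operator applies its condition to the keys of a map-valued field. A
hypothetical selector matching documents where at least one key of a
`cameras` map equals "secondary":

    {
        "selector": {
            "cameras": {
                "$keyMapMatch": {"$eq": "secondary"}
            }
        }
    }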
Reset if we don't get a view header
I found a .view file with a db_header in production (cause unknown but
I'm hoping it's manual intervention).
This patch means we'll reset the index if we find something other than
a view header when looking for one.
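The shape of the fix, roughly (function names are illustrative rather than
the exact couch_mrview code):

    init_state(Db, Fd, State) ->
        case couch_file:read_header(Fd) of
            {ok, #mrheader{} = Header} ->
                % Found a view header: open the index as usual.
                open_index(Db, Fd, Header, State);
            _Else ->
                % A db_header, no_valid_header, or anything else:
                % reset the index instead of crashing.
                reset_index(Db, Fd, State)
        end.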
fix check_local_dbs test
Expose `couch_util:json_decode/2` to support jiffy options
In some cases it is desirable for decoded JSON to be returned as, e.g.,
maps instead of the default data structure, which is not currently
possible.
This exposes a new function, `couch_util:json_decode/2`, whose second
parameter is a list of options passed through to `jiffy:decode/2`.
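For example, assuming jiffy's `return_maps` option:

    %% Default EJSON representation:
    {[{<<"a">>, 1}]} = couch_util:json_decode(<<"{\"a\": 1}">>),
    %% Options are passed straight through to jiffy:decode/2:
    #{<<"a">> := 1} = couch_util:json_decode(<<"{\"a\": 1}">>, [return_maps]).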
3.x porting - add remonitor code to DOWN message (#3144)
This fixes a94e693f32672e4613bce0d80d0b9660f85275ea, because a race
condition existed where the 'DOWN' message could be received before the
compactor pid is spawned. Adding a synchronous call to get the compactor
pid guarantees that the couch_db_updater process has handled
finish_compaction first.
Smoosh monitors the compactor pid to determine when the compaction job
finishes, and uses this for its idea of concurrency. However, this isn't
accurate when a compaction job has to re-spawn to catch up on intervening
changes: the same logical compaction job continues with another pid, and
smoosh is not aware. In such cases, a smoosh channel with concurrency one
can start arbitrarily many additional database compaction jobs.

To solve this problem, we added a check in `start_compact` to see if a
compaction PID already exists for the db. But we need another check,
because that one only covers the shard that comes off the queue. So the
following can still occur:

1. Enqueue a bunch of stuff into a channel with concurrency 1
2. Begin the highest priority job, Shard1, in the channel
3. Compaction finishes, discovers the compaction file is behind the main file
4. The smoosh-monitored PID for Shard1 exits, and a new one starts to finish the job
5. Smoosh receives the 'DOWN' message and begins the next highest priority job, Shard2
6. Channel concurrency is now 2, not 1

This change adds another check into the 'DOWN' handler, so that it checks
for that specific shard. If a compaction PID exists, it means a new process
was spawned, and we just monitor that one and add it back to the queue. The
length of the queue does not change, and therefore we won't spawn new
compaction jobs.
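A sketch of the reworked 'DOWN' clause (record fields and helper names are
illustrative, not the exact smoosh_channel code):

    handle_info({'DOWN', Ref, process, _Pid, normal}, State0) ->
        {Shard, State} = remove_active(Ref, State0),
        % Did the db spawn a fresh compactor to catch up on changes
        % that landed while the first pass ran?
        case get_compactor_pid(Shard) of
            CPid when is_pid(CPid) ->
                % Same logical job, new pid: adopt it, so concurrency
                % and queue length stay unchanged.
                Ref2 = erlang:monitor(process, CPid),
                {noreply, add_active(Shard, Ref2, State)};
            nil ->
                % The job really finished: start the next queued one.
                {noreply, maybe_start_next(State)}
        end.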
Retry filter_docs sequentially if the batch exceeds couchjs stack
A document with lots of conflicts can blow up couchjs if the user calls
_changes with a JavaScript filter and with `style=all_docs`, as this option
causes us to fetch all the conflicts.
All leaf revisions of the document are then passed in a single call to
ddoc_prompt, which can fail if there are a lot of them.
In that event, we simply try them sequentially and assemble the response
from each call.

Should be backported to 3.x
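Roughly (helper names are illustrative; the real code lives in
couch_query_servers):

    filter_docs_int(DDoc, FName, JsonReq, JsonDocs) ->
        try
            % Fast path: prompt couchjs once with the whole batch of
            % leaf revisions.
            prompt(DDoc, FName, JsonReq, JsonDocs)
        catch
            throw:{os_process_error, _} ->
                % The batch blew the couchjs stack: retry one doc at a
                % time and assemble the per-doc answers.
                lists:append(
                    [prompt(DDoc, FName, JsonReq, [D]) || D <- JsonDocs])
        end.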
Previously we subtly relied on one set of headers being sorted, then sorted
the other set of headers, and ran `lists:ukeymerge/3`. That function,
however, needs both arguments to be sorted for it to work as expected. If
one argument wasn't sorted we could easily get duplicate headers, which is
what was observed in testing.
A better fix than just sorting both sets of keys is to use an actual
header-processing library to combine them, so we can account for case
insensitivity as well.
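The failure mode is easy to reproduce in a shell; `lists:ukeymerge/3` only
de-duplicates when both lists are sorted on the key:

    %% Both lists sorted on the key: the duplicate "a" collapses.
    1> lists:ukeymerge(1, [{"a",1},{"b",2}], [{"a",9},{"c",3}]).
    [{"a",1},{"b",2},{"c",3}]
    %% Unsorted first argument: the merge never lines the "a"s up,
    %% so both tuples survive, i.e. duplicate headers.
    2> lists:ukeymerge(1, [{"b",2},{"a",1}], [{"a",9},{"c",3}]).
    [{"a",9},{"b",2},{"a",1},{"c",3}]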
Co-authored-by: mauroporras <mauroporrasc@gmail.com>
e.g.:

    [jwt]
    required_claims = {iss, "https://example.com/issuer"}
We need to call StartFun as it might add headers, etc.
Previously, an error was thrown, which prevented emitting _scheduler/docs
responses. Instead of throwing, return `null` if the URL cannot be parsed.
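A sketch of the change, assuming the helper is the replicator's
URL-stripping function (the exact parse call may differ):

    strip_url_creds(Url) ->
        try
            iolist_to_binary(couch_util:url_strip_password(Url))
        catch
            error:_ ->
                % The URL failed to parse; report null for this doc
                % rather than aborting the whole _scheduler/docs body.
                null
        end.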
Add option to delay responses until the end
When set, every response is sent once fully generated on the server
side. This increases memory usage on the nodes but simplifies error
handling for the client as it eliminates the possibility that the
response will be deliberately terminated midway through due to a
timeout.
The config value can be changed at runtime without impacting any
in-flight responses.
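Assuming the option is the `buffer_response` setting under `[chttpd]` (its
name in shipped releases), enabling it globally looks like:

    [chttpd]
    ; send each response only once it has been fully generated
    buffer_response = true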