Commit log
* Add a development container config for VS Code
This creates a development environment with a FoundationDB server
and a CouchDB layer in two containers, sharing a network through
Docker Compose.
It uses the FDB image published to Docker Hub for the FDB container,
and downloads the FDB client packages from foundationdb.org to provide
the development headers and libraries. The certificate for
www.foundationdb.org is not trusted in Debian Buster by default, so we
have to download GeoTrust_Global_CA.pem ourselves. The following link
has more details:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=962596
Once the Docker Compose setup is running, VS Code executes the
create_cluster_file.bash script to write a cluster file containing
the IP address in the compose network where the FDB service can be
found. This cluster file is used both for a user-driven invocation of
`./dev/run`, as well as for unit tests that require a running CouchDB.
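For reference, an FDB cluster file is a single line in `description:id@address:port` form. A minimal sketch of what a script like create_cluster_file.bash produces, with an assumed description/id pair and an assumed compose-network address (the real script discovers the FDB service's IP at startup):

```shell
# Sketch only: "docker:docker" and the address below are assumed
# example values, not taken from the actual script.
FDB_HOST=172.18.0.2
FDB_PORT=4500
echo "docker:docker@${FDB_HOST}:${FDB_PORT}" > /tmp/fdb.cluster
cat /tmp/fdb.cluster
```

Pointing `FDB_CLUSTER_FILE` (or the default cluster-file path) at this file is what lets both `./dev/run` and the unit tests find the server.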
Additionally, I've got a small fix to the way we run explicitly specified
eunit tests:
* Run eunit tests for each app separately
The `eunit` target executes a for loop that appears intended to use a
separate invocation of rebar for each Erlang application's unit tests.
When running `make eunit` without any arguments this works correctly,
as the for loop processes the output of `ls src`. But if you specify a
comma-delimited list of applications the for loop will treat that as a
single argument and pass it down to rebar. This asymmetry is
surprising, but also seems to cause some issues with environment
variables not being inherited by the environment used to execute the
tests for the 2..N applications in the list. I didn't bother digging
into the rebar source code to figure out what was happening there.
This patch just parses the incoming comma-delimited list with `sed` to
create a whitespace-delimited list for the loop, so we get the same
behavior regardless of whether we are specifying applications
explicitly or not.
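The fix can be illustrated outside the Makefile. A minimal sketch, with a hypothetical `apps` variable standing in for the Makefile argument (the real loop invokes rebar once per application):

```shell
# Illustrative only: convert the comma-delimited list into a
# whitespace-delimited one so the for loop sees one application
# per iteration instead of a single comma-joined argument.
apps="chttpd,couch_epi,fabric"
for app in $(echo "$apps" | sed -e 's/,/ /g'); do
    echo "would run eunit for: $app"   # one line per application
done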
* Simplify and speed up dev node startup
This patch introduces an escript that generates an Erlang .boot script
to start CouchDB using the in-place .beam files produced by the compile
phase of the build. This allows us to radically simplify the boot
process as Erlang computes the optimal order for loading the necessary
modules.
In addition to the simplification this approach offers a significant
speedup when working inside a container environment. In my test with
the stock .devcontainer it reduces startup time from about 75 seconds
down to under 5 seconds.
* Rename boot_node to monitor_parent
* Add formatting suggestions from python-black
Co-authored-by: Paul J. Davis <paul.joseph.davis@gmail.com>
1) The caching effort was a bust and has been removed. 2) chunkify can be done externally with a custom persist_fun.
All endpoints except _session support gzip-encoded requests, and there's no practical reason for that exception.
This commit enables gzip decoding on compressed requests to _session.
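As a usage sketch (the host, port, and credentials are illustrative placeholders, not taken from the commit), a client can now compress the form body before posting it to _session:

```shell
# Hypothetical example: gzip the form body and POST it to _session
# with a Content-Encoding: gzip header. Placeholder credentials.
body='name=admin&password=secret'
printf '%s' "$body" | gzip -c > /tmp/session_body.gz
# curl -sX POST http://127.0.0.1:5984/_session \
#     -H 'Content-Type: application/x-www-form-urlencoded' \
#     -H 'Content-Encoding: gzip' \
#     --data-binary @/tmp/session_body.gz
# Sanity check that the compressed body round-trips:
gunzip -c /tmp/session_body.gz
```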
* Add ability to control which Elixir integration tests to run
A new `elixir-suite` Makefile target is added. It runs a predefined set of elixir
integration tests.
The feature is controlled by two files:
- test/elixir/test/config/suite.elixir - contains the list of all available tests
- test/elixir/test/config/skip.elixir - contains the list of tests to skip
To update `test/elixir/test/config/suite.elixir` when new tests are added, run
the following command:
```
MIX_ENV=integration mix suite > test/elixir/test/config/suite.elixir
```
* Add missing default headers to responses
This flips the view indexer to grab the database update_seq outside of
the update transaction. Previously we would constantly refresh the
db_seq value on every retry of the transactional loop.
We use a snapshot to get the update_seq so that we don't trigger
spurious read conflicts with any clients that might be updating the
database.
This is useful so that read conflicts on the changes feed will
eventually be resolved. Without an end key specified, a reader could end
up in an infinite conflict retry loop if there are clients updating
documents in the database.
* Fix mango tests
https://github.com/apache/couchdb-erlfdb/releases/tag/v1.2.3
* Use `req_body` field if present
When we call `couch_httpd:json_body/1` we can have `req_body` already set.
In this case we should return the field as is without any attempt to
decompress or decode it. This PR brings the approach we use in `chttpd`
into `couch_httpd`.
* Avoid deleting UUID keys that start with zeros
Any ebtree that uses chunked key encoding will accidentally wipe out any
nodes that have a UUID with more than one leading zero byte.
Waiting for the timeout option to be set means we could still sneak in
and grab the old FDB database handle before fabric2_server updated it in
the application environment.
This new approach just waits until the handle has been updated by
watching the value in the application environment directly.
Turns out that ebtree caching wasn't quite correct, so we're removing it
for now.
The ebtree caching layer does not work correctly in conjunction with
FoundationDB transaction retry semantics. If we incorrectly cache nodes
that are not actually read from FoundationDB, a retried transaction will
rely on incorrectly cached state and corrupt the ebtree persisted in
FoundationDB.
* Add an "encryption" object to db info
The encryption object contains a boolean "enabled"
property. Additional properties might be added by the key manager;
these will appear in the "key_manager" sub-object.
* Retry filter_docs sequentially if the batch exceeds the couchjs stack
A document with lots of conflicts can blow up couchjs if the user
calls _changes with a JavaScript filter and with `style=all_docs`, as
this option causes us to fetch all the conflicts.
All leaf revisions of the document are then passed in a single call to
ddoc_prompt, which can fail if there are a lot of them.
In that event, we simply try them sequentially and assemble the
response from each call.
Should be backported to 3.x.
* Remove '--production' flag when building Fauxton
Since https://github.com/apache/couchdb-fauxton/pull/1299, only runtime
dependencies are installed when using 'npm install --production'.
To correctly build the Fauxton release, one must install all dependencies
with 'npm install'.
Too many parallel attempts to insert the same keys can result in
`{erlfdb_error, 1020}`, which translates to:
"Transaction not committed due to conflict with another transaction"
This attempts to mitigate the problem by using a snapshot to read the
primary key during insertion.
Fix specs to eliminate dialyzer warnings.
* Fix semantics of total_rows
The `total_rows` field is supposed to be the number of documents in the database/view.
See https://docs.couchdb.org/en/stable/api/ddoc/views.html.
When the new pagination API was introduced, the meaning of the `total_rows` field
was changed to the number of rows in the query results. This PR reverts that
accidental change.
Before this change a reduce call that contained no rows would end up
returning the "default" value of the given reduce function, which is
whatever it would return when given an empty array as input. This
changes the behavior to return `{"rows": []}` when there are no
rows in the requested range.
Co-authored-by: Joan Touzet <wohali@users.noreply.github.com>