| Commit message | Author | Age | Files | Lines |
|
Enable running all chttpd tests. Some of the fixes needed for this to happen:
* Some tests were not valid (checking shard maps, etc.) and were deleted
* Some tests were disabled, either because the functionality is not implemented
  yet or simply to minimize the diff between 3.x and this branch for when we
  have to rebase
* Some applications used for index querying had to be started explicitly
* Mocks were updated to use the new versions of modules instead of the old ones
|
It should only be allowed if explicitly configured. Previously we did not
properly match on the database name and effectively always allowed it.
|
Call the couch_views module instead of the old fabric:query_view. We also
needed to call `view_cb(complete, ...)` when using keys, similar to how
`all_docs_view/4` does it.
|
Endpoints which are removed now return a 410 response:
- _show
- _list
- _rewrite
Endpoints which will eventually be implemented in CouchDB 4.x now return a 510
response:
- _purge
- _purge_infos_limit
Endpoints which return a 2xx but are effectively a no-op:
- _compact
- _view_cleanup
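The mapping above can be sketched as a simple dispatch table. This is an illustrative Python sketch, not chttpd's actual Erlang handlers; the 202 for the no-op endpoints is an assumption about which 2xx code is used.

```python
# Hypothetical dispatch table for the per-db endpoints described above.
REMOVED = {"_show", "_list", "_rewrite"}                 # gone for good -> 410
NOT_YET_IMPLEMENTED = {"_purge", "_purge_infos_limit"}   # planned for 4.x -> 510
NO_OP = {"_compact", "_view_cleanup"}                    # accepted, does nothing

def status_for(endpoint: str) -> int:
    """Return the HTTP status a request to `endpoint` would get."""
    if endpoint in REMOVED:
        return 410  # Gone
    if endpoint in NOT_YET_IMPLEMENTED:
        return 510  # Not (yet) implemented
    if endpoint in NO_OP:
        return 202  # assumption: a 2xx acknowledging the no-op
    return 200
```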
|
Clean up unused mango_utils functions.
|
This uses couch_views_updater to create mango indexes in the doc update
transaction, along with couch_views_indexer to update the indexes in the
background up to the creation versionstamp.
|
Removing quorum stats since they are not relevant with FDB.
|
Removes the view callback that was performed on the nodes before sending the
results back to the coordinator.
|
Adds a max value to use for encoding. This is useful for getting the max
range when encoding startkeys/endkeys.
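To illustrate why a maximum sentinel helps, here is a Python sketch over raw byte keys; `end_key_inclusive` is a hypothetical helper, not couch_views' actual encoder.

```python
# Sketch: turn an inclusive end bound into an exclusive one by appending a
# byte (0xFF) that sorts after any continuation the key encoder can produce.
def end_key_inclusive(prefix: bytes) -> bytes:
    """Exclusive upper bound that still covers keys starting with `prefix`."""
    return prefix + b"\xff"

# All keys beginning with b"a" fall inside [b"a", b"a\xff").
keys = [b"a", b"a\x01", b"a\xfe", b"b"]
end = end_key_inclusive(b"a")
in_range = [k for k in keys if b"a" <= k < end]
```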
|
This adds the ability for couch_views to build an index inside the doc
update transaction. This only happens if the design doc has the field
<<"interactive">> = true.
|
This creates a versionstamp for when an index was created, along with a build
status for indexes. If an index has a creation_vs, then couch_views_indexer
will build the index up to this creation versionstamp.
|
Use `couch_rate` application for `couch_view`
|
Switch erlfdb to the couchdb repo at tag v1.0.0
|
Removes the following features from the welcome message:
- reshard
- partitioned
- pluggable-storage-engines
- scheduler
Although `scheduler`, at least, will presumably be restored once that feature
is complete.
|
Currently we return a 500, but a 400 return code makes more sense:
```
$ http $DB1/db1/_changes?since=0-1345
HTTP/1.1 400 Bad Request
{
"error": "invalid_since_seq",
"reason": "0-1345",
"ref": 442671026
}
```
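The shape of the check could look like the following Python sketch. It is purely illustrative: the real validation lives in CouchDB's Erlang changes handler, and the acceptance rule here (rejecting old-style `N-...` sharded sequences) is an assumption.

```python
class BadRequest(Exception):
    """Maps to an HTTP 400 {"error": "invalid_since_seq", ...} response."""

def validate_since_seq(seq: str) -> str:
    # "0" and "now" are always acceptable; an old-style sharded sequence
    # such as "0-1345" is not valid against the FDB-backed changes feed,
    # so reject it with a 400 instead of crashing with a 500.
    if seq in ("0", "now"):
        return seq
    if "-" in seq:
        raise BadRequest({"error": "invalid_since_seq", "reason": seq})
    return seq
```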
|
Implement AES KW algorithm
|
For use by the native CouchDB at-rest encryption feature. The AES Key Wrap
algorithm is specified in NIST Special Publication 800-38F.
|
Previously we didn't reset the metadata flag in case of a transaction retry, so
we could have used a stale `?PDICT_CHECKED_MD_IS_CURRENT = true` value.
|
After the recent upgrade to using HCA, we forgot to check all the places where
the db prefix was constructed, so a few places still used the old pattern of
`{?DBS, DbName}`.
In the case of `check_metadata_version` we also have to account for the fact
that during db creation there might not be a db_prefix in the `Db` handle yet.
|
Add the db instance id to the indexing job data. During indexing, ensure the
database is opened with the `{uuid, DbUUID}` option. After that, any stale db
reads in `update/3` will throw a `database_does_not_exist` error.
In addition, when the indexing job is re-submitted in `build_view_async/2`,
check if it contains a reference to an old db instance id and, if so, replace
the job. That has to happen since couch_jobs doesn't overwrite job data for
running jobs.
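The re-submission check can be sketched as follows. This is a hedged Python stand-in with hypothetical names (`build_view_async`, a plain dict for the job table); couch_jobs itself will not overwrite data for a running job, which is why a stale job must be removed and re-added.

```python
def build_view_async(jobs: dict, job_id: str, db_uuid: str) -> dict:
    """Submit (or re-submit) an indexing job keyed by `job_id`.

    If the stored job references an old db instance id, drop it and add it
    back with the current one, since running jobs' data cannot be updated
    in place.
    """
    data = jobs.get(job_id)
    if data is not None and data["db_uuid"] != db_uuid:
        jobs.pop(job_id)  # stale instance id: replace the job entirely
        data = None
    if data is None:
        jobs[job_id] = {"db_uuid": db_uuid}
    return jobs[job_id]
```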
|
* Avoid a case clause error in `after 0` when the database is deleted
* Handle db re-creation by checking the instance UUID during `fabric2_db:open/2`
* Since we added a few extra arguments, switch to using a map as the State
|
This is a more efficient method to get all of the design documents than
relying on `fabric2_db:fold_docs`, which doesn't load doc bodies in
parallel.
|
Previously we used the DbName as the DbPrefix for clarity. In order to
support soft-deletion while still providing an efficient DbPrefix, we now
use a value allocated with erlfdb_hca for the DbPrefix.
|
Add the info endpoint for design docs stored in FDB.
|
The e520294c7ee3f55c3e8cc7d528ff37a5a93c800f commit inadvertently
changed the `fabric2_fdb:refresh/1` head matching to accept db
instances with no names. `couch_jobs` uses those but they are not
cached in fabric2_server. So here we return to the previous matching
rule where contexts without names don't get refreshed from the cache.
|
Check that on a transaction restart the `database_does_not_exist` error is
thrown properly if the database was re-created.
Also, we forgot to properly unload the mocked erlfdb module in
`tx_too_old_mock_erlfdb/0`, so we make sure to do that; otherwise it has a
chance of messing up subsequent tests.
|
Previously it was possible for a database to be re-created while a `Db` handle
was open, and the `Db` handle would continue operating on the new db without
any error.
To avoid that situation, ensure the instance UUID is explicitly checked during
open and reopen calls. This includes checking it after the metadata is loaded
in `fabric2_fdb:open/2` and when fetching the handle from the cache.
Also, create a `{uuid, UUID}` option to allow specifying a particular instance
UUID when opening a database. If that instance doesn't exist, raise a
`database_does_not_exist` error.
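The UUID check can be sketched like this. It is a Python stand-in, not fabric2's Erlang: `DATABASES`, `create_db`, and `open_db` are hypothetical names, with a dict standing in for the metadata stored in FDB.

```python
import uuid

class DatabaseDoesNotExist(Exception):
    pass

DATABASES = {}  # name -> instance uuid; stands in for the metadata in FDB

def create_db(name):
    DATABASES[name] = uuid.uuid4().hex

def delete_db(name):
    DATABASES.pop(name, None)

def open_db(name, expected_uuid=None):
    """Open `name`; with {uuid, UUID} semantics, fail if the instance changed."""
    if name not in DATABASES:
        raise DatabaseDoesNotExist(name)
    current = DATABASES[name]
    if expected_uuid is not None and expected_uuid != current:
        # The db was deleted and re-created: an old handle must not
        # silently start operating on the new instance.
        raise DatabaseDoesNotExist(name)
    return current
```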
|
Add additional get_doc spans
|
* Use the `row/3` helper function in a few more places
* Make a `run_query/3` function to shorten all the query calls
* A few minor emilio suggestions (whitespace, comma issues, ...)
|
Previously, transactions could time out, retry, and re-emit the same data. Use
the same mechanism as the _list_dbs and _changes feeds to fix it. Additional
detail is in the mailing list discussion:
https://lists.apache.org/thread.html/r02cee7045cac4722e1682bb69ba0ec791f5cce025597d0099fb34033%40%3Cdev.couchdb.apache.org%3E
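The resume-after-restart idea can be sketched as follows, mirroring the _changes approach in Python rather than Erlang (names and the simulated timeout are illustrative): remember the last emitted key, and on a transaction timeout restart the range read just after it, so no row is emitted twice.

```python
def fold_rows(rows, fail_once_at=None):
    """Fold over sorted (key, value) rows, resuming after simulated timeouts."""
    emitted, last_key = [], None
    failed = False
    while True:
        try:
            for key, value in rows:
                if last_key is not None and key <= last_key:
                    continue  # resume point: skip rows already emitted
                if fail_once_at == key and not failed:
                    failed = True
                    raise TimeoutError  # simulated transaction_too_old
                emitted.append((key, value))
                last_key = key
            return emitted
        except TimeoutError:
            continue  # "retry the transaction", resuming from last_key
```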
|
`list_dbs_info/3` maintains a queue of up to 100 futures which are used to
concurrently fetch data. Previously, if the transaction was reset, the
accumulator inside the fold may have had futures from a previous transaction
which had not gotten their results yet; waiting on those threw a
transaction_canceled (1025) error.
To fix this, if we're in a read-only transaction, we return the tx object in
the opaque db info record. Then, if `erlfdb:wait/1` throws a transaction
canceled error, we re-fetch the future from the now restarted transaction.
The transaction may also time out while the futures queue is drained after the
main range fold has already finished. Handle that case by resetting the
transaction and then re-fetching the futures. To avoid an infinite loop we
allow up to 2 retries only.
This approach is not the most optimal, but it is simpler, as it hides the
complexity inside the fabric2_fdb module where we already handle these
conditions. It means that every 5 or so seconds we might have to re-fetch
fewer than 100 extra futures from the queue (as some or all may have gotten
their results back already).
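The bounded retry can be sketched like this, with Python stand-ins rather than erlfdb (the `Tx` class and function names are hypothetical): when waiting on a future raises the cancellation error, re-issue the read against the restarted transaction, giving up after 2 retries.

```python
class TransactionCancelled(Exception):
    """Stands in for FDB's transaction_canceled (1025) error."""

class Tx:
    """Toy transaction: get() returns a future (a zero-arg callable)."""
    def __init__(self, data):
        self.data = data
    def get(self, key):
        return lambda: self.data[key]

def stale_future():
    # A future left over from a previous, now-cancelled transaction.
    raise TransactionCancelled()

def wait_with_refetch(tx, key, future, max_retries=2):
    """Like erlfdb:wait/1, but re-fetch from the restarted tx on cancellation."""
    for attempt in range(max_retries + 1):
        try:
            return future()
        except TransactionCancelled:
            if attempt == max_retries:
                raise
            future = tx.get(key)  # future from the now-restarted transaction
```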
|
* Fix a silly error building the accumulator list:
  `[Row, Acc]` -> `[Row | Acc]`
* Remove debugging code left in the `list_dbs_tx_too_old` test; the test was
  supposed to set up only 1 failure in the user callback
|
* There was a good amount of duplication between `_db_crud_tests` and
  `_changes_fold_tests`, so make a common test utility module that both suites
  can use.
* Clean up test names. Previously some were named `tx_too_long`, but since the
  official FDB error is `transaction_too_old`, rename them to match a bit
  better.
* The `list_dbs_info` implementation uses a queue of 100 futures to parallelize
  fetching, so its test was updated to create more than 100 dbs. Creating 100
  dbs took about 3 seconds, so add a small parallel map (pmap) utility function
  to help with that.
|
Previously those endpoints would break when transactions time out and are
retried. To fix it, we re-use the mechanism from the changes feeds.
There is a longer discussion about this on the mailing list:
https://lists.apache.org/thread.html/r02cee7045cac4722e1682bb69ba0ec791f5cce025597d0099fb34033%40%3Cdev.couchdb.apache.org%3E
|