| Commit message (Collapse) | Author | Age | Files | Lines |
|
| |
```
make elixir tests=test/elixir/test/partition_view_test.exs:314
* test query with custom reduce works (708.2ms)
1) test query with custom reduce works (ViewPartitionTest)
test/elixir/test/partition_view_test.exs:314
Assertion with == failed
code: assert resp.status_code == 200
left: 500
right: 200
stacktrace:
test/elixir/test/partition_view_test.exs:319: (test)
```
Logs show:
```
[error] 2022-03-11T20:05:58.969707Z node1@127.0.0.1 <0.985.0> 9bc85227db req_err(2412564908) {{invalid_ejson,{p,<<"bar">>,[<<"field">>,<<"one">>]}},
[{jiffy,encode,2,
[{file,"/Users/nvatama/asf-3-main/src/jiffy/src/jiffy.erl"},
{line,99}]},
{couch_os_process,writejson,2,[{file,"src/couch_os_process.erl"},{line,97}]},
{couch_os_process,handle_call,3,
```
|
| |
configurable
The code that forwards attachment data to cluster nodes via fabric has a
hard-coded timeout of five minutes for nodes to request the data. Making
this configurable lets us mitigate the impact of issue #3939 [1], which
causes requests to block if one of the nodes already has the given
attachment and doesn't end up requesting the data for it.
[1]: https://github.com/apache/couchdb/issues/3939
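A hedged sketch of what the new knob might look like in a local.ini override. The section and key names below are assumptions for illustration only; the commit message does not state the actual names introduced:

```ini
; Hypothetical example: override the previously hard-coded five-minute
; timeout fabric uses while waiting for cluster nodes to request
; forwarded attachment data. Value in milliseconds; 300000 would match
; the old hard-coded default.
[fabric]
attachment_request_timeout = 60000
```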
|
| |
`ibrowse` responses may crash the job process with a function clause in
`handle_info/2`. Ignore those, with a note that aliases should hopefully fix
this issue in the future.
|
|\
| |
| | |
Debug for sharded index server
|
| | |
|
|/ |
|
|
|
|
| |
`replicated_edits` should be `replicated_changes`
|
| |
Couldn't reproduce the flakiness locally, but it failed in CI at least twice.
Try extending the wait timeout and polling frequency, and set `commit_freq = 0`
for some tests.
|
| |
Checking a few scenarios:
* _design/*/_info returns collation versions
* A 2.x view can be upgraded to have a view info map
* A 3.2.1 view can be upgraded to have a view info map
* A view with multiple collator versions:
- Can be opened and tagged with the current libicu version
- Can be queried, written to and then queried again
- Will be submitted for compaction when open and on updates
- After compaction the view will have only one collator version
Includes two new fixture views:
* Version 3.2.1 view without the view info map field
* A view with a bogus (old) libicu version "1.1.1.1"
The bogus libicu version view is generated by overriding the collation
version API to return `[1, 1, 1, 1]` [1]. And it can be re-created
with this snippet [2]
[1]
```
update_collator_versions(#{} = ViewInfo) ->
Versions = maps:get(ucol_vs, ViewInfo, []),
- Ver = tuple_to_list(couch_ejson_compare:get_collator_version()),
+ % Ver = tuple_to_list(couch_ejson_compare:get_collator_version()),
+ Ver = [1, 1, 1, 1],
```
[2] `http` is the `httpie` command line HTTP client
```
http delete $DB/colltest1 && http put $DB/colltest1'?q=1' && http put $DB/colltest1/_design/colltest1ddoc views:='{"colltest1view":{"map":"function(doc){emit(doc.id, null);}"}}' && http put $DB/colltest1/d1 a=b && http put $DB/colltest1/d2 c=d
```
|
| |
Previously, libicu collator versions were not tracked, and during major OS
version upgrades, it was possible to experience apparent data loss due to
collation order changes between libicu library versions. The view order
inconsistency would last until the view is compacted.
This commit introduces a view info map in the header which records the list of
libicu collator versions used by that view. The collator versions list is
checked and updated every time a view is opened.
The new view info map re-uses a previously removed view header field from
2.x views. The upgrade logic from 2.x to 3.x ignores that header field, and
this allows for transparent downgrading back to 3.2.1, and then upgrading back
to 3.2.1+ versions, all while keeping the same view signature.
If there is no collator version recorded in the view header, then the first time
the view is opened, the header will be upgraded to record the current libicu
version. It's possible to avoid immediately writing the upgraded header and
instead delay it until the next view data update with this setting:
```
[view_upgrade]
commit_on_header_upgrade = false
```
By default it is `true`, meaning the view header will be written immediately.
The list of collator versions is returned in the _design/*/_info response. This
allows users to easily track the condition when the view is built or opened
with more than one libicu collator version.
Views which have more than one collator version are submitted for
re-compaction to the "upgrade_views" channel. This behavior is triggered both
on update (which is the typical smoosh trigger mechanism), and when opened.
Triggering on open is intended to be used with read-only views, which may not be
updated after libicu upgrades, and so would perpetually emit inconsistent data.
Automatic re-compaction may be disabled with a config setting:
```
[view_upgrade]
compact_on_collator_upgrade = false
```
The default value is `true`.
|
| |
We already return the collation algorithm version and the libicu library version,
but since we're tracking the opaque collator versions in the view, it is
beneficial to show that to the user as well.
Thanks to Will Young for the idea [1]
[1] https://lists.apache.org/thread/rqfwrt4kszz79l3wxxtg6zwygz6my8p2
|
| |
get_collator_version/0 calls the ucol_getVersion() C API [1]. It returns
an opaque sequence of 4 bytes which encodes both the base UCA version
and any tailorings which may affect collation order.
[1] https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/ucol_8h.html#a0f98dd01ba7a64069ade6f0fda13528d
|
|\
| |
| | |
Include index sig in _search_info response
|
|/ |
|
|\
| |
| | |
Add couch_mrview_debug:view_signature/2
|
|/ |
|
| |
In a rare, unsupported scenario, a user can call _bulk_docs with
new_edits:false and a VDU that denies the update of certain documents.
When nodes are out of sync (under load, when mem3_repl can't keep up),
an update may or may not update the revision tree.
When two nodes extend the revision tree and forbid the update,
but one node is behind, an {error, W, [{Doc, FirstReply} | Acc]}
is returned instead of {ok, _} or {accepted, _}. The _bulk_docs
request does not accept {error, _}, which leads to a function_clause.
This fix changes the return value to:
{ok, W, [{Doc, {forbidden, Msg}} | Acc]}
when any of the nodes forbids the update.
Single-document updates are also addressed, so if the same issue occurs
during a single-document update, a 403 is still returned.
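The return-value normalization described above can be sketched roughly as follows. The function and variable names here are hypothetical; they are not the actual code from the commit:

```erlang
%% Hedged sketch: a lagging node previously surfaced {error, W, ...},
%% which the _bulk_docs handler had no clause for. The fix rewraps a
%% forbidden update into an {ok, ...} tuple so the request can complete
%% and report a 403 for the affected document.
maybe_rewrap_forbidden({error, W, [{Doc, {forbidden, Msg}} | Acc]}) ->
    {ok, W, [{Doc, {forbidden, Msg}} | Acc]};
maybe_rewrap_forbidden(Other) ->
    Other.
```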
|
| |
* Remove leftover CentOS 6 RPM bits
* Stop publishing SM 1.8.5 package for Debian 10
We're using the native SM 60 package in the CI system. If you look
closely you'll see that we weren't actually successfully publishing that
package anyway, so removing this line is a no-op.
* Publish Debian 11 packages
* Publish packages on main, not master
* Apply labels directly to build steps
|
|\
| |
| | |
Execute chttpd_dbs_info_tests in clean database_dir
|
|/ |
|
|\
| |
| | |
Execute fabric_rpc_tests in clean database_dir
|
|/
|
| |
The `fabric_rpc_tests` suite pollutes the state of `shards_db`, which causes flakiness
in other tests. This PR fixes the problem by configuring a temporary `database_dir`.
The important implementation detail is that we need to wait for all `couch_server`
processes to restart. Before the introduction of the sharded couch server in
https://github.com/apache/couchdb/pull/3366 this could be done as:
```erlang
test_util:with_process_restart(couch_server, fun() ->
config:set("couchdb", "database_dir", NewDatabaseDir)
end),
```
This method has to be updated to support the sharded couch_server. The following
auxiliary functions were added:
- `couch_server:names/0` - returns the list of registered names of each
  `couch_server` process
- `test_util:with_processes_restart/{2,4}` - waits for all processes to restart and
  returns `{Pids :: #{} | timeout, Res :: term()}`
- `test_util:with_couch_server_restart/1` - waits for all `couch_server` processes
  to finish restarting
The new way of configuring `database_dir` in test suites is:
```erlang
test_util:with_couch_server_restart(fun() ->
config:set("couchdb", "database_dir", NewDatabaseDir)
end),
```
|
| |
|
|
|
|
| |
In order to fix formatting issue outlined in https://github.com/apache/couchdb-documentation/pull/704
|
| |
|
| |
|
|
|
|
| |
Already fixed by @lostnet on main.
|
| |
This is one of those situations where you go in to make a small change,
see an opportunity for some refactoring, and get sucked into a rabbit
hole that leaves you wondering if you have any idea how computers
actually work. My initial goal was simply to update the Erlang version
used in our binary packages to a modern supported release. Along the
way I decided I wanted to figure out how to eliminate all the copypasta
we generate for making any change to this file, and after a few days of
hacking here we are. This rewrite has the following features:
* Updates to use Debian 11 (current stable) as the base image for
building releases and packaging repos.
* Defaults to Erlang 23 as the embedded Erlang version in packages. We
avoid Erlang 24 for now because Clouseau is not currently compatible.
* Dynamically generates the parallel build stages used to test and
package CouchDB on various OSes. This is accomplished through a bit
of scripted pipeline code that relies on two new methods defined at
the beginning of the Jenkinsfile, one for "native" builds on macOS
and FreeBSD and one for container-based builds. See comments in the
Jenkinsfile for additional details.
* Expands commands like `make check` into a series of steps to improve
visibility. The Jenkins UI will now show the time spent in each step
of the build process, and if a step (e.g. `make eunit`) fails it will
only expand the logs for that step by default instead of showing the
logs for the entire build stage. The downside is that if we do make
changes to the series of targets underneath `check` we need to
remember to update the Jenkinsfile as well.
* Starts per-stage timer _after_ agent is acquired. Previously builds could
fail with a 15m timeout when all they did was sit in the build queue.
This is a cherry-pick of 9b6454b with the following modifications:
- Drop the MINIMUM_ERLANG_VERSION to 20
- Drop the packaging ERLANG_VERSION to 23
- Add the weatherreport-test as a build step
- Add ARM and POWER back into the matrix using a new buildx-based
multi-architecture container image.
|
| |
|
| |
The previous response order of
{accepted, [{Doc1, {accepted, Doc2}}, {Doc2, {accepted, Doc1}}]}
looked a bit odd so update the order to look as expected.
|
| |
The code that generates suite.elixir will repeatedly strip the "test "
from the name of the test when writing the file, resulting in a mismatch
between the actual test name and what's in suite.elixir. You can see
this by searching for e.g. COUCHDB-497 in the suite file.
I tried using String.replace_prefix instead of String.replace_leading in
test_name() but that function seems to get called multiple times during
the test grouping. Simpler to just avoid naming the tests that way for
now.
|
|
|
|
| |
Also `export COUCHDB_TEST_ADMIN_PARTY_OVERRIDE=1` so couch will start.
|
|\
| |
| | |
Non-zero instance start time
|
|/
|
| |
Set instance_start_time to the creation time of the database to
restore the ability for the replicator to detect a db recreation event
during a replication. Without this, a replication can fail to notice
the db was deleted and recreated, write the checkpoint document for
the in-progress replication and keep going, whereas what should happen
is the replication should reset from sequence 0.
|
| |
|
| |
* Use Debian Stable, add Erlang 24 to CI
* Use specific images for each Erlang version
Instead of building one image with all supported Erlang versions through
kerl, this configuration looks for a specific container image for each
Erlang version. Decoupling it like this enables us to more easily adopt
newer distros for newer Erlang versions, and to build new images with
patch releases of Erlang without needing a simultaneous PR to the
CouchDB repo to pick them up in CI (although some change to Jenkins
might be needed to avoid images being cached for too long when a stable
tag changes).
* Bump Credo to 1.5.6 for Elixir 1.12 support
|
| |
Currently, `decode/3` performs various checks on a JWT, and then
base64 decodes and finally JSON decodes the token. However, in some
cases, it's desirable to skip the decoding steps, and just return the
token payload in binary form.
This exposes `decode/4` where the 4th argument is a decoder fun that
defaults to `decode_b64url_json/1` for `decode/3` to retain existing
behavior, but also exposes `decode_passthrough/1` in case a client
wants to avoid any decoding steps.
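A hedged sketch of the decoder-fun arrangement described above. This is simplified for illustration; the real `jwtf` code differs, and `payload/1` here is a hypothetical helper:

```erlang
%% decode/3 keeps the existing behavior by delegating to decode/4 with
%% the default decoder; decode/4 lets callers supply their own.
decode(Token, Checks, KS) ->
    decode(Token, Checks, KS, fun decode_b64url_json/1).

decode(Token, Checks, KS, DecoderFun) when is_function(DecoderFun, 1) ->
    %% ... verify signature and claims, then decode the payload:
    DecoderFun(payload(Token)).

%% Pass-through decoder: return the raw binary payload untouched.
decode_passthrough(Bin) when is_binary(Bin) ->
    Bin.
```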
|
| |
|
| |
Previously, when a database shard was moved to a new node and there were no
subsequent updates, the changes feed sequence rewound to the previous epoch. In
the case of a first shard move, it would rewind to 0.
To fix the issue, update `owner_of/2` and `start_seq/3` functions to account
for the case when epoch sequence can exactly match the current db update
sequence.
Fixes #3885
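The boundary condition above can be sketched as follows. This is a heavily simplified illustration under assumed data shapes (a descending `[{Node, EpochSeq}]` list), not the actual `fabric` code:

```erlang
%% Hedged sketch: previously an epoch whose start sequence exactly
%% equaled the current update sequence could fail to match, rewinding
%% the changes feed. Using >= instead of > lets an exact match still
%% resolve to the owning node.
owner_of(_Seq, []) ->
    undefined;
owner_of(Seq, [{Node, EpochSeq} | _Rest]) when Seq >= EpochSeq ->
    Node;
owner_of(Seq, [_ | Rest]) ->
    owner_of(Seq, Rest).
```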
|
| |
|
|\
| |
| | |
Sharded couch index server
|
|/ |
|
| |
One set of files is from a recent PR which we failed to detect because
of the Erlang 20 issue. The others are some test modules which had
been skipped during the initial reformatting.
|
| |
Previously, we ran it in the dist building stage. However, unlike in
main, in 3.x that stage runs with Erlang 20, where the erlfmt check is
skipped. To fix that, we run erlfmt in a separate stage with Erlang 23.
|
|\
| |
| | |
Always send all cookie attributes
|
|/ |
|
| |
* Fix quoting so that it works with all OTP versions 20 through 24 [1].
* Dist API in 20 [2] did not have a `listen/2` [3] callback. Implement
`listen/1` so we're compatible with all the supported OTP versions.
[1] https://github.com/apache/couchdb/issues/3821#issuecomment-985089867
[2] https://github.com/erlang/otp/blob/maint-20/lib/kernel/src/inet_tcp_dist.erl#L71-L72
[3] https://github.com/erlang/otp/blob/master/lib/kernel/src/inet_tcp_dist.erl#L79-L80
|
|\
| |
| | |
improve erlang_ls.config
|
|/ |
|
| |
|