Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
Using the internal hash values for indexes was a brittle approach to
ensuring that a specific index was or was not picked. By naming the
index and design docs we can more concretely ensure that the chosen
indexes match the intent of the test while also not breaking each time
mango internals change.
Now that a single shard handles the entire response, we can optimize work
normally done in the coordinator by moving it to the RPC worker, which
removes the need to send an extra `skip` number of rows to the
coordinator.
Co-authored-by: Robert Newson <rnewson@apache.org>
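
The optimization can be sketched in Python for illustration; `worker_rows` is a hypothetical helper, not the actual fabric_rpc code. With a single shard range covering the response, the worker applies `skip` and `limit` itself, so the coordinator never receives rows it would only discard.

```python
def worker_rows(rows, skip, limit):
    # Single-shard case: apply skip/limit at the RPC worker so only the
    # rows the client will actually see are sent to the coordinator.
    # (Illustrative sketch; not CouchDB's actual implementation.)
    return rows[skip:skip + limit]

# Previously the worker would stream skip + limit rows and leave the
# coordinator to drop the first `skip` of them.
print(worker_rows(list(range(100)), 10, 5))  # → [10, 11, 12, 13, 14]
```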
If a user specifies document ids that scope the query to a single
partition key, we can automatically determine that we only need to
consult a single shard range.
Co-authored-by: Robert Newson <rnewson@apache.org>
The benefit of using partitioned databases is that views can then be
scoped to a single shard range. This allows for views to scale nearly as
linearly as document lookups.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
This feature allows us to fetch statistics for a given partition key,
which will allow users to find bloated partitions and so forth.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
This change introduces the ability for users to place a group of
documents in a single shard range by specifying a "partition key" in the
document id. A partition key is denoted by everything preceding a colon
':' in the document id.
Every document id (except for design documents) in a partitioned
database is required to have a partition key.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
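
The partition key rule can be sketched as follows (a hypothetical Python helper for illustration; the actual validation lives in CouchDB's Erlang code, and the error handling here is an assumption):

```python
def partition_key(doc_id):
    # Everything preceding the first ':' in the document id is the
    # partition key. Design documents are exempt from the requirement.
    if doc_id.startswith("_design/"):
        return None
    if ":" not in doc_id:
        raise ValueError("doc ids in a partitioned database require a partition key")
    return doc_id.split(":", 1)[0]

print(partition_key("sensor-1:reading-0042"))  # → sensor-1
```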
This provides the capability for features to specify alternative hash
functions for placing documents in a given shard range. While the
functionality exists with this implementation, it is not yet actually
used.
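
To illustrate the idea, here is a hedged Python sketch of pluggable placement hashing; the names and the md5-based default hash are assumptions for the sketch, not the actual implementation:

```python
import hashlib

def default_hash(key):
    # Stand-in for the internal hash; any stable hash works for this sketch.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:4], "big")

def partition_hash(doc_id):
    # An alternative hash that only considers the partition key (the part
    # before ':'), so every doc in a partition maps to the same range.
    return default_hash(doc_id.split(":", 1)[0])

def shard_range(doc_id, num_ranges, hash_fun=default_hash):
    # Features can swap in `hash_fun` to control document placement.
    return hash_fun(doc_id) % num_ranges
```

With `partition_hash`, documents "sensor-1:a" and "sensor-1:b" land in the same shard range, which is exactly the property partitioned databases need.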
This adds specific datatype requirements to the list of allowable design
document options.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
Allow index validation to be parameterized by the database without
having to reopen its own copy.
This allows for more fine-grained use of couch_db:clustered_db as well
as changes the name to something more appropriate than `fake_db`.
This allows for setting any combination of supported settings using a
proplist approach.
This allows us to implement features outside of the PSE API without
requiring changes to the API for each bit of data we may want to end up
storing. This opaque object should only be used for features
that don't require a behavior change from the storage engine API.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
Support one purge request with more than 100 docids
COUCHDB-3226
There was a subtle bug when opening specific revisions in
fabric_doc_open_revs due to a race condition between updates being
applied across a cluster.
The underlying cause was revision stemming after a document had been
updated more than revs_limit times, combined with concurrent reads on a
node that had not yet applied the update. To illustrate, consider a
document A with a revision history from `{N, RevN}` to
`{N+1000, RevN+1000}` (assuming revs_limit is the default 1000). From a
single node's perspective, when an update comes in we add the new
revision and stem the oldest one, so the revisions on that node become
`{N+1, RevN+1}` to `{N+1001, RevN+1001}`.
The bug appears when we attempt to open revisions on a different node
that has yet to apply the new update. In this case
fabric_doc_open_revs could be called with `{N+1000, RevN+1000}`. This
results in a response from fabric_doc_open_revs that includes two
different `{ok, Doc}` results instead of the expected single instance.
The reason is that one copy of the document has revisions `{N+1, RevN+1}` to
`{N+1000, RevN+1000}` from the node that has applied the update, while
the node without the update responds with revisions `{N, RevN}` to
`{N+1000, RevN+1000}`.
To rephrase: a node that has applied an update can end up returning
a revision path that contains `revs_limit - 1` revisions, while a node
without the update returns all `revs_limit` revisions. This slight
difference in the paths prevented the responses from being properly combined
into a single response.
This bug has existed for many years. However, read repair effectively
prevents it from being a significant issue by immediately fixing the
revision history discrepancy. It was discovered due to the recent bug
in read repair during a mixed-cluster upgrade to a release including
clustered purge. In that situation we end up crashing the design
document cache, which then leads to all of the design document requests
being direct reads, which can end up causing cluster nodes to OOM and
die. The conditions require a significant number of design document
edits coupled with already significant load on those modified design
documents. The most direct example observed was a cluster that had a
significant number of filtered replications in and out of the cluster.
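
The path-length mismatch can be modeled with a short Python sketch (an illustrative model of stemming, not the actual couch_key_tree code; revision numbers stand in for full `{Pos, Rev}` pairs):

```python
REVS_LIMIT = 1000

def stored_path(first, last, limit=REVS_LIMIT):
    # Revision numbers a node keeps for a doc whose latest rev is `last`:
    # at most `limit` entries, with the oldest ones stemmed away.
    return list(range(max(first, last - limit + 1), last + 1))

def path_for_rev(node_path, rev):
    # Ancestry reported when opening `rev`: the stored path truncated there.
    return [r for r in node_path if r <= rev]

updated_node = stored_path(1, 1001)  # applied the update: holds revs 2..1001
stale_node = stored_path(1, 1000)    # not yet updated: holds revs 1..1000

a = path_for_rev(updated_node, 1000)  # 999 revisions: 2..1000
b = path_for_rev(stale_node, 1000)    # 1000 revisions: 1..1000
# a != b, so the two responses fail to merge into a single {ok, Doc}.
```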
This enables backwards compatibility with nodes still running the old
version of fabric_rpc when a cluster is upgraded to master. This has no
effect once all nodes are upgraded to the latest version.
Previously `end_time` was generated by converting `start_time` to universal
time and passing the result to `httpd_util:rfc1123_date/1`. However,
`rfc1123_date/1` itself translates its argument from local to UTC time;
that is, it expects its input to be in local time.
Fixes #1841
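
The same class of bug can be shown with Python's standard library, whose RFC 1123-style formatter also bakes in a timezone expectation (this is an analogy, not CouchDB code):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# format_datetime(dt, usegmt=True) expects `dt` to already be aware UTC,
# much like rfc1123_date/1 expects local time: handing either function a
# value converted one step too far yields a shifted timestamp.
start_time = datetime(2019, 1, 15, 12, 30, 0, tzinfo=timezone.utc)
end_time = format_datetime(start_time, usegmt=True)
print(end_time)  # → Tue, 15 Jan 2019 12:30:00 GMT
```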
Update before_doc_update/2 to before_doc_update/3
- Pass UpdateType to before_doc_update/3
Re-Introduce cpse_test_purge_seqs
- Re-introduce cpse_test_purge_seqs after fixing the undef issue in
  cpse_test_purge_seqs:cpse_increment_purge_seq_on_partial_purge/1
Change minimum supported Erlang version to OTP 19
These files were used when their apps had separate repositories, but are
obsolete in the "mono repo" since their apps are built together using
the top level .travis.yml now.
The modules lists in .app files are automatically generated by rebar
from .app.src files, so these explicit lists are unnecessary and prone
to being out of date.
Suppress compiler warnings
- For export_all warnings: either replace with explicit exports, add
nowarn_export_all compiler directives when appropriate, or, in the case
of couch_epi_sup, move the test to a dedicated test file and export the
function needed for testing.
- For the "function already exported" warning in couch_key_tree_prop_tests,
remove the include_lib attribute for eunit.hrl since it already gets
imported in triq.hrl
Move couch_event and mem3 modules earlier in the list of SubDirs to
suppress behaviour undefined warnings.
This has the side effect of running the tests in the new order, which
induces failures in couch_index tests. Those failures are related to
quorum, and can be traced to mem3 seeds tests leaving a _nodes db
containing several node docs in the tmp/data directory, ultimately
resulting in badmatch errors e.g. when a test expects 'ok' but gets
'accepted' instead.
To prevent test failures, a cleanup function is implemented which
deletes any existing "nodes_db" left after test completion.
- couch_util_tests.erl:90:
Warning: the result of the expression is ignored
- couch_mrview_index_changes_tests.erl:189,196:
Warning: a term is constructed, but never used
- couch_replicator_connection_tests.erl:76:
Warning: this expression will fail with a 'badarith' exception
- Add unused test cases to test fixture
- Eliminate unreferenced code
- Comment out code that is referenced in commented code only
Replace deprecated crypto:rand_uniform/2 and 'random' module functions
with equivalent couch_rand:uniform/1 calls, or eliminate the offending
code entirely if unused.
Note that crypto:rand_uniform/2 takes two parameters whose semantics
differ from those of the single-argument couch_rand:uniform/1.
Tests in mem3 are also provided to validate that the random rotation of
node lists was converted correctly.
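
The conversion between the two APIs can be sketched in Python (illustrative names; couch_rand:uniform/1 follows rand:uniform/1 semantics, returning an integer in 1..N inclusive, while crypto:rand_uniform/2 returns an integer in the half-open range [Lo, Hi)):

```python
import random

def uniform(n):
    # couch_rand:uniform/1 semantics: integer in 1..N, inclusive.
    return random.randint(1, n)

def rand_uniform(lo, hi):
    # crypto:rand_uniform/2 semantics: integer in [lo, hi).
    # Expressed in terms of the single-argument uniform/1.
    return lo - 1 + uniform(hi - lo)
```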
Elixir test improvements
Wrap deleted element assertions in retry_until to prevent timing related
failures like:
AllDocsTest
* test All Docs tests (331.1ms)
1) test All Docs tests (AllDocsTest)
test/all_docs_test.exs:15
Assertion with == failed
code: assert length(deleted) == 1
left: 0
right: 1
stacktrace:
test/all_docs_test.exs:72: (test)
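
A minimal Python analogue of such a `retry_until` helper (illustrative only; the suite's actual helper is in Elixir, and the names and defaults here are assumptions):

```python
import time

def retry_until(predicate, timeout=5.0, interval=0.05):
    # Re-evaluate a timing-sensitive assertion until it holds or the
    # timeout expires, instead of asserting once and failing on a race.
    deadline = time.monotonic() + timeout
    while True:
        if predicate():
            return True
        if time.monotonic() >= deadline:
            raise AssertionError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```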
Prior to this, `make elixir` was failing with these errors:
** (Mix) mix format failed due to --check-formatted.
The following files were not formatted:
* test/security_validation_test.exs
* test/rewrite_test.exs
* test/cluster_with_quorum_test.exs
* test/cluster_without_quorum_test.exs
* test/all_docs_test.exs
Sometimes fabric coordinators end up getting brutally terminated [1], and in
that case they might never process the `after` clause where their remote rexi
workers are killed. Those workers are left lingering, keeping databases
active for up to 5 minutes at a time.
To prevent that from happening, let coordinators which use streams spawn an
auxiliary cleaner process. This process will monitor the main coordinator and
if it dies will ensure remote workers are killed, freeing resources
immediately. In order not to send 2x the number of kill messages during the
normal exit, fabric_util:cleanup() will stop the auxiliary process before
continuing.
[1] One instance is when the ddoc cache is refreshed:
https://github.com/apache/couchdb/blob/master/src/ddoc_cache/src/ddoc_cache_entry.erl#L236
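
The cleaner's lifecycle can be sketched in Python (an illustrative model with hypothetical names; the real implementation uses Erlang processes and monitors):

```python
import threading

class WorkerCleaner:
    # Auxiliary cleaner sketch: kills remote workers only if the
    # coordinator dies before the normal cleanup path stops the cleaner,
    # so kill messages are not sent twice on a normal exit.
    def __init__(self, kill_workers):
        self._kill = kill_workers
        self._stopped = threading.Event()

    def coordinator_died(self):
        # Fired by the monitor when the coordinator exits abnormally.
        if not self._stopped.is_set():
            self._kill()

    def stop(self):
        # Called by the normal cleanup path (cf. fabric_util:cleanup()).
        self._stopped.set()
```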
Streams functionality is fairly isolated from the rest of the utils module,
so move it into its own module. This is mostly in preparation for adding a
streams worker cleaner process.
Support specifying individual Elixir tests to run