Add an endpoint that returns the partition size and doc count
|
'skip' is implemented efficiently at the worker level but we've
disabled it for clustered views because of the multiple shards (and
not being able to calculate the right skip value to pass to each
worker). With a partitioned query, this problem is gone, as the value
the query specifies will be the right value for all workers (as they
hit the same shard range).
This commit removes the old fix_skip_and_limit function from
fabric_rpc and moves the logic up to the coordinators.
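The division of labour this commit describes can be sketched roughly as follows (function names and row format are illustrative, not the actual fabric_rpc/coordinator code):

```python
import heapq

def worker_args(skip, limit, partitioned):
    # For a partitioned query every worker covers the same shard range,
    # so the user's skip value is correct for each worker as-is.
    if partitioned:
        return {"skip": skip, "limit": limit}
    # For a clustered view the coordinator cannot split skip across
    # shards, so it asks each worker for skip + limit rows and drops
    # the first `skip` rows itself after merging.
    return {"skip": 0, "limit": skip + limit}

def coordinate(rows_from_workers, skip, limit, partitioned):
    # Merge the already-sorted worker responses, then apply whatever
    # part of skip/limit the workers could not handle themselves.
    merged = list(heapq.merge(*rows_from_workers))
    if partitioned:
        return merged[:limit]
    return merged[skip:skip + limit]
```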
|
* Block design documents with partitioned option in non-partitioned db
* Prohibit javascript reduces in partitioned:true ddocs
* Prohibit include_docs=true for _view in partitioned db
|
Adds tests to validate that the all_docs optimisation works for partitions
|
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
|
Co-authored-by: Robert Newson <rnewson@apache.org>
Co-authored-by: Paul J. Davis <paul.joseph.davis@gmail.com>
|
Default to database's partitioned setting if not present in ddoc.
|
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
|
This props list is recorded in each database shard as well as the
shard document in the special _dbs database.
Co-authored-by: Garren Smith <garren.smith@gmail.com>
Co-authored-by: Robert Newson <rnewson@apache.org>
|
Recent Linux distributions are starting to default to Python 3 and require
ambiguous scripts to be more explicit.
For example, building for Fedora 30 (not released yet) fails with:
ERROR: ambiguous python shebang in /opt/couchdb/bin/couchup:
#!/usr/bin/env python. Change it to python3 (or python2) explicitly.
So this commit changes the four Python scripts to use `python2`.
Note: they seem to be Python-3-compatible, but I couldn't be sure. If
you know they are, please tell me and I'll change it to `python3`.
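The fix itself is a one-line shebang change at the top of each script, e.g. in `/opt/couchdb/bin/couchup`:

```python
#!/usr/bin/env python2
# was: #!/usr/bin/env python  (ambiguous once a distribution defaults to Python 3)
```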
|
* Mango: match docs on the coordinating node
This fixes an issue with rolling upgrades of a CouchDB cluster: after
commit a6bc72e was added, nodes that had not yet been upgraded would
send through all the docs in the index, and those would be passed through
to the user because the coordinator assumed they had already been matched
at the node level. This adds a check to see whether a doc has been matched
at the node level, and performs the match on the coordinator if required.
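The idea can be sketched like this (a toy equality-only matcher and row format, not Mango's real selector engine):

```python
def matches(doc, selector):
    # Toy selector: every key in the selector must equal the doc's value.
    return all(doc.get(k) == v for k, v in selector.items())

def coordinator_filter(rows, selector):
    # Newer workers tag each row with matched_at_node=True; rows from
    # not-yet-upgraded workers lack the tag, so the coordinator must
    # run the match itself instead of passing the doc straight through.
    out = []
    for row in rows:
        if row.get("matched_at_node") or matches(row["doc"], selector):
            out.append(row["doc"])
    return out
```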
|
Previously, local node revisions were causing `badmatch` failures in the
read repair filter. Node sequences already filtered out local nodes while
NodeRevs didn't, so during the match `{Node, NodeSeq} = lists:keyfind(Node, 1, NodeSeqs)`
the Node would not be found in the list and the process would crash.
Example of crash:
```
fabric_rpc:update_docs/3 error:{badmatch,false}
[{fabric_rpc,'-read_repair_filter/3-fun-1-',4,[{file,"src/fabric_rpc.erl"},{line,360}]},
```
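The shape of the fix can be sketched in Python (names are illustrative stand-ins for the Erlang code):

```python
def filter_node_revs(node_revs, node_seqs):
    # node_seqs already excludes local nodes; drop any revision whose
    # node has no sequence entry so the later lookup can never fail
    # (the equivalent of lists:keyfind/3 returning `false`).
    seqs = dict(node_seqs)
    return [(node, rev, seqs[node]) for node, rev in node_revs if node in seqs]
```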
|
Implement couch_file:format_status to log filepath
|
Couch server improvements
|
The couchdb.update_lru_on_read setting controls whether couch_server
uses read requests as LRU update triggers. Unfortunately, the update_lru
messages for reads were sent regardless of whether this was enabled
or disabled. While in principle this is harmless, an overloaded
couch_server pid can accumulate a considerable volume of these messages
even when the setting is disabled. This patch prevents the caller from
sending an update_lru message when the setting is disabled.
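A toy model of the caller-side gating this patch adds (not the real couch_server API):

```python
class LruClient:
    def __init__(self, update_lru_on_read, mailbox):
        self.update_lru_on_read = update_lru_on_read
        self.mailbox = mailbox  # stand-in for couch_server's message queue

    def open_for_read(self, db_name):
        # Before the patch the message was sent unconditionally; now the
        # caller checks the setting first, so a disabled LRU costs nothing.
        if self.update_lru_on_read:
            self.mailbox.append(("update_lru", db_name))
        return db_name
```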
|
This adds the read_concurrency option to couch_server's ETS table for
couch_dbs which contains the references to open database handles. This
is an obvious improvement as all callers opening database pids interact
with this ETS table concurrently. Conversely, the couch_server pid is
the only writer, so no need for write_concurrency.
|
Off-heap message queues are an Erlang 19 feature:
http://erlang.org/doc/man/erlang.html#process_flag_message_queue_data
It is advisable to use that setting for processes which expect to receive a
lot of messages. CouchDB sets it for couch_server, couch_log_server and a
bunch of others as well.
In some cases the off-heap behavior could alter the timing of message receives
and expose subtle bugs that have been lurking in the code for years, or could
slightly reduce performance, so as a safety measure we allow disabling it.
|
It's possible that a busy couch_server, given a specific ordering and timing
of events, can end up with an open_async message in its mailbox while a
new and unrelated open_async process is spawned. This change ensures
that if we encounter any old messages in the mailbox, we ignore them.
The underlying issue is that a delete request clears out the state
in our couch_dbs ets table without clearing out state in the message
queue. In some fairly specific circumstances this leads to a message
in the mailbox satisfying an ets entry for a newer open_async
process. This change includes a match on the opener process:
anything unmatched came before the current open_async request, which
means it should be ignored.
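A minimal sketch of the matching logic, with a dict modelling the couch_dbs ets table (names and message shape are illustrative):

```python
def handle_open_result(couch_dbs, msg):
    # couch_dbs maps db name -> {"opener": pid, "db": handle or None}.
    sender, db_name, result = msg
    entry = couch_dbs.get(db_name)
    if entry is None or entry["opener"] != sender:
        # Stale reply from an open_async that predates the current
        # request (e.g. its entry was cleared by a delete); ignore it.
        return None
    entry["db"] = result
    return result
```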
|
A rather uncommon bug found in production. Will write more as this is
just for show and tell.
For now this test case just demonstrates the issue that was discovered.
A fix is still being pondered.
|
If couch_server terminates while there is an active open_async process,
it will throw a function_clause exception because `couch_db:get_pid/1`
will fail due to the `#entry.db` member being undefined. The simple fix
is to filter those out.
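The filtering step amounts to something like this sketch (a Python stand-in; `entries` models the couch_dbs table rows):

```python
def dbs_to_shut_down(entries):
    # Skip entries whose open_async has not completed yet (db is still
    # undefined); calling the equivalent of couch_db:get_pid/1 on those
    # is what raised the function_clause error on terminate.
    return [e["db"] for e in entries if e["db"] is not None]
```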
|
Log error when changes forced to rewind to beginning
|
Create shard files if missing
|
If, when a database is created, it was not possible to create any of
the shard files, the database cannot be used. All requests return a
"No DB shards could be opened." error.
This commit changes fabric_util:get_db/2 to create the shard file if
missing. This is correct as that function has already called
mem3:shards(DbName) which only returns shards if the database exists.
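The open-or-create flow can be sketched as follows (`open_shard` and `create_shard` are hypothetical callbacks standing in for the couch_server calls):

```python
def get_db(shards, open_shard, create_shard):
    # `shards` came from mem3:shards(DbName), so the database itself is
    # known to exist; a missing shard file can therefore be created safely.
    for shard in shards:
        try:
            return open_shard(shard)
        except FileNotFoundError:
            create_shard(shard)
            return open_shard(shard)
    raise RuntimeError("No DB shards could be opened.")
```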
|
We removed a security call in `do_db_req` to avoid
a duplicate authorization check, and as a result
there is now no db validation in the no-op call
`/db/_ensure_full_commit`. This makes it always
return a success code, even for missing databases.
This fix puts the security check back, directly
in the _ensure_full_commit call, and adds eunit tests
for good measure.
|
Implement convenience `mem3:ping/2` function
|
Sometimes in operations it is helpful to re-establish the connection between
Erlang nodes. Usually this is achieved by calling `net_adm:ping/1`. However,
the `ping` function provided by OTP uses an `infinity` timeout, which causes
an indefinite hang in some cases. This PR adds a convenience function to be
used instead of `net_adm:ping/1`.
|
Improve cleanup_index_files
|
The previous implementation was based on a search using
{view_index_dir}/.shards/*/{db_name}.[0-9]*_design/mrview/*
This wildcard includes all shards for all indexes of all databases.
This PR changes the search to look at the index directory of the database.
|
Fix dialyzer warning of shard record construction
|
- Fix dialyzer warning that record construction #shard violates
the declared type in fabric_doc_open_revs.erl,
cpse_test_purge_replication.erl and other files
Fixes #1580
|
Improve validation of database creation parameters
|