Added autodetection for thrift library and includes
Added Cassandra Storage Engine rpm
Reasons:
alter_tablespace.rdiff:
tc_rename_error.result:
from monty@askmonty.org-20120529213755-876ptdhhaj0t7l8r
(Added text for errno in error messages)
insert_time.result:
from sergii@pisem.net-20120908101555-37w00eyfrd9noc06
(MDEV-457 - Inconsistent data truncation)
misc.result:
from igor@askmonty.org-20130109033433-5awdv0w6vbpigltw
(MDEV-3806/mwl248 - Engine independent statistics)
tbl_opt_row_format.rdiff:
from monty@askmonty.org-20120706161018-y5teinbuqpchle2m
(Fixed wrong error codes)
vcol.rdiff:
from sergii@pisem.net-20121217100039-ikj1820nrku7p6d5
(simplify the handler api)
Field matching fixed.
DBUG_ASSERT fixed.
- update ha_cassandra::start_bulk_insert() definition to match the one in class handler.
- Cleanup ha_cassandra::store_lock()
- Remove dummy ha_cassandra::delete_table()
- Add HA_TABLE_SCAN_ON_INDEX to table_flags()
- Register counters directly in the array passed to maria_declare_plugin. As
a consequence, FLUSH TABLES will reset the counters.
- Update test results accordingly.
- Partially address review feedback.
- Update cassandra.test results
- make cassandra.test timezone-agnostic
into a dynamic column
Fixed incorrect initialization of a variable, which caused memory to be freed at a random address in case of error.
- Don't connect right away in ha_cassandra::open. If we do this, it becomes
impossible to do SHOW CREATE TABLE when the server is not present.
- Note: CREATE TABLE still requires that connection is present, as it needs
to check whether the specified DDL can be used with Cassandra. We could
delay that check also, but then one would not be able to find out about
errors in table DDL until they do a SELECT.
- Support UPDATE statements
- Follow what CQL does: don't show deleted rows (they show up as rows without any columns in reads)
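
A minimal model of that read-side rule (the `Row` type and `visible_rows` helper are made up for illustration, not the engine's code): Cassandra returns a deleted row as a key with no columns, so a read that follows CQL's behaviour drops such rows.

```cpp
#include <string>
#include <vector>

// Sketch only: a deleted row comes back from a range read as a key with
// an empty column list, so reads filter those rows out.
struct Row {
  std::string key;
  std::vector<std::string> columns;
};

std::vector<Row> visible_rows(const std::vector<Row>& scanned) {
  std::vector<Row> out;
  for (const Row& r : scanned)
    if (!r.columns.empty())   // no columns => tombstoned (deleted) row
      out.push_back(r);
  return out;
}
```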
- Better error messages.
- Add a test for ALTER TABLE
- Add capability to retry calls that have failed with UnavailableException or
[Cassandra's] TimedOutException.
- We don't retry on Thrift errors yet, although we could now easily do so.
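
The retry idea can be sketched generically; `TransientError` and `retry_call` are hypothetical names standing in for the Thrift-generated exception types and the engine's wrapper around each Cassandra call.

```cpp
#include <stdexcept>

// Stands in for UnavailableException / TimedOutException in the real code.
struct TransientError : std::runtime_error {
  using std::runtime_error::runtime_error;
};

// Re-issue a call up to max_retries extra times on a transient failure.
template <typename Call>
auto retry_call(Call call, int max_retries) -> decltype(call()) {
  for (int attempt = 0;; attempt++) {
    try {
      return call();                 // success: hand the result back
    } catch (const TransientError&) {
      if (attempt >= max_retries)    // out of retries: rethrow to caller
        throw;
    }
  }
}
```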
- Support mapping Cassandra's timestamp to INT64
- Support mapping Cassandra's decimal to VARBINARY.
- allow only VARBINARY(n); all other types can get meaningless data after conversions
- more comments
- Add support for Cassandra's 'varint' datatype, mappable to VARBINARY.
- Added @@cassandra_thrift_host global variable.
- added option thrift_port, which allows specifying which port to connect to
- not adding username/password: it turns out there are no authentication
  schemes in the stock Cassandra distribution.
- Use more permissive locking.
- Also provide handling for generic Thrift exceptions. These are not listed in the 'throws' clause
  of the API definition but can still happen.
- Catch all kinds of exceptions when calling Thrift code.
doesn't need to make NULL-terminated strings.
- Make an attempt at fixing.
- add support for Cassandra's UUID datatype. We map it to CHAR(36).
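
For illustration (the `uuid_to_char36` helper is made up, not the engine's code): Cassandra's uuid type is 16 raw bytes, and CHAR(36) holds its canonical 8-4-4-4-12 hexadecimal text form, which is exactly 36 characters.

```cpp
#include <cstdio>
#include <string>

// Render 16 raw uuid bytes in the canonical 8-4-4-4-12 hex form.
std::string uuid_to_char36(const unsigned char b[16]) {
  char buf[37];  // 36 characters + terminating NUL
  std::snprintf(buf, sizeof(buf),
                "%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
                "%02x%02x%02x%02x%02x%02x",
                b[0], b[1], b[2], b[3], b[4], b[5], b[6], b[7],
                b[8], b[9], b[10], b[11], b[12], b[13], b[14], b[15]);
  return std::string(buf);
}
```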
fraction, not millisecond.
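
Assuming the common convention that a Cassandra timestamp is an int64 count of milliseconds since the epoch, the arithmetic behind "fraction, not millisecond" can be sketched as follows (`ms_to_sec_and_frac` is a made-up helper, not the engine's code): the sub-second part must be stored as a fractional value in microseconds, not as the raw millisecond remainder.

```cpp
#include <cstdint>
#include <utility>

// Split an epoch-millisecond timestamp into whole seconds plus a
// microsecond fraction. Assumes ms is non-negative.
std::pair<int64_t, uint32_t> ms_to_sec_and_frac(int64_t ms) {
  int64_t sec = ms / 1000;                                  // whole seconds
  uint32_t frac = static_cast<uint32_t>(ms % 1000) * 1000;  // microseconds
  return {sec, frac};
}
```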
- Add mapping for INT datatype
- Primary key column should now be named like CQL's primary key,
or 'rowkey' if CF has key_alias.
- Full table scan internally uses LIMIT n, and re-starts the scan from
the last seen rowkey value. rowkey ranges are inclusive, so we will
see the same rowkey again. We should ignore it.
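
A self-contained model of that restart logic over an ordered key space (`scan_all` and the `std::map` stand in for the real Thrift range reads; all names are made up): fetch at most `limit` rows starting from the last seen rowkey, and because key ranges are inclusive, skip the repeated first key on every page after the first.

```cpp
#include <map>
#include <string>
#include <vector>

// Paginated full scan: LIMIT `limit` per page, restarting from the last
// seen rowkey, ignoring the inclusive-range duplicate at each restart.
std::vector<std::string> scan_all(const std::map<std::string, int>& cf,
                                  std::size_t limit) {
  std::vector<std::string> keys;
  if (limit == 0) return keys;               // guard against an empty page size
  std::string start;                         // "" = scan from the beginning
  bool first_page = true;
  for (;;) {
    std::vector<std::string> page;           // emulated LIMIT `limit` read
    auto it = cf.lower_bound(start);
    for (std::size_t n = 0; it != cf.end() && n < limit; ++it, ++n)
      page.push_back(it->first);
    std::size_t skip =
        (!first_page && !page.empty() && page[0] == start) ? 1 : 0;
    for (std::size_t i = skip; i < page.size(); i++)
      keys.push_back(page[i]);
    if (page.size() < limit) break;          // short page: scan finished
    start = page.back();                     // restart from last seen rowkey
    first_page = false;
  }
  return keys;
}
```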
- Remove HTON_CAN_RECREATE flag, re-create won't delete rows in cassandra.
- We use HA_MRR_NO_ASSOC ("optimizer_switch=join_cache_hashed") mode
- Not able to use BKA's buffers yet.
- There is a variable to control batch size
- There are status counters.
- Needed to make some fixes in BKA code (to be checked with Igor)
- bulk inserts themselves
- control variable and counters.
- preparations for support of bulk INSERT.
ORDER BY
- Fix typo in ha_cassandra::rnd_pos().
- in ::index_read_map(), do not assume that pk column is part of table->read_set.