author     unknown <brian@zim.(none)>    2005-01-28 16:43:10 -0800
committer  unknown <brian@zim.(none)>    2005-01-28 16:43:10 -0800
commit     96e4281f059cd6d534338c539a9e44caff984b93 (patch)
tree       b6056fc697d3897a0bf6f0aa59345366fac72bef /sql/examples
parent     49ab428c8ddc0cbafb99cb10ee77167081a23914 (diff)
download   mariadb-git-96e4281f059cd6d534338c539a9e44caff984b93.tar.gz
Cleanup for lost file descriptors on close table for ha_archive.
sql/examples/ha_archive.cc:
  More comments, fixed issue with lost file descriptors.
BitKeeper/etc/logging_ok:
  Logging to logging@openlogging.org accepted
Diffstat (limited to 'sql/examples')
-rw-r--r--  sql/examples/ha_archive.cc | 26
1 file changed, 15 insertions(+), 11 deletions(-)
diff --git a/sql/examples/ha_archive.cc b/sql/examples/ha_archive.cc
index ef609513489..75fba96cad9 100644
--- a/sql/examples/ha_archive.cc
+++ b/sql/examples/ha_archive.cc
@@ -42,18 +42,20 @@
handle bulk inserts as well (that is if someone was trying to read at
the same time since we would want to flush).
- A "meta" file is kept. All this file does is contain information on
- the number of rows.
+ A "meta" file is kept alongside the data file. This file serves two purpose.
+ The first purpose is to track the number of rows in the table. The second
+ purpose is to determine if the table was closed properly or not. When the
+ meta file is first opened it is marked as dirty. It is opened when the table
+ itself is opened for writing. When the table is closed the new count for rows
+ is written to the meta file and the file is marked as clean. If the meta file
+ is opened and it is marked as dirty, it is assumed that a crash occured. At
+ this point an error occurs and the user is told to rebuild the file.
+ A rebuild scans the rows and rewrites the meta file. If corruption is found
+ in the data file then the meta file is not repaired.
- No attempts at durability are made. You can corrupt your data. A repair
- method was added to repair the meta file that stores row information,
- but if your data file gets corrupted I haven't solved that. I could
- create a repair that would solve this, but do you want to take a
- chance of loosing your data?
+ At some point a recovery method for such a drastic case needs to be devised.
- Locks are row level, and you will get a consistant read. Transactions
- will be added later (they are not that hard to add at this
- stage).
+ Locks are row level, and you will get a consistent read.
For performance as far as table scans go it is quite fast. I don't have
good numbers, but locally it has outperformed both InnoDB and MyISAM. For
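
The dirty/clean protocol described in the comment block above is easy to get subtly wrong, so here is a minimal, self-contained sketch of the idea. The on-disk layout (a row count followed by a one-byte dirty flag) and all names here are illustrative assumptions, not the actual ha_archive meta file format.

// Sketch of the meta-file dirty/clean protocol. Assumed layout:
// [uint64_t row count][uint8_t dirty flag]. Not the real format.
#include <cstdint>
#include <cstdio>

struct Meta {
  std::FILE *fp;
  uint64_t rows;
};

// Open the meta file when the table is opened for writing. A dirty flag
// left over from a previous session means we crashed without the clean
// shutdown below, so the caller must rebuild from the data file.
bool meta_open(Meta &m, const char *path) {
  m.fp = std::fopen(path, "r+b");
  if (!m.fp)
    return false;
  uint8_t dirty = 0;
  if (std::fread(&m.rows, sizeof m.rows, 1, m.fp) != 1 ||
      std::fread(&dirty, 1, 1, m.fp) != 1 || dirty)
    return false;                       // unreadable or crashed: rebuild
  std::fseek(m.fp, sizeof m.rows, SEEK_SET);
  dirty = 1;
  std::fwrite(&dirty, 1, 1, m.fp);      // mark dirty for the whole session
  std::fflush(m.fp);
  return true;
}

// On a clean close, persist the new row count and clear the dirty flag.
void meta_close(Meta &m) {
  std::rewind(m.fp);
  std::fwrite(&m.rows, sizeof m.rows, 1, m.fp);
  uint8_t dirty = 0;
  std::fwrite(&dirty, 1, 1, m.fp);
  std::fclose(m.fp);                    // release the descriptor as well
}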
@@ -88,7 +90,6 @@
compression but may speed up ordered searches).
Checkpoint the meta file to allow for faster rebuilds.
Dirty open (right now the meta file is repaired if a crash occurred).
- Transactions.
Option to allow for dirty reads; this would lower the sync calls, which would make
inserts a lot faster, but would mean highly arbitrary reads.
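
To make the dirty-read TODO item above concrete, here is a hedged sketch, assuming zlib, of the trade-off it names: syncing the compressed stream after every insert keeps rows visible to concurrent readers, while skipping the flush batches output and speeds up inserts. The option flag is a hypothetical parameter, not an existing ha_archive option.

#include <zlib.h>

// Append one row and optionally sync so readers see it immediately.
// allow_dirty_reads is a hypothetical option, per the TODO above.
int archive_write_row(gzFile writer, const void *buf, unsigned len,
                      bool allow_dirty_reads) {
  if (gzwrite(writer, buf, len) != (int)len)
    return 1;
  // Skipping this per-row flush is what makes inserts faster; readers
  // may then see only an arbitrary, partially flushed prefix.
  if (!allow_dirty_reads && gzflush(writer, Z_SYNC_FLUSH) != Z_OK)
    return 1;
  return 0;
}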
@@ -333,6 +334,7 @@ ARCHIVE_SHARE *ha_archive::get_share(const char *table_name, TABLE *table)
opposite.
*/
(void)write_meta_file(share->meta_file, share->rows_recorded, TRUE);
+
/*
It is expensive to open and close the data files and since you can't have
a gzip file that can be both read and written we keep a writer open
@@ -379,6 +381,8 @@ int ha_archive::free_share(ARCHIVE_SHARE *share)
(void)write_meta_file(share->meta_file, share->rows_recorded, FALSE);
if (gzclose(share->archive_write) == Z_ERRNO)
rc= 1;
+ if (my_close(share->meta_file, MYF(0)))
+ rc= 1;
my_free((gptr) share, MYF(0));
}
pthread_mutex_unlock(&archive_mutex);
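
The two hunks above work together: get_share() opens the meta file and a single shared gzip writer once per share, and free_share() must close both when the last user goes away; the added my_close() is what stops the descriptor leak this commit fixes. Below is a minimal sketch of that paired lifecycle, assuming zlib and POSIX file APIs in place of the MySQL my_* wrappers; the Share structure is an illustrative stand-in for ARCHIVE_SHARE, not its real layout.

#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>
#include <zlib.h>

// One writer and one meta descriptor shared by every handler that has
// the table open, protected by a mutex like archive_mutex above.
struct Share {
  gzFile writer;
  int meta_fd;
  unsigned use_count;
};

static pthread_mutex_t share_mutex = PTHREAD_MUTEX_INITIALIZER;

// First opener creates the shared handles; later openers reuse them.
bool get_share(Share &s, const char *data_path, const char *meta_path) {
  pthread_mutex_lock(&share_mutex);
  if (s.use_count++ == 0) {
    s.meta_fd = open(meta_path, O_RDWR);
    s.writer  = gzopen(data_path, "ab");  // kept open: reopening is expensive
  }
  bool ok = s.meta_fd >= 0 && s.writer != nullptr;
  pthread_mutex_unlock(&share_mutex);
  return ok;
}

// Last user closes BOTH handles. Forgetting the meta descriptor here is
// exactly the leak the diff above fixes with my_close().
int free_share(Share &s) {
  int rc = 0;
  pthread_mutex_lock(&share_mutex);
  if (--s.use_count == 0) {
    if (gzclose(s.writer) == Z_ERRNO)
      rc = 1;
    if (close(s.meta_fd) != 0)
      rc = 1;
  }
  pthread_mutex_unlock(&share_mutex);
  return rc;
}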