Diffstat (limited to 'Docs')
-rw-r--r--  Docs/README-wsrep     | 18
-rw-r--r--  Docs/myisam.txt       | 20
-rw-r--r--  Docs/sp-imp-spec.txt  | 28
3 files changed, 33 insertions, 33 deletions
diff --git a/Docs/README-wsrep b/Docs/README-wsrep
index 2058e1eb14d..166901e111f 100644
--- a/Docs/README-wsrep
+++ b/Docs/README-wsrep
@@ -137,7 +137,7 @@ Additional packages to consider (if not yet installed):
* galera (multi-master replication provider, https://launchpad.net/galera)
* MySQL-client-community (for connecting to server and mysqldump-based SST)
* rsync (for rsync-based SST)
- * xtrabackup and nc (for xtrabackup-based SST)
+ * mariabackup and nc (for mariabackup-based SST)
2.2 Upgrade system tables.
@@ -380,14 +380,14 @@ to join or start a cluster.
wsrep_sst_method=rsync
What method to use to copy database state to a newly joined node. Supported
methods:
- - mysqldump: slow (except for small datasets) but allows for upgrade
- between major MySQL versions or InnoDB features.
- - rsync: much faster on large datasets (default).
- - rsync_wan: same as rsync but with deltaxfer to minimize network traffic.
- - xtrabackup: very fast and practically non-blocking SST method based on
- Percona's xtrabackup tool.
-
- (for xtrabackup to work the following settings must be present in my.cnf
+ - mysqldump: slow (except for small datasets) but allows for upgrade
+ between major MySQL versions or InnoDB features.
+ - rsync: much faster on large datasets (default).
+ - rsync_wan: same as rsync but with deltaxfer to minimize network traffic.
+ - mariabackup: very fast and practically non-blocking SST method based on
+ mariabackup tool (enhanced version of Percona's xtrabackup).
+
+ (for mariabackup to work the following settings must be present in my.cnf
on all nodes:
[mysqld]
wsrep_sst_auth=root:<root password>
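For reference, the SST settings discussed in the hunk above combine into a my.cnf fragment like the one below. The user name and password are placeholders, and the account must be allowed to run mariabackup on the donor node; this is a sketch, not a complete cluster configuration:

    [mysqld]
    wsrep_sst_method=mariabackup
    # placeholder credentials; this account runs mariabackup on the donor node
    wsrep_sst_auth=sst_user:sst_password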
diff --git a/Docs/myisam.txt b/Docs/myisam.txt
index ceb4ae7dc0b..db41fb911ca 100644
--- a/Docs/myisam.txt
+++ b/Docs/myisam.txt
@@ -95,7 +95,7 @@ Zero if the create is successful. Non-zero if an error occurs.
HA_WRONG_CREATE_OPTION
means that some of the arguments was wrong.
-appart from the above one can get any unix error that one can get from open(), write() or close().
+apart from the above one can get any unix error that one can get from open(), write() or close().
#.#.4 Examples
@@ -169,7 +169,7 @@ The function parameter can be:
HA_EXTRA_QUICK=1 Optimize for speed
HA_EXTRA_RESET=2 Reset database to after open
HA_EXTRA_CACHE=3 Cash record in HA_rrnd()
-HA_EXTRA_NO_CACHE=4 End cacheing of records (def)
+HA_EXTRA_NO_CACHE=4 End caching of records (def)
HA_EXTRA_NO_READCHECK=5 No readcheck on update
HA_EXTRA_READCHECK=6 Use readcheck (def)
HA_EXTRA_KEYREAD=7 Read only key to database
@@ -335,7 +335,7 @@ int mi_rfirst(MI_INFO *mip , byte *buf, int inx)
#.#.1 Description
Reads the first row in the MyISAM file according to the specified index.
-If one want's to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
+If one wants to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
mip is an MI_INFO pointer to the open handle.
buf is the record buffer that will contain the row.
@@ -364,7 +364,7 @@ int mi_rkey(MI_INFO *mip, byte *buf, int inx, const byte *key, uint key_len, enu
#.#.1 Description
Reads the next row after the last row read, using the current index.
-If one want's to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
+If one wants to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
mip is an MI_INFO pointer to the open handle.
buf is the record buffer that will contain the row.
@@ -397,7 +397,7 @@ int mi_rlast(MI_INFO *mip , byte *buf, int inx)
#.#.1 Description
Reads the last row in the MyISAM file according to the specified index.
-If one want's to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
+If one wants to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
mip is an MI_INFO pointer to the open handle.
buf is a pointer to the record buffer that will contain the row.
Inx is the index (key) number, which must be the same as currently selected.
@@ -425,7 +425,7 @@ int mi_rnext(MI_INFO *mip , byte *buf, int inx )
#.#.1 Description
Reads the next row after the last row read, using the current index.
-If one want's to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
+If one wants to read rows in physical sequences, then one should instead use mi_scan() or mi_rrnd().
mip is an MI_INFO pointer to the open handle.
buf is the record buffer that will contain the row.
@@ -520,7 +520,7 @@ int mi_rsame(MI_INFO *mip, byte *buf, int inx)
Reads the current row to get its latest contents. This is useful to refresh the record buffer in case someone else has changed it.
If inx is negative it reads by position. If inx is >= 0 it reads by key.
-With mi_rsame() one can switch to use any other index for the current row. This is good if you have a user application that lets the user do 'read-next' on a row. In this case, if the user want's to start scanning on another index, one simply has to do a mi_rsame() on the new index to activate this.
+With mi_rsame() one can switch to use any other index for the current row. This is good if you have a user application that lets the user do 'read-next' on a row. In this case, if the user wants to start scanning on another index, one simply has to do a mi_rsame() on the new index to activate this.
mip is an MI_INFO pointer to the open handle.
buf is the record buffer that will contain the row.
@@ -677,7 +677,7 @@ Zero if successful. Non-zero if an error occurred.
HA_ERR_FOUND_DUPP_KEY
A record already existed with a unique key same as this new record.
HA_ERR_RECORD_FILE_FULL
-The error is given if you hit a system limit or if you try to create more rows in a table that you reserverd room for with mi_create().
+The error is given if you hit a system limit or if you try to create more rows in a table that you reserved room for with mi_create().
ENOSPC
The disk is full.
EACCES
@@ -770,7 +770,7 @@ uint _mi_make_key( MI_INFO *mip, uint keynr, uchar *key, const char *record, my_
Construct a key string for the given index, from the provided record buffer.
??? When packed records are used ...
This is an internal function, not for use by applications. Monty says: "This can't be used to create an external key for an application from your record."
-See mi_make_application_key() for a similar function that is useable by applications.
+See mi_make_application_key() for a similar function that is usable by applications.
The parameters are:
A MI_INFO pointer mip.
@@ -872,7 +872,7 @@ HEAP tables only exists in memory so they are lost if `mysqld' is taken down or
The *MySQL* internal HEAP tables uses 100% dynamic hashing without overflow areas and don't have problems with delete.
-You can only access things by equality using a index (usually by the `=' operator) whith a heap table.
+You can only access things by equality using a index (usually by the `=' operator) with a heap table.
The downside with HEAPS are:
1. You need enough extra memory for all HEAP tables that you want to use at the same time.
2. You can't search on a part of a index.
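The myisam.txt hunks above quote the index-read calls (mi_rfirst, mi_rkey, mi_rnext, mi_rsame). The sketch below shows how mi_rkey() and mi_rnext() combine into a key-ordered scan; it follows the signatures as quoted in this document, while the header names, the mi_open() call and the flag values are assumptions that may differ between MyISAM versions:

    /* Sketch only: follows the mi_* signatures quoted above; the headers,
       the mi_open() arguments and the flag names are assumptions. */
    #include <fcntl.h>      /* O_RDONLY */
    #include "myisam.h"     /* assumed: MI_INFO and mi_* prototypes */

    int scan_by_key(const char *table_name, const byte *key, uint key_len,
                    byte *buf)
    {
      MI_INFO *mip= mi_open(table_name, O_RDONLY, 0);  /* assumed signature */
      int error;

      if (!mip)
        return 1;

      /* Position on the first row matching 'key' on index 0 ... */
      error= mi_rkey(mip, buf, 0, key, key_len, HA_READ_KEY_EXACT);

      /* ... then walk forward in index order, as described for mi_rnext(). */
      while (!error)
      {
        /* process the row in buf here */
        error= mi_rnext(mip, buf, 0);
      }

      mi_close(mip);
      return error == HA_ERR_END_OF_FILE ? 0 : error;
    }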
diff --git a/Docs/sp-imp-spec.txt b/Docs/sp-imp-spec.txt
index 259d76ab5bb..52389ea50f4 100644
--- a/Docs/sp-imp-spec.txt
+++ b/Docs/sp-imp-spec.txt
@@ -103,7 +103,7 @@
- Statements:
The Lex in THD is replaced by a new Lex structure and the statement,
is parsed as usual. A sp_instr_stmt is created, containing the new
- Lex, and added to added to the instructions in sphead.
+ Lex, and added to the instructions in sphead.
Afterwards, the procedure's Lex is restored in THD.
- SET var:
Setting a local variable generates a sp_instr_set instruction,
@@ -169,7 +169,7 @@
- Parsing CREATE FUNCTION ...
- Creating a functions is essensially the same thing as for a PROCEDURE,
+ Creating a functions is essentially the same thing as for a PROCEDURE,
with the addition that a FUNCTION has a return type and a RETURN
statement, but no OUT or INOUT parameters.
@@ -189,7 +189,7 @@
additional requirement. They will be called in expressions with the same
syntax as UDFs, so UDFs and stored FUNCTIONs share the namespace. Thus,
we must make sure that we do not have UDFs and FUNCTIONs with the same
- name (even if they are storded in different places).
+ name (even if they are stored in different places).
This means that we can reparse the procedure as many time as we want.
The first time, the resulting Lex is used to store the procedure in
@@ -225,7 +225,7 @@
sql_parse.cc:mysql_execute_command() then uses sp.cc:sp_find() to
get the sp_head for the procedure (which may have been read from the
- database or feetched from the in-memory cache) and calls the sp_head's
+ database or fetched from the in-memory cache) and calls the sp_head's
method execute().
Note: It's important that substatements called by the procedure do not
do send_ok(). Fortunately, there is a flag in THD->net to disable
@@ -294,15 +294,15 @@
- Detecting and parsing a FUNCTION invocation
- The existance of UDFs are checked during the lexical analysis (in
+ The existence of UDFs are checked during the lexical analysis (in
sql_lex.cc:find_keyword()). This has the drawback that they must
- exist before they are refered to, which was ok before SPs existed,
+ exist before they are referred to, which was ok before SPs existed,
but then it becomes a problem. The first implementation of SP FUNCTIONs
will work the same way, but this should be fixed a.s.a.p. (This will
required some reworking of the way UDFs are handled, which is why it's
not done from the start.)
For the time being, a FUNCTION is detected the same way, and returns
- the token SP_FUNC. During the parsing we only check for the *existance*
+ the token SP_FUNC. During the parsing we only check for the *existence*
of the function, we don't parse it, since wa can't call the parser
recursively.
@@ -323,13 +323,13 @@
One "obvious" solution would be to simply push "mysql.proc" to the list
of tables used by the query, but this implies a "join" with this table
if the query is a select, so it doesn't work (and we can't exclude this
- table easily; since a priviledged used might in fact want to search
+ table easily; since a privileged used might in fact want to search
the proc table).
Another solution would of course be to allow the opening and closing
of the mysql.proc table during a query execution, but this it not
possible at the present.
- So, the solution is to collect the names of the refered FUNCTIONs during
+ So, the solution is to collect the names of the referred FUNCTIONs during
parsing in the lex.
Then, before doing anything else in mysql_execute_command(), read all
functions from the database an keep them in the THD, where the function
@@ -390,7 +390,7 @@
a method in the THD's sp_rcontext (if there is one). If a handler is
found, this is recorded in the context and the routine returns without
sending the error message.
- The exectution loop (sp_head::execute()) checks for this after each
+ The execution loop (sp_head::execute()) checks for this after each
statement and invokes the handler that has been found. If several
errors or warnings occurs during one statement, only the first is
caught, the rest are ignored.
@@ -400,9 +400,9 @@
instruction.
Calling and returning from a CONTINUE handler poses some special
- problems. Since we need to return to the point after its invokation,
+ problems. Since we need to return to the point after its invocation,
we push the return location on a stack in the sp_rcontext (this is
- done by the exectution loop). The handler then ends with a special
+ done by the execution loop). The handler then ends with a special
instruction, sp_instr_hreturn, which returns to this location.
CONTINUE handlers have one additional problem: They are parsed at
@@ -545,8 +545,8 @@
Cons: Uses more memory, each SP read from table once per thread.
Unfortunately, we cannot use alternative 1 for the time being, as most
- of the datastructures to be cached (lex and items) are not reentrant
- and thread-safe. (Things are modifed at execution, we have THD pointers
+ of the data structures to be cached (lex and items) are not reentrant
+ and thread-safe. (Things are modified at execution, we have THD pointers
stored everywhere, etc.)
This leaves us with alternative 2, one cache per thread; or actually
two, since we keep FUNCTIONs and PROCEDUREs in separate caches.
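The handler hunks earlier in sp-imp-spec.txt (around @@ -390 and @@ -400) describe the control flow for CONTINUE handlers: after each statement the execution loop checks whether an error found a handler, pushes the location to continue at onto a stack in the sp_rcontext, jumps to the handler's first instruction, and an sp_instr_hreturn later pops that location. The toy loop below only sketches that idea, with invented names and a plain array as the return stack; it is not the server's actual sp_* code:

    /* Toy model of the control flow described above: an instruction
       pointer, a per-statement error check, and a return-address stack
       for CONTINUE handlers.  All names here are invented for
       illustration; they are not the real sp_* classes. */
    #include <stdio.h>

    enum toy_op { OP_STMT, OP_HRETURN, OP_END };

    struct toy_instr
    {
      enum toy_op op;
      int fails;              /* this "statement" raises an error */
    };

    int main(void)
    {
      /* ip 0..1: normal statements (ip 1 fails), ip 2: end,
         ip 3..4: CONTINUE handler body ending in hreturn. */
      struct toy_instr code[]= {
        { OP_STMT, 0 }, { OP_STMT, 1 }, { OP_END, 0 },
        { OP_STMT, 0 }, { OP_HRETURN, 0 }
      };
      int hstack[8], htop= 0;      /* return-location stack (sp_rcontext role) */
      int handler_ip= 3;           /* first instruction of the handler */
      int ip= 0;

      while (code[ip].op != OP_END)
      {
        int next= ip + 1;
        if (code[ip].op == OP_HRETURN)
        {
          next= hstack[--htop];          /* pop the saved return location */
          printf("hreturn -> ip %d\n", next);
        }
        else if (code[ip].fails)         /* the "error" found a handler */
        {
          hstack[htop++]= ip + 1;        /* push where to continue afterwards */
          next= handler_ip;              /* jump into the handler body */
          printf("error at ip %d, handler entered\n", ip);
        }
        ip= next;
      }
      return 0;
    }

Using a stack rather than a single saved location is presumably what allows a handler body to hit an error caught by another CONTINUE handler without losing the outer return point.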