-rw-r--r--  BitKeeper/etc/logging_ok         |   1
-rw-r--r--  Docs/manual.texi                 | 218
-rw-r--r--  myisam/mi_dynrec.c               |  41
-rw-r--r--  myisam/mi_open.c                 |   2
-rw-r--r--  myisam/myisamdef.h               |   3
-rw-r--r--  mysql-test/r/alter_table.result  |  10
-rw-r--r--  mysql-test/t/alter_table.test    |   8
-rw-r--r--  sql/sql_table.cc                 |   9
-rw-r--r--  sql/sql_yacc.yy                  |   5
9 files changed, 145 insertions, 152 deletions
diff --git a/BitKeeper/etc/logging_ok b/BitKeeper/etc/logging_ok
index b572006d796..9ba3f55a597 100644
--- a/BitKeeper/etc/logging_ok
+++ b/BitKeeper/etc/logging_ok
@@ -33,3 +33,4 @@ tonu@x153.internalnet
tonu@x3.internalnet
Administrator@co3064164-a.
Administrator@co3064164-a.rochd1.qld.optushome.com.au
+monty@tramp.mysql.fi
diff --git a/Docs/manual.texi b/Docs/manual.texi
index 12d5683a63c..29facaaa4b2 100644
--- a/Docs/manual.texi
+++ b/Docs/manual.texi
@@ -880,52 +880,12 @@ version, with the exception of the bugs listed in the bugs section, which
are things that are design-related. @xref{Bugs}.
MySQL is written in multiple layers and different independent
-modules. These modules are listed below with an indication of how
+modules. Some of the new modules are listed below with an indication of how
well-tested each of them is:
@cindex modules, list of
@table @strong
-@item The ISAM table handler --- Stable
-This manages storage and retrieval of all data in MySQL Version 3.22
-and earlier. In all MySQL releases there hasn't been a single
-(reported) bug in this code. The only known way to get a corrupted table
-is to kill the server in the middle of an update. Even that is unlikely
-to destroy any data beyond rescue, because all data are flushed to disk
-between each query. There hasn't been a single bug report about lost data
-because of bugs in MySQL.
-
-@cindex ISAM table handler
-@cindex storing, data
-@cindex retrieving, data
-@cindex data, ISAM table handler
-
-@item The MyISAM table handler --- Stable
-This is new in MySQL Version 3.23. It's largely based on the ISAM
-table code but has a lot of new and very useful features.
-
-@item The parser and lexical analyser --- Stable
-There hasn't been a single reported bug in this system for a long time.
-
-@item The C client code --- Stable
-No known problems. In early Version 3.20 releases, there were some limitations
-in the send/receive buffer size. As of Version 3.21, the buffer size is now
-dynamic up to a default of 16M.
-
-@item Standard client programs --- Stable
-These include @code{mysql}, @code{mysqladmin}, @code{mysqlshow},
-@code{mysqldump}, and @code{mysqlimport}.
-
-@item Basic SQL --- Stable
-The basic SQL function system and string classes and dynamic memory
-handling. Not a single reported bug in this system.
-
-@item Query optimizer --- Stable
-
-@item Range optimizer --- Stable
-
-@item Join optimizer --- Stable
-
@item Locking --- Gamma
This is very system-dependent. On some systems there are big problems
using standard OS locking (@code{fcntl()}). In these cases, you should run the
@@ -933,79 +893,33 @@ MySQL daemon with the @code{--skip-locking} flag. Problems are known
to occur on some Linux systems, and on SunOS when using NFS-mounted file
systems.
-@item Linux threads --- Stable
-The major problem found has been with the @code{fcntl()} call, which is
-fixed by using the @w{@code{--skip-locking}} option to
-@code{mysqld}. Some people have reported lockup problems with Version 0.5.
-LinuxThreads will need to be recompiled if you plan to use
-1000+ concurrent connections. Although it is possible to run that many
-connections with the default LinuxThreads (however, you will never go
-above 1021), the default stack spacing of 2 MB makes the application
-unstable, and we have been able to reproduce a coredump after creating
-1021 idle connections. @xref{Linux}.
-
-@item Solaris 2.5+ pthreads --- Stable
-We use this for all our production work.
-
-@item MIT-pthreads (Other systems) --- Stable
-There have been no reported bugs since Version 3.20.15 and no known bugs since
-Version 3.20.16. On some systems, there is a ``misfeature'' where some
-operations are quite slow (a 1/20 second sleep is done between each query).
-Of course, MIT-pthreads may slow down everything a bit, but index-based
-@code{SELECT} statements are usually done in one time frame so there shouldn't
-be a mutex locking/thread juggling.
-
-@item Other thread implementions --- Beta - Gamma
-The ports to other systems are still very new and may have bugs, possibly
-in MySQL, but most often in the thread implementation itself.
-
-@item @code{LOAD DATA ...}, @code{INSERT ... SELECT} --- Stable
-Some people thought they had found bugs here, but these usually have
-turned out to be misunderstandings. Please check the manual before reporting
-problems!
-
-@item @code{ALTER TABLE} --- Stable
-Small changes in Version 3.22.12.
-
-@item DBD --- Stable
-Now maintained by Jochen Wiedmann
-(@email{wiedmann@@neckar-alb.de}). Thanks!
-
-@item @code{mysqlaccess} --- Stable
-Written and maintained by Yves Carlier
-(@email{Yves.Carlier@@rug.ac.be}). Thanks!
-
-@item @code{GRANT} --- Stable
-Big changes made in MySQL Version 3.22.12.
-
-@item @strong{MyODBC} (uses ODBC SDK 2.5) --- Gamma
+@item @strong{MyODBC 2.50} (uses ODBC SDK 2.5) --- Gamma
It seems to work well with some programs.
-@item Replication -- Beta / Gamma
+@item Replication -- Gamma
We are still working on replication, so don't expect this to be rock
solid yet. On the other hand, some MySQL users are already
using this with good results.
-@item BDB Tables -- Beta
+@item BDB Tables -- Gamma
The Berkeley DB code is very stable, but we are still improving the interface
between MySQL and BDB tables, so it will take some time before this
is tested as well as the other table types.
-@item InnoDB Tables -- Beta
+@item InnoDB Tables -- Gamma
This is a recent addition to @code{MySQL}. They appear to work well and
can be used after some initial testing.
-@item Automatic recovery of MyISAM tables - Beta
+@item Automatic recovery of MyISAM tables - Gamma
This only affects the new code that checks if the table was closed properly
on open and executes an automatic check/repair of the table if it wasn't.
-@item MERGE tables -- Beta / Gamma
-The usage of keys on @code{MERGE} tables is still not well tested. The
-other parts of the @code{MERGE} code are quite well tested.
-
@item FULLTEXT -- Beta
Text search seems to work, but is still not widely used.
+@item Bulk-insert - Alpha
+New feature in MyISAM in MySQL 4.0 for faster insertion of many rows.
+
@end table
MySQL AB provides high-quality support for paying customers, but the
@@ -9505,6 +9419,13 @@ or
shell> mysqladmin -h 'your-host-name' variables
@end example
+If you get @code{Errcode 13}, which means @code{Permission denied}, when
+starting @code{mysqld}, it means that you don't have the right to
+read or create files in the MySQL database or log directory. In this case
+you should either start @code{mysqld} as the root user or change the
+permissions of the involved files and directories so that you have the
+right to use them.
+
If @code{safe_mysqld} starts the server but you can't connect to it,
you should make sure you have an entry in @file{/etc/hosts} that looks like
this:
@@ -20984,11 +20905,12 @@ names will be case-insensitive.
@item @code{max_allowed_packet}
The maximum size of one packet. The message buffer is initialized to
-@code{net_buffer_length} bytes, but can grow up to @code{max_allowed_packet}
-bytes when needed. This value by default is small, to catch big (possibly
-wrong) packets. You must increase this value if you are using big
-@code{BLOB} columns. It should be as big as the biggest @code{BLOB} you want
-to use. The current protocol limits @code{max_allowed_packet} to 16M.
+@code{net_buffer_length} bytes, but can grow up to
+@code{max_allowed_packet} bytes when needed. This value by default is
+small, to catch big (possibly wrong) packets. You must increase this
+value if you are using big @code{BLOB} columns. It should be as big as
+the biggest @code{BLOB} you want to use. The protocol limit for
+@code{max_allowed_packet} is 16M in MySQL 3.23 and 4G in MySQL 4.0.
@item @code{max_binlog_cache_size}
If a multi-statement transaction requires more than this amount of memory,
@@ -23869,22 +23791,32 @@ argument).
@cindex error messages, displaying
@cindex perror
-@code{perror} can be used to print error message(s). @code{perror} can
-be invoked like this:
+@cindex errno
+@cindex Errcode
+
+For most system errors MySQL will, in addition to an internal text message,
+also print the system error code in one of the following styles:
+@code{message ... (errno: #)} or @code{message ... (Errcode: #)}.
+
+You can find out what the error code means by either examining the
+documentation for your system or by using the @code{perror} utility.
+
+@code{perror} prints a description for a system error code, or a MyISAM/ISAM
+table handler error code.
+
+@code{perror} is invoked like this:
@example
shell> perror [OPTIONS] [ERRORCODE [ERRORCODE...]]
-For example:
+Example:
-shell> perror 64 79
+shell> perror 13 64
+Error code 13: Permission denied
Error code 64: Machine is not on the network
-Error code 79: Can not access a needed shared library
@end example
-@code{perror} can be used to display a description for a system error
-code, or an MyISAM/ISAM table handler error code. The error messages
-are mostly system dependent.
+Note that the error messages are mostly system dependent!
@node Batch Commands, , perror, Client-Side Scripts
@@ -26179,8 +26111,8 @@ Constant condition removal (needed because of constant folding):
Constant expressions used by indexes are evaluated only once.
@item
@code{COUNT(*)} on a single table without a @code{WHERE} is retrieved
-directly from the table information. This is also done for any @code{NOT NULL}
-expression when used with only one table.
+directly from the table information for MyISAM and HEAP tables. This is
+also done for any @code{NOT NULL} expression when used with only one table.
@item
Early detection of invalid constant expressions. MySQL quickly
detects that some @code{SELECT} statements are impossible and returns no rows.
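A minimal sketch of the @code{COUNT(*)} shortcut described above, using a
hypothetical table name: for MyISAM and HEAP tables the exact row count is
kept in the table information, so the first statement is answered without
reading any rows, while the second one has to examine rows or an index.

@example
mysql> SELECT COUNT(*) FROM customer;                # read from table info
mysql> SELECT COUNT(*) FROM customer WHERE age > 30; # must examine rows
@end example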
@@ -26389,7 +26321,7 @@ key value changes. In this case @code{LIMIT #} will not calculate any
unnecessary @code{GROUP BY}'s.
@item
As soon as MySQL has sent the first @code{#} rows to the client, it
-will abort the query.
+will abort the query (unless you are using @code{SQL_CALC_FOUND_ROWS}).
@item
@code{LIMIT 0} will always quickly return an empty set. This is useful
to check the query and to get the column types of the result columns.
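A sketch of the @code{SQL_CALC_FOUND_ROWS} case noted above (MySQL 4.0;
table and column names are hypothetical): with the option, the server keeps
counting matching rows instead of stopping after the first @code{#} rows,
and @code{FOUND_ROWS()} then returns the full count.

@example
mysql> SELECT SQL_CALC_FOUND_ROWS * FROM hits ORDER BY score DESC LIMIT 10;
mysql> SELECT FOUND_ROWS();  # rows the query would return without the LIMIT
@end example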
@@ -26445,7 +26377,7 @@ If you are inserting a lot of rows from different clients, you can get
higher speed by using the @code{INSERT DELAYED} statement. @xref{INSERT,
, @code{INSERT}}.
@item
-Note that with @code{MyISAM} you can insert rows at the same time
+Note that with @code{MyISAM} tables you can insert rows at the same time
@code{SELECT}s are running if there are no deleted rows in the tables.
@item
When loading a table from a text file, use @code{LOAD DATA INFILE}. This
@@ -26487,8 +26419,11 @@ Execute a @code{FLUSH TABLES} statement or the shell command @code{mysqladmin
flush-tables}.
@end enumerate
-This procedure will be built into @code{LOAD DATA INFILE} in some future
-version of MySQL.
+Note that @code{LOAD DATA INFILE} also does the above optimization if
+you insert into an empty table; the main difference from the above
+procedure is that you can let @code{myisamchk} allocate much more temporary
+memory for the index creation than you may want MySQL to allocate for
+every index recreation.
Since @strong{MySQL 4.0} you can also use
@code{ALTER TABLE tbl_name DISABLE KEYS} instead of
@@ -26497,7 +26432,8 @@ Since @strong{MySQL 4.0} you can also use
@code{myisamchk -r -q /path/to/db/tbl_name}. This way you can also skip
@code{FLUSH TABLES} steps.
@item
-You can speed up insertions by locking your tables:
+You can speed up insertions that are done over multiple statements by
+locking your tables:
@example
mysql> LOCK TABLES a WRITE;
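Separately from the @code{LOCK TABLES} example above, a sketch of the
MySQL 4.0 @code{DISABLE KEYS} approach mentioned earlier in this section
(table and file names are hypothetical): non-unique index updates are
postponed while the keys are disabled and the indexes are rebuilt in one
pass afterwards.

@example
mysql> ALTER TABLE tbl_name DISABLE KEYS;
mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' INTO TABLE tbl_name;
mysql> ALTER TABLE tbl_name ENABLE KEYS;
@end example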
@@ -26512,6 +26448,9 @@ be as many index buffer flushes as there are different @code{INSERT}
statements. Locking is not needed if you can insert all rows with a single
statement.
+For transactional tables, you should use @code{BEGIN/COMMIT} instead of
+@code{LOCK TABLES} to get a speedup.
+
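A minimal sketch of the transactional alternative just mentioned (table
name is hypothetical; assumes a BDB or InnoDB table): grouping the inserts
into one transaction avoids flushing after every statement.

@example
mysql> BEGIN;
mysql> INSERT INTO trans_table VALUES (1,'a'),(2,'b');
mysql> INSERT INTO trans_table VALUES (3,'c');
mysql> COMMIT;
@end example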
Locking will also lower the total time of multi-connection tests, but the
maximum wait time for some threads will go up (because they wait for
locks). For example:
@@ -26589,7 +26528,7 @@ Always check that all your queries really use the indexes you have created
in the tables. In MySQL you can do this with the @code{EXPLAIN}
command. @xref{EXPLAIN, Explain, Explain, manual}.
@item
-Try to avoid complex @code{SELECT} queries on tables that are updated a
+Try to avoid complex @code{SELECT} queries on @code{MyISAM} tables that are updated a
lot. This is to avoid problems with table locking.
@item
The new @code{MyISAM} tables can insert rows in a table without deleted
@@ -35518,7 +35457,7 @@ using @code{myisampack}. @xref{Compressed format}.
ALTER [IGNORE] TABLE tbl_name alter_spec [, alter_spec ...]
alter_specification:
- ADD [COLUMN] create_definition [FIRST | AFTER column_name ]
+ ADD [COLUMN] create_definition [FIRST | AFTER column_name]
or ADD [COLUMN] (create_definition, create_definition,...)
or ADD INDEX [index_name] (index_col_name,...)
or ADD PRIMARY KEY (index_col_name,...)
@@ -35527,8 +35466,8 @@ alter_specification:
or ADD [CONSTRAINT symbol] FOREIGN KEY index_name (index_col_name,...)
[reference_definition]
or ALTER [COLUMN] col_name @{SET DEFAULT literal | DROP DEFAULT@}
- or CHANGE [COLUMN] old_col_name create_definition
- or MODIFY [COLUMN] create_definition
+ or CHANGE [COLUMN] old_col_name create_definition [FIRST | AFTER column_name]
+ or MODIFY [COLUMN] create_definition [FIRST | AFTER column_name]
or DROP [COLUMN] col_name
or DROP PRIMARY KEY
or DROP INDEX index_name
@@ -45552,20 +45491,31 @@ more on the server).
@node Packet too large, Communication errors, Out of memory, Common errors
@appendixsubsec @code{Packet too large} Error
+A communication packet is a single SQL statement sent to the MySQL server
+or a single row that is sent to the client.
+
When a MySQL client or the @code{mysqld} server gets a packet bigger
-than @code{max_allowed_packet} bytes, it issues a @code{Packet too large}
-error and closes the connection.
-
-If you are using the @code{mysql} client, you may specify a bigger buffer by
-starting the client with @code{mysql --set-variable=max_allowed_packet=8M}.
-
-If you are using other clients that do not allow you to specify the maximum
-packet size (such as @code{DBI}), you need to set the packet size when you
-start the server. You cau use a command-line option to @code{mysqld} to set
-@code{max_allowed_packet} to a larger size. For example, if you are
-expecting to store the full length of a @code{BLOB} into a table, you'll need
-to start the server with the @code{--set-variable=max_allowed_packet=16M}
-option.
+than @code{max_allowed_packet} bytes, it issues a @code{Packet too
+large} error and closes the connection. With some clients, you may also
+get a @code{Lost connection to MySQL server during query} error if the
+communication packet is too big.
+
+Note that both the client and the server have their own
+@code{max_allowed_packet} variable. If you want to handle big packets,
+you have to increase this variable both in the client and in the server.
+
+It's safe to increase this variable as memory is only allocated when
+needed; this variable is mostly a precaution to catch erroneous packets
+between the client and server, and to ensure that you don't accidentally
+use packets so big that you run out of memory.
+
+If you are using the @code{mysql} client, you may specify a bigger
+buffer by starting the client with
+@code{mysql --set-variable=max_allowed_packet=8M}. Other clients have
+different methods to set this variable.
+
+You can use an option file to set @code{max_allowed_packet} to a larger
+size in @code{mysqld}. For example, if you are expecting to store the
+full length of a @code{MEDIUMBLOB} into a table, you'll need to start
+the server with the @code{set-variable=max_allowed_packet=16M} option.
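Once the server has been restarted with a bigger value, one way to verify
the setting (a minimal check) is:

@example
mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
@end example

Remember that the @code{mysql} client keeps its own copy of this variable,
so it has to be raised on the client side as well when transferring big rows.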
You can also get strange problems with large packets if you are using
big blobs, but you haven't given @code{mysqld} access to enough memory
@@ -45573,6 +45523,7 @@ to handle the query. If you suspect this is the case, try adding
@code{ulimit -d 256000} to the beginning of the @code{safe_mysqld} script
and restart @code{mysqld}.
+
@node Communication errors, Full table, Packet too large, Common errors
@appendixsubsec Communication Errors / Aborted Connection
@@ -48706,6 +48657,9 @@ Our TODO section contains what we plan to have in 4.0. @xref{TODO MySQL 4.0}.
@itemize @bullet
@item
Added boolean fulltext search code. It should be considered early alpha.
+@item
+Extended @code{MODIFY} and @code{CHANGE} in @code{ALTER TABLE} to accept
+the @code{AFTER} keyword.
@end itemize
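The extended syntax can be exercised as in the test case added in this
changeset (see mysql-test/t/alter_table.test), for example:

@example
mysql> ALTER TABLE t1
    ->   CHANGE COLUMN col2 fourth VARCHAR(30) NOT NULL AFTER col3,
    ->   MODIFY COLUMN col6 INT NOT NULL FIRST;
@end example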
@node News-4.0.0, , News-4.0.1, News-4.0.x
diff --git a/myisam/mi_dynrec.c b/myisam/mi_dynrec.c
index c07fa5aa080..4facda91626 100644
--- a/myisam/mi_dynrec.c
+++ b/myisam/mi_dynrec.c
@@ -190,6 +190,8 @@ static int _mi_find_writepos(MI_INFO *info,
my_errno=HA_ERR_RECORD_FILE_FULL;
DBUG_RETURN(-1);
}
+ if (*length > MI_MAX_BLOCK_LENGTH)
+ *length=MI_MAX_BLOCK_LENGTH;
info->state->data_file_length+= *length;
info->s->state.split++;
info->update|=HA_STATE_WRITE_AT_END;
@@ -370,19 +372,30 @@ int _mi_write_part_record(MI_INFO *info,
info->s->state.dellink : info->state->data_file_length;
if (*flag == 0) /* First block */
{
- head_length=5+8+long_block*2;
- temp[0]=5+(uchar) long_block;
- if (long_block)
+ if (*reclength > MI_MAX_BLOCK_LENGTH)
{
- mi_int3store(temp+1,*reclength);
+ head_length= 16;
+ temp[0]=13;
+ mi_int4store(temp+1,*reclength);
mi_int3store(temp+4,length-head_length);
- mi_sizestore((byte*) temp+7,next_filepos);
+ mi_sizestore((byte*) temp+8,next_filepos);
}
else
{
- mi_int2store(temp+1,*reclength);
- mi_int2store(temp+3,length-head_length);
- mi_sizestore((byte*) temp+5,next_filepos);
+ head_length=5+8+long_block*2;
+ temp[0]=5+(uchar) long_block;
+ if (long_block)
+ {
+ mi_int3store(temp+1,*reclength);
+ mi_int3store(temp+4,length-head_length);
+ mi_sizestore((byte*) temp+7,next_filepos);
+ }
+ else
+ {
+ mi_int2store(temp+1,*reclength);
+ mi_int2store(temp+3,length-head_length);
+ mi_sizestore((byte*) temp+5,next_filepos);
+ }
}
}
else
@@ -1433,10 +1446,10 @@ uint _mi_get_block_info(MI_BLOCK_INFO *info, File file, my_off_t filepos)
}
else
{
- if (info->header[0] > 6)
+ if (info->header[0] > 6 && info->header[0] != 13)
return_val=BLOCK_SYNC_ERROR;
}
- info->next_filepos= HA_OFFSET_ERROR; /* Dummy ifall no next block */
+ info->next_filepos= HA_OFFSET_ERROR; /* Dummy if no next block */
switch (info->header[0]) {
case 0:
@@ -1470,6 +1483,14 @@ uint _mi_get_block_info(MI_BLOCK_INFO *info, File file, my_off_t filepos)
info->filepos=filepos+4;
return return_val | BLOCK_FIRST | BLOCK_LAST;
+ case 13:
+ info->rec_len=mi_uint4korr(header+1);
+ info->block_len=info->data_len=mi_uint3korr(header+5);
+ info->next_filepos=mi_sizekorr(header+8);
+ info->second_read=1;
+ info->filepos=filepos+16;
+ return return_val | BLOCK_FIRST;
+
case 3:
info->rec_len=info->data_len=mi_uint2korr(header+1);
info->block_len=info->rec_len+ (uint) header[3];
diff --git a/myisam/mi_open.c b/myisam/mi_open.c
index 675dac38d41..6da1dc926c5 100644
--- a/myisam/mi_open.c
+++ b/myisam/mi_open.c
@@ -115,7 +115,7 @@ MI_INFO *mi_open(const char *name, int mode, uint open_flags)
DBUG_PRINT("error",("Wrong header in %s",name_buff));
DBUG_DUMP("error_dump",(char*) share->state.header.file_version,
head_length);
- my_errno=HA_ERR_CRASHED;
+ my_errno=HA_ERR_WRONG_TABLE_DEF;
goto err;
}
share->options= mi_uint2korr(share->state.header.options);
diff --git a/myisam/myisamdef.h b/myisam/myisamdef.h
index 33837dfda00..ea77d700234 100644
--- a/myisam/myisamdef.h
+++ b/myisam/myisamdef.h
@@ -356,7 +356,8 @@ struct st_myisam_info {
#define MI_DYN_MAX_BLOCK_LENGTH ((1L << 24)-4L)
#define MI_DYN_MAX_ROW_LENGTH (MI_DYN_MAX_BLOCK_LENGTH - MI_SPLIT_LENGTH)
#define MI_DYN_ALIGN_SIZE 4 /* Align blocks on this */
-#define MI_MAX_DYN_HEADER_BYTE 12 /* max header byte for dynamic rows */
+#define MI_MAX_DYN_HEADER_BYTE 13 /* max header byte for dynamic rows */
+#define MI_MAX_BLOCK_LENGTH (((ulong) 1 << 24)-1)
#define MEMMAP_EXTRA_MARGIN 7 /* Write this as a suffix for file */
diff --git a/mysql-test/r/alter_table.result b/mysql-test/r/alter_table.result
index d2975c33a47..4d6ef5ada98 100644
--- a/mysql-test/r/alter_table.result
+++ b/mysql-test/r/alter_table.result
@@ -6,10 +6,16 @@ col3 varchar (20) not null,
col4 varchar(4) not null,
col5 enum('PENDING', 'ACTIVE', 'DISABLED') not null,
col6 int not null, to_be_deleted int);
+insert into t1 values (2,4,3,5,"PENDING",1,7);
alter table t1
add column col4_5 varchar(20) not null after col4,
-add column col7 varchar(30) not null after col6,
-add column col8 datetime not null, drop column to_be_deleted;
+add column col7 varchar(30) not null after col5,
+add column col8 datetime not null, drop column to_be_deleted,
+change column col2 fourth varchar(30) not null after col3,
+modify column col6 int not null first;
+select * from t1;
+col6 col1 col3 fourth col4 col4_5 col5 col7 col8
+1 2 3 4 5 PENDING 0000-00-00 00:00:00
drop table t1;
create table t1 (bandID MEDIUMINT UNSIGNED NOT NULL PRIMARY KEY, payoutID SMALLINT UNSIGNED NOT NULL);
insert into t1 (bandID,payoutID) VALUES (1,6),(2,6),(3,4),(4,9),(5,10),(6,1),(7,12),(8,12);
diff --git a/mysql-test/t/alter_table.test b/mysql-test/t/alter_table.test
index 681e3d36cca..22af6663e0a 100644
--- a/mysql-test/t/alter_table.test
+++ b/mysql-test/t/alter_table.test
@@ -10,10 +10,14 @@ col3 varchar (20) not null,
col4 varchar(4) not null,
col5 enum('PENDING', 'ACTIVE', 'DISABLED') not null,
col6 int not null, to_be_deleted int);
+insert into t1 values (2,4,3,5,"PENDING",1,7);
alter table t1
add column col4_5 varchar(20) not null after col4,
-add column col7 varchar(30) not null after col6,
-add column col8 datetime not null, drop column to_be_deleted;
+add column col7 varchar(30) not null after col5,
+add column col8 datetime not null, drop column to_be_deleted,
+change column col2 fourth varchar(30) not null after col3,
+modify column col6 int not null first;
+select * from t1;
drop table t1;
create table t1 (bandID MEDIUMINT UNSIGNED NOT NULL PRIMARY KEY, payoutID SMALLINT UNSIGNED NOT NULL);
diff --git a/sql/sql_table.cc b/sql/sql_table.cc
index d76c6bbd627..2a1be2e525c 100644
--- a/sql/sql_table.cc
+++ b/sql/sql_table.cc
@@ -1273,8 +1273,11 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
def->field=field;
if (def->sql_type == FIELD_TYPE_TIMESTAMP)
use_timestamp=1;
- create_list.push_back(def);
- def_it.remove();
+ if (!def->after)
+ {
+ create_list.push_back(def);
+ def_it.remove();
+ }
}
else
{ // Use old field value
@@ -1305,7 +1308,7 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
List_iterator<create_field> find_it(create_list);
while ((def=def_it++)) // Add new columns
{
- if (def->change)
+ if (def->change && ! def->field)
{
my_error(ER_BAD_FIELD_ERROR,MYF(0),def->change,table_name);
DBUG_RETURN(-1);
diff --git a/sql/sql_yacc.yy b/sql/sql_yacc.yy
index 5096a737b8e..ff405626fad 100644
--- a/sql/sql_yacc.yy
+++ b/sql/sql_yacc.yy
@@ -1138,7 +1138,7 @@ alter_list_item:
LEX *lex=Lex;
lex->change= $3.str; lex->simple_alter=0;
}
- field_spec
+ field_spec opt_place
| MODIFY_SYM opt_column field_ident
{
LEX *lex=Lex;
@@ -1157,6 +1157,7 @@ alter_list_item:
YYABORT;
lex->simple_alter=0;
}
+ opt_place
| DROP opt_column field_ident opt_restrict
{
LEX *lex=Lex;
@@ -2831,6 +2832,7 @@ keyword:
| BACKUP_SYM {}
| BEGIN_SYM {}
| BERKELEY_DB_SYM {}
+ | BINLOG_SYM {}
| BIT_SYM {}
| BOOL_SYM {}
| BOOLEAN_SYM {}
@@ -2857,6 +2859,7 @@ keyword:
| END {}
| ENUM {}
| ESCAPE_SYM {}
+ | EVENTS_SYM {}
| EXTENDED_SYM {}
| FAST_SYM {}
| FULL {}
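A hedged illustration of what the two additions to the @code{keyword:} rule
above (@code{BINLOG_SYM}, @code{EVENTS_SYM}) allow: words listed in
@code{keyword:} can still be used as ordinary identifiers, so statements
such as the following should keep parsing even though @code{BINLOG} and
@code{EVENTS} are now recognized symbols (table and column names are
hypothetical).

@example
mysql> CREATE TABLE events (binlog INT NOT NULL);
mysql> SELECT binlog FROM events;
@end example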