author     unknown <dlenev@brandersnatch.localdomain>    2005-03-16 04:32:47 +0300
committer  unknown <dlenev@brandersnatch.localdomain>    2005-03-16 04:32:47 +0300
commit     5f75c8f5b47b73e6c1cd4261de9b99b0fdb8c24e (patch)
tree       3fa8e5965f7758c8da8c6827d247549163f01785 /sql/sql_load.cc
parent     268a3ecc8b6282ddbd082f7f133c7273b087576b (diff)
WL#874 "Extended LOAD DATA".
Now one can use user variables as targets for data loaded from a file
(besides the table's columns). LOAD DATA also got a new SET clause in which
one can specify values for table columns as expressions.
For example the following is possible:
LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @b) SET c = @b + 1;
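Purely for illustration (this example is not part of the patch or its test
suite), a variable that is read but never used lets one skip a column of the
file:
LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @skip, b);
Here the second column of every line goes into @skip and is simply discarded.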
This patch also implements a new way of replicating LOAD DATA.
Now we do it similarly to other queries:
we store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
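As a rough illustration (not literal binlog output), the statement above is
now represented in the binary log by a sequence like:
  Begin_load_query     -- first block of 'words.dat'
  Append_block         -- further blocks, only if the file does not fit into one event
  Execute_load_query   -- the original LOAD DATA query plus the offsets of its file-name part
and when the slave executes the last event it substitutes the name of its
local temporary file into that part of the query before running it.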
client/mysqlbinlog.cc:
Added support for the two new binary log events, Begin_load_query_log_event
and Execute_load_query_log_event, which are used to replicate LOAD DATA INFILE.
mysql-test/r/ctype_ucs.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/insert_select.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/loaddata.result:
Added tests for new LOAD DATA features.
mysql-test/r/mix_innodb_myisam_binlog.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results (didn't dare to get rid of binlog positions
completely since it seems that this test uses them).
mysql-test/r/mysqlbinlog.result:
New approach for binlogging of the LOAD DATA statement. Now we store it as
a usual query and rewrite the part in which the file is specified when needed.
So now mysqlbinlog output for LOAD DATA is much closer to its initial
form. Updated test's results accordingly.
mysql-test/r/mysqldump.result:
Made the test more robust against failures of other tests.
mysql-test/r/rpl000015.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_change_master.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results.
mysql-test/r/rpl_charset.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly
mysql-test/r/rpl_deadlock.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly
mysql-test/r/rpl_error_ignored_table.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/rpl_flush_log_loop.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_flush_tables.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/rpl_loaddata.result:
New way of replicating LOAD DATA. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
Updated test's results with new binlog positions.
mysql-test/r/rpl_loaddata_rule_m.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
Since LOAD DATA is now replicated in much the same way as a usual query,
--binlog_do/ignore_db work for it in the same way as for usual queries.
mysql-test/r/rpl_loaddata_rule_s.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_loaddatalocal.result:
Added a test for the case when it is important that LOAD DATA LOCAL
ignores duplicates.
mysql-test/r/rpl_log.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly (didn't dare to get rid of binlog
positions completely since it seems that this test uses them).
mysql-test/r/rpl_log_pos.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_max_relay_size.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_multi_query.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_relayrotate.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_replicate_do.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_reset_slave.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_rotate_logs.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_server_id1.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_server_id2.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly.
mysql-test/r/rpl_temporary.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/rpl_timezone.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/rpl_until.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results accordingly and tweaked test a bit to bring it
back to good shape.
mysql-test/r/rpl_user_variables.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/r/user_var.result:
Addition of two new types of binary log events shifted binlog positions.
Updated test's results and made it more robust for future similar
changes.
mysql-test/t/ctype_ucs.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
mysql-test/t/insert_select.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
mysql-test/t/loaddata.test:
Added test cases for new LOAD DATA functionality.
mysql-test/t/mix_innodb_myisam_binlog.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/mysqlbinlog.test:
New way of replicating LOAD DATA LOCAL. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
Thus we need new binlog positions for LOAD DATA events.
mysql-test/t/mysqlbinlog2.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/mysqldump.test:
Made the test more robust against failures of other tests.
mysql-test/t/rpl_charset.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/rpl_deadlock.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/rpl_error_ignored_table.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
mysql-test/t/rpl_flush_tables.test:
Addition of two new types of binary log events shifted binlog positions.
Made test more robust for future similar changes.
mysql-test/t/rpl_loaddata.test:
New way of replicating LOAD DATA. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
Appropriately updated the comments in the test.
mysql-test/t/rpl_loaddata_rule_m.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
Since LOAD DATA is now replicated in much the same way as a usual query,
--binlog_do/ignore_db work for it in the same way as for usual queries.
mysql-test/t/rpl_loaddata_rule_s.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/rpl_loaddatalocal.test:
Added a test for the case when it is important that LOAD DATA LOCAL
ignores duplicates.
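A sketch of the scenario (the actual test file differs in details such as
file names and cleanup):
CREATE TABLE t1 (a INT PRIMARY KEY);
INSERT INTO t1 VALUES (1);
# the data file contains the values 1 and 2
LOAD DATA LOCAL INFILE '<file>' INTO TABLE t1;
SELECT * FROM t1 ORDER BY a; # 1 and 2 on both master and slave
With LOCAL the server cannot abort in the middle of the client's file
transfer, so the duplicate key 1 must be ignored rather than cause an error,
and the slave must end up with the same rows as the master.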
mysql-test/t/rpl_log.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly (didn't dare to get rid of binlog positions
completely since it seems that this test uses them).
mysql-test/t/rpl_log_pos.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/rpl_multi_query.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly.
mysql-test/t/rpl_temporary.test:
Addition of two new types of binary log events shifted binlog positions.
Made test more robust for future similar changes.
mysql-test/t/rpl_timezone.test:
Addition of two new types of binary log events shifted binlog positions.
Made test more robust for future similar changes.
mysql-test/t/rpl_until.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and tweaked it a bit to bring it back to good
shape.
mysql-test/t/rpl_user_variables.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
mysql-test/t/user_var.test:
Addition of two new types of binary log events shifted binlog positions.
Updated test accordingly and made it more robust for future similar
changes.
sql/item_func.cc:
Added the Item_user_var_as_out_param class, which represents a user variable
that is used as an out parameter in LOAD DATA.
Moved code from the Item_func_set_user_var::update_hash() member function
into a separate static function to be able to reuse it in this new class.
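For orientation, the interface used from sql_load.cc looks roughly like this
(a sketch only; the authoritative declaration is the one this patch adds to
sql/item_func.h):
class Item_user_var_as_out_param :public Item
{
  LEX_STRING name;        // name of the user variable from the column list
  user_var_entry *entry;  // variable entry, looked up during fix_fields()
public:
  Item_user_var_as_out_param(LEX_STRING a) :name(a) {}
  /* called by read_sep_field() for every value read from the file */
  void set_value(const char *str, uint length, CHARSET_INFO *cs);
  /* called when the value read from the file is NULL (\N) */
  void set_null_value(CHARSET_INFO *cs);
};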
sql/item_func.h:
Added the Item_user_var_as_out_param class, which represents a user variable
that is used as an out parameter in LOAD DATA.
sql/log_event.cc:
New way of replicating LOAD DATA. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
sql/log_event.h:
New way of replicating LOAD DATA. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
sql/mysql_priv.h:
Now mysql_load() has two more arguments. They are needed to pass the list of
columns and the corresponding expressions from LOAD DATA's new SET clause.
sql/share/errmsg.txt:
Added a new error message which is used to forbid loading data from
fixed-length rows into variables.
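For illustration (not a test from this patch): with empty FIELDS TERMINATED
BY and ENCLOSED BY clauses the rows have fixed length, so reading into a
variable is now rejected with this error:
LOAD DATA INFILE 'words.dat' INTO TABLE t1
  FIELDS TERMINATED BY '' ENCLOSED BY '' (a, @b);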
sql/sql_lex.h:
Added the LEX::fname_start/fname_end members.
They are pointers to the part of the LOAD DATA statement which should be
rewritten during replication (the file name plus a little extra).
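To show how these offsets are meant to be used, here is a simplified sketch
(it is not the actual code in sql/log_event.cc, and it ignores the
duplicate-handling keyword that the real event also re-inserts):
#include <string>

/*
  Splice the slave-side temporary file name into the saved LOAD DATA text.
  fname_start/fname_end are byte offsets of the "INFILE '...'" part
  (plus a little extra) inside the original query string.
*/
std::string rewrite_load_data_query(const std::string &query,
                                    size_t fname_start, size_t fname_end,
                                    const std::string &tmp_file)
{
  return query.substr(0, fname_start) +
         " INFILE '" + tmp_file + "' " +
         query.substr(fname_end);
}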
sql/sql_load.cc:
Added support for extended LOAD DATA.
Now one can use user variables as targets for data loaded from a file
(besides the table's columns). LOAD DATA also got a new SET clause in which
one can specify values for table columns as expressions.
Updated mysql_load()/read_fixed_length()/read_sep_field() to support
this functionality (now they can read data from the file into both columns
and variables, and perform the calculations and assignments specified in
the SET clause).
We also use the new approach for LOAD DATA binlogging/replication.
sql/sql_parse.cc:
mysql_execute_command():
Since LOAD DATA now has a SET clause, we should also check permissions
for the tables used in its expressions. Also, mysql_load() has two more
arguments to pass information about this clause.
sql/sql_repl.cc:
New way of replicating LOAD DATA. Now we do it similarly to other
queries. We store the LOAD DATA query in the new Execute_load_query event
(which is the last in the sequence of events representing LOAD DATA).
When we execute this event we simply rewrite the part of the query which
holds the name of the file (we use the name of a temporary file) and then
execute it as a usual query. At the beginning of this sequence we use the
Begin_load_query event, which is almost identical to the Append_block event.
sql/sql_repl.h:
struct st_load_file_info:
Removed members which are no longer needed for LOAD DATA binlogging.
sql/sql_yacc.yy:
Added support for the extended LOAD DATA syntax. Now one can use
user variables as targets for data loaded from a file (besides the table's
columns). LOAD DATA also got a new SET clause in which one can specify
values for table columns as expressions.
For example the following is possible:
LOAD DATA INFILE 'words.dat' INTO TABLE t1 (a, @b) SET c = @b + 1;
Also, we now save pointers to the beginning and the end of the part of the
LOAD DATA statement which should be rewritten during replication.
Diffstat (limited to 'sql/sql_load.cc')
-rw-r--r-- | sql/sql_load.cc | 272 |
1 file changed, 190 insertions, 82 deletions
diff --git a/sql/sql_load.cc b/sql/sql_load.cc
index 174ccdfab5b..b27ba9c095f 100644
--- a/sql/sql_load.cc
+++ b/sql/sql_load.cc
@@ -72,18 +72,44 @@ public:
 };
 
 static int read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
-                             List<Item> &fields, READ_INFO &read_info,
+                             List<Item> &fields_vars, List<Item> &set_fields,
+                             List<Item> &set_values, READ_INFO &read_info,
                              ulong skip_lines,
                              bool ignore_check_option_errors);
 static int read_sep_field(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
-                          List<Item> &fields, READ_INFO &read_info,
+                          List<Item> &fields_vars, List<Item> &set_fields,
+                          List<Item> &set_values, READ_INFO &read_info,
                           String &enclosed, ulong skip_lines,
                           bool ignore_check_option_errors);
+
+/*
+  Execute LOAD DATA query
+
+  SYNOPSYS
+    mysql_load()
+      thd - current thread
+      ex  - sql_exchange object representing source file and its parsing rules
+      table_list - list of tables to which we are loading data
+      fields_vars - list of fields and variables to which we read
+                    data from file
+      set_fields  - list of fields mentioned in set clause
+      set_values  - expressions to assign to fields in previous list
+      handle_duplicates - indicates whenever we should emit error or
+                          replace row if we will meet duplicates.
+      ignore -          - indicates whenever we should ignore duplicates
+      read_file_from_client - is this LOAD DATA LOCAL ?
+      lock_type - what type of concurrency do we allow then we are inserting data
+
+  RETURN VALUES
+    TRUE - error / FALSE - success
+*/
+
 bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
-                List<Item> &fields, enum enum_duplicates handle_duplicates,
-                bool ignore,
-                bool read_file_from_client,thr_lock_type lock_type)
+                List<Item> &fields_vars, List<Item> &set_fields,
+                List<Item> &set_values,
+                enum enum_duplicates handle_duplicates, bool ignore,
+                bool read_file_from_client, thr_lock_type lock_type)
 {
   char name[FN_REFLEN];
   File file;
@@ -130,48 +156,80 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
     my_error(ER_NON_UPDATABLE_TABLE, MYF(0), table_list->alias, "LOAD");
     DBUG_RETURN(TRUE);
   }
+  /*
+    Let us emit an error if we are loading data to table which is used
+    in subselect in SET clause like we do it for INSERT.
+
+    The main thing to fix to remove this restriction is to ensure that the
+    table is marked to be 'used for insert' in which case we should never
+    mark this table as as 'const table' (ie, one that has only one row).
+  */
+  if (unique_table(table_list, table_list->next_global))
+  {
+    my_error(ER_UPDATE_TABLE_USED, MYF(0), table_list->table_name);
+    DBUG_RETURN(TRUE);
+  }
+
   table= table_list->table;
   transactional_table= table->file->has_transactions();
 
-  if (!fields.elements)
+  if (!fields_vars.elements)
   {
     Field **field;
     for (field=table->field; *field ; field++)
-      fields.push_back(new Item_field(*field));
+      fields_vars.push_back(new Item_field(*field));
+    table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;
+    /*
+      Let us also prepare SET clause, altough it is probably empty
+      in this case.
+    */
+    if (setup_fields(thd, 0, table_list, set_fields, 1, 0, 0) ||
+        setup_fields(thd, 0, table_list, set_values, 1, 0, 0))
+      DBUG_RETURN(TRUE);
   }
   else
   {                                             // Part field list
-    thd->dupp_field=0;
     /* TODO: use this conds for 'WITH CHECK OPTIONS' */
-    if (setup_fields(thd, 0, table_list, fields, 1, 0, 0))
+    if (setup_fields(thd, 0, table_list, fields_vars, 1, 0, 0) ||
+        setup_fields(thd, 0, table_list, set_fields, 1, 0, 0) ||
+        check_that_all_fields_are_given_values(thd, table))
       DBUG_RETURN(TRUE);
-    if (thd->dupp_field)
-    {
-      my_error(ER_FIELD_SPECIFIED_TWICE, MYF(0), thd->dupp_field->field_name);
-      DBUG_RETURN(TRUE);
-    }
-    if (check_that_all_fields_are_given_values(thd, table))
+    /*
+      Check whenever TIMESTAMP field with auto-set feature specified
+      explicitly.
+    */
+    if (table->timestamp_field &&
+        table->timestamp_field->query_id == thd->query_id)
+      table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;
+    /*
+      Fix the expressions in SET clause. This should be done after
+      check_that_all_fields_are_given_values() and setting use_timestamp
+      since it may update query_id for some fields.
+    */
+    if (setup_fields(thd, 0, table_list, set_values, 1, 0, 0))
       DBUG_RETURN(TRUE);
   }
 
   uint tot_length=0;
-  bool use_blobs=0,use_timestamp=0;
-  List_iterator_fast<Item> it(fields);
+  bool use_blobs= 0, use_vars= 0;
+  List_iterator_fast<Item> it(fields_vars);
+  Item *item;
 
-  Item_field *field;
-  while ((field=(Item_field*) it++))
+  while ((item= it++))
   {
-    if (field->field->flags & BLOB_FLAG)
+    if (item->type() == Item::FIELD_ITEM)
     {
-      use_blobs=1;
-      tot_length+=256;                          // Will be extended if needed
+      Field *field= ((Item_field*)item)->field;
+      if (field->flags & BLOB_FLAG)
+      {
+        use_blobs= 1;
+        tot_length+= 256;                       // Will be extended if needed
+      }
+      else
+        tot_length+= field->field_length;
     }
     else
-      tot_length+=field->field->field_length;
-    if (!field_term->length() && !(field->field->flags & NOT_NULL_FLAG))
-      field->field->set_notnull();
-    if (field->field == table->timestamp_field)
-      use_timestamp=1;
+      use_vars= 1;
   }
   if (use_blobs && !ex->line_term->length() && !field_term->length())
   {
@@ -179,6 +237,11 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
               MYF(0));
     DBUG_RETURN(TRUE);
   }
+  if (use_vars && !field_term->length() && !enclosed->length())
+  {
+    my_error(ER_LOAD_FROM_FIXED_SIZE_ROWS_TO_VAR, MYF(0));
+    DBUG_RETURN(TRUE);
+  }
 
   /* We can't give an error in the middle when using LOCAL files */
   if (read_file_from_client && handle_duplicates == DUP_ERROR)
@@ -251,12 +314,6 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
   if (mysql_bin_log.is_open())
   {
     lf_info.thd = thd;
-    lf_info.ex = ex;
-    lf_info.db = db;
-    lf_info.table_name = table_list->table_name;
-    lf_info.fields = &fields;
-    lf_info.ignore= ignore;
-    lf_info.handle_dup = handle_duplicates;
     lf_info.wrote_create_file = 0;
     lf_info.last_pos_in_file = HA_POS_ERROR;
     lf_info.log_delayed= transactional_table;
@@ -264,8 +321,6 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
   }
 #endif /*!EMBEDDED_LIBRARY*/
 
-  restore_record(table, s->default_values);
-
   thd->count_cuted_fields= CHECK_FIELD_WARN;    /* calc cuted fields */
   thd->cuted_fields=0L;
   /* Skip lines if there is a line terminator */
@@ -282,8 +337,6 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
 
   if (!(error=test(read_info.error)))
   {
-    if (use_timestamp)
-      table->timestamp_field_type= TIMESTAMP_NO_AUTO_SET;
 
     table->next_number_field=table->found_next_number_field;
     if (ignore ||
@@ -300,12 +353,13 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
                                               MODE_STRICT_ALL_TABLES)));
 
     if (!field_term->length() && !enclosed->length())
-      error= read_fixed_length(thd, info, table_list, fields,read_info,
+      error= read_fixed_length(thd, info, table_list, fields_vars,
+                               set_fields, set_values, read_info,
                                skip_lines, ignore);
     else
-      error= read_sep_field(thd, info, table_list, fields, read_info,
-                            *enclosed, skip_lines,
-                            ignore);
+      error= read_sep_field(thd, info, table_list, fields_vars,
+                            set_fields, set_values, read_info,
+                            *enclosed, skip_lines, ignore);
     if (table->file->end_bulk_insert())
       error=1;                                  /* purecov: inspected */
     ha_enable_transaction(thd, TRUE);
@@ -380,13 +434,19 @@ bool mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
     {
      /*
        As already explained above, we need to call end_io_cache() or the last
-        block will be logged only after Execute_load_log_event (which is wrong),
-        when read_info is destroyed.
+        block will be logged only after Execute_load_query_log_event (which is
+        wrong), when read_info is destroyed.
      */
      read_info.end_io_cache();
      if (lf_info.wrote_create_file)
      {
-        Execute_load_log_event e(thd, db, transactional_table);
+        Execute_load_query_log_event e(thd, thd->query, thd->query_length,
+                                       (char*)thd->lex->fname_start - (char*)thd->query,
+                                       (char*)thd->lex->fname_end - (char*)thd->query,
+                                       (handle_duplicates == DUP_REPLACE) ? LOAD_DUP_REPLACE :
+                                       (ignore ? LOAD_DUP_IGNORE :
+                                        LOAD_DUP_ERROR),
+                                       transactional_table, FALSE);
        mysql_bin_log.write(&e);
      }
    }
@@ -410,10 +470,11 @@ err:
 
 static int
 read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
-                  List<Item> &fields, READ_INFO &read_info, ulong skip_lines,
-                  bool ignore_check_option_errors)
+                  List<Item> &fields_vars, List<Item> &set_fields,
+                  List<Item> &set_values, READ_INFO &read_info,
+                  ulong skip_lines, bool ignore_check_option_errors)
 {
-  List_iterator_fast<Item> it(fields);
+  List_iterator_fast<Item> it(fields_vars);
   Item_field *sql_field;
   TABLE *table= table_list->table;
   ulonglong id;
@@ -421,11 +482,7 @@ read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
   DBUG_ENTER("read_fixed_length");
 
   id= 0;
-
-  /* No fields can be null in this format. mark all fields as not null */
-  while ((sql_field= (Item_field*) it++))
-      sql_field->field->set_notnull();
-
+
   while (!read_info.read_fixed_length())
   {
     if (thd->killed)
@@ -450,16 +507,28 @@ read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
     read_info.row_end[0]=0;
 #endif
     no_trans_update= !table->file->has_transactions();
+
+    restore_record(table, s->default_values);
+    /*
+      There is no variables in fields_vars list in this format so
+      this conversion is safe.
+    */
     while ((sql_field= (Item_field*) it++))
     {
       Field *field= sql_field->field;
+      /*
+        No fields specified in fields_vars list can be null in this format.
+        Mark field as not null, we should do this for each row because of
+        restore_record...
+      */
+      field->set_notnull();
+
       if (pos == read_info.row_end)
      {
        thd->cuted_fields++;                     /* Not enough fields */
        push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
                            ER_WARN_TOO_FEW_RECORDS,
                            ER(ER_WARN_TOO_FEW_RECORDS), thd->row_count);
-        field->reset();
      }
      else
      {
@@ -483,6 +552,9 @@ read_fixed_length(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
                            ER(ER_WARN_TOO_MANY_RECORDS), thd->row_count);
     }
 
+    if (fill_record(thd, set_fields, set_values, ignore_check_option_errors))
+      DBUG_RETURN(1);
+
     switch (table_list->view_check_option(thd,
                                           ignore_check_option_errors)) {
     case VIEW_CHECK_SKIP:
@@ -527,12 +599,13 @@ continue_loop:;
 
 static int
 read_sep_field(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
-               List<Item> &fields, READ_INFO &read_info,
+               List<Item> &fields_vars, List<Item> &set_fields,
+               List<Item> &set_values, READ_INFO &read_info,
                String &enclosed, ulong skip_lines,
                bool ignore_check_option_errors)
 {
-  List_iterator_fast<Item> it(fields);
-  Item_field *sql_field;
+  List_iterator_fast<Item> it(fields_vars);
+  Item *item;
   TABLE *table= table_list->table;
   uint enclosed_length;
   ulonglong id;
@@ -550,60 +623,95 @@ read_sep_field(THD *thd, COPY_INFO &info, TABLE_LIST *table_list,
       thd->send_kill_message();
       DBUG_RETURN(1);
     }
-    while ((sql_field=(Item_field*) it++))
+
+    restore_record(table, s->default_values);
+
+    while ((item= it++))
     {
       uint length;
      byte *pos;
 
      if (read_info.read_field())
        break;
+
+      /* If this line is to be skipped we don't want to fill field or var */
+      if (skip_lines)
+        continue;
+
      pos=read_info.row_start;
      length=(uint) (read_info.row_end-pos);
-      Field *field=sql_field->field;
 
      if (!read_info.enclosed &&
          (enclosed_length && length == 4 && !memcmp(pos,"NULL",4)) ||
          (length == 1 && read_info.found_null))
      {
-        field->reset();
-        field->set_null();
-        if (!field->maybe_null())
-        {
-          if (field->type() == FIELD_TYPE_TIMESTAMP)
-            ((Field_timestamp*) field)->set_time();
-          else if (field != table->next_number_field)
-            field->set_warning((uint) MYSQL_ERROR::WARN_LEVEL_WARN,
-                               ER_WARN_NULL_TO_NOTNULL, 1);
+        if (item->type() == Item::FIELD_ITEM)
+        {
+          Field *field= ((Item_field *)item)->field;
+          field->reset();
+          field->set_null();
+          if (!field->maybe_null())
+          {
+            if (field->type() == FIELD_TYPE_TIMESTAMP)
+              ((Field_timestamp*) field)->set_time();
+            else if (field != table->next_number_field)
+              field->set_warning((uint) MYSQL_ERROR::WARN_LEVEL_WARN,
+                                 ER_WARN_NULL_TO_NOTNULL, 1);
+          }
        }
+        else
+          ((Item_user_var_as_out_param *)item)->set_null_value(
+                                                  read_info.read_charset);
        continue;
      }
-      field->set_notnull();
-      read_info.row_end[0]=0;                   // Safe to change end marker
-      field->store((char*) read_info.row_start,length,read_info.read_charset);
+
+      if (item->type() == Item::FIELD_ITEM)
+      {
+        Field *field= ((Item_field *)item)->field;
+        field->set_notnull();
+        read_info.row_end[0]=0;                 // Safe to change end marker
+        field->store((char*) pos, length, read_info.read_charset);
+      }
+      else
+        ((Item_user_var_as_out_param *)item)->set_value((char*) pos, length,
+                                                        read_info.read_charset);
     }
     if (read_info.error)
       break;
     if (skip_lines)
     {
-      if (!--skip_lines)
-        thd->cuted_fields= 0L;                  // Reset warnings
+      skip_lines--;
       continue;
     }
-    if (sql_field)
-    {                                           // Last record
-      if (sql_field == (Item_field*) fields.head())
+    if (item)
+    {
+      /* Have not read any field, thus input file is simply ended */
+      if (item == fields_vars.head())
        break;
-      for (; sql_field ; sql_field=(Item_field*) it++)
+      for (; item ; item= it++)
      {
-        sql_field->field->set_null();
-        sql_field->field->reset();
-        thd->cuted_fields++;
-        push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
-                            ER_WARN_TOO_FEW_RECORDS,
-                            ER(ER_WARN_TOO_FEW_RECORDS), thd->row_count);
+        if (item->type() == Item::FIELD_ITEM)
+        {
+          /*
+            QQ: We probably should not throw warning for each field.
+            But how about intention to always have the same number
+            of warnings in THD::cuted_fields (and get rid of cuted_fields
+            in the end ?)
+          */
+          thd->cuted_fields++;
+          push_warning_printf(thd, MYSQL_ERROR::WARN_LEVEL_WARN,
+                              ER_WARN_TOO_FEW_RECORDS,
+                              ER(ER_WARN_TOO_FEW_RECORDS), thd->row_count);
+        }
+        else
+          ((Item_user_var_as_out_param *)item)->set_null_value(
+                                                  read_info.read_charset);
      }
    }
+
+    if (fill_record(thd, set_fields, set_values, ignore_check_option_errors))
+      DBUG_RETURN(1);
+
    switch (table_list->view_check_option(thd,
                                          ignore_check_option_errors)) {
    case VIEW_CHECK_SKIP: