Commit b3a97496 authored by unknown

Fixed error message when opening a non-MyISAM file.

Extended MODIFY and CHANGE in ALTER TABLE to accept the AFTER keyword.
Extended MyISAM to handle records > 16M.


Docs/manual.texi:
  Updated state of different modules.
  Rewrote description of 'perror' and 'Packet too large'.
myisam/mi_dynrec.c:
  Extended MyISAM to handle records > 16M
myisam/mi_open.c:
  Fix error message when opening a non-MyISAM file.
myisam/myisamdef.h:
  Extended MyISAM to handle records > 16M
mysql-test/r/alter_table.result:
  Added test for CHANGE col ... AFTER
mysql-test/t/alter_table.test:
  Added test for CHANGE col ... AFTER
sql/sql_table.cc:
  Extended MODIFY and CHANGE in ALTER TABLE to accept the AFTER keyword.
sql/sql_yacc.yy:
  Extended MODIFY and CHANGE in ALTER TABLE to accept the AFTER keyword.
BitKeeper/etc/logging_ok:
  Logging to logging@openlogging.org accepted
parent 440de598
@@ -33,3 +33,4 @@ tonu@x153.internalnet
 tonu@x3.internalnet
 Administrator@co3064164-a.
 Administrator@co3064164-a.rochd1.qld.optushome.com.au
+monty@tramp.mysql.fi
@@ -880,52 +880,12 @@ version, with the exception of the bugs listed in the bugs section, which
 are things that are design-related. @xref{Bugs}.
 MySQL is written in multiple layers and different independent
-modules. These modules are listed below with an indication of how
+modules. Some of the new modules are listed below with an indication of how
 well-tested each of them is:
 @cindex modules, list of
 @table @strong
-@item The ISAM table handler --- Stable
-This manages storage and retrieval of all data in MySQL Version 3.22
-and earlier. In all MySQL releases there hasn't been a single
-(reported) bug in this code. The only known way to get a corrupted table
-is to kill the server in the middle of an update. Even that is unlikely
-to destroy any data beyond rescue, because all data are flushed to disk
-between each query. There hasn't been a single bug report about lost data
-because of bugs in MySQL.
-@cindex ISAM table handler
-@cindex storing, data
-@cindex retrieving, data
-@cindex data, ISAM table handler
-@item The MyISAM table handler --- Stable
-This is new in MySQL Version 3.23. It's largely based on the ISAM
-table code but has a lot of new and very useful features.
-@item The parser and lexical analyser --- Stable
-There hasn't been a single reported bug in this system for a long time.
-@item The C client code --- Stable
-No known problems. In early Version 3.20 releases, there were some limitations
-in the send/receive buffer size. As of Version 3.21, the buffer size is now
-dynamic up to a default of 16M.
-@item Standard client programs --- Stable
-These include @code{mysql}, @code{mysqladmin}, @code{mysqlshow},
-@code{mysqldump}, and @code{mysqlimport}.
-@item Basic SQL --- Stable
-The basic SQL function system and string classes and dynamic memory
-handling. Not a single reported bug in this system.
-@item Query optimizer --- Stable
-@item Range optimizer --- Stable
-@item Join optimizer --- Stable
 @item Locking --- Gamma
 This is very system-dependent. On some systems there are big problems
 using standard OS locking (@code{fcntl()}). In these cases, you should run the
@@ -933,79 +893,33 @@ MySQL daemon with the @code{--skip-locking} flag. Problems are known
 to occur on some Linux systems, and on SunOS when using NFS-mounted file
 systems.
-@item Linux threads --- Stable
-The major problem found has been with the @code{fcntl()} call, which is
-fixed by using the @w{@code{--skip-locking}} option to
-@code{mysqld}. Some people have reported lockup problems with Version 0.5.
-LinuxThreads will need to be recompiled if you plan to use
-1000+ concurrent connections. Although it is possible to run that many
-connections with the default LinuxThreads (however, you will never go
-above 1021), the default stack spacing of 2 MB makes the application
-unstable, and we have been able to reproduce a coredump after creating
-1021 idle connections. @xref{Linux}.
-@item Solaris 2.5+ pthreads --- Stable
-We use this for all our production work.
-@item MIT-pthreads (Other systems) --- Stable
-There have been no reported bugs since Version 3.20.15 and no known bugs since
-Version 3.20.16. On some systems, there is a ``misfeature'' where some
-operations are quite slow (a 1/20 second sleep is done between each query).
-Of course, MIT-pthreads may slow down everything a bit, but index-based
-@code{SELECT} statements are usually done in one time frame so there shouldn't
-be a mutex locking/thread juggling.
-@item Other thread implementions --- Beta - Gamma
-The ports to other systems are still very new and may have bugs, possibly
-in MySQL, but most often in the thread implementation itself.
-@item @code{LOAD DATA ...}, @code{INSERT ... SELECT} --- Stable
-Some people thought they had found bugs here, but these usually have
-turned out to be misunderstandings. Please check the manual before reporting
-problems!
-@item @code{ALTER TABLE} --- Stable
-Small changes in Version 3.22.12.
-@item DBD --- Stable
-Now maintained by Jochen Wiedmann
-(@email{wiedmann@@neckar-alb.de}). Thanks!
-@item @code{mysqlaccess} --- Stable
-Written and maintained by Yves Carlier
-(@email{Yves.Carlier@@rug.ac.be}). Thanks!
-@item @code{GRANT} --- Stable
-Big changes made in MySQL Version 3.22.12.
-@item @strong{MyODBC} (uses ODBC SDK 2.5) --- Gamma
+@item @strong{MyODBC 2.50} (uses ODBC SDK 2.5) --- Gamma
 It seems to work well with some programs.
-@item Replication -- Beta / Gamma
+@item Replication -- Gamma
 We are still working on replication, so don't expect this to be rock
 solid yet. On the other hand, some MySQL users are already
 using this with good results.
-@item BDB Tables -- Beta
+@item BDB Tables -- Gamma
 The Berkeley DB code is very stable, but we are still improving the interface
 between MySQL and BDB tables, so it will take some time before this
 is tested as well as the other table types.
-@item InnoDB Tables -- Beta
+@item InnoDB Tables -- Gamma
 This is a recent addition to @code{MySQL}. They appear to work well and
 can be used after some initial testing.
-@item Automatic recovery of MyISAM tables - Beta
+@item Automatic recovery of MyISAM tables - Gamma
 This only affects the new code that checks if the table was closed properly
 on open and executes an automatic check/repair of the table if it wasn't.
-@item MERGE tables -- Beta / Gamma
-The usage of keys on @code{MERGE} tables is still not well tested. The
-other parts of the @code{MERGE} code are quite well tested.
 @item FULLTEXT -- Beta
 Text search seems to work, but is still not widely used.
-@item Bulk-insert - Alpha
-New feature in MyISAM in MySQL 4.0 for faster insert of many rows.
 @end table
 MySQL AB provides high-quality support for paying customers, but the
@@ -9505,6 +9419,13 @@ or
 shell> mysqladmin -h 'your-host-name' variables
 @end example
+If you get @code{Errcode 13}, which means @code{Permission denied}, when
+starting @code{mysqld}, this means that you don't have the right to
+read or create files in the MySQL database or log directory. In this case
+you should either start @code{mysqld} as the root user or change the
+permissions of the involved files and directories so that you have the
+right to use them.
 If @code{safe_mysqld} starts the server but you can't connect to it,
 you should make sure you have an entry in @file{/etc/hosts} that looks like
 this:
@@ -20984,11 +20905,12 @@ names will be case-insensitive.
 @item @code{max_allowed_packet}
 The maximum size of one packet. The message buffer is initialized to
-@code{net_buffer_length} bytes, but can grow up to @code{max_allowed_packet}
-bytes when needed. This value by default is small, to catch big (possibly
-wrong) packets. You must increase this value if you are using big
-@code{BLOB} columns. It should be as big as the biggest @code{BLOB} you want
-to use. The current protocol limits @code{max_allowed_packet} to 16M.
+@code{net_buffer_length} bytes, but can grow up to
+@code{max_allowed_packet} bytes when needed. This value is small by
+default, to catch big (possibly wrong) packets. You must increase this
+value if you are using big @code{BLOB} columns. It should be as big as
+the biggest @code{BLOB} you want to use. The protocol limit for
+@code{max_allowed_packet} is 16M in MySQL 3.23 and 4G in MySQL 4.0.
 @item @code{max_binlog_cache_size}
 If a multi-statement transaction requires more than this amount of memory,
@@ -23869,22 +23791,32 @@ argument).
 @cindex error messages, displaying
 @cindex perror
-@code{perror} can be used to print error message(s). @code{perror} can
-be invoked like this:
+@cindex errno
+@cindex Errcode
+For most system errors MySQL will, in addition to an internal text message,
+also print the system error code in one of the following styles:
+@code{message ... (errno: #)} or @code{message ... (Errcode: #)}.
+You can find out what the error code means by examining the
+documentation for your system or by using the @code{perror} utility.
+@code{perror} prints a description for a system error code or a MyISAM/ISAM
+table handler error code.
+@code{perror} is invoked like this:
 @example
 shell> perror [OPTIONS] [ERRORCODE [ERRORCODE...]]
-For example:
-shell> perror 64 79
+Example:
+shell> perror 13 64
+Error code  13:  Permission denied
 Error code  64:  Machine is not on the network
-Error code  79:  Can not access a needed shared library
 @end example
-@code{perror} can be used to display a description for a system error
-code, or an MyISAM/ISAM table handler error code. The error messages
-are mostly system dependent.
+Note that the error messages are mostly system dependent!
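For the plain system error codes, the description @code{perror} prints corresponds to the C library's @code{strerror()} mapping. A minimal standalone sketch (this is not the actual @code{perror} source, and the MyISAM/ISAM handler error codes are not covered; the message text is system dependent):

```c
#include <stdio.h>
#include <string.h>

/* Format one line the way perror prints it, using the C library's
   strerror() table for system error codes. perror additionally knows
   the MyISAM/ISAM handler codes, which strerror() does not. */
static void describe_error(char *buf, size_t buflen, int code)
{
    snprintf(buf, buflen, "Error code %3d:  %s", code, strerror(code));
}
```

On most Unix systems `describe_error(buf, sizeof buf, 13)` produces a "Permission denied" message matching the example above, but the exact wording varies by platform.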
 @node Batch Commands, , perror, Client-Side Scripts
@@ -26179,8 +26111,8 @@ Constant condition removal (needed because of constant folding):
 Constant expressions used by indexes are evaluated only once.
 @item
 @code{COUNT(*)} on a single table without a @code{WHERE} is retrieved
-directly from the table information. This is also done for any @code{NOT NULL}
-expression when used with only one table.
+directly from the table information for MyISAM and HEAP tables. This is
+also done for any @code{NOT NULL} expression when used with only one table.
 @item
 Early detection of invalid constant expressions. MySQL quickly
 detects that some @code{SELECT} statements are impossible and returns no rows.
@@ -26389,7 +26321,7 @@ key value changes. In this case @code{LIMIT #} will not calculate any
 unnecessary @code{GROUP BY}'s.
 @item
 As soon as MySQL has sent the first @code{#} rows to the client, it
-will abort the query.
+will abort the query (if you are not using @code{SQL_CALC_FOUND_ROWS}).
 @item
 @code{LIMIT 0} will always quickly return an empty set. This is useful
 to check the query and to get the column types of the result columns.
@@ -26445,7 +26377,7 @@ If you are inserting a lot of rows from different clients, you can get
 higher speed by using the @code{INSERT DELAYED} statement. @xref{INSERT,
 , @code{INSERT}}.
 @item
-Note that with @code{MyISAM} you can insert rows at the same time
+Note that with @code{MyISAM} tables you can insert rows at the same time
 @code{SELECT}s are running if there are no deleted rows in the tables.
 @item
 When loading a table from a text file, use @code{LOAD DATA INFILE}. This
@@ -26487,8 +26419,11 @@ Execute a @code{FLUSH TABLES} statement or the shell command @code{mysqladmin
 flush-tables}.
 @end enumerate
-This procedure will be built into @code{LOAD DATA INFILE} in some future
-version of MySQL.
+Note that @code{LOAD DATA INFILE} also does the above optimization if
+you insert into an empty table; the main difference from the above
+procedure is that you can let @code{myisamchk} allocate much more
+temporary memory for the index creation than you may want MySQL to
+allocate for every index re-creation.
 Since @strong{MySQL 4.0} you can also use
 @code{ALTER TABLE tbl_name DISABLE KEYS} instead of
@@ -26497,7 +26432,8 @@ Since @strong{MySQL 4.0} you can also use
 @code{myisamchk -r -q /path/to/db/tbl_name}. This way you can also skip
 @code{FLUSH TABLES} steps.
 @item
-You can speed up insertions by locking your tables:
+You can speed up insertions that are done over multiple statements by
+locking your tables:
 @example
 mysql> LOCK TABLES a WRITE;
@@ -26512,6 +26448,9 @@ be as many index buffer flushes as there are different @code{INSERT}
 statements. Locking is not needed if you can insert all rows with a single
 statement.
+For transactional tables, you should use @code{BEGIN/COMMIT} instead of
+@code{LOCK TABLES} to get a speedup.
 Locking will also lower the total time of multi-connection tests, but the
 maximum wait time for some threads will go up (because they wait for
 locks). For example:
@@ -26589,7 +26528,7 @@ Always check that all your queries really use the indexes you have created
 in the tables. In MySQL you can do this with the @code{EXPLAIN}
 command. @xref{EXPLAIN, Explain, Explain, manual}.
 @item
-Try to avoid complex @code{SELECT} queries on tables that are updated a
+Try to avoid complex @code{SELECT} queries on MyISAM tables that are updated a
 lot. This is to avoid problems with table locking.
 @item
 The new @code{MyISAM} tables can insert rows in a table without deleted
@@ -35518,7 +35457,7 @@ using @code{myisampack}. @xref{Compressed format}.
 ALTER [IGNORE] TABLE tbl_name alter_spec [, alter_spec ...]
 alter_specification:
-        ADD [COLUMN] create_definition [FIRST | AFTER column_name ]
+        ADD [COLUMN] create_definition [FIRST | AFTER column_name]
 or      ADD [COLUMN] (create_definition, create_definition,...)
 or      ADD INDEX [index_name] (index_col_name,...)
 or      ADD PRIMARY KEY (index_col_name,...)
@@ -35527,8 +35466,8 @@ alter_specification:
 or      ADD [CONSTRAINT symbol] FOREIGN KEY index_name (index_col_name,...)
             [reference_definition]
 or      ALTER [COLUMN] col_name @{SET DEFAULT literal | DROP DEFAULT@}
-or      CHANGE [COLUMN] old_col_name create_definition
-or      MODIFY [COLUMN] create_definition
+or      CHANGE [COLUMN] old_col_name create_definition [FIRST | AFTER column_name]
+or      MODIFY [COLUMN] create_definition [FIRST | AFTER column_name]
 or      DROP [COLUMN] col_name
 or      DROP PRIMARY KEY
 or      DROP INDEX index_name
@@ -45552,20 +45491,31 @@ more on the server).
 @node Packet too large, Communication errors, Out of memory, Common errors
 @appendixsubsec @code{Packet too large} Error
+A communication packet is a single SQL statement sent to the MySQL server
+or a single row that is sent to the client.
 When a MySQL client or the @code{mysqld} server gets a packet bigger
-than @code{max_allowed_packet} bytes, it issues a @code{Packet too large}
-error and closes the connection.
-If you are using the @code{mysql} client, you may specify a bigger buffer by
-starting the client with @code{mysql --set-variable=max_allowed_packet=8M}.
-If you are using other clients that do not allow you to specify the maximum
-packet size (such as @code{DBI}), you need to set the packet size when you
-start the server. You can use a command-line option to @code{mysqld} to set
-@code{max_allowed_packet} to a larger size. For example, if you are
-expecting to store the full length of a @code{BLOB} into a table, you'll need
-to start the server with the @code{--set-variable=max_allowed_packet=16M}
-option.
+than @code{max_allowed_packet} bytes, it issues a @code{Packet too
+large} error and closes the connection. With some clients, you may also
+get a @code{Lost connection to MySQL server during query} error if the
+communication packet is too big.
+Note that the client and the server each have their own
+@code{max_allowed_packet} variable. If you want to handle big packets,
+you have to increase this variable both in the client and in the server.
+It's safe to increase this variable, as memory is only allocated when
+needed; this variable is more a precaution to catch wrong packets
+between the client and server, and to ensure that you don't accidentally
+use big packets and run out of memory.
+If you are using the @code{mysql} client, you may specify a bigger
+buffer by starting the client with
+@code{mysql --set-variable=max_allowed_packet=8M}. Other clients have
+different methods to set this variable.
+You can use an option file to set @code{max_allowed_packet} to a larger
+size in @code{mysqld}. For example, if you are expecting to store the
+full length of a @code{MEDIUMBLOB} into a table, you'll need to start
+the server with the @code{set-variable=max_allowed_packet=16M} option.
 You can also get strange problems with large packets if you are using
 big blobs, but you haven't given @code{mysqld} access to enough memory
@@ -45573,6 +45523,7 @@ to handle the query. If you suspect this is the case, try adding
 @code{ulimit -d 256000} to the beginning of the @code{safe_mysqld} script
 and restart @code{mysqld}.
 @node Communication errors, Full table, Packet too large, Common errors
 @appendixsubsec Communication Errors / Aborted Connection
@@ -48706,6 +48657,9 @@ Our TODO section contains what we plan to have in 4.0. @xref{TODO MySQL 4.0}.
 @itemize @bullet
 @item
 Added boolean fulltext search code. It should be considered early alpha.
+@item
+Extended @code{MODIFY} and @code{CHANGE} in @code{ALTER TABLE} to accept
+the @code{AFTER} keyword.
 @end itemize
 @node News-4.0.0, , News-4.0.1, News-4.0.x
@@ -190,6 +190,8 @@ static int _mi_find_writepos(MI_INFO *info,
     my_errno=HA_ERR_RECORD_FILE_FULL;
     DBUG_RETURN(-1);
   }
+  if (*length > MI_MAX_BLOCK_LENGTH)
+    *length=MI_MAX_BLOCK_LENGTH;
   info->state->data_file_length+= *length;
   info->s->state.split++;
   info->update|=HA_STATE_WRITE_AT_END;
@@ -369,6 +371,16 @@ int _mi_write_part_record(MI_INFO *info,
   next_filepos=info->s->state.dellink != HA_OFFSET_ERROR ?
     info->s->state.dellink : info->state->data_file_length;
   if (*flag == 0)				/* First block */
+  {
+    if (*reclength > MI_MAX_BLOCK_LENGTH)
+    {
+      head_length= 16;
+      temp[0]=13;
+      mi_int4store(temp+1,*reclength);
+      mi_int3store(temp+4,length-head_length);
+      mi_sizestore((byte*) temp+8,next_filepos);
+    }
+    else
   {
     head_length=5+8+long_block*2;
     temp[0]=5+(uchar) long_block;
@@ -385,6 +397,7 @@ int _mi_write_part_record(MI_INFO *info,
       mi_sizestore((byte*) temp+5,next_filepos);
     }
   }
+  }
   else
   {
     head_length=3+8+long_block;
@@ -1433,10 +1446,10 @@ uint _mi_get_block_info(MI_BLOCK_INFO *info, File file, my_off_t filepos)
   }
   else
   {
-    if (info->header[0] > 6)
+    if (info->header[0] > 6 && info->header[0] != 13)
       return_val=BLOCK_SYNC_ERROR;
   }
-  info->next_filepos= HA_OFFSET_ERROR;	/* Dummy ifall no next block */
+  info->next_filepos= HA_OFFSET_ERROR;	/* Dummy if no next block */
   switch (info->header[0]) {
   case 0:
@@ -1470,6 +1483,14 @@ uint _mi_get_block_info(MI_BLOCK_INFO *info, File file, my_off_t filepos)
     info->filepos=filepos+4;
     return return_val | BLOCK_FIRST | BLOCK_LAST;
+  case 13:
+    info->rec_len=mi_uint4korr(header+1);
+    info->block_len=info->data_len=mi_uint3korr(header+5);
+    info->next_filepos=mi_sizekorr(header+8);
+    info->second_read=1;
+    info->filepos=filepos+16;
+    return return_val | BLOCK_FIRST;
   case 3:
     info->rec_len=info->data_len=mi_uint2korr(header+1);
     info->block_len=info->rec_len+ (uint) header[3];
......
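The new type-13 block header written and parsed above is what lifts the 16M record limit: the full record length gets a 4-byte field instead of having to fit the old 3-byte headers. A standalone sketch of the layout, with the big-endian store/fetch helpers re-implemented here for illustration (the real code uses MySQL's `mi_int4store`, `mi_int3store` and `mi_sizestore` macros, and `mi_sizestore` writes a full file offset, not the 4 bytes assumed below):

```c
#include <string.h>

typedef unsigned char uchar;
typedef unsigned long ulong;

/* Big-endian store/fetch helpers, standing in for MyISAM's macros. */
static void int3store(uchar *p, ulong v)
{ p[0] = (uchar)(v >> 16); p[1] = (uchar)(v >> 8); p[2] = (uchar)v; }
static void int4store(uchar *p, ulong v)
{ p[0] = (uchar)(v >> 24); p[1] = (uchar)(v >> 16);
  p[2] = (uchar)(v >> 8);  p[3] = (uchar)v; }
static ulong uint3korr(const uchar *p)
{ return ((ulong)p[0] << 16) | ((ulong)p[1] << 8) | p[2]; }
static ulong uint4korr(const uchar *p)
{ return ((ulong)p[0] << 24) | ((ulong)p[1] << 16)
       | ((ulong)p[2] << 8)  | p[3]; }

/* Pack a 16-byte type-13 header: byte 0 is the block type, bytes 1-4
   the total record length (4 bytes, so records > 16M fit), bytes 5-7
   the data length of this block (still capped at MI_MAX_BLOCK_LENGTH),
   bytes 8+ the position of the next block of the record. */
static void pack_header13(uchar header[16], ulong reclength,
                          ulong block_data_len, ulong next_filepos)
{
    memset(header, 0, 16);
    header[0] = 13;
    int4store(header + 1, reclength);
    int3store(header + 5, block_data_len);
    int4store(header + 8, next_filepos);  /* sketch: 4 bytes, not 8 */
}
```

Decoding with `uint4korr(header+1)` and `uint3korr(header+5)` mirrors what the `case 13:` branch in `_mi_get_block_info()` does with the real macros.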
@@ -115,7 +115,7 @@ MI_INFO *mi_open(const char *name, int mode, uint open_flags)
     DBUG_PRINT("error",("Wrong header in %s",name_buff));
     DBUG_DUMP("error_dump",(char*) share->state.header.file_version,
               head_length);
-    my_errno=HA_ERR_CRASHED;
+    my_errno=HA_ERR_WRONG_TABLE_DEF;
     goto err;
   }
   share->options= mi_uint2korr(share->state.header.options);
......
@@ -356,7 +356,8 @@ struct st_myisam_info {
 #define MI_DYN_MAX_BLOCK_LENGTH	((1L << 24)-4L)
 #define MI_DYN_MAX_ROW_LENGTH	(MI_DYN_MAX_BLOCK_LENGTH - MI_SPLIT_LENGTH)
 #define MI_DYN_ALIGN_SIZE	4	/* Align blocks on this */
-#define MI_MAX_DYN_HEADER_BYTE	12	/* max header byte for dynamic rows */
+#define MI_MAX_DYN_HEADER_BYTE	13	/* max header byte for dynamic rows */
+#define MI_MAX_BLOCK_LENGTH	(((ulong) 1 << 24)-1)
 #define MEMMAP_EXTRA_MARGIN	7	/* Write this as a suffix for file */
......
@@ -6,10 +6,16 @@ col3 varchar (20) not null,
 col4 varchar(4) not null,
 col5 enum('PENDING', 'ACTIVE', 'DISABLED') not null,
 col6 int not null, to_be_deleted int);
+insert into t1 values (2,4,3,5,"PENDING",1,7);
 alter table t1
 add column col4_5 varchar(20) not null after col4,
-add column col7 varchar(30) not null after col6,
-add column col8 datetime not null, drop column to_be_deleted;
+add column col7 varchar(30) not null after col5,
+add column col8 datetime not null, drop column to_be_deleted,
+change column col2 fourth varchar(30) not null after col3,
+modify column col6 int not null first;
+select * from t1;
+col6	col1	col3	fourth	col4	col4_5	col5	col7	col8
+1	2	3	4	5		PENDING		0000-00-00 00:00:00
 drop table t1;
 create table t1 (bandID MEDIUMINT UNSIGNED NOT NULL PRIMARY KEY, payoutID SMALLINT UNSIGNED NOT NULL);
 insert into t1 (bandID,payoutID) VALUES (1,6),(2,6),(3,4),(4,9),(5,10),(6,1),(7,12),(8,12);
......
@@ -10,10 +10,14 @@ col3 varchar (20) not null,
 col4 varchar(4) not null,
 col5 enum('PENDING', 'ACTIVE', 'DISABLED') not null,
 col6 int not null, to_be_deleted int);
+insert into t1 values (2,4,3,5,"PENDING",1,7);
 alter table t1
 add column col4_5 varchar(20) not null after col4,
-add column col7 varchar(30) not null after col6,
-add column col8 datetime not null, drop column to_be_deleted;
+add column col7 varchar(30) not null after col5,
+add column col8 datetime not null, drop column to_be_deleted,
+change column col2 fourth varchar(30) not null after col3,
+modify column col6 int not null first;
+select * from t1;
 drop table t1;
 create table t1 (bandID MEDIUMINT UNSIGNED NOT NULL PRIMARY KEY, payoutID SMALLINT UNSIGNED NOT NULL);
......
@@ -1273,9 +1273,12 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
       def->field=field;
       if (def->sql_type == FIELD_TYPE_TIMESTAMP)
         use_timestamp=1;
+      if (!def->after)
+      {
         create_list.push_back(def);
         def_it.remove();
       }
+    }
     else
     {						// Use old field value
       create_list.push_back(def=new create_field(field,field));
@@ -1305,7 +1308,7 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
   List_iterator<create_field> find_it(create_list);
   while ((def=def_it++))			// Add new columns
   {
-    if (def->change)
+    if (def->change && ! def->field)
     {
       my_error(ER_BAD_FIELD_ERROR,MYF(0),def->change,table_name);
       DBUG_RETURN(-1);
......
@@ -1138,7 +1138,7 @@ alter_list_item:
	  LEX *lex=Lex;
	  lex->change= $3.str; lex->simple_alter=0;
	}
-	field_spec
+	field_spec opt_place
	| MODIFY_SYM opt_column field_ident
	{
	  LEX *lex=Lex;
@@ -1157,6 +1157,7 @@ alter_list_item:
	    YYABORT;
	  lex->simple_alter=0;
	}
+	opt_place
	| DROP opt_column field_ident opt_restrict
	{
	  LEX *lex=Lex;
@@ -2831,6 +2832,7 @@ keyword:
	| BACKUP_SYM {}
	| BEGIN_SYM {}
	| BERKELEY_DB_SYM {}
+	| BINLOG_SYM {}
	| BIT_SYM {}
	| BOOL_SYM {}
	| BOOLEAN_SYM {}
@@ -2857,6 +2859,7 @@ keyword:
	| END {}
	| ENUM {}
	| ESCAPE_SYM {}
+	| EVENTS_SYM {}
	| EXTENDED_SYM {}
	| FAST_SYM {}
	| FULL {}
......