Merge nusphere@work.mysql.com:/home/bk/mysql

into nslinuxw2.bedford.progress.com:/users/devp/yfaktoro/bk/local
parents d250a4ee 2f5ab461
@@ -8750,7 +8750,7 @@ It will also not do anything if you already have MySQL privilege
tables installed!
If you want to re-create your privilege tables, you should take down
the mysqld server, if it's running, and then do something like:
@example
mv mysql-data-directory/mysql mysql-data-directory/mysql-old
@@ -22311,7 +22311,7 @@ above holds only if the columns are part of the same index.
@item
The @code{PRIMARY KEY} will be faster than any other key, as the
@code{PRIMARY KEY} is stored together with the row data. As the other keys are
stored as the key data + the @code{PRIMARY KEY}, it's important to keep the
@code{PRIMARY KEY} as short as possible to save disk and get better speed.
@item
@code{LOCK TABLES} works on @code{BDB} tables as with other tables. If
@@ -26304,7 +26304,7 @@ option to @code{mysqld}, or by setting the SQL option
temporary table was @code{record_buffer*16}, so if you are using this
version, you have to increase the value of @code{record_buffer}. You can
also start @code{mysqld} with the @code{--big-tables} option to always
store temporary tables on disk. However, this will affect the speed of
many complicated queries.
@item
@@ -26320,7 +26320,7 @@ unexpectedly large strings (this is done with @code{malloc()} and
@item
Each index file is opened once and the data file is opened once for each
concurrently running thread. For each concurrent thread, a table structure,
column structures for each column, and a buffer of size @code{3 * n} is
allocated (where @code{n} is the maximum row length, not counting @code{BLOB}
columns). A @code{BLOB} uses 5 to 8 bytes plus the length of the @code{BLOB}
@@ -26356,7 +26356,7 @@ be no memory leaks.
@cindex locking, tables
@cindex tables, locking
@node Internal locking, Table locking, Memory use, System
@subsection How MySQL Locks Tables
You can find a discussion about different locking methods in the appendix.
@xref{Locking methods}.
@@ -26412,28 +26412,28 @@ priority, which might help some applications.
@cindex problems, table locking
@node Table locking, , Internal locking, System
@subsection Table Locking Issues
The table locking code in @strong{MySQL} is deadlock free.
@strong{MySQL} uses table locking (instead of row locking or column
locking) on all table types, except @code{BDB} tables, to achieve a very
high lock speed. For large tables, table locking is MUCH better than
row locking for most applications, but there are, of course, some
pitfalls.
For @code{BDB} tables, @strong{MySQL} only uses table locking if you
explicitly lock the table with @code{LOCK TABLES} or execute a command that
will modify every row in the table, like @code{ALTER TABLE}.
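For illustration, an explicit lock that forces table locking on a
@code{BDB} table might look like this (the table name @code{trans} is
invented for the example):

@example
mysql> LOCK TABLES trans WRITE;
mysql> UPDATE trans SET balance=balance-100 WHERE id=1;
mysql> UNLOCK TABLES;
@end example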
In @strong{MySQL} Version 3.23.7 and above, you can insert rows into
@code{MyISAM} tables at the same time other threads are reading from
the table. Note that currently this only works if there are no holes after
deleted rows in the table at the time the insert is made.
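One way to remove such holes is to defragment the table; assuming a
@code{MyISAM} table named @code{log_table} (the name is just an example):

@example
mysql> OPTIMIZE TABLE log_table;
@end example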
Table locking enables many threads to read from a table at the same
time, but if a thread wants to write to a table, it must first get
exclusive access. During the update, all other threads that want to
access this particular table will wait until the update is ready.
As updates on tables normally are considered to be more important than
@@ -26442,11 +26442,11 @@ than statements that retrieve information from a table. This should
ensure that updates are not 'starved' because one issues a lot of heavy
queries against a specific table. (You can change this by using
@code{LOW_PRIORITY} with the statement that does the update or
@code{HIGH_PRIORITY} with the @code{SELECT} statement.)
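As a sketch, with an invented table name:

@example
mysql> UPDATE LOW_PRIORITY counters SET hits=hits+1 WHERE page=1;
mysql> SELECT HIGH_PRIORITY * FROM counters WHERE page=1;
@end example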
Starting from @strong{MySQL} Version 3.23.7, one can use the
@code{max_write_lock_count} variable to force @strong{MySQL} to
temporarily give all @code{SELECT} statements that wait for a table a
higher priority after a specific number of inserts on a table.
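For example, one could start the server with something like the
following (the value 100 is arbitrary):

@example
shell> safe_mysqld --set-variable max_write_lock_count=100 &
@end example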
Table locking is, however, not very good under the following scenario:
@@ -26455,10 +26455,10 @@ Table locking is, however, not very good under the following scenario:
@item
A client issues a @code{SELECT} that takes a long time to run.
@item
Another client then issues an @code{UPDATE} on a used table. This client
will wait until the @code{SELECT} is finished.
@item
Another client issues another @code{SELECT} statement on the same table. As
@code{UPDATE} has higher priority than @code{SELECT}, this @code{SELECT}
will wait for the @code{UPDATE} to finish. It will also wait for the first
@code{SELECT} to finish!
@@ -26468,7 +26468,7 @@ Some possible solutions to this problem are:
@itemize @bullet
@item
Try to get the @code{SELECT} statements to run faster. You may have to create
some summary tables to do this.
@item
@@ -26478,7 +26478,7 @@ statement. In this case the last @code{SELECT} statement in the previous
scenario would execute before the @code{INSERT} statement.
@item
You can give a specific @code{INSERT}, @code{UPDATE}, or @code{DELETE} statement
lower priority with the @code{LOW_PRIORITY} attribute.
@item
@@ -26496,7 +26496,7 @@ You can specify that a specific @code{SELECT} is very important with the
@item
If you have problems with @code{INSERT} combined with @code{SELECT},
switch to use the new @code{MyISAM} tables as these support concurrent
@code{SELECT}s and @code{INSERT}s.
@item
@@ -26515,7 +26515,7 @@ option to @code{DELETE} may help. @xref{DELETE, , @code{DELETE}}.
@cindex tables, improving performance
@cindex performance, improving
@node Data size, MySQL indexes, System, Performance
@section Get Your Data as Small as Possible
One of the most basic optimizations is to get your data (and indexes) to
take as little space on the disk (and in memory) as possible. This can
@@ -26532,7 +26532,7 @@ using the techniques listed below:
@itemize @bullet
@item
Use the most efficient (smallest) types possible. @strong{MySQL} has
many specialized types that save disk space and memory.
@item
Use the smaller integer types if possible to get smaller tables. For
@@ -26540,55 +26540,55 @@ example, @code{MEDIUMINT} is often better than @code{INT}.
@item
Declare columns to be @code{NOT NULL} if possible. It makes everything
faster and you save one bit per column. Note that if you really need
@code{NULL} in your application you should definitely use it. Just avoid
having it on all columns by default.
@item
If you don't have any variable-length columns (@code{VARCHAR},
@code{TEXT}, or @code{BLOB} columns), a fixed-size record format is
used. This is faster but unfortunately may waste some space.
@xref{MyISAM table formats}.
@item
The primary index of a table should be as short as possible. This makes
identification of one row easy and efficient.
@item
For each table, you have to decide which storage/index method to
use. @xref{Table types}.
@item
Only create the indexes that you really need. Indexes are good for
retrieval but bad when you need to store things fast. If you mostly
access a table by searching on a combination of columns, make an index
on them. The first index part should be the most used column. If you are
ALWAYS using many columns, you should use the column with more duplicates
first to get better compression of the index.
@item
If it's very likely that a column has a unique prefix on the first number
of characters, it's better to only index this prefix. @strong{MySQL}
supports an index on a part of a character column. Shorter indexes are
faster not only because they take less disk space but also because they
will give you more hits in the index cache and thus fewer disk
seeks. @xref{Server parameters}.
@item
In some circumstances it can be beneficial to split a table that is
scanned very often into two tables. This is especially true if it is a dynamic
format table and it is possible to use a smaller static format table that
can be used to find the relevant rows when scanning the table.
@end itemize
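As a sketch combining several of the hints above (the table and column
names are invented), note the small integer type, the @code{NOT NULL}
declarations, the fixed-size @code{CHAR} column, the short primary key,
and the prefix index:

@example
CREATE TABLE person (
  id   MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT,
  name CHAR(40) NOT NULL,
  PRIMARY KEY (id),
  INDEX (name(10))
);
@end example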
@cindex indexes, uses for
@node MySQL indexes, Query Speed, Data size, Performance
@section How @strong{MySQL} Uses Indexes
Indexes are used to find rows with a specific value of one column
fast. Without an index @strong{MySQL} has to start with the first record
and then read through the whole table until it finds the relevant
rows. The bigger the table, the more this costs. If the table has an index
for the columns in question, @strong{MySQL} can quickly get a position to
seek to in the middle of the data file without having to look at all the
data. If a table has 1000 rows, this is at least 100 times faster than
reading sequentially. Note that if you need to access almost all 1000
rows it is faster to read sequentially because we then avoid disk seeks.
All @strong{MySQL} indexes (@code{PRIMARY}, @code{UNIQUE}, and
@code{INDEX}) are stored in B-trees. Strings are automatically prefix-
and end-space compressed. @xref{CREATE INDEX, , @code{CREATE INDEX}}.
@@ -26602,11 +26602,11 @@ Retrieve rows from other tables when performing joins.
@item
Find the @code{MAX()} or @code{MIN()} value for a specific indexed
column. This is optimized by a preprocessor that checks if you are
using @code{WHERE} key_part_# = constant on all key parts < N. In this case
@strong{MySQL} will do a single key lookup and replace the @code{MIN()}
expression with a constant. If all expressions are replaced with
constants, the query will return at once:
@example
SELECT MIN(key_part2),MAX(key_part2) FROM table_name WHERE key_part1=10
@@ -26617,10 +26617,10 @@ Sort or group a table if the sorting or grouping is done on a leftmost
prefix of a usable key (for example, @code{ORDER BY key_part_1,key_part_2}). The
key is read in reverse order if all key parts are followed by @code{DESC}.
The index can also be used even if the @code{ORDER BY} doesn't match the index
exactly, as long as all the unused index parts and all the extra
@code{ORDER BY} columns are constants in the @code{WHERE} clause. The
following queries will use the index to resolve the @code{ORDER BY} part:
@example
SELECT * FROM foo ORDER BY key_part1,key_part2,key_part3;
@@ -26632,7 +26632,7 @@ SELECT * FROM foo WHERE key_part1=const GROUP BY key_part2;
In some cases a query can be optimized to retrieve values without
consulting the data file. If all used columns for some table are numeric
and form a leftmost prefix for some key, the values may be retrieved
from the index tree for greater speed:
@example
SELECT key_part3 FROM table_name WHERE key_part1=1
@@ -26657,7 +26657,7 @@ rows and using that index to fetch the rows.
If the table has a multiple-column index, any leftmost prefix of the
index can be used by the optimizer to find rows. For example, if you
have a three-column index on @code{(col1,col2,col3)}, you have indexed
search capabilities on @code{(col1)}, @code{(col1,col2)}, and
@code{(col1,col2,col3)}.
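For example, assuming a hypothetical table @code{tbl} with an index on
@code{(col1,col2,col3)}, the first two queries below can use the index,
while the third can't, because @code{col2} alone is not a leftmost
prefix:

@example
SELECT * FROM tbl WHERE col1=1;
SELECT * FROM tbl WHERE col1=1 AND col2=2;
SELECT * FROM tbl WHERE col2=2;
@end example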
@strong{MySQL} can't use a partial index if the columns don't form a
@@ -26707,9 +26707,9 @@ constant.
Searching using @code{column_name IS NULL} will use indexes if column_name
is an index.
@strong{MySQL} normally uses the index that finds the least number of rows. An
index is used for columns that you compare with the following operators:
@code{=}, @code{>}, @code{>=}, @code{<}, @code{<=}, @code{BETWEEN}, and a
@code{LIKE} with a non-wild-card prefix like @code{'something%'}.
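For example (with an invented table), the first of the following queries
can use an index on @code{name}, while the second can't, because the
pattern starts with a wild card:

@example
SELECT * FROM tbl WHERE name LIKE 'something%';
SELECT * FROM tbl WHERE name LIKE '%something';
@end example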
Any index that doesn't span all @code{AND} levels in the @code{WHERE} clause
@@ -26738,24 +26738,24 @@ would be available. Some of the cases where this happens are:
@itemize @bullet
@item
If the use of the index would require @strong{MySQL} to access more
than 30% of the rows in the table. (In this case a table scan is
probably much faster, as this will require far fewer seeks.)
Note that if such a query uses @code{LIMIT} to retrieve only
part of the rows, @strong{MySQL} will use an index anyway, as it can
much more quickly find the few rows to return in the result.
@end itemize
@cindex queries, speed of
@cindex permission checks, effect on speed
@cindex speed, of queries
@node Query Speed, Tips, MySQL indexes, Performance
@section Speed of Queries that Access or Update Data
First, one thing that affects all queries: The more complex permission
system setup you have, the more overhead you get.
If you do not have any @code{GRANT} statements done, @strong{MySQL} will
optimize the permission checking somewhat. So if you have a very high
volume it may be worth the time to avoid grants. Otherwise, more
permission checks result in larger overhead.
@@ -26777,7 +26777,7 @@ The above shows that @strong{MySQL} can execute 1,000,000 @code{+}
expressions in 0.32 seconds on a @code{PentiumII 400MHz}.
All @strong{MySQL} functions should be very optimized, but there may be
some exceptions, and the @code{benchmark(loop_count,expression)} is a
great tool to find out if this is a problem with your query.
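For example, a timing like the one quoted above can be obtained with:

@example
mysql> SELECT BENCHMARK(1000000,1+1);
@end example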
@menu
@@ -26796,26 +26796,26 @@ great tool to find out if this is a problem with your query.
@cindex queries, estimating performance
@cindex performance, estimating
@node Estimating performance, SELECT speed, Query Speed, Query Speed
@subsection Estimating Query Performance
In most cases you can estimate the performance by counting disk seeks.
For small tables, you can usually find the row in 1 disk seek (as the
index is probably cached). For bigger tables, you can estimate that
(using B++ tree indexes) you will need: @code{log(row_count) /
log(index_block_length / 3 * 2 / (index_length + data_pointer_length)) +
1} seeks to find a row.
In @strong{MySQL} an index block is usually 1024 bytes and the data
pointer is usually 4 bytes. A 500,000 row table with an
index length of 3 (medium integer) gives you:
@code{log(500,000)/log(1024/3*2/(3+4)) + 1} = 4 seeks.
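Evaluated step by step (values rounded), the estimate works out like
this:

@example
1024 / 3 * 2            = 682   bytes of the block usable for keys
682 / (3 + 4)           = 97    keys per index block
log(500,000) / log(97)  = 2.87  B-tree levels to descend
2.87 + 1                = 3.87  that is, about 4 seeks
@end example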
As the above index would require about 500,000 * 7 * 3/2 = 5.2M
(assuming that the index buffers are filled to 2/3, which is typical),
you will probably have much of the index in memory, and you will probably
only need 1-2 calls to read data from the OS to find the row.
For writes, however, you will need 4 seek requests (as above) to find
where to place the new index and normally 2 seeks to update the index
and write the row.
@@ -26831,7 +26831,7 @@ the data grows. @xref{Server parameters}.
@findex SELECT speed
@node SELECT speed, Where optimizations, Estimating performance, Query Speed
@subsection Speed of @code{SELECT} Queries
In general, when you want to make a slow @code{SELECT ... WHERE} faster, the
first thing to check is whether or not you can add an index. @xref{MySQL
@@ -26865,14 +26865,14 @@ time for a large table!
@cindex optimizations
@findex WHERE
@node Where optimizations, DISTINCT optimization, SELECT speed, Query Speed
@subsection How MySQL Optimizes @code{WHERE} Clauses
The @code{WHERE} optimizations are put in the @code{SELECT} part here because
they are mostly used with @code{SELECT}, but the same optimizations apply for
@code{WHERE} in @code{DELETE} and @code{UPDATE} statements.
Also note that this section is incomplete. @strong{MySQL} does many
optimizations, and we have not had time to document them all.
Some of the optimizations performed by @strong{MySQL} are listed below:
@@ -26906,7 +26906,7 @@ Early detection of invalid constant expressions. @strong{MySQL} quickly
detects that some @code{SELECT} statements are impossible and returns no rows.
@item
@code{HAVING} is merged with @code{WHERE} if you don't use @code{GROUP BY}
or group functions (@code{COUNT()}, @code{MIN()}...).
@item
For each sub-join, a simpler @code{WHERE} is constructed to get a fast
@code{WHERE} evaluation for each sub-join and also to skip records as
@@ -26945,7 +26945,7 @@ table is created.
If you use @code{SQL_SMALL_RESULT}, @strong{MySQL} will use an in-memory
temporary table.
@item
Each table index is queried, and the best index that spans fewer than 30% of
the rows is used. If no such index can be found, a quick table scan is used.
@item
In some cases, @strong{MySQL} can read rows from the index without even
...@@ -26990,7 +26990,7 @@ mysql> SELECT ... FROM tbl_name ORDER BY key_part1 DESC,key_part2 DESC,... ...@@ -26990,7 +26990,7 @@ mysql> SELECT ... FROM tbl_name ORDER BY key_part1 DESC,key_part2 DESC,...
@findex DISTINCT @findex DISTINCT
@cindex optimizing, DISTINCT @cindex optimizing, DISTINCT
@node DISTINCT optimization, LEFT JOIN optimization, Where optimizations, Query Speed @node DISTINCT optimization, LEFT JOIN optimization, Where optimizations, Query Speed
@subsection How MySQL optimizes @code{DISTINCT} @subsection How MySQL Optimizes @code{DISTINCT}
@code{DISTINCT} is converted to a @code{GROUP BY} on all columns, @code{DISTINCT} is converted to a @code{GROUP BY} on all columns,
@code{DISTINCT} combined with @code{ORDER BY} will in many cases also @code{DISTINCT} combined with @code{ORDER BY} will in many cases also
...@@ -27013,9 +27013,9 @@ when the first row in t2 is found. ...@@ -27013,9 +27013,9 @@ when the first row in t2 is found.
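The conversion described above can be sketched as follows (using a hypothetical table @code{t1}):

@example
mysql> SELECT DISTINCT a,b FROM t1;
# is resolved in the same way as:
mysql> SELECT a,b FROM t1 GROUP BY a,b;
@end example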
@findex LEFT JOIN
@cindex optimizing, LEFT JOIN
@node LEFT JOIN optimization, LIMIT optimization, DISTINCT optimization, Query Speed
@subsection How MySQL Optimizes @code{LEFT JOIN} and @code{RIGHT JOIN}
@code{A LEFT JOIN B} in @strong{MySQL} is implemented as follows:
@itemize @bullet
@item
@@ -27037,7 +27037,7 @@ If there is a row in @code{A} that matches the @code{WHERE} clause, but there
wasn't any row in @code{B} that matched the @code{LEFT JOIN} condition,
then an extra @code{B} row is generated with all columns set to @code{NULL}.
@item
If you use @code{LEFT JOIN} to find rows that don't exist in some
table and you have the test @code{column_name IS NULL} in the
@code{WHERE} part, where @code{column_name} is a column that is declared as
@code{NOT NULL}, then @strong{MySQL} will stop searching after more rows
@@ -27049,7 +27049,7 @@ matches the @code{LEFT JOIN} condition.
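This pattern can be sketched as follows (hypothetical tables @code{parent} and @code{child}, where @code{child.parent_id} is declared @code{NOT NULL}):

@example
mysql> SELECT parent.* FROM parent
    -> LEFT JOIN child ON parent.id=child.parent_id
    -> WHERE child.parent_id IS NULL;
@end example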
The table read order forced by @code{LEFT JOIN} and @code{STRAIGHT JOIN}
will help the join optimizer (which calculates in which order tables
should be joined) to do its work much more quickly, as there are fewer
table permutations to check.
Note that the above means that if you do a query of type:
@@ -27058,7 +27058,7 @@ Note that the above means that if you do a query of type:
SELECT * FROM a,b LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b.key=d.key
@end example
@strong{MySQL} will do a full scan on @code{b}, as the @code{LEFT
JOIN} will force it to be read before @code{d}.
The fix in this case is to change the query to:
@@ -27070,7 +27070,7 @@ SELECT * FROM b,a LEFT JOIN c ON (c.key=a.key) LEFT JOIN d ON (d.key=a.key) WHERE b
@cindex optimizing, LIMIT
@findex LIMIT
@node LIMIT optimization, Insert speed, LEFT JOIN optimization, Query Speed
@subsection How MySQL Optimizes @code{LIMIT}
In some cases @strong{MySQL} will handle the query differently when you are
using @code{LIMIT #} and not using @code{HAVING}:
@@ -27106,7 +27106,7 @@ space is needed to resolve the query.
@cindex speed, inserting
@cindex inserting, speed of
@node Insert speed, Update speed, LIMIT optimization, Query Speed
@subsection Speed of @code{INSERT} Queries
The time to insert a record consists approximately of:
@@ -27125,9 +27125,9 @@ Inserting indexes: (1 x number of indexes)
Close: (1)
@end itemize
where the numbers are somewhat proportional to the overall time. This
does not take into consideration the initial overhead to open tables
(which is done once for each concurrently running query).
The size of the table slows down the insertion of indexes by N log N
(B-trees).
@@ -27136,7 +27136,7 @@ Some ways to speed up inserts:
@itemize @bullet
@item
If you are inserting many rows from the same client at the same time, use
@code{INSERT} statements with multiple value lists. This is much faster (many
times in some cases) than using separate @code{INSERT} statements.
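For example, a single multiple-value-list @code{INSERT} replaces several separate statements (the table and columns here are made-up names):

@example
mysql> INSERT INTO tbl_name (a,b) VALUES (1,'x'),(2,'y'),(3,'z');
# instead of:
mysql> INSERT INTO tbl_name (a,b) VALUES (1,'x');
mysql> INSERT INTO tbl_name (a,b) VALUES (2,'y');
mysql> INSERT INTO tbl_name (a,b) VALUES (3,'z');
@end example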
@item
@@ -27178,7 +27178,7 @@ on it to make it smaller. @xref{Compressed format}.
@item
Re-create the indexes with @code{myisamchk -r -q
/path/to/db/tbl_name}. This will create the index tree in memory before
writing it to disk, which is much faster because it avoids lots of disk
seeks. The resulting index tree is also perfectly balanced.
@item
@@ -27214,43 +27214,43 @@ thread 2, 3, and 4 does 1 insert
thread 5 does 1000 inserts
@end example
If you don't use locking, 2, 3, and 4 will finish before 1 and 5. If you
use locking, 2, 3, and 4 probably will not finish before 1 or 5, but the
total time should be about 40% faster.
As @code{INSERT}, @code{UPDATE}, and @code{DELETE} operations are very
fast in @strong{MySQL}, you will obtain better overall performance by
adding locks around everything that does more than about 5 inserts or
updates in a row. If you do very many inserts in a row, you could do a
@code{LOCK TABLES} followed by an @code{UNLOCK TABLES} once in a while
(about every 1,000 rows) to allow other threads access to the table. This
would still result in a nice performance gain.
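A minimal sketch of this batching pattern (the table name is hypothetical; how many statements you put between the lock calls is up to you):

@example
mysql> LOCK TABLES tbl_name WRITE;
mysql> INSERT INTO tbl_name VALUES (1);
mysql> INSERT INTO tbl_name VALUES (2);
# ... more inserts ...
mysql> UNLOCK TABLES;
@end example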
Of course, @code{LOAD DATA INFILE} is much faster for loading data.
@end itemize
To get some more speed for both @code{LOAD DATA INFILE} and
@code{INSERT}, enlarge the key buffer. @xref{Server parameters}.
@node Update speed, Delete speed, Insert speed, Query Speed
@subsection Speed of @code{UPDATE} Queries
Update queries are optimized as a @code{SELECT} query with the additional
overhead of a write. The speed of the write depends on the size of
the data that is being updated and the number of indexes that are
updated. Indexes that are not changed will not be updated.
Another way to get fast updates is to delay updates and then do
many updates in a row later. Doing many updates in a row is much quicker
than doing one at a time if you lock the table.
Note that, with dynamic record format, updating a record to
a longer total length may split the record. So if you do this often,
it is very important to run @code{OPTIMIZE TABLE} occasionally.
@xref{OPTIMIZE TABLE, , @code{OPTIMIZE TABLE}}.
@node Delete speed, , Update speed, Query Speed
@subsection Speed of @code{DELETE} Queries
If you want to delete all rows in the table, you should use
@code{TRUNCATE table_name}. @xref{TRUNCATE}.
@@ -27262,7 +27262,7 @@ the index cache. @xref{Server parameters}.
@cindex optimization, tips
@cindex tips, optimization
@node Tips, Benchmarks, Query Speed, Performance
@section Other Optimization Tips
Unsorted tips for faster systems:
@@ -27292,43 +27292,43 @@ changes to the table, you may be able to get higher performance.
In some cases it may make sense to introduce a column that is 'hashed'
based on information from other columns. If this column is short and
reasonably unique, it may be much faster than a big index on many
columns. In @strong{MySQL} it's very easy to use this extra column:
@code{SELECT * FROM table_name WHERE hash=MD5(CONCAT(col_1,col_2))
AND col_1='constant' AND col_2='constant'}
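Keeping such a hash column up to date might look like this sketch (the table, columns, and values are illustrative only):

@example
mysql> INSERT INTO table_name (col_1,col_2,hash)
    -> VALUES ('a','b',MD5(CONCAT('a','b')));
@end example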
@item
For tables that change a lot, you should try to avoid all @code{VARCHAR}
or @code{BLOB} columns. You will get dynamic row length as soon as you
are using a single @code{VARCHAR} or @code{BLOB} column. @xref{Table
types}.
@item
It's not normally useful to split a table into different tables just
because the rows get 'big'. To access a row, the biggest performance
hit is the disk seek to find the first byte of the row. After finding
the data, most new disks can read the whole row fast enough for most
applications. The only case where it really matters to split up a table is if
it's a dynamic row size table (see above) that you can change to a fixed
row size, or if you very often need to scan the table and don't need
most of the columns. @xref{Table types}.
@item
If you very often need to calculate things based on information from a
lot of rows (like counts of things), it's probably much better to
introduce a new table and update the counter in real time. An update of
type @code{UPDATE tbl_name SET count=count+1 WHERE index_column=constant}
is very fast!
This is really important when you use databases like @strong{MySQL} that
only have table locking (multiple readers / single writers). This will
also give better performance with most databases, as the row locking
manager in this case will have less to do.
@item
If you need to collect statistics from big log tables, use summary tables
instead of scanning the whole table. Maintaining the summaries should be
much faster than trying to do statistics 'live'. It's much faster to
regenerate new summary tables from the logs when things change
(depending on business decisions) than to have to change the running
application!
@item
If possible, one should classify reports as 'live' or 'statistical',
where the data needed for statistical reports is generated only from
summary tables that are generated from the actual data.
@item
@@ -27339,7 +27339,7 @@ improves the insert speed.
@item
In some cases it's convenient to pack and store data into a blob. In this
case you have to add some extra code in your application to pack/unpack
things in the blob, but this may save a lot of accesses at some stage.
This is practical when you have data that doesn't conform to a static
table structure.
@item
@@ -27348,7 +27348,7 @@ is called 3rd normal form in database theory), but you should not be
afraid of duplicating things or creating summary tables if you need these
to gain more speed.
@item
Stored procedures or UDFs (user-defined functions) may be a good way to
get more performance. In this case you should, however, always have a way
to do the same thing some other (slower) way if you use a database that doesn't
support this.
@@ -27367,7 +27367,7 @@ Use @code{INSERT /*! LOW_PRIORITY */} when you want your selects to be
more important.
@item
Use @code{SELECT /*! HIGH_PRIORITY */} to get selects that jump the
queue. That is, the select is done even if there is somebody waiting to
do a write.
@item
Use the multi-line @code{INSERT} statement to store many rows with one
@@ -27386,14 +27386,14 @@ using dynamic table format. @xref{OPTIMIZE TABLE, , @code{OPTIMIZE TABLE}}.
Use @code{HEAP} tables to get more speed when possible. @xref{Table
types}.
@item
When using a normal Web server setup, images should be stored as
files. That is, store only a file reference in the database. The main
reason for this is that a normal Web server is much better at caching
files than database contents, so it's much easier to get a fast
system if you are using files.
@item
Use in-memory tables for non-critical data that is accessed often (like
information about the last shown banner for users that don't have
cookies).
@item
Columns with identical information in different tables should be
@@ -27404,49 +27404,49 @@ Try to keep the names simple (use @code{name} instead of
@code{customer_name} in the customer table). To make your names portable
to other SQL servers, you should keep them shorter than 18 characters.
@item
If you need REALLY high speed, you should take a look at the low-level
interfaces for data storage that the different SQL servers support! For
example, by accessing @strong{MySQL} @code{MyISAM} tables directly, you could
get a speed increase of 2-5 times compared to using the SQL interface.
To be able to do this, the data must be on the same server as
the application, and usually it should only be accessed by one process
(because external file locking is really slow). One could eliminate the
above problems by introducing low-level @code{MyISAM} commands in the
@strong{MySQL} server (this could be one easy way to get more
performance if needed). By carefully designing the database interface,
it should be quite easy to support this type of optimization.
@item
In many cases it's faster to access data from a database (using a live
connection) than accessing a text file, just because the database is
likely to be more compact than the text file (if you are using numerical
data), and this will involve fewer disk accesses. You will also save
code because you don't have to parse your text files to find line and
column boundaries.
@item
You can also use replication to speed things up. @xref{Replication}.
@item
Declaring a table with @code{DELAY_KEY_WRITE=1} will make the updating of
indexes faster, as these are not logged to disk until the file is closed.
The downside is that you should run @code{myisamchk} on these tables before
you start @code{mysqld} to ensure that they are okay if something killed
@code{mysqld} in the middle. As the key information can always be generated
from the data, you should not lose anything by using @code{DELAY_KEY_WRITE}.
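As a sketch, delayed key writes are enabled when the table is created (the table definition below is hypothetical):

@example
mysql> CREATE TABLE log_table (a INT NOT NULL, KEY (a)) DELAY_KEY_WRITE=1;
@end example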
@end itemize
@cindex benchmarks
@cindex performance, benchmarks
@node Benchmarks, Design, Tips, Performance
@section Using Your Own Benchmarks
You should definitely benchmark your application and database to find
out where the bottlenecks are. By fixing a bottleneck (or by replacing it
with a 'dummy module') you can then easily identify the next
bottleneck (and so on). Even if the overall performance for your
application is sufficient, you should at least make a plan for each
bottleneck, and decide how to solve it if someday you really need the
extra performance.
For an example of portable benchmark programs, look at the @strong{MySQL}
benchmark suite. @xref{MySQL Benchmarks, , @strong{MySQL} Benchmarks}. You
can take any program from this suite and modify it for your needs. By doing this,
you can try different solutions to your problem and test which is really the
@@ -27455,12 +27455,12 @@ fastest solution for you.
It is very common that some problems only occur when the system is very
heavily loaded. We have had many customers who contact us when they
have a (tested) system in production and have encountered load problems. In
every one of these cases so far, it has been problems with basic design
(table scans are NOT good at high load) or OS/library issues. Most of
this would be a @strong{LOT} easier to fix if the systems were not
already in production.
To avoid problems like this, you should put some effort into benchmarking
your whole application under the worst possible load! You can use Sasha's
recent hack for this:
@uref{http://www.mysql.com/Downloads/Contrib/mysql-bench-0.6.tar.gz, mysql-super-smack}.
@@ -27471,7 +27471,7 @@ so make sure to use it only on your development systems.
@cindex database design @cindex database design
@cindex storage of data @cindex storage of data
@node Design, Design Limitations, Benchmarks, Performance @node Design, Design Limitations, Benchmarks, Performance
@section Design choices @section Design Choices
@strong{MySQL} keeps row data and index data in separate files. Many (almost @strong{MySQL} keeps row data and index data in separate files. Many (almost
all) other databases mix row and index data in the same file. We believe that all) other databases mix row and index data in the same file. We believe that
...@@ -27496,18 +27496,18 @@ to get at the data. ...@@ -27496,18 +27496,18 @@ to get at the data.
@item @item
You can't use only the index table to retrieve data for a query. You can't use only the index table to retrieve data for a query.
@item @item
You lose a lot of space as you must duplicate indexes from the nodes You lose a lot of space, as you must duplicate indexes from the nodes
(as you can't store the row in the nodes). (as you can't store the row in the nodes).
@item @item
Deletes will degenerate the table over time (as indexes in nodes are Deletes will degenerate the table over time (as indexes in nodes are
usually not updated on delete). usually not updated on delete).
@item @item
Its harder to cache ONLY the index data. It's harder to cache ONLY the index data.
@end itemize @end itemize
@cindex design, limitations @cindex design, limitations
@node Design Limitations, Portability, Design, Performance @node Design Limitations, Portability, Design, Performance
@section MySQL design limitations/tradeoffs @section MySQL Design Limitations/Tradeoffs
Because @strong{MySQL} uses extremely fast table locking (multiple readers / Because @strong{MySQL} uses extremely fast table locking (multiple readers /
single writers) the biggest remaining problem is a mix of a steady stream of single writers) the biggest remaining problem is a mix of a steady stream of
...@@ -27529,7 +27529,7 @@ common application niches. ...@@ -27529,7 +27529,7 @@ common application niches.
Because all SQL servers implement different parts of SQL, it takes work to Because all SQL servers implement different parts of SQL, it takes work to
write portable SQL applications. For very simple selects/inserts it is write portable SQL applications. For very simple selects/inserts it is
very easy but the more you need the harder it gets. If you want an very easy, but the more you need the harder it gets. If you want an
application that is fast with many databases it becomes even harder! application that is fast with many databases it becomes even harder!
To make a complex application portable you need to choose a number of To make a complex application portable you need to choose a number of
...@@ -27537,8 +27537,8 @@ SQL servers that it should work with. ...@@ -27537,8 +27537,8 @@ SQL servers that it should work with.
You can use the @strong{MySQL} crash-me program/web-page You can use the @strong{MySQL} crash-me program/web-page
@uref{http://www.mysql.com/information/crashme/choose.php} to find functions, @uref{http://www.mysql.com/information/crashme/choose.php} to find functions,
types and limits you can use with a selection of database types, and limits you can use with a selection of database
servers. Crash-me now tests far from everything possible but it servers. Crash-me now tests far from everything possible, but it
is still comprehensive with about 450 things tested. is still comprehensive with about 450 things tested.
For example, you shouldn't have column names longer than 18 characters For example, you shouldn't have column names longer than 18 characters
...@@ -27546,29 +27546,29 @@ if you want to be able to use Informix or DB2. ...@@ -27546,29 +27546,29 @@ if you want to be able to use Informix or DB2.
Both the @strong{MySQL} benchmarks and crash-me programs are very
database-independent. By taking a look at how we have handled this, you
can get a feeling for what you have to do to write your application
database-independent. The benchmarks themselves can be found in the
@file{sql-bench} directory in the @strong{MySQL} source
distribution. They are written in Perl with the DBI database interface
(which solves the access part of the problem).

See @uref{http://www.mysql.com/information/benchmarks.html} for the results
from this benchmark.

As you can see in these results, all databases have some weak points. That
is, they have different design compromises that lead to different
behavior.

If you strive for database independence, you need to get a good feeling
for each SQL server's bottlenecks. @strong{MySQL} is VERY fast in
retrieving and updating things, but will have a problem in mixing slow
readers/writers on the same table. Oracle, on the other hand, has a big
problem when you try to access rows that you have recently updated
(until they are flushed to disk). Transaction databases in general are
not very good at generating summary tables from log tables, as in this
case row locking is almost useless.

To get your application @emph{really} database-independent, you need to define
an easily extendable interface through which you manipulate your data. As
C++ is available on most systems, it makes sense to use a C++ class
interface to the databases.
If you use some specific feature for some database (like the
@code{REPLACE} command in @strong{MySQL}), you should code a method for
the other SQL servers to implement the same feature (but slower). With
@strong{MySQL} you can use the @code{/*! */} syntax to add
@strong{MySQL}-specific keywords to a query. The code inside
@code{/**/} will be treated as a comment (ignored) by most other SQL
servers.
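For example (the table and column names here are only illustrative),
@strong{MySQL} executes the keyword inside the comment, while most other
SQL servers treat it as an ordinary comment and run a plain join:

@example
SELECT /*! STRAIGHT_JOIN */ col_name
FROM table1, table2
WHERE table1.id = table2.id;
@end example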
If REAL high performance is more important than exactness, as in some
Web applications, a possibility is to create an application layer that
caches all results to give you even higher performance. By letting
old results 'expire' after a while, you can keep the cache reasonably
fresh. This is quite nice in case of extremely high load, in which case
you can dynamically increase the cache and set the expire timeout higher
until things get back to normal.
@cindex uses, of MySQL
@cindex customers, of MySQL
@node Internal use, , Portability, Performance
@section What Have We Used MySQL For?

During @strong{MySQL}'s initial development, the features of @strong{MySQL} were made to fit
the needs of our largest customer. They handle data warehousing for a couple of the
biggest retailers in Sweden.

From all stores, we get weekly summaries of all bonus card transactions,
and we are expected to provide useful information for the store owners
to help them find how their advertisement campaigns are affecting their
customers.

The data is quite huge (about 7 million summary transactions per month),
and we have data for 4-10 years that we need to present to the users.
We get weekly requests from the customers, who want
'instant' access to new reports from this data.
We solved this by storing all information per month in compressed
'transaction' tables. We have a set of simple macros (script) that
generate summary tables grouped by different criteria (product group,
customer id, store ...) from the transaction tables. The reports are
Web pages that are dynamically generated by a small Perl script that
parses a Web page, executes the SQL statements in it, and inserts the
results. We would have used PHP or mod_perl instead, but they were
not available at that time.
This is also dynamically executed from the Perl script that
parses the @code{HTML} files.
In most cases a new report can simply be done by copying an existing
script and modifying the SQL query in it. In some cases, we will need to
add more fields to an existing summary table or generate a new one, but
this is also quite simple, as we keep all transaction tables on disk.
(Currently we have at least 50G of transaction tables and 200G of other
customer data.)

We also let our customers access the summary tables directly with ODBC
so that the advanced users can themselves experiment with the data.

We haven't had any problems handling this with a quite modest Sun Ultra
SPARCstation (2x200 MHz). We recently upgraded one of our servers to a 2
CPU 400 MHz UltraSPARC, and we are now planning to start handling
transactions on the product level, which would mean a ten-fold increase
of data. We think we can keep up with this by just adding more disk to
our systems.
We are also experimenting with Intel-Linux to be able to get more CPU
power more cheaply. Now that we have the binary portable database format (new
in Version 3.23), we will start to use this for some parts of the application.

Our initial feelings are that Linux will perform much better on
low-to-medium load and Solaris will perform better when you start to get a
high load because of extreme disk IO, but we don't yet have anything
conclusive about this. After some discussion with a Linux kernel
developer, this might be a side effect of Linux giving so many resources
to the batch job that the interactive performance gets very low. This
makes the machine feel very slow and unresponsive while big batches are
running. Hopefully this will be better handled in future Linux kernels.
@cindex benchmark suite
@cindex crash-me program
@node MySQL Benchmarks, Tools, Performance, Top
@chapter The MySQL Benchmark Suite
This should contain a technical description of the @strong{MySQL}
benchmark suite (and @code{crash-me}), but that description has not been
written yet. Currently, you should look at the code and results in the
@file{sql-bench} directory in the distribution (and of course on the Web
page at @uref{http://www.mysql.com/crashme/choose.php}; the results are
normally also found in the @file{sql-bench} directory in the
@strong{MySQL} distribution).
It is meant to be a benchmark that will tell any user what things a
given SQL implementation performs well or poorly at.

Note that this benchmark is single-threaded, so it measures the minimum
time for the operations.

For example (run on the same NT 4.0 machine):
In the above test, @strong{MySQL} was run with an 8M index cache.

Note that Oracle is not included because they asked to be removed. All
Oracle benchmarks have to be passed by Oracle! We believe that makes
Oracle benchmarks @strong{VERY} biased, because the above benchmarks are
supposed to show what a standard installation can do for a single
client.
@cindex environment variables
@cindex programs, list of
@node Programs, safe_mysqld, Tools, Tools
@section Overview of the Different MySQL Programs

All @strong{MySQL} clients that communicate with the server using the
@code{mysqlclient} library use the following environment variables:
Use of @code{MYSQL_PWD} is insecure.
@cindex command line history
@tindex .mysql_history file
The @file{mysql} client uses the file named in the @code{MYSQL_HISTFILE}
environment variable to save the command-line history. The default value for
the history file is @file{$HOME/.mysql_history}, where @code{$HOME} is the
value of the @code{HOME} environment variable. @xref{Environment variables}.
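For example, a sketch of overriding the history file for one session (here
pointed at @file{/dev/null} so that no history is saved at all):

@example
shell> MYSQL_HISTFILE=/dev/null mysql
@end example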
The list below briefly describes the @strong{MySQL} programs:
@cindex @code{myisamchk}
@item myisamchk
Utility to describe, check, optimize, and repair @strong{MySQL} tables.
Because @code{myisamchk} has many functions, it is described in its own
chapter. @xref{Maintenance}.
@cindex @code{mysqlaccess}
@item mysqlaccess
A script that checks the access privileges for a host, user, and database
combination.

@cindex @code{mysqladmin}
@item mysqladmin
Utility for performing administrative operations, such as creating or
dropping databases, reloading the grant tables, flushing tables to disk, and
reopening log files. @code{mysqladmin} can also be used to retrieve version,
process, and status information from the server.
@xref{mysqladmin, , @code{mysqladmin}}.

@cindex @code{mysqlbug}
@cindex @code{mysqlshow}
@item mysqlshow
Displays information about databases, tables, columns, and indexes.

@cindex @code{mysql_install_db}
@item mysql_install_db
@code{safe_mysqld} is the recommended way to start a @code{mysqld}
daemon on Unix. @code{safe_mysqld} adds some safety features, such as
restarting the server when an error occurs and logging run-time
information to a log file.
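A minimal invocation sketch (the @code{--log} option is only illustrative;
server options given on the command line are passed on to @code{mysqld}):

@example
shell> safe_mysqld --log &
@end example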
Normally one should never edit the @code{safe_mysqld} script, but
@cindex scripts
@cindex @code{mysql}
@node mysql, mysqladmin, safe_mysqld, Tools
@section The Command-line Tool

@code{mysql} is a simple SQL shell (with GNU @code{readline} capabilities).
It supports interactive and non-interactive use. When used interactively,
If you have problems due to insufficient memory in the client, use the
@code{mysql_use_result()} rather than @code{mysql_store_result()} to
retrieve the result set.
Using @code{mysql} is very easy. Just start it as follows:
@code{mysql database} or @code{mysql --user=user_name --password=your_password database}. Type a SQL statement, end it with @samp{;}, @samp{\g}, or @samp{\G},
and press RETURN/ENTER.
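A minimal interactive session might look like this (the @code{test}
database and the query are only illustrative):

@example
shell> mysql test
mysql> SELECT VERSION();
mysql> quit
@end example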
@cindex command line options
@cindex options, command line
@table @code
@cindex help option
@item -?, --help
Display this help and exit.

@cindex automatic rehash option
@item -A, --no-auto-rehash
No automatic rehashing. One has to use 'rehash' to get table and field
Directory where character sets are located.
Use compression in server/client protocol.

@cindex debug option
@item -#, --debug[=...]
Debug log. Default is 'd:t:o,/tmp/mysql.trace'.

@cindex database option
@item -D, --database=...
Database to use. This is mainly useful in the @code{my.cnf} file.
@cindex default character set option
@item --default-character-set=...
Set the default character set.
Continue even if we get a SQL error.
@cindex no-named-commands option
@item -g, --no-named-commands
Named commands are disabled. Use the \* form only, or use named commands
only at the beginning of a line ending with a semicolon (;). Since
Version 10.9, the client starts with this option ENABLED by default!
With the -g option, long-format commands will still work from the first
line, however.
@cindex enable-named-commands option
Connect to the given host.
Produce HTML output.

@cindex skip line numbers option
@item -L, --skip-line-numbers
Don't write line numbers for errors. Useful when one wants to compare result
files that include error messages.

@cindex no pager option
@item --no-pager
pagers are less, more, cat [> filename], etc. See interactive help (\h)
also. This option does not work in batch mode. Pager works only in UNIX.
@cindex password option
@item -p[password], --password[=...]
Password to use when connecting to the server. If a password is not given on
the command line, you will be prompted for it. Note that if you use the
short form @code{-p}, you can't have a space between the option and the
password.
Socket file to use for connection.
@item -t, --table
Output in table format. This is the default in non-batch mode.
@item -T, --debug-info
Print some debug information at exit.

@cindex tee option
@item --tee=...
Append everything into outfile. See interactive help (\h) also. Does not
From the above, pager only works in UNIX.
The @code{status} command gives you some information about the
connection and the server you are using. If you are running in
@code{--safe-updates} mode, @code{status} will also print the values for
the @code{mysql} variables that affect your queries.

@cindex @code{safe-mode} command
A useful startup option for beginners (introduced in @strong{MySQL} Version 3.23.11) is
The effect of the above is:
@itemize @bullet
@item
You are not allowed to execute an @code{UPDATE} or @code{DELETE} statement
if you don't have a key constraint in the @code{WHERE} part. One can,
however, force an @code{UPDATE/DELETE} by using @code{LIMIT}:
@example
UPDATE table_name SET not_key_column=# WHERE not_key_column=# LIMIT 1;
@end example
@cindex server administration
@cindex @code{mysqladmin}
@node mysqladmin, mysqldump, mysql, Tools
@section Administering a MySQL Server

A utility for performing administrative operations. The syntax is:

@example
shell> mysqladmin [OPTIONS] command [command-option] command ...
@end example
The current @code{mysqladmin} supports the following commands:
@item flush-tables @tab Flush all tables.
@item flush-privileges @tab Reload grant tables (same as reload).
@item kill id,id,... @tab Kill mysql threads.
@item password new-password @tab Change old password to new-password.
@item ping @tab Check if mysqld is alive.
@item processlist @tab Show list of active threads in server.
@item reload @tab Reload grant tables.
The @code{mysqladmin status} command result has the following columns:
@item Opens @tab How many tables @code{mysqld} has opened.
@cindex flush tables
@cindex tables, flush
@item Flush tables @tab Number of @code{flush ...}, @code{refresh}, and @code{reload} commands.
@cindex open tables
@item Open tables @tab Number of tables that are open now.
@cindex memory use
@cindex tables, dumping
@cindex backing up, databases
@node mysqldump, mysqlimport, mysqladmin, Tools
@section Dumping the Structure and Data from MySQL Databases and Tables

@cindex @code{mysqldump}

Utility to dump a database or a collection of databases for backup or
for transferring the data to another SQL server. The dump will contain SQL
statements to create the table and/or populate the table:

@example
shell> mysqldump [OPTIONS] database [tables]
@end example
@table @code
@item --add-locks
Add @code{LOCK TABLES} before and @code{UNLOCK TABLE} after each table dump.
(To get faster inserts into @strong{MySQL}.)
@item --add-drop-table
Add a @code{drop table} before each create statement.
@item -A, --all-databases
Dump all the databases. This will be the same as @code{--databases} with all
databases selected.
@item -a, --all
Include all @strong{MySQL}-specific create options.
@item --allow-keywords
Allow creation of column names that are keywords. This works by
prefixing each column name with the table name.
Use complete insert statements (with column names).
Compress all information between the client and the server if both support
compression.
@item -B, --databases
Dump several databases. Note the difference in usage: in this case,
no tables are given. All name arguments are regarded as database names.
@code{USE db_name;} will be included in the output before each new database.
@item --delayed
output. The above line will be added otherwise, if the --databases or
--all-databases option was given.
@item -t, --no-create-info
Don't write table creation information (the @code{CREATE TABLE} statement).
@item -d, --no-data
Don't write any row information for the table. This is very useful if you
just want to get a dump of the structure for a table!
The TCP/IP port number to use for connecting to a host. (This is used for
connections to hosts other than @code{localhost}, for which Unix sockets are
used.)
@item -q, --quick
Don't buffer the query; dump directly to stdout. Uses @code{mysql_use_result()}
to do this.
@item -S /path/to/socket, --socket=/path/to/socket
The socket file to use when connecting to @code{localhost} (which is the
...@@ -28474,11 +28474,11 @@ default value is your Unix login name.
@item -O var=option, --set-variable var=option
Set the value of a variable. The possible variables are listed below.
@item -v, --verbose
Verbose mode. Print out more information on what the program does.
@item -V, --version
Print version information and exit.
@item -w, --where='where-condition'
Dump only selected records. Note that QUOTES are mandatory:
@example
"--where=user='jimf'" "-wuserid>1" "-wuserid<1"
...@@ -28493,7 +28493,7 @@ variable in the @strong{MySQL} server is bigger than the
@end table
The most common use of @code{mysqldump} is probably for making a backup of
whole databases. @xref{Backup}:
@example
mysqldump --opt database > backup-file.sql
...@@ -28536,9 +28536,9 @@ mysqldump --all-databases > all_databases.sql
@cindex text files, importing
@cindex @code{mysqlimport}
@node mysqlimport, mysqlshow, mysqldump, Tools
@section Importing Data from Text Files
@code{mysqlimport} provides a command-line interface to the @code{LOAD DATA
INFILE} SQL statement. Most options to @code{mysqlimport} correspond
directly to the same options to @code{LOAD DATA INFILE}.
@xref{LOAD DATA, , @code{LOAD DATA}}.
...@@ -28552,14 +28552,14 @@ shell> mysqlimport [options] database textfile1 [textfile2....]
For each text file named on the command line,
@code{mysqlimport} strips any extension from the filename and uses the result
to determine which table to import the file's contents into. For example,
files named @file{patient.txt}, @file{patient.text}, and @file{patient} would
all be imported into a table named @code{patient}.
@code{mysqlimport} supports the following options:
@table @code
@item -c, --columns=...
This option takes a comma-separated list of field names as an argument.
The field list is used in the @code{LOAD DATA INFILE} statement that
@code{mysqlimport} asks @strong{MySQL} to execute. For more information, see
@code{LOAD DATA INFILE}. @xref{LOAD DATA, , @code{LOAD DATA}}.
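For example, assuming a hypothetical file @file{patient.txt} whose lines
contain only the @code{id} and @code{name} fields of the @code{patient}
table (all names here are placeholders), one might load just those columns
with:

@example
shell> mysqlimport --columns=id,name database_name patient.txt
@end example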
...@@ -28641,7 +28641,7 @@ Verbose mode. Print out more information what the program does.
Print version information and exit.
@end table
Here is a sample run using @code{mysqlimport}:
@example
$ mysql --version
...@@ -28678,7 +28678,7 @@ $ mysql -e 'SELECT * FROM imptest' test
@cindex columns, displaying
@cindex showing, database information
@node mysqlshow, myisampack, mysqlimport, Tools
@section Showing Databases, Tables, and Columns
@code{mysqlshow} can be used to quickly look at which databases exist,
their tables, and the tables' columns.
...@@ -28696,20 +28696,20 @@ shell> mysqlshow [OPTIONS] [database [table [column]]]
@item
If no database is given, all matching databases are shown.
@item
If no table is given, all matching tables in the database are shown.
@item
If no column is given, all matching columns and column types in the table
are shown.
@end itemize
Note that in newer @strong{MySQL} versions, you only see those
databases, tables, and columns for which you have some privileges.
If the last argument contains a shell or SQL wild-card (@code{*}, @code{?},
@code{%}, or @code{_}), then only what's matched by the wild card is shown.
This may cause some confusion when you try to display the columns for a
table with a @code{_} in its name, as in this case @code{mysqlshow} only shows you
the table names that match the pattern. This is easily fixed by
adding an extra @code{%} last on the command line (as a separate
argument).
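For example, with a hypothetical table named @code{my_table} in the
database @code{test}:

@example
shell> mysqlshow test my_table
shell> mysqlshow test my_table %
@end example

The first command lists only the table names matching the pattern (the
@code{_} acts as a wild card); the second, with the extra @code{%} argument,
lists the columns of @code{my_table}.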
...@@ -28719,9 +28719,9 @@ argument).
@cindex @code{myisampack}
@cindex @code{pack_isam}
@node myisampack, , mysqlshow, Tools
@section The MySQL Compressed Read-only Table Generator
@code{myisampack} is used to compress MyISAM tables, and @code{pack_isam}
is used to compress ISAM tables. Because ISAM tables are deprecated, we
will only discuss @code{myisampack} here, but everything said about
@code{myisampack} should also be true for @code{pack_isam}.
...@@ -28730,7 +28730,7 @@ will only discuss @code{myisampack} here, but everything said about
The information needed to decompress columns is read into memory when the
table is opened. This results in much better performance when accessing
individual records, because you only have to uncompress exactly one record, not
a much larger disk block as when using Stacker on MS-DOS.
Usually, @code{myisampack} packs the data file by 40%-70%.
@strong{MySQL} uses memory mapping (@code{mmap()}) on compressed tables and
...@@ -28769,7 +28769,7 @@ Output debug log. The @code{debug_options} string often is
@item -f, --force
Force packing of the table even if it becomes bigger or if the temporary file
exists. @code{myisampack} creates a temporary file named @file{tbl_name.TMD}
while it compresses the table. If you kill @code{myisampack}, the @file{.TMD}
file may not be deleted. Normally, @code{myisampack} exits with an error if
it finds that @file{tbl_name.TMD} exists. With @code{--force},
...@@ -28784,8 +28784,8 @@ Join all tables named on the command line into a single table
MUST be identical (same column names and types, same indexes, etc.).
@item -p #, --packlength=#
Specify the record length storage size, in bytes. The value should be 1, 2,
or 3. (@code{myisampack} stores all rows with length pointers of 1, 2, or 3
bytes. In most normal cases, @code{myisampack} can determine the right length
value before it begins packing the file, but it may notice during the packing
process that it could have used a shorter length. In this case,
...@@ -28796,7 +28796,7 @@ you could use a shorter record length.)
Silent mode. Write output only when errors occur.
@item -t, --test
Don't actually pack the table, just test packing it.
@item -T dir_name, --tmp_dir=dir_name
Use the named directory as the location in which to write the temporary table.
...@@ -29015,15 +29015,15 @@ type; these are changed to a smaller type (for example, an @code{INTEGER}
column may be changed to @code{MEDIUMINT}).
@item pre-space
The number of decimal columns that are stored with leading spaces. In this
case, each value will contain a count for the number of leading spaces.
@item end-space
The number of columns that have a lot of trailing spaces. In this case, each
value will contain a count for the number of trailing spaces.
@item table-lookup
The column had only a small number of different values, which were
converted to an @code{ENUM} before Huffman compression.
@item zero
...@@ -29080,8 +29080,8 @@ The number of bits used in the Huffman tree.
After you have run @code{pack_isam}/@code{myisampack} you must run
@code{isamchk}/@code{myisamchk} to re-create the index. At this time you
can also sort the index blocks and create statistics needed for
the @strong{MySQL} optimizer to work more efficiently:
@example
myisamchk -rq --analyze --sort-index table_name.MYI
...@@ -29100,7 +29100,7 @@ to start using the new table.
@cindex crash, recovery
@cindex recovery, from crash
@node Maintenance, Adding functions, Tools, Top
@chapter Maintaining a MySQL Installation
@menu
* Table maintenance:: Table maintenance and crash recovery
...@@ -29111,7 +29111,7 @@ to start using the new table.
@end menu
@node Table maintenance, Maintenance regimen, Maintenance, Maintenance
@section Using @code{myisamchk} for Table Maintenance and Crash Recovery
Starting with @strong{MySQL} Version 3.23.13, you can check MyISAM
tables with the @code{CHECK TABLE} command. @xref{CHECK TABLE}. You can
...@@ -29126,7 +29126,7 @@ In the following text we will talk about @code{myisamchk}, but everything
also applies to the old @code{isamchk}.
You can use the @code{myisamchk} utility to get information about your database
tables, check and repair them, or optimize them. The following sections
describe how to invoke @code{myisamchk} (including a description of its
options), how to set up a table maintenance schedule, and how to use
@code{myisamchk} to perform its various functions.
...@@ -29144,7 +29144,7 @@ flushing tables.
@end menu
@node myisamchk syntax, myisamchk memory, Table maintenance, Table maintenance
@subsection @code{myisamchk} Invocation Syntax
@code{myisamchk} is invoked like this:
...@@ -29198,7 +29198,7 @@ myisamchk --fast --silent /path/to/datadir/*/*.MYI
isamchk --silent /path/to/datadir/*/*.ISM
@end example
@code{myisamchk} supports the following options.
@menu
* myisamchk general options::
...@@ -29226,7 +29226,7 @@ way to avoid this problem is to use @code{CHECK TABLE} instead of
@cindex options, @code{myisamchk}
@cindex @code{myisamchk}, options
@node myisamchk general options, myisamchk check options, myisamchk syntax, myisamchk syntax
@subsubsection General Options for @code{myisamchk}
@table @code
@item -# or --debug=debug_options
...@@ -29236,7 +29236,7 @@ Output debug log. The @code{debug_options} string often is
Display a help message and exit.
@item -O var=option, --set-variable var=option
Set the value of a variable. The possible variables and their default values
for @code{myisamchk} can be examined with @code{myisamchk --help}:
@multitable @columnfractions .3 .7
@item key_buffer_size @tab 523264
@item read_buffer_size @tab 262136
...@@ -29251,7 +29251,7 @@ repair it with @code{-o}.
@code{sort_buffer_size} is used when you repair the table with @code{-r}.
If you want a faster repair, set the above variables to about 1/4 of your
available memory. You can set both variables to big values, as only one
of the above buffers will be used at a time.
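For example, a repair run with enlarged buffers might look like this (the
16M value is only an illustration; choose it based on your available
memory):

@example
shell> myisamchk -O sort_buffer_size=16M -O key_buffer_size=16M -r table_name.MYI
@end example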
@item -s or --silent
...@@ -29266,25 +29266,25 @@ Print the @code{myisamchk} version and exit.
@item -w or --wait
Instead of giving an error if the table is locked, wait until the table
is unlocked before continuing. Note that if you are running @code{mysqld}
on the table with @code{--skip-locking}, the table can only be locked
by another @code{myisamchk} command.
@end table
@cindex check options, myisamchk
@cindex tables, checking
@node myisamchk check options, myisamchk repair options, myisamchk general options, myisamchk syntax
@subsubsection Check Options for @code{myisamchk}
@table @code
@item -c or --check
Check the table for errors. This is the default operation if you are not
giving @code{myisamchk} any options that override this.
@item -e or --extend-check
Check the table VERY thoroughly (which is quite slow if you have many
indexes). This option should only be used in extreme cases. Normally,
@code{myisamchk} or @code{myisamchk --medium-check} should, in most
cases, be able to find out if there are any errors in the table.
If you are using @code{--extended-check} and have a lot of memory, you should
increase the value of @code{key_buffer_size} a lot!
...@@ -29292,7 +29292,7 @@ increase the value of @code{key_buffer_size} a lot!
@item -F or --fast
Check only tables that haven't been closed properly.
@item -C or --check-only-changed
Check only tables that have changed since the last check.
@item -f or --force
Restart @code{myisamchk} with @code{-r} (repair) on the table, if
@code{myisamchk} finds any errors in the table.
...@@ -29302,51 +29302,50 @@ Print informational statistics about the table that is checked.
Faster than extended-check, but only finds 99.99% of all errors.
Should, however, be good enough for most cases.
@item -U or --update-state
Store in the @file{.MYI} file when the table was checked and whether the
table crashed. This should be used to get the full benefit of the
@code{--check-only-changed} option, but you shouldn't use this
option if the @code{mysqld} server is using the table and you are
running @code{mysqld} with @code{--skip-locking}.
@item -T or --read-only
Don't mark the table as checked. This is useful if you use @code{myisamchk}
to check a table that is in use by some other application that doesn't
use locking (like @code{mysqld --skip-locking}).
@end table
@cindex repair options, myisamchk
@cindex files, repairing
@node myisamchk repair options, myisamchk other options, myisamchk check options, myisamchk syntax
@subsubsection Repair Options for @code{myisamchk}
The following options are used if you start @code{myisamchk} with
@code{-r} or @code{-o}:
@table @code
@item -D # or --data-file-length=#
Maximum length of the data file (used when re-creating a data file that is 'full').
@item -e or --extend-check
Try to recover every possible row from the data file.
Normally this will also find a lot of garbage rows. Don't use this option
unless you are totally desperate.
@item -f or --force
Overwrite old temporary files (@code{table_name.TMD}) instead of aborting.
@item -k # or --keys-used=#
If you are using ISAM, tells the ISAM table handler to update only the
first @code{#} indexes. If you are using @code{MyISAM}, tells which keys
to use, where each binary bit stands for one key (first key is bit 0).
This can be used to get faster inserts! Deactivated indexes can be
reactivated by using @code{myisamchk -r}.
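For example, one common pattern (sketched here with a placeholder table
path) is to deactivate all keys before a bulk load and rebuild the indexes
afterwards:

@example
shell> myisamchk --keys-used=0 -rq /path/to/tbl_name
(load the data, for example with mysqlimport)
shell> myisamchk -rq /path/to/tbl_name
@end example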
@item -l or --no-symlinks
Do not follow symbolic links. Normally @code{myisamchk} repairs the
table that a symlink points to.
@item -r or --recover
Can fix almost anything except unique keys that aren't unique
(which is an extremely unlikely error with ISAM/MyISAM tables).
If you want to recover a table, this is the option to try first. Only if
@code{myisamchk} reports that the table can't be recovered by @code{-r}
should you then try @code{-o}. (Note that in the unlikely case that @code{-r}
fails, the data file is still intact.)
If you have lots of memory, you should increase the size of
@code{sort_buffer_size}!
@item -o or --safe-recover
Uses an old recovery method (reads through all rows in order and updates
...@@ -29354,16 +29353,16 @@ all index trees based on the found rows); this is a magnitude slower
than @code{-r}, but can handle a couple of very unlikely cases that
@code{-r} cannot handle. This recovery method also uses much less disk
space than @code{-r}. Normally one should always first repair with
@code{-r}, and only if this fails use @code{-o}.
If you have lots of memory, you should increase the size of
@code{key_buffer_size}!
@item --character-sets-dir=...
Directory where character sets are stored.
@item --set-character-set=name
Change the character set used by the index.
@item -t or --tmpdir=path
Path for storing temporary files. If this is not set, @code{myisamchk} will
use the environment variable @code{TMPDIR} for this.
@item -q or --quick
Faster repair by not modifying the data file. One can give a second
...@@ -29374,7 +29373,7 @@ Unpack file packed with myisampack.
@end table
@node myisamchk other options, , myisamchk repair options, myisamchk syntax
@subsubsection Other Options for @code{myisamchk}
Other actions that @code{myisamchk} can do, besides repairing and checking tables:
...@@ -29382,9 +29381,9 @@ Other actions that @code{myisamchk} can do, besides repair and check tables:
@item -a or --analyze
Analyze the distribution of keys. This improves join performance by
enabling the join optimizer to better choose in which order it should
join the tables and which keys it should use. You can check the calculated
key distribution with @code{myisamchk --describe --verbose table_name} or by
using @code{SHOW KEYS} in @strong{MySQL}.
@item -d or --description
Prints some information about the table.
@item -A or --set-auto-increment[=value]
...@@ -29405,7 +29404,7 @@ numbered beginning with 1.
@cindex memory usage, myisamchk
@node myisamchk memory, , myisamchk syntax, Table maintenance
@subsection @code{myisamchk} Memory Usage
Memory allocation is important when you run @code{myisamchk}.
@code{myisamchk} uses no more memory than you specify with the @code{-O}
...@@ -29425,19 +29424,19 @@ Using @code{-O sort=16M} should probably be enough for most cases.
Be aware that @code{myisamchk} uses temporary files in @code{TMPDIR}. If
@code{TMPDIR} points to a memory file system, you may easily get out of
memory errors. If this happens, set @code{TMPDIR} to point at some directory
with more space and restart @code{myisamchk}.

When repairing, @code{myisamchk} will also need a lot of disk space:

@itemize @bullet
@item
Double the size of the record file (the original one and a copy). This
space is not needed if one does a repair with @code{--quick}, as in this
case only the index file will be re-created. This space is needed on the
same disk as the original record file!
@item
Space for the new index file that replaces the old one. The old
index file is truncated at start, so one can usually ignore this space.
This space is needed on the same disk as the original index file!
@item
When using @code{--repair} (but not when using @code{--safe-repair}), you
@cindex maintaining, tables
@cindex tables, maintenance regimen
@node Maintenance regimen, Table-info, Table maintenance, Maintenance
@section Setting Up a Table Maintenance Regimen

Starting with @strong{MySQL} Version 3.23.13, you can check MyISAM
tables with the @code{CHECK TABLE} command. @xref{CHECK TABLE}. You can
@cindex tables, information
@node Table-info, Crash recovery, Maintenance regimen, Maintenance
@section Getting Information About a Table

To get a description of a table or statistics about it, use the commands shown
below. We explain some of the information in more detail later:

@table @code
@item myisamchk -d tbl_name
Explanations for the types of information @code{myisamchk} produces are
given below. The ``keyfile'' is the index file. ``Record'' and ``row''
are synonymous:

@table @code
@item ISAM file
@xref{Optimization}.

@item Datafile pointer
The size of the data file pointer, in bytes. It is usually 2, 3, 4, or 5
bytes. Most tables manage with 2 bytes, but this cannot be controlled
from @strong{MySQL} yet. For fixed tables, this is a record address. For
dynamic tables, this is a byte address.

@item Keyfile pointer
The size of the index file pointer, in bytes. It is usually 1, 2, or 3
bytes. Most tables manage with 2 bytes, but this is calculated
automatically by @strong{MySQL}. It is always a block address.
@item Packed
@strong{MySQL} strips spaces from the end of strings. The @code{Packed}
value indicates the percentage of savings achieved by doing this.

@item Recordspace used
What percentage of the data file is used.
@cindex crash, recovery
@cindex recovery, from crash
@node Crash recovery, Log files, Table-info, Maintenance
@section Using @code{myisamchk} for Crash Recovery

If you run @code{mysqld} with @code{--skip-locking} (which is the default on
some systems, like Linux), you can't reliably use @code{myisamchk} to
@cindex tables, error checking
@cindex errors, checking tables for
@node Check, Repair, Crash recovery, Crash recovery
@subsection How to Check Tables for Errors

To check a MyISAM table, use the following commands:
with either the @code{-s} or @code{--silent} option.

@item myisamchk -m tbl_name
This finds 99.999% of all errors. It first checks all index entries for
errors and then reads through all rows. It calculates a checksum for all
keys in the rows and verifies that the checksum matches the checksum for
the keys in the index tree.
@cindex tables, repairing
@cindex repairing, tables
@node Repair, Optimization, Check, Crash recovery
@subsection How to Repair Tables

In the following section we only talk about using @code{myisamchk} on @code{MyISAM}
tables (extensions @code{.MYI} and @code{.MYD}). If you are using
@code{ISAM} tables (extensions @code{.ISM} and @code{.ISD}), you should use
@code{isamchk} instead.

The symptoms of a corrupted table include queries that abort unexpectedly
and observable errors such as these:

@itemize @bullet
@item
you are checking). If it turns out you need to modify files, they must also
be writable by you.

If you are using @strong{MySQL} Version 3.23.16 and above, you can (and should) use the
@code{CHECK} and @code{REPAIR} commands to check and repair @code{MyISAM}
tables. @xref{CHECK TABLE}. @xref{REPAIR TABLE}.
@code{isamchk}/@code{myisamchk}. @xref{Table maintenance}.

The following section is for the cases where the above command fails or
if you want to use the extended features that @code{isamchk}/@code{myisamchk} provides.

If you are going to repair a table from the command line, you must first
take down the @code{mysqld} server. Note that when you do
@code{mysqladmin shutdown} on a remote server, the @code{mysqld} server
will still be alive for a while after @code{mysqladmin} returns, until
all queries are stopped and all keys have been flushed to disk.

@noindent
@strong{Stage 1: Checking your tables}

Run @code{myisamchk *.MYI} or @code{myisamchk -e *.MYI} if you have
more time. Use the @code{-s} (silent) option to suppress unnecessary
information.

If the @code{mysqld} server is down, you should use the @code{--update} option to tell
@noindent
@strong{Stage 2: Easy safe repair}

NOTE: If you want repairing to go much faster, you should add: @code{-O
sort_buffer=# -O key_buffer=#} (where # is about 1/4 of the available
memory) to all @code{isamchk/myisamchk} commands.
@end enumerate

Go back to Stage 2. @code{myisamchk -r -q} should work now. (This shouldn't
be an endless loop.)

@noindent
@strong{Stage 4: Very difficult repair}

You should reach this stage only if the description file has also
crashed. That should never happen, because the description file isn't changed
after the table is created:

@enumerate
@item
@cindex tables, optimizing
@cindex optimizing, tables
@node Optimization, , Repair, Crash recovery
@subsection Table Optimization

To coalesce fragmented records and eliminate wasted space resulting from
deleting or updating records, run @code{myisamchk} in recovery mode:
@item -a, --analyze
@end table

For a full description of the option, see @ref{myisamchk syntax}.

@cindex files, log
@cindex maintaining, log files
@cindex log files, maintaining
@node Log files, , Crash recovery, Maintenance
@section Log File Maintenance

When using @strong{MySQL} with log files, you will, from time to time,
want to remove or back up old log files and tell @strong{MySQL} to start
logging on new files. @xref{Update log}.
@cindex UDFs, defined
@cindex functions, user-defined
@node Adding functions, Adding procedures, Maintenance, Top
@chapter Adding New Functions to MySQL

There are two ways to add new functions to @strong{MySQL}:
require you to modify a source distribution.
@item
If you upgrade your @strong{MySQL} distribution, you can continue to use your
previously installed UDFs. For native functions, you must repeat your
modifications each time you upgrade.
@end itemize
@cindex user-defined functions, adding
@cindex functions, user-definable, adding
@node Adding UDF, Adding native function, Adding functions, Adding functions
@section Adding a New User-definable Function

@menu
* UDF calling sequences::       UDF calling sequences
@item
Check the number of arguments to @code{XXX()}.
@item
Check that the arguments are of a required type or, alternatively,
tell @strong{MySQL} to coerce arguments to the types you want when
the main function is called.
@item
@cindex calling sequences, UDF
@node UDF calling sequences, UDF arguments, Adding UDF, Adding UDF
@subsection UDF Calling Sequences

The main function should be declared as shown below. Note that the return
type and parameters differ, depending on whether you will declare the SQL
function @code{XXX()} to return @code{STRING}, @code{INTEGER}, or @code{REAL}
in the @code{CREATE FUNCTION} statement:

@noindent
@code{UDF_INIT} structure that is used to communicate information between
functions. The @code{UDF_INIT} structure members are listed below. The
initialization function should fill in any members that it wishes to change.
(To use the default for a member, leave it unchanged.)

@table @code
@item my_bool maybe_null
@item unsigned int decimals
Number of decimals. The default value is the maximum number of decimals in
the arguments passed to the main function. (For example, if the function is
passed @code{1.34}, @code{1.345}, and @code{1.3}, the default would be 3,
because @code{1.345} has 3 decimals.)

@item unsigned int max_length
@cindex argument processing
@cindex processing, arguments
@node UDF arguments, UDF return values, UDF calling sequences, Adding UDF
@subsection Argument Processing

The @code{args} parameter points to a @code{UDF_ARGS} structure that has the
members listed below:

@table @code
@item enum Item_result *arg_type
The types for each argument. The possible type values are
@code{STRING_RESULT}, @code{INT_RESULT}, and @code{REAL_RESULT}.

To make sure that arguments are of a given type and return an
error if they are not, check the @code{arg_type} array in the initialization
...@@ -30520,7 +30519,7 @@ the maximum length of the argument (as for the initialization function). ...@@ -30520,7 +30519,7 @@ the maximum length of the argument (as for the initialization function).
@cindex errors, handling for UDFs
@cindex handling, errors
@node UDF return values, UDF compiling, UDF arguments, Adding UDF
@subsection Return Values and Error Handling

The initialization function should return @code{0} if no error occurred and
@code{1} otherwise. If an error occurs, @code{xxx_init()} should store a
@cindex UDFs, compiling
@cindex installing, user-defined functions
@node UDF compiling, , UDF return values, Adding UDF
@subsection Compiling and Installing User-definable Functions

Files implementing UDFs must be compiled and installed on the host where the
server runs. This process is described below for the example UDF file
four numbers.
@end itemize

A dynamically loadable file should be compiled as a sharable object file,
using a command something like this:

@example
@cindex native functions, adding
@cindex functions, native, adding
@node Adding native function, , Adding UDF, Adding functions
@section Adding a New Native Function

The procedure for adding a new native function is described below. Note that
you cannot add native functions to a binary distribution because the procedure
@cindex adding, procedures
@cindex new procedures, adding
@node Adding procedures, ODBC, Adding functions, Top
@chapter Adding New Procedures to MySQL

In @strong{MySQL}, you can define a procedure in C++ that can access and
modify the data in a query before it is sent to the client. The modification
can be done at the row-by-row or @code{GROUP BY} level.

We have created an example procedure in @strong{MySQL} Version 3.23 to
show you what can be done.
@end menu

@node procedure analyse, Writing a procedure, Adding procedures, Adding procedures
@section Procedure Analyse

@code{analyse([max elements,[max memory]])}

This procedure is defined in @file{sql/sql_analyse.cc}. It
examines the result from your query and returns an analysis of the
results:

@itemize @bullet
@item
@end example

@node Writing a procedure, , procedure analyse, Adding procedures
@section Writing a Procedure

For the moment, the only documentation for this is the source.

You can find all information about procedures by examining the following files:
program.

@node Which ODBC OS, ODBC administrator, ODBC, ODBC
@section Operating Systems Supported by MyODBC

@strong{MyODBC} is a 32-bit ODBC (2.50) level 0 (with level 1 and level
2 features) driver for connecting an ODBC-aware application to
@strong{MySQL}. @strong{MyODBC} works on Windows95, Windows98, NT, and
on most Unix platforms.

Normally you only need to install @strong{MyODBC} on Windows machines.
ColdFusion that is running on the Unix machine and uses ODBC to connect
to the databases.

@strong{MyODBC} is in the public domain, and you can find the newest version
at @uref{http://www.mysql.com/downloads/api-myodbc.html}.
@strong{MyODBC}:

@example
An error occurred while copying C:\WINDOWS\SYSTEM\MFC30.DLL. Restart
Windows and try installing again (before running any applications which
use ODBC)
@end example

The problem in this case is that some other program is using ODBC, and
because of how Windows is designed, you cannot in this case install new
ODBC drivers with Microsoft's ODBC setup program. The solution to this
is to reboot your computer in ``safe mode'' (you can choose this by
pressing F8 just before your machine starts Windows during rebooting),
@end itemize

If, after you have examined all other possibilities, you have
concluded that it's the @strong{MySQL} server or a @strong{MySQL} client
that is causing the problem, it's time to do a bug report for our
mailing list or our support team. In the bug report, try to describe
in detail how the system is behaving and what you think is
@item A way to extend the SQL to handle new key types (like R-trees)
@end table

@strong{MySQL}, on the other hand, supports many ANSI SQL constructs
that @code{PostgreSQL} doesn't support. Most of these can be found at the
@uref{http://www.mysql.com/information/crash-me.php, @code{crash-me} web page}.
Rewrote the table handler to use classes. This introduces a lot of new code,
but will make table handling faster and better.
@item
Added patch by Sasha for user-defined variables.
@item
Changed @code{FLOAT} and @code{DOUBLE} (without any length modifiers) to
no longer be fixed decimal-point numbers.
@item
@code{OPTIMIZE TABLE tbl_name} can now be used to reclaim disk space
after many deletes. Currently, this uses @code{ALTER TABLE} to
regenerate the table, but in the future it will use an integrated
@code{isamchk} for more speed.
@item
Upgraded @code{libtool} to make the configure script more portable.
@end example

Each field consists of a mandatory flag character followed by
an optional "," and a comma-separated list of modifiers:

@example
flag[,modifier,modifier,...,modifier]
@node Unireg, GPL license, Regexp, Top
@appendix What is Unireg?

Unireg is our tty interface builder, but it uses a low-level connection
to our ISAM (which is used by @strong{MySQL}) and because of this it is
very quick. It has existed since 1979 (on Unix in C since ~1986).