Commit 7f8ee99d authored by monty@donna.mysql.com

merge

parents 3b3bdf7f 110a5f2d
jani@janikt.pp.saunalahti.fi
sasha@mysql.sashanet.com
monty@donna.mysql.com
@@ -712,7 +712,7 @@ Solving some common problems with MySQL
* Log Replication:: Database replication with update log
* Backup:: Database backups
* Update log:: The update log
* Binary log:: The binary log
* Slow query log:: Log of slow queries
* Multiple servers:: Running multiple @strong{MySQL} servers on the same machine

@@ -9117,6 +9117,32 @@ bin\mysqld-nt --remove # remove MySQL as a service
By invoking @code{mysqld} directly.
@end itemize
When the @code{mysqld} daemon starts up, it changes directory to the
data directory. This is where it expects to write log files and the pid
(process ID) file, and where it expects to find databases.

The data directory location is hardwired in when the distribution is
compiled. However, if @code{mysqld} expects to find the data directory
somewhere other than where it really is on your system, it will not work
properly. If you have problems with incorrect paths, you can find out
what options @code{mysqld} allows and what the default path settings are by
invoking @code{mysqld} with the @code{--help} option. You can override the
defaults by specifying the correct pathnames as command-line arguments to
@code{mysqld}. (These options can be used with @code{safe_mysqld} as well.)

Normally you should need to tell @code{mysqld} only the base directory under
which @strong{MySQL} is installed. You can do this with the @code{--basedir}
option. You can also use @code{--help} to check the effect of changing path
options (note that @code{--help} @emph{must} be the final option of the
@code{mysqld} command). For example:

@example
shell> EXECDIR/mysqld --basedir=/usr/local --help
@end example

Once you determine the path settings you want, start the server without
the @code{--help} option.
Whichever method you use to start the server, if it fails to start up
correctly, check the log file to see if you can find out why. Log files
are located in the data directory (typically

@@ -9146,32 +9172,6 @@ the old Berkeley DB log file from the database directory to some other
place, where you can later examine these. The log files are named
@file{log.0000000001}, where the number will increase over time.
When the @code{mysqld} daemon starts up, it changes directory to the
data directory. This is where it expects to write log files and the pid
(process ID) file, and where it expects to find databases.

The data directory location is hardwired in when the distribution is
compiled. However, if @code{mysqld} expects to find the data directory
somewhere other than where it really is on your system, it will not work
properly. If you have problems with incorrect paths, you can find out
what options @code{mysqld} allows and what the default path settings are by
invoking @code{mysqld} with the @code{--help} option. You can override the
defaults by specifying the correct pathnames as command-line arguments to
@code{mysqld}. (These options can be used with @code{safe_mysqld} as well.)

Normally you should need to tell @code{mysqld} only the base directory under
which @strong{MySQL} is installed. You can do this with the @code{--basedir}
option. You can also use @code{--help} to check the effect of changing path
options (note that @code{--help} @emph{must} be the final option of the
@code{mysqld} command). For example:

@example
shell> EXECDIR/mysqld --basedir=/usr/local --help
@end example

Once you determine the path settings you want, start the server without
the @code{--help} option.
If you get the following error, it means that some other program (or another
@code{mysqld} server) is already using the TCP/IP port or socket
@code{mysqld} is trying to use:

@@ -9222,6 +9222,10 @@ This will not run in the background and it should also write a trace in
@file{\mysqld.trace}, which may help you determine the source of your
problems. @xref{Windows}.

If you are using BDB (Berkeley DB) tables, you should familiarize
yourself with the different BDB-specific startup options. @xref{BDB start}.

@node Automatic start, Command-line options, Starting server, Post-installation
@subsection Starting and Stopping MySQL Automatically
@cindex starting, the server automatically

@@ -9747,6 +9751,10 @@ Version 3.23:
@itemize @bullet
@item
If you do a @code{DROP DATABASE} on a symbolically linked database, both
the link and the original database are deleted. (This didn't happen in
3.22 because configure didn't detect the @code{readlink} system call.)
@item
@code{OPTIMIZE TABLE} now only works for @strong{MyISAM} tables.
For other table types, you can use @code{ALTER TABLE} to optimize the table.
During @code{OPTIMIZE TABLE} the table is now locked from other threads.
@@ -17464,7 +17472,9 @@ DROP DATABASE [IF EXISTS] db_name
@end example

@code{DROP DATABASE} drops all tables in the database and deletes the
database. If you do a @code{DROP DATABASE} on a symbolically linked
database, both the link and the original database are deleted. @strong{Be
VERY careful with this command!}
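The symlink behavior can be sketched from the shell; the paths and the
database name here are hypothetical:

```shell
# Hypothetical setup: the datadir is /usr/local/mysql/data and the real
# database directory lives on another disk, symlinked into the datadir.
ln -s /big-disk/archive /usr/local/mysql/data/archive
mysql -e "DROP DATABASE archive"
# After the drop, BOTH /usr/local/mysql/data/archive (the link) and
# /big-disk/archive (the real directory) are gone.
```

This is a sketch only; it assumes a running server with the hypothetical
database @code{archive}.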
@code{DROP DATABASE} returns the number of files that were removed from
the database directory. Normally, this is three times the number of
@@ -18261,10 +18271,13 @@ Deleted records are maintained in a linked list and subsequent @code{INSERT}
operations reuse old record positions. You can use @code{OPTIMIZE TABLE} to
reclaim the unused space and to defragment the data file.

For the moment @code{OPTIMIZE TABLE} only works on @strong{MyISAM} and
@code{BDB} tables. For @code{BDB} tables, @code{OPTIMIZE TABLE} is
currently mapped to @code{ANALYZE TABLE}. @xref{ANALYZE TABLE}.

You can get @code{OPTIMIZE TABLE} to work on other table types by starting
@code{mysqld} with @code{--skip-new} or @code{--safe-mode}, but in this
case @code{OPTIMIZE TABLE} is just mapped to @code{ALTER TABLE}.
@code{OPTIMIZE TABLE} works the following way:
@itemize @bullet

@@ -18277,7 +18290,7 @@ If the statistics are not up to date (and the repair couldn't be done
by sorting the index), update them.
@end itemize

@code{OPTIMIZE TABLE} for @code{MyISAM} tables is equivalent to running
@code{myisamchk --quick --check-changed-tables --sort-index --analyze}
on the table.
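As a sketch of both forms (the table name @code{t1} and the paths are
hypothetical):

```shell
# Online, through the server:
mysql -e "OPTIMIZE TABLE t1" test
# Offline equivalent for a MyISAM table, run in the database directory
# while the server is not using the table:
cd /usr/local/mysql/data/test
myisamchk --quick --check-changed-tables --sort-index --analyze t1.MYI
```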
@@ -18294,11 +18307,12 @@ CHECK TABLE tbl_name[,tbl_name...] [option [option...]]
option = QUICK | FAST | EXTEND | CHANGED
@end example

@code{CHECK TABLE} only works on @code{MyISAM} and @code{BDB} tables. On
@code{MyISAM} tables it's the same thing as running @code{myisamchk -m
table_name} on the table.

Checks the table(s) for errors. For @code{MyISAM} tables the key
statistics are updated. The command returns a table with the following
columns:

@multitable @columnfractions .35 .65
@item @strong{Column} @tab @strong{Value}
@@ -18325,6 +18339,9 @@ The different check types stand for the following:
@item @code{EXTENDED} @tab Do a full key lookup for all keys for each row. This ensures that the table is 100% consistent, but will take a long time!
@end multitable

Note that for BDB tables the different check options do not affect the
check in any way!

You can combine check options as in:
@example

@@ -18423,7 +18440,9 @@ ANALYZE TABLE tbl_name[,tbl_name...]
@end example

Analyze and store the key distribution for the table. During the
analysis the table is locked with a read lock. This works on
@code{MyISAM} and @code{BDB} tables.

This is equivalent to running @code{myisamchk -a} on the table.

@strong{MySQL} uses the stored key distribution to decide in which order
@@ -20108,16 +20127,15 @@ If @code{key_reads} is big, then your @code{key_cache} is probably too
small. The cache miss rate can be calculated as
@code{key_reads}/@code{key_read_requests}.
@item
If @code{Handler_read_rnd} is big, then you probably have a lot of
queries that require @strong{MySQL} to scan whole tables or you have
joins that don't use keys properly.
@item
If @code{Created_tmp_tables} or @code{Sort_merge_passes} are high then
your @code{mysqld} @code{sort_buffer} variable is probably too small.
@item
@code{Created_tmp_files} doesn't count the files needed to handle temporary
tables.
@end itemize
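As a worked example of the @code{key_reads}/@code{key_read_requests}
ratio above (the status numbers are hypothetical): @code{key_reads}
counts lookups that had to go to disk, so a low ratio means the key
cache is working well.

```shell
# Suppose SHOW STATUS reports Key_read_requests=100000 and Key_reads=1000.
awk 'BEGIN { printf "key cache miss rate: %.3f\n", 1000 / 100000 }'
# prints: key cache miss rate: 0.010
```

A miss rate in the order of 0.01 (1%) or less is usually fine; if it is
much higher, consider increasing @code{key_buffer_size}.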
@node SHOW VARIABLES, SHOW PROCESSLIST, SHOW STATUS, SHOW

@@ -20143,6 +20161,7 @@ differ somewhat:
| bdb_home          | /usr/local/mysql/data/ |
| bdb_logdir        |                        |
| bdb_tmpdir        | /tmp/                  |
| binlog_cache_size | 32768                  |
| character_set     | latin1                 |
| character_sets    | latin1                 |
| connect_timeout   | 5                      |
@@ -20239,7 +20258,7 @@ cache.
@item @code{bdb_home}
The value of the @code{--bdb-home} option.

@item @code{bdb_max_lock}
The maximum number of locks (1000 by default) you can have active on a
BDB table. You should increase this if you get errors of type @code{bdb:
Lock table is out of available locks} or @code{Got error 12 from ...}

@@ -20249,9 +20268,17 @@ a lot of rows to calculate the query.
@item @code{bdb_logdir}
The value of the @code{--bdb-logdir} option.

@item @code{bdb_shared_data}
Is @code{ON} if you are using @code{--bdb-shared-data}.

@item @code{bdb_tmpdir}
The value of the @code{--bdb-tmpdir} option.

@item @code{binlog_cache_size}
The size of the cache to hold the SQL statements for the binary log
during a transaction. If you often use big, multi-statement
transactions you can increase this to get better performance.
@xref{COMMIT}.

@item @code{character_set}
The default character set.
@@ -20390,6 +20417,11 @@ wrong) packets. You must increase this value if you are using big
@code{BLOB} columns. It should be as big as the biggest @code{BLOB} you want
to use.

@item @code{max_binlog_cache_size}
If a multi-statement transaction requires more than this amount of
memory, you will get the error "Multi-statement transaction required
more than 'max_binlog_cache_size' bytes of storage".

@item @code{max_connections}
The number of simultaneous clients allowed. Increasing this value increases
the number of file descriptors that @code{mysqld} requires. See below for
@@ -21014,6 +21046,21 @@ table you will get an error (@code{ER_WARNING_NOT_COMPLETE_ROLLBACK}) as
a warning. All transaction-safe tables will be restored but any
non-transactional table will not change.

If you are using @code{BEGIN} or @code{SET AUTO_COMMIT=0}, you should
use the @strong{MySQL} binary log for backups instead of the old update
log. The transaction is stored in the binary log in one chunk, during
@code{COMMIT}; this ensures that transactions that are rolled back are
not stored. @xref{Binary log}.

The following commands automatically end a transaction (as if you had
done a @code{COMMIT} before executing the command):

@multitable @columnfractions .33 .33 .33
@item @code{ALTER TABLE} @tab @code{BEGIN} @tab @code{CREATE INDEX}
@item @code{DROP DATABASE} @tab @code{DROP TABLE} @tab @code{RENAME TABLE}
@item @code{TRUNCATE}
@end multitable
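A sketch of the implicit commit (the database @code{test}, table
@code{t1}, and column names are hypothetical):

```shell
# Inside one connection: the ALTER TABLE ends the open transaction,
# so the preceding INSERT can no longer be rolled back.
mysql test <<'EOF'
SET AUTO_COMMIT=0;
INSERT INTO t1 VALUES (1);
ALTER TABLE t1 ADD COLUMN b INT;   -- implicit COMMIT happens here
ROLLBACK;                          -- too late; the inserted row stays
EOF
```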
@findex LOCK TABLES
@findex UNLOCK TABLES
@node LOCK TABLES, SET OPTION, COMMIT, Reference
@@ -22511,11 +22558,12 @@ BDB tables:
@item @code{--bdb-home=directory} @tab Base directory for BDB tables. This should be the same directory you use for --datadir.
@item @code{--bdb-lock-detect=#} @tab Berkeley lock detect. One of (DEFAULT, OLDEST, RANDOM, or YOUNGEST).
@item @code{--bdb-logdir=directory} @tab Berkeley DB log file directory.
@item @code{--bdb-no-sync} @tab Don't synchronously flush logs.
@item @code{--bdb-recover} @tab Start Berkeley DB in recover mode.
@item @code{--bdb-shared-data} @tab Start Berkeley DB in multi-process mode (don't use @code{DB_PRIVATE} when initializing Berkeley DB).
@item @code{--bdb-tmpdir=directory} @tab Berkeley DB tempfile name.
@item @code{--skip-bdb} @tab Don't use Berkeley DB.
@item @code{-O bdb_max_lock=1000} @tab Set the maximum number of locks possible. @xref{SHOW VARIABLES}.
@end multitable

If you use @code{--skip-bdb}, @strong{MySQL} will not initialize the
@@ -22526,13 +22574,17 @@ Normally you should start mysqld with @code{--bdb-recover} if you intend
to use BDB tables. This may, however, give you problems when you try to
start mysqld if the BDB log files are corrupted. @xref{Starting server}.

With @code{bdb_max_lock} you can specify the maximum number of locks
(1000 by default) you can have active on a BDB table. You should
increase this if you get errors of type @code{bdb: Lock table is out of
available locks} or @code{Got error 12 from ...} when you do long
transactions or when @code{mysqld} has to examine a lot of rows to
calculate the query.
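For example, to raise the lock limit at startup (the value 10000 is
illustrative only):

```shell
# Start the server with BDB recovery enabled and a larger lock table:
safe_mysqld --bdb-recover -O bdb_max_lock=10000 &
```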
You may also want to change @code{binlog_cache_size} and
@code{max_binlog_cache_size} if you are using big multi-statement
transactions. @xref{COMMIT}.

@node BDB characteristic, BDB TODO, BDB start, BDB
@subsection Some characteristics of @code{BDB} tables:
@@ -22578,6 +22630,10 @@ tables. In other words, the key information will take a little more
space in @code{BDB} tables compared to MyISAM tables which don't use
@code{PACK_KEYS=0}.
@item
There are often holes in a BDB table to allow you to insert new rows
between different keys. This makes BDB tables somewhat larger than
MyISAM tables.
@item
@strong{MySQL} performs a checkpoint each time a new Berkeley DB log
file is started, and removes any log files that are not needed for
current transactions. One can also run @code{FLUSH LOGS} at any time

@@ -22585,6 +22641,17 @@ to checkpoint the Berkeley DB tables.
For disaster recovery, one should use table backups plus MySQL's binary
log. @xref{Backup}.
@item
The optimizer needs to know an approximation of the number of rows in
the table. @strong{MySQL} solves this by counting inserts and
maintaining this count in a separate segment in each BDB table. If you
don't do a lot of @code{DELETE}s or @code{ROLLBACK}s, this number should
be accurate enough for the @strong{MySQL} optimizer, but as @strong{MySQL}
only stores the number on close, it may be wrong if @strong{MySQL} dies
unexpectedly. It should not be fatal even if this number is not 100%
correct. You can update the number of rows by executing @code{ANALYZE
TABLE} or @code{OPTIMIZE TABLE}. @xref{ANALYZE TABLE}. @xref{OPTIMIZE
TABLE}.
@end itemize

@node BDB TODO, BDB errors, BDB characteristic, BDB
@@ -25367,7 +25434,8 @@ server-id=<some unique number between 1 and 2^32-1>
@end example

@code{server-id} must be different for each server participating in
replication. If you don't specify a server-id, it will be set to 1 if
you have not defined @code{master-host}, else it will be set to 2.

@item Restart the slave(s).
@@ -26341,6 +26409,7 @@ like this:
Possible variables for option --set-variable (-O) are:
back_log                current value: 5
bdb_cache_size          current value: 1048540
binlog_cache_size       current value: 32768
connect_timeout         current value: 5
delayed_insert_timeout  current value: 300
delayed_insert_limit    current value: 100

@@ -26352,6 +26421,7 @@ key_buffer_size current value: 1048540
lower_case_table_names  current value: 0
long_query_time         current value: 10
max_allowed_packet      current value: 1048576
max_binlog_cache_size   current value: 4294967295
max_connections         current value: 100
max_connect_errors      current value: 10
max_delayed_threads     current value: 20
@@ -33516,7 +33586,8 @@ and the crash.
@node Binary log, Slow query log, Update log, Common problems
@section The Binary Log

In the future the binary log will replace the update log, so we
recommend that you switch to this log format as soon as possible!

The binary log contains all information that is available in the update
log in a more efficient format. It also contains information about how long
@@ -33562,6 +33633,20 @@ direct from a remote mysql server!
@code{mysqlbinlog --help} will give you more information about how to use
this program!
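For example (the binary log file name @file{hostname-bin.001} is
hypothetical):

```shell
# Print the statements stored in a binary log:
mysqlbinlog hostname-bin.001
# Pipe them back into a server to re-execute them:
mysqlbinlog hostname-bin.001 | mysql
```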
If you are using @code{BEGIN} or @code{SET AUTO_COMMIT=0}, you must use
the @strong{MySQL} binary log for backups instead of the old update log.

All updates (@code{UPDATE}, @code{DELETE} or @code{INSERT}) that change
a transactional table (like BDB tables) are cached until a @code{COMMIT}.
Any updates to a non-transactional table are stored in the binary log at
once. Every thread allocates, on start, a buffer of
@code{binlog_cache_size} bytes to buffer queries. If a query is bigger
than this, the thread opens a temporary file to handle the bigger cache.
The temporary file is deleted when the thread ends.

@code{max_binlog_cache_size} can be used to restrict the total size used
to cache a multi-statement transaction.
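A startup sketch setting both cache variables (the values are
illustrative only):

```shell
# Enable the binary log and size the per-thread transaction cache:
safe_mysqld --log-bin -O binlog_cache_size=65536 \
            -O max_binlog_cache_size=4194304 &
```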
@cindex slow query log
@cindex files, slow query log
@node Slow query log, Multiple servers, Binary log, Common problems

@@ -39468,9 +39553,35 @@ though, so Version 3.23 is not released as a stable version yet.
@appendixsubsec Changes in release 3.23.29
@itemize @bullet
@item
Fixed a bug with @code{HEAP} type tables; the variable
@code{max_heap_table_size} wasn't used. Now either @code{MAX_ROWS} or
@code{max_heap_table_size} can be used to limit the size of a @code{HEAP}
type table.
@item
Renamed variable @code{bdb_lock_max} to @code{bdb_max_lock}.
@item
Changed the default server-id to 1 for masters and 2 for slaves
to make it easier to use the binary log.
@item
Added @code{CHECK}, @code{ANALYZE} and @code{OPTIMIZE} of BDB tables.
@item
The number of rows is now stored in BDB tables; this helps to optimize
queries when we need an approximation of the number of rows.
@item
@code{DROP TABLE}, @code{RENAME TABLE}, @code{CREATE INDEX} and
@code{DROP INDEX} are now transaction endpoints.
@item
Added option @code{--bdb-shared-data} to @code{mysqld}.
@item
Added variables @code{binlog_cache_size} and @code{max_binlog_cache_size} to
@code{mysqld}.
@item
If you do a @code{DROP DATABASE} on a symbolically linked database, both
the link and the original database are deleted.
@item
Fixed that @code{DROP DATABASE} works on OS/2.
@item
New client @code{mysql_multi_mysqld}. @xref{mysql_multi_mysqld}.
@item
Fixed bug when doing a @code{SELECT DISTINCT ... table1 LEFT JOIN
table2..} when table2 was empty.

@@ -44120,6 +44231,10 @@ Fail safe replication.
@item
Subqueries.
@code{select id from t where grp in (select grp from g where u > 100)}
@item
@code{INSERT SQL_CONCURRENT ...}; This will force the insert to happen at the
end of the data file if the table is in use by a @code{SELECT}, to allow
concurrent inserts.
@item
Don't allow more than a defined number of threads to run MyISAM recover
at the same time.
@item
@@ -1239,7 +1239,7 @@ AC_CHECK_FUNCS(alarm bmove \
chsize ftruncate rint finite fpsetmask fpresetsticky\
cuserid fcntl fconvert poll \
getrusage getpwuid getcwd getrlimit getwd index stpcpy locking longjmp \
perror pread realpath readlink rename \
socket strnlen madvise mkstemp \
strtol strtoul strtoull snprintf tempnam thr_setconcurrency \
gethostbyaddr_r gethostbyname_r getpwnam \
...
@@ -197,4 +197,5 @@
#define ER_CRASHED_ON_USAGE 1194
#define ER_CRASHED_ON_REPAIR 1195
#define ER_WARNING_NOT_COMPLETE_ROLLBACK 1196
#define ER_TRANS_CACHE_FULL 1197
#define ER_ERROR_MESSAGES 198
# This file describes how to run benchmarks and crash-me with FrontBase

Installed components:
- FrontBase-2.1-8.rpm
  (had to run with rpm -i --nodeps; the rpm wanted libreadline.so.4.0,
  but only libreadline.so.4.1 was available)
- DBD-FB-0.03.tar.gz
  (perl Makefile.PL; make; make test; make install)
- DBI-1.14.tar.gz
  (perl Makefile.PL; make; make test; make install)
- Msql-Mysql-modules-1.2215.tar.gz
  (perl Makefile.PL; make; make test; make install)

After installations:
- cd /etc/rc.d
  FBWeb start
  FrontBase start
- cd /usr/local/mysql/sql-bench
- FBExec &
- FrontBase test
crash-me:

There was a lot of trouble running crash-me; FrontBase core dumped
several tens of times while crash-me was trying to determine the
maximum values in different areas.

The crash-me program itself also needed to be tuned quite a lot for
FrontBase. There were also some bugs/missing features in the crash-me
program, which are now fixed in the new version.

After we finally got the limits, we ran the benchmarks.
benchmarks:

Problems again. FrontBase core dumped with every part of the benchmark
(8/8) tests. After a lot of fine-tuning we got the benchmarks to run
through. The maximum values had to be dropped down a lot in many of the
tests.

The benchmarks were run with the following command:

perl run-all-tests --server=frontbase --host=prima \
  --cmp=frontbase,mysql --tcpip --log
@@ -37,12 +37,15 @@
  transaction ?)
- When using ALTER TABLE IGNORE, we should not start a transaction, but do
  everything without transactions.
- When we do a rollback, we need to subtract the number of changed rows
  from the updated tables.

Testing of:
- LOCK TABLES
- BLOBS
- Mark tables that participate in a transaction so that they are not
  closed during the transaction. We need to test what happens if
  MySQL closes a table that is updated by a not yet committed transaction.
*/
@@ -58,19 +61,27 @@
#include <hash.h>
#include "ha_berkeley.h"
#include "sql_manager.h"
#include <stdarg.h>

#define HA_BERKELEY_ROWS_IN_TABLE 10000 /* to get optimization right */
#define HA_BERKELEY_RANGE_COUNT 100
#define HA_BERKELEY_MAX_ROWS 10000000 /* Max rows in table */
/* extra rows for estimate_number_of_rows() */
#define HA_BERKELEY_EXTRA_ROWS 100
/* Bits for share->status */
#define STATUS_PRIMARY_KEY_INIT 1
#define STATUS_ROW_COUNT_INIT 2
#define STATUS_BDB_ANALYZE 4
const char *ha_berkeley_ext=".db"; const char *ha_berkeley_ext=".db";
bool berkeley_skip=0; bool berkeley_skip=0,berkeley_shared_data=0;
u_int32_t berkeley_init_flags=0,berkeley_lock_type=DB_LOCK_DEFAULT; u_int32_t berkeley_init_flags= DB_PRIVATE, berkeley_lock_type=DB_LOCK_DEFAULT;
ulong berkeley_cache_size; ulong berkeley_cache_size;
char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir; char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
long berkeley_lock_scan_time=0; long berkeley_lock_scan_time=0;
ulong berkeley_trans_retry=5; ulong berkeley_trans_retry=5;
ulong berkeley_lock_max; ulong berkeley_max_lock;
pthread_mutex_t bdb_mutex; pthread_mutex_t bdb_mutex;
static DB_ENV *db_env; static DB_ENV *db_env;
...@@ -86,11 +97,13 @@ TYPELIB berkeley_lock_typelib= {array_elements(berkeley_lock_names),"", ...@@ -86,11 +97,13 @@ TYPELIB berkeley_lock_typelib= {array_elements(berkeley_lock_names),"",
static void berkeley_print_error(const char *db_errpfx, char *buffer); static void berkeley_print_error(const char *db_errpfx, char *buffer);
static byte* bdb_get_key(BDB_SHARE *share,uint *length, static byte* bdb_get_key(BDB_SHARE *share,uint *length,
my_bool not_used __attribute__((unused))); my_bool not_used __attribute__((unused)));
static BDB_SHARE *get_share(const char *table_name); static BDB_SHARE *get_share(const char *table_name, TABLE *table);
static void free_share(BDB_SHARE *share); static void free_share(BDB_SHARE *share, TABLE *table);
static void update_status(BDB_SHARE *share, TABLE *table);
static void berkeley_noticecall(DB_ENV *db_env, db_notices notice); static void berkeley_noticecall(DB_ENV *db_env, db_notices notice);
/* General functions */ /* General functions */
bool berkeley_init(void) bool berkeley_init(void)
...@@ -121,14 +134,14 @@ bool berkeley_init(void) ...@@ -121,14 +134,14 @@ bool berkeley_init(void)
db_env->set_cachesize(db_env, 0, berkeley_cache_size, 0); db_env->set_cachesize(db_env, 0, berkeley_cache_size, 0);
db_env->set_lk_detect(db_env, berkeley_lock_type); db_env->set_lk_detect(db_env, berkeley_lock_type);
if (berkeley_lock_max) if (berkeley_max_lock)
db_env->set_lk_max(db_env, berkeley_lock_max); db_env->set_lk_max(db_env, berkeley_max_lock);
if (db_env->open(db_env, if (db_env->open(db_env,
berkeley_home, berkeley_home,
berkeley_init_flags | DB_INIT_LOCK | berkeley_init_flags | DB_INIT_LOCK |
DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN |
DB_CREATE | DB_THREAD | DB_PRIVATE, 0666)) DB_CREATE | DB_THREAD, 0666))
{ {
db_env->close(db_env,0); db_env->close(db_env,0);
db_env=0; db_env=0;
...@@ -345,7 +358,7 @@ berkeley_key_cmp(TABLE *table, KEY *key_info, const char *key, uint key_length) ...@@ -345,7 +358,7 @@ berkeley_key_cmp(TABLE *table, KEY *key_info, const char *key, uint key_length)
key+=length; key+=length;
key_length-=length; key_length-=length;
} }
return 0; return 0; // Identical keys
} }
...@@ -387,7 +400,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked) ...@@ -387,7 +400,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
} }
/* Init table lock structure */ /* Init table lock structure */
if (!(share=get_share(name))) if (!(share=get_share(name,table)))
{ {
my_free(rec_buff,MYF(0)); my_free(rec_buff,MYF(0));
my_free(alloc_ptr,MYF(0)); my_free(alloc_ptr,MYF(0));
...@@ -397,7 +410,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked) ...@@ -397,7 +410,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
if ((error=db_create(&file, db_env, 0))) if ((error=db_create(&file, db_env, 0)))
{ {
free_share(share); free_share(share,table);
my_free(rec_buff,MYF(0)); my_free(rec_buff,MYF(0));
my_free(alloc_ptr,MYF(0)); my_free(alloc_ptr,MYF(0));
my_errno=error; my_errno=error;
...@@ -413,7 +426,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked) ...@@ -413,7 +426,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
2 | 4), 2 | 4),
"main", DB_BTREE, open_mode,0)))) "main", DB_BTREE, open_mode,0))))
{ {
free_share(share); free_share(share,table);
my_free(rec_buff,MYF(0)); my_free(rec_buff,MYF(0));
my_free(alloc_ptr,MYF(0)); my_free(alloc_ptr,MYF(0));
my_errno=error; my_errno=error;
...@@ -459,7 +472,6 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked) ...@@ -459,7 +472,6 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
} }
} }
} }
/* Calculate pack_length of primary key */ /* Calculate pack_length of primary key */
if (!hidden_primary_key) if (!hidden_primary_key)
{ {
...@@ -470,12 +482,9 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked) ...@@ -470,12 +482,9 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
ref_length+= key_part->field->max_packed_col_length(key_part->length); ref_length+= key_part->field->max_packed_col_length(key_part->length);
fixed_length_primary_key= fixed_length_primary_key=
(ref_length == table->key_info[primary_key].key_length); (ref_length == table->key_info[primary_key].key_length);
share->status|=STATUS_PRIMARY_KEY_INIT;
} }
else get_status();
{
if (!share->primary_key_inited)
update_auto_primary_key();
}
DBUG_RETURN(0); DBUG_RETURN(0);
} }
...@@ -491,7 +500,7 @@ int ha_berkeley::close(void) ...@@ -491,7 +500,7 @@ int ha_berkeley::close(void)
if (key_file[i] && (error=key_file[i]->close(key_file[i],0))) if (key_file[i] && (error=key_file[i]->close(key_file[i],0)))
result=error; result=error;
} }
free_share(share); free_share(share,table);
my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR)); my_free(rec_buff,MYF(MY_ALLOW_ZERO_PTR));
my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR)); my_free(alloc_ptr,MYF(MY_ALLOW_ZERO_PTR));
if (result) if (result)
...@@ -632,8 +641,8 @@ void ha_berkeley::unpack_key(char *record, DBT *key, uint index) ...@@ -632,8 +641,8 @@ void ha_berkeley::unpack_key(char *record, DBT *key, uint index)
This will never fail as the key buffer is preallocated. This will never fail as the key buffer is preallocated.
*/ */
DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff, DBT *ha_berkeley::create_key(DBT *key, uint keynr, char *buff,
const byte *record) const byte *record, int key_length)
{ {
bzero((char*) key,sizeof(*key)); bzero((char*) key,sizeof(*key));
...@@ -647,11 +656,11 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff, ...@@ -647,11 +656,11 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
KEY *key_info=table->key_info+keynr; KEY *key_info=table->key_info+keynr;
KEY_PART_INFO *key_part=key_info->key_part; KEY_PART_INFO *key_part=key_info->key_part;
KEY_PART_INFO *end=key_part+key_info->key_parts; KEY_PART_INFO *end=key_part+key_info->key_parts;
DBUG_ENTER("pack_key"); DBUG_ENTER("create_key");
key->data=buff; key->data=buff;
for ( ; key_part != end ; key_part++) for ( ; key_part != end && key_length > 0; key_part++)
{ {
if (key_part->null_bit) if (key_part->null_bit)
{ {
...@@ -666,6 +675,7 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff, ...@@ -666,6 +675,7 @@ DBT *ha_berkeley::pack_key(DBT *key, uint keynr, char *buff,
} }
buff=key_part->field->pack(buff,record + key_part->offset, buff=key_part->field->pack(buff,record + key_part->offset,
key_part->length); key_part->length);
key_length-=key_part->length;
} }
key->size= (buff - (char*) key->data); key->size= (buff - (char*) key->data);
DBUG_DUMP("key",(char*) key->data, key->size); DBUG_DUMP("key",(char*) key->data, key->size);
...@@ -729,7 +739,7 @@ int ha_berkeley::write_row(byte * record) ...@@ -729,7 +739,7 @@ int ha_berkeley::write_row(byte * record)
if (table->keys == 1) if (table->keys == 1)
{ {
error=file->put(file, transaction, pack_key(&prim_key, primary_key, error=file->put(file, transaction, create_key(&prim_key, primary_key,
key_buff, record), key_buff, record),
&row, key_type[primary_key]); &row, key_type[primary_key]);
} }
...@@ -742,7 +752,7 @@ int ha_berkeley::write_row(byte * record) ...@@ -742,7 +752,7 @@ int ha_berkeley::write_row(byte * record)
if ((error=txn_begin(db_env, transaction, &sub_trans, 0))) if ((error=txn_begin(db_env, transaction, &sub_trans, 0)))
break; break;
DBUG_PRINT("trans",("starting subtransaction")); DBUG_PRINT("trans",("starting subtransaction"));
if (!(error=file->put(file, sub_trans, pack_key(&prim_key, primary_key, if (!(error=file->put(file, sub_trans, create_key(&prim_key, primary_key,
key_buff, record), key_buff, record),
&row, key_type[primary_key]))) &row, key_type[primary_key])))
{ {
...@@ -751,7 +761,7 @@ int ha_berkeley::write_row(byte * record) ...@@ -751,7 +761,7 @@ int ha_berkeley::write_row(byte * record)
if (keynr == primary_key) if (keynr == primary_key)
continue; continue;
if ((error=key_file[keynr]->put(key_file[keynr], sub_trans, if ((error=key_file[keynr]->put(key_file[keynr], sub_trans,
pack_key(&key, keynr, key_buff2, create_key(&key, keynr, key_buff2,
record), record),
&prim_key, key_type[keynr]))) &prim_key, key_type[keynr])))
{ {
...@@ -783,6 +793,8 @@ int ha_berkeley::write_row(byte * record) ...@@ -783,6 +793,8 @@ int ha_berkeley::write_row(byte * record)
} }
if (error == DB_KEYEXIST) if (error == DB_KEYEXIST)
error=HA_ERR_FOUND_DUPP_KEY; error=HA_ERR_FOUND_DUPP_KEY;
else if (!error)
changed_rows++;
DBUG_RETURN(error); DBUG_RETURN(error);
} }
...@@ -838,7 +850,7 @@ int ha_berkeley::update_primary_key(DB_TXN *trans, bool primary_key_changed, ...@@ -838,7 +850,7 @@ int ha_berkeley::update_primary_key(DB_TXN *trans, bool primary_key_changed,
{ {
// Primary key changed or we are updating a key that can have duplicates. // Primary key changed or we are updating a key that can have duplicates.
// Delete the old row and add a new one // Delete the old row and add a new one
pack_key(&old_key, primary_key, key_buff2, old_row); create_key(&old_key, primary_key, key_buff2, old_row);
if ((error=remove_key(trans, primary_key, old_row, (DBT *) 0, &old_key))) if ((error=remove_key(trans, primary_key, old_row, (DBT *) 0, &old_key)))
DBUG_RETURN(error); // This should always succeed DBUG_RETURN(error); // This should always succeed
if ((error=pack_row(&row, new_row, 0))) if ((error=pack_row(&row, new_row, 0)))
...@@ -893,10 +905,10 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row) ...@@ -893,10 +905,10 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row)
} }
else else
{ {
pack_key(&prim_key, primary_key, key_buff, new_row); create_key(&prim_key, primary_key, key_buff, new_row);
if ((primary_key_changed=key_cmp(primary_key, old_row, new_row))) if ((primary_key_changed=key_cmp(primary_key, old_row, new_row)))
pack_key(&old_prim_key, primary_key, primary_key_buff, old_row); create_key(&old_prim_key, primary_key, primary_key_buff, old_row);
else else
old_prim_key=prim_key; old_prim_key=prim_key;
} }
...@@ -921,7 +933,7 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row) ...@@ -921,7 +933,7 @@ int ha_berkeley::update_row(const byte * old_row, byte * new_row)
if ((error=remove_key(sub_trans, keynr, old_row, (DBT*) 0, if ((error=remove_key(sub_trans, keynr, old_row, (DBT*) 0,
&old_prim_key)) || &old_prim_key)) ||
(error=key_file[keynr]->put(key_file[keynr], sub_trans, (error=key_file[keynr]->put(key_file[keynr], sub_trans,
pack_key(&key, keynr, key_buff2, create_key(&key, keynr, key_buff2,
new_row), new_row),
&prim_key, key_type[keynr]))) &prim_key, key_type[keynr])))
{ {
...@@ -980,7 +992,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record, ...@@ -980,7 +992,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record,
error=key_file[keynr]->del(key_file[keynr], sub_trans, error=key_file[keynr]->del(key_file[keynr], sub_trans,
keynr == primary_key ? keynr == primary_key ?
prim_key : prim_key :
pack_key(&key, keynr, key_buff2, record), create_key(&key, keynr, key_buff2, record),
0); 0);
} }
else else
...@@ -997,7 +1009,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record, ...@@ -997,7 +1009,7 @@ int ha_berkeley::remove_key(DB_TXN *sub_trans, uint keynr, const byte *record,
if (!(error=cursor->c_get(cursor, if (!(error=cursor->c_get(cursor,
(keynr == primary_key ? (keynr == primary_key ?
prim_key : prim_key :
pack_key(&key, keynr, key_buff2, record)), create_key(&key, keynr, key_buff2, record)),
(keynr == primary_key ? (keynr == primary_key ?
packed_record : prim_key), packed_record : prim_key),
DB_GET_BOTH))) DB_GET_BOTH)))
...@@ -1046,7 +1058,7 @@ int ha_berkeley::delete_row(const byte * record) ...@@ -1046,7 +1058,7 @@ int ha_berkeley::delete_row(const byte * record)
if ((error=pack_row(&row, record, 0))) if ((error=pack_row(&row, record, 0)))
DBUG_RETURN((error)); DBUG_RETURN((error));
pack_key(&prim_key, primary_key, key_buff, record); create_key(&prim_key, primary_key, key_buff, record);
if (hidden_primary_key) if (hidden_primary_key)
keys|= (key_map) 1 << primary_key; keys|= (key_map) 1 << primary_key;
...@@ -1078,7 +1090,9 @@ int ha_berkeley::delete_row(const byte * record) ...@@ -1078,7 +1090,9 @@ int ha_berkeley::delete_row(const byte * record)
if (error != DB_LOCK_DEADLOCK) if (error != DB_LOCK_DEADLOCK)
break; break;
} }
DBUG_RETURN(0); if (!error)
changed_rows--;
DBUG_RETURN(error);
} }
...@@ -1090,7 +1104,7 @@ int ha_berkeley::index_init(uint keynr) ...@@ -1090,7 +1104,7 @@ int ha_berkeley::index_init(uint keynr)
dbug_assert(cursor == 0); dbug_assert(cursor == 0);
if ((error=file->cursor(key_file[keynr], transaction, &cursor, if ((error=file->cursor(key_file[keynr], transaction, &cursor,
table->reginfo.lock_type > TL_WRITE_ALLOW_READ ? table->reginfo.lock_type > TL_WRITE_ALLOW_READ ?
0 : 0))) DB_RMW : 0)))
cursor=0; // Safety cursor=0; // Safety
bzero((char*) &last_key,sizeof(last_key)); bzero((char*) &last_key,sizeof(last_key));
DBUG_RETURN(error); DBUG_RETURN(error);
...@@ -1336,7 +1350,7 @@ void ha_berkeley::position(const byte *record) ...@@ -1336,7 +1350,7 @@ void ha_berkeley::position(const byte *record)
memcpy_fixed(ref, (char*) current_ident, BDB_HIDDEN_PRIMARY_KEY_LENGTH); memcpy_fixed(ref, (char*) current_ident, BDB_HIDDEN_PRIMARY_KEY_LENGTH);
} }
else else
pack_key(&key, primary_key, ref, record); create_key(&key, primary_key, ref, record);
} }
...@@ -1345,9 +1359,18 @@ void ha_berkeley::info(uint flag) ...@@ -1345,9 +1359,18 @@ void ha_berkeley::info(uint flag)
DBUG_ENTER("info"); DBUG_ENTER("info");
if (flag & HA_STATUS_VARIABLE) if (flag & HA_STATUS_VARIABLE)
{ {
records = estimate_number_of_rows(); // Just to get optimisations right records = share->rows; // Just to get optimisations right
deleted = 0; deleted = 0;
} }
if ((flag & HA_STATUS_CONST) || version != share->version)
{
version=share->version;
for (uint i=0 ; i < table->keys ; i++)
{
table->key_info[i].rec_per_key[table->key_info[i].key_parts-1]=
share->rec_per_key[i];
}
}
else if (flag & HA_STATUS_ERRKEY) else if (flag & HA_STATUS_ERRKEY)
errkey=last_dup_key; errkey=last_dup_key;
DBUG_VOID_RETURN; DBUG_VOID_RETURN;
...@@ -1424,6 +1447,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type) ...@@ -1424,6 +1447,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
} }
} }
transaction= (DB_TXN*) thd->transaction.stmt.bdb_tid; transaction= (DB_TXN*) thd->transaction.stmt.bdb_tid;
changed_rows=0;
} }
else else
{ {
...@@ -1437,6 +1461,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type) ...@@ -1437,6 +1461,7 @@ int ha_berkeley::external_lock(THD *thd, int lock_type)
current_row.data=0; current_row.data=0;
} }
} }
thread_safe_add(share->rows, changed_rows, &share->mutex);
current_row.data=0; current_row.data=0;
if (!--thd->transaction.bdb_lock_count) if (!--thd->transaction.bdb_lock_count)
{ {
...@@ -1607,6 +1632,142 @@ ha_rows ha_berkeley::records_in_range(int keynr, ...@@ -1607,6 +1632,142 @@ ha_rows ha_berkeley::records_in_range(int keynr,
DBUG_RETURN(rows <= 1.0 ? (ha_rows) 1 : (ha_rows) rows); DBUG_RETURN(rows <= 1.0 ? (ha_rows) 1 : (ha_rows) rows);
} }
longlong ha_berkeley::get_auto_increment()
{
longlong nr=1; // Default if error or new key
int error;
(void) ha_berkeley::extra(HA_EXTRA_KEYREAD);
ha_berkeley::index_init(table->next_number_index);
if (!table->next_number_key_offset)
{ // Autoincrement at key-start
error=ha_berkeley::index_last(table->record[1]);
}
else
{
DBT row;
bzero((char*) &row,sizeof(row));
uint key_len;
KEY *key_info= &table->key_info[active_index];
/* Reading next available number for a sub key */
ha_berkeley::create_key(&last_key, active_index,
key_buff, table->record[0],
table->next_number_key_offset);
/* Store for compare */
memcpy(key_buff2, key_buff, (key_len=last_key.size));
key_info->handler.bdb_return_if_eq= -1;
error=read_row(cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE),
table->record[1], active_index, &row, (DBT*) 0, 0);
key_info->handler.bdb_return_if_eq= 0;
if (!error && !berkeley_key_cmp(table, key_info, key_buff2, key_len))
{
/*
Found a matching key; now search for the next distinct key, go one
step back, and we should have found the biggest key with the given
prefix
*/
(void) read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT_NODUP),
table->record[1], active_index, &row, (DBT*) 0, 0);
if (read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV),
table->record[1], active_index, &row, (DBT*) 0, 0) ||
berkeley_key_cmp(table, key_info, key_buff2, key_len))
error=1; // Something went wrong
}
}
nr=(longlong)
table->next_number_field->val_int_offset(table->rec_buff_length)+1;
ha_berkeley::index_end();
(void) ha_berkeley::extra(HA_EXTRA_NO_KEYREAD);
return nr;
}
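The cursor dance in get_auto_increment() above can be illustrated without BDB: seek just past all keys sharing the prefix (the DB_NEXT_NODUP step), step back one entry (DB_PREV), and add one to the counter found there. The std::map stand-in below is only a sketch of that positioning logic; all names are ours, not BDB or MySQL code.

```cpp
#include <cassert>
#include <limits>
#include <map>
#include <string>
#include <utility>

// Hypothetical index ordered by (prefix, counter), mimicking a BDB B-tree
// whose keys start with a fixed prefix followed by an auto-increment value.
typedef std::map<std::pair<std::string,long>, int> Index;

static long next_auto_increment(const Index &index, const std::string &prefix)
{
  // Seek just past every key with this prefix (DB_NEXT_NODUP analogue).
  Index::const_iterator it =
    index.upper_bound(std::make_pair(prefix, std::numeric_limits<long>::max()));
  if (it == index.begin())
    return 1;                        // nothing sorts before here: new key
  --it;                              // the DB_PREV step
  if (it->first.first != prefix)
    return 1;                        // previous entry has another prefix
  return it->first.second + 1;       // biggest counter for the prefix, +1
}
```

The real code additionally compares the packed key bytes with berkeley_key_cmp() to detect that the step back landed on the right prefix.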
/****************************************************************************
Analyzing, checking, and optimizing tables
****************************************************************************/
static void print_msg(THD *thd, const char *table_name, const char *op_name,
const char *msg_type, const char *fmt, ...)
{
String* packet = &thd->packet;
packet->length(0);
char msgbuf[256];
msgbuf[0] = 0;
va_list args;
va_start(args,fmt);
my_vsnprintf(msgbuf, sizeof(msgbuf), fmt, args);
msgbuf[sizeof(msgbuf) - 1] = 0; // healthy paranoia
DBUG_PRINT(msg_type,("message: %s",msgbuf));
net_store_data(packet, table_name);
net_store_data(packet, op_name);
net_store_data(packet, msg_type);
net_store_data(packet, msgbuf);
if (my_net_write(&thd->net, (char*)thd->packet.ptr(),
thd->packet.length()))
thd->killed=1;
}
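print_msg() formats into a fixed buffer and then forces a terminating NUL, because some vsnprintf implementations of the era did not guarantee one on truncation (the "healthy paranoia" comment). A minimal sketch of that pattern, using standard vsnprintf rather than MySQL's my_vsnprintf:

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <cstring>

// Format a message into a fixed-size buffer, truncating if needed and
// always leaving the buffer NUL-terminated.
static void format_msg(char *msgbuf, size_t size, const char *fmt, ...)
{
  msgbuf[0] = 0;
  va_list args;
  va_start(args, fmt);
  vsnprintf(msgbuf, size, fmt, args);
  va_end(args);
  msgbuf[size - 1] = 0;              // healthy paranoia
}
```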
int ha_berkeley::analyze(THD* thd, HA_CHECK_OPT* check_opt)
{
DB_BTREE_STAT stat;
uint i;
for (i=0 ; i < table->keys ; i++)
{
file->stat(key_file[i], (void*) &stat, 0, 0);
share->rec_per_key[i]= stat.bt_ndata / stat.bt_nkeys;
}
/* If hidden primary key */
if (hidden_primary_key)
file->stat(file, (void*) &stat, 0, 0);
pthread_mutex_lock(&share->mutex);
share->rows=stat.bt_ndata;
share->status|=STATUS_BDB_ANALYZE; // Save status on close
share->version++; // Update stat in table
pthread_mutex_unlock(&share->mutex);
update_status(share,table); // Write status to file
return ((share->status & STATUS_BDB_ANALYZE) ? HA_ADMIN_FAILED :
HA_ADMIN_OK);
}
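analyze() derives its rec_per_key estimate from BDB's B-tree statistics: bt_ndata (total data items) divided by bt_nkeys (distinct keys) approximates how many rows share each index value. A sketch of that computation; the struct and the zero-key guard are ours (the real code uses DB_BTREE_STAT and assumes a non-empty index):

```cpp
#include <cassert>

// Stand-in for the two DB_BTREE_STAT fields the estimate uses.
struct BtreeStat { unsigned bt_nkeys, bt_ndata; };

static unsigned rec_per_key_estimate(const BtreeStat &stat)
{
  if (stat.bt_nkeys == 0)
    return 0;                        // empty index: avoid dividing by zero
  return stat.bt_ndata / stat.bt_nkeys;
}
```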
int ha_berkeley::optimize(THD* thd, HA_CHECK_OPT* check_opt)
{
return ha_berkeley::analyze(thd,check_opt);
}
int ha_berkeley::check(THD* thd, HA_CHECK_OPT* check_opt)
{
char name_buff[FN_REFLEN];
int error;
fn_format(name_buff,share->table_name,"", ha_berkeley_ext, 2 | 4);
if ((error=file->verify(file, name_buff, NullS, (FILE*) 0,
hidden_primary_key ? 0 : DB_NOORDERCHK)))
{
print_msg(thd, table->real_name, "check", "error",
"Got error %d checking file structure",error);
return HA_ADMIN_CORRUPT;
}
for (uint i=0 ; i < table->keys ; i++)
{
if ((error=file->verify(key_file[i], name_buff, NullS, (FILE*) 0,
DB_ORDERCHKONLY)))
{
print_msg(thd, table->real_name, "check", "error",
"Key %d was not in order",error);
return HA_ADMIN_CORRUPT;
}
}
return HA_ADMIN_OK;
}
/**************************************************************************** /****************************************************************************
Handling the shared BDB_SHARE structure that is needed to provide table Handling the shared BDB_SHARE structure that is needed to provide table
locking. locking.
...@@ -1619,19 +1780,21 @@ static byte* bdb_get_key(BDB_SHARE *share,uint *length, ...@@ -1619,19 +1780,21 @@ static byte* bdb_get_key(BDB_SHARE *share,uint *length,
return (byte*) share->table_name; return (byte*) share->table_name;
} }
static BDB_SHARE *get_share(const char *table_name) static BDB_SHARE *get_share(const char *table_name, TABLE *table)
{ {
BDB_SHARE *share; BDB_SHARE *share;
pthread_mutex_lock(&bdb_mutex); pthread_mutex_lock(&bdb_mutex);
uint length=(uint) strlen(table_name); uint length=(uint) strlen(table_name);
if (!(share=(BDB_SHARE*) hash_search(&bdb_open_tables, table_name, length))) if (!(share=(BDB_SHARE*) hash_search(&bdb_open_tables, table_name, length)))
{ {
if ((share=(BDB_SHARE *) my_malloc(sizeof(*share)+length+1, if ((share=(BDB_SHARE *) my_malloc(sizeof(*share)+length+1 +
sizeof(ha_rows)* table->keys,
MYF(MY_WME | MY_ZEROFILL)))) MYF(MY_WME | MY_ZEROFILL))))
{ {
share->table_name_length=length; share->table_name_length=length;
share->table_name=(char*) (share+1); share->table_name=(char*) (share+1);
strmov(share->table_name,table_name); strmov(share->table_name,table_name);
share->rec_per_key= (ha_rows*) (share+1);
if (hash_insert(&bdb_open_tables, (char*) share)) if (hash_insert(&bdb_open_tables, (char*) share))
{ {
pthread_mutex_unlock(&bdb_mutex); pthread_mutex_unlock(&bdb_mutex);
...@@ -1647,11 +1810,14 @@ static BDB_SHARE *get_share(const char *table_name) ...@@ -1647,11 +1810,14 @@ static BDB_SHARE *get_share(const char *table_name)
return share; return share;
} }
static void free_share(BDB_SHARE *share) static void free_share(BDB_SHARE *share, TABLE *table)
{ {
pthread_mutex_lock(&bdb_mutex); pthread_mutex_lock(&bdb_mutex);
if (!--share->use_count) if (!--share->use_count)
{ {
update_status(share,table);
if (share->status_block)
share->status_block->close(share->status_block,0);
hash_delete(&bdb_open_tables, (gptr) share); hash_delete(&bdb_open_tables, (gptr) share);
thr_lock_delete(&share->lock); thr_lock_delete(&share->lock);
pthread_mutex_destroy(&share->mutex); pthread_mutex_destroy(&share->mutex);
...@@ -1660,11 +1826,18 @@ static void free_share(BDB_SHARE *share) ...@@ -1660,11 +1826,18 @@ static void free_share(BDB_SHARE *share)
pthread_mutex_unlock(&bdb_mutex); pthread_mutex_unlock(&bdb_mutex);
} }
/*
Get status information that is stored in the 'status' sub database
and the max used value for the hidden primary key.
*/
void ha_berkeley::update_auto_primary_key() void ha_berkeley::get_status()
{ {
if (!test_all_bits(share->status,(STATUS_PRIMARY_KEY_INIT |
STATUS_ROW_COUNT_INIT)))
{
pthread_mutex_lock(&share->mutex); pthread_mutex_lock(&share->mutex);
if (!share->primary_key_inited) if (!(share->status & STATUS_PRIMARY_KEY_INIT))
{ {
(void) extra(HA_EXTRA_KEYREAD); (void) extra(HA_EXTRA_KEYREAD);
index_init(primary_key); index_init(primary_key);
...@@ -1673,7 +1846,104 @@ void ha_berkeley::update_auto_primary_key() ...@@ -1673,7 +1846,104 @@ void ha_berkeley::update_auto_primary_key()
index_end(); index_end();
(void) extra(HA_EXTRA_NO_KEYREAD); (void) extra(HA_EXTRA_NO_KEYREAD);
} }
if (! share->status_block)
{
char name_buff[FN_REFLEN];
uint open_mode= (((table->db_stat & HA_READ_ONLY) ? DB_RDONLY : 0)
| DB_THREAD);
fn_format(name_buff, share->table_name,"", ha_berkeley_ext, 2 | 4);
if (!db_create(&share->status_block, db_env, 0))
{
if (!share->status_block->open(share->status_block, name_buff,
"status", DB_BTREE, open_mode, 0))
{
share->status_block->close(share->status_block, 0);
share->status_block=0;
}
}
}
if (!(share->status & STATUS_ROW_COUNT_INIT) && share->status_block)
{
share->org_rows=share->rows=
table->max_rows ? table->max_rows : HA_BERKELEY_MAX_ROWS;
if (!file->cursor(share->status_block, 0, &cursor, 0))
{
DBT row;
char rec_buff[64],*pos=rec_buff;
bzero((char*) &row,sizeof(row));
bzero((char*) &last_key,sizeof(last_key));
row.data=rec_buff;
row.size=sizeof(rec_buff);
row.flags=DB_DBT_USERMEM;
if (!cursor->c_get(cursor, &last_key, &row, DB_FIRST))
{
uint i;
share->org_rows=share->rows=uint4korr(pos); pos+=4;
for (i=0 ; i < table->keys ; i++)
{
share->rec_per_key[i]=uint4korr(pos); pos+=4;
}
}
cursor->c_close(cursor);
}
cursor=0; // Safety
}
share->status|= STATUS_PRIMARY_KEY_INIT | STATUS_ROW_COUNT_INIT;
pthread_mutex_unlock(&share->mutex); pthread_mutex_unlock(&share->mutex);
}
}
static void update_status(BDB_SHARE *share, TABLE *table)
{
DBUG_ENTER("update_status");
if (share->rows != share->org_rows ||
(share->status & STATUS_BDB_ANALYZE))
{
pthread_mutex_lock(&share->mutex);
if (!share->status_block)
{
/*
Create sub database 'status' if it doesn't exist from before
(This *should* always exist for tables created with MySQL)
*/
char name_buff[FN_REFLEN];
if (db_create(&share->status_block, db_env, 0))
goto end;
share->status_block->set_flags(share->status_block,0);
if (share->status_block->open(share->status_block,
fn_format(name_buff,share->table_name,"",
ha_berkeley_ext,2 | 4),
"status", DB_BTREE,
DB_THREAD | DB_CREATE, my_umask))
goto end;
}
{
uint i;
DBT row,key;
char rec_buff[4+MAX_KEY*sizeof(ulong)], *pos=rec_buff;
const char *key_buff="status";
bzero((char*) &row,sizeof(row));
bzero((char*) &key,sizeof(key));
row.data=rec_buff;
key.data=(void*) key_buff;
key.size=sizeof(key_buff);
row.flags=key.flags=DB_DBT_USERMEM;
int4store(pos,share->rows); pos+=4;
for (i=0 ; i < table->keys ; i++)
{
int4store(pos,share->rec_per_key[i]); pos+=4;
}
row.size=(uint) (pos-rec_buff);
(void) share->status_block->put(share->status_block, 0, &key, &row, 0);
share->status&= ~STATUS_BDB_ANALYZE;
}
end:
pthread_mutex_unlock(&share->mutex);
}
DBUG_VOID_RETURN;
} }
/* /*
...@@ -1683,14 +1953,7 @@ void ha_berkeley::update_auto_primary_key() ...@@ -1683,14 +1953,7 @@ void ha_berkeley::update_auto_primary_key()
ha_rows ha_berkeley::estimate_number_of_rows() ha_rows ha_berkeley::estimate_number_of_rows()
{ {
ulonglong max_ident; return share->rows + HA_BERKELEY_EXTRA_ROWS;
ulonglong max_rows=table->max_rows ? table->max_rows : HA_BERKELEY_MAX_ROWS;
if (!hidden_primary_key)
return (ha_rows) max_rows;
pthread_mutex_lock(&share->mutex);
max_ident=share->auto_ident+EXTRA_RECORDS;
pthread_mutex_unlock(&share->mutex);
return (ha_rows) min(max_ident,max_rows);
} }
#endif /* HAVE_BERKELEY_DB */ #endif /* HAVE_BERKELEY_DB */
...@@ -27,11 +27,13 @@ ...@@ -27,11 +27,13 @@
typedef struct st_berkeley_share { typedef struct st_berkeley_share {
ulonglong auto_ident; ulonglong auto_ident;
ha_rows rows, org_rows, *rec_per_key;
THR_LOCK lock; THR_LOCK lock;
pthread_mutex_t mutex; pthread_mutex_t mutex;
char *table_name; char *table_name;
DB *status_block;
uint table_name_length,use_count; uint table_name_length,use_count;
bool primary_key_inited; uint status,version;
} BDB_SHARE; } BDB_SHARE;
...@@ -49,7 +51,8 @@ class ha_berkeley: public handler ...@@ -49,7 +51,8 @@ class ha_berkeley: public handler
BDB_SHARE *share; BDB_SHARE *share;
ulong int_option_flag; ulong int_option_flag;
ulong alloced_rec_buff_length; ulong alloced_rec_buff_length;
uint primary_key,last_dup_key, hidden_primary_key; ulong changed_rows;
uint primary_key,last_dup_key, hidden_primary_key, version;
bool fixed_length_row, fixed_length_primary_key, key_read; bool fixed_length_row, fixed_length_primary_key, key_read;
bool fix_rec_buff_for_blob(ulong length); bool fix_rec_buff_for_blob(ulong length);
byte current_ident[BDB_HIDDEN_PRIMARY_KEY_LENGTH]; byte current_ident[BDB_HIDDEN_PRIMARY_KEY_LENGTH];
...@@ -58,7 +61,8 @@ class ha_berkeley: public handler ...@@ -58,7 +61,8 @@ class ha_berkeley: public handler
int pack_row(DBT *row,const byte *record, bool new_row); int pack_row(DBT *row,const byte *record, bool new_row);
void unpack_row(char *record, DBT *row); void unpack_row(char *record, DBT *row);
void ha_berkeley::unpack_key(char *record, DBT *key, uint index); void ha_berkeley::unpack_key(char *record, DBT *key, uint index);
DBT *pack_key(DBT *key, uint keynr, char *buff, const byte *record); DBT *create_key(DBT *key, uint keynr, char *buff, const byte *record,
int key_length = MAX_KEY_LENGTH);
DBT *pack_key(DBT *key, uint keynr, char *buff, const byte *key_ptr, DBT *pack_key(DBT *key, uint keynr, char *buff, const byte *key_ptr,
uint key_length); uint key_length);
int remove_key(DB_TXN *trans, uint keynr, const byte *record, int remove_key(DB_TXN *trans, uint keynr, const byte *record,
...@@ -79,8 +83,9 @@ class ha_berkeley: public handler ...@@ -79,8 +83,9 @@ class ha_berkeley: public handler
HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER | HA_LASTKEY_ORDER | HA_KEYPOS_TO_RNDPOS | HA_READ_ORDER | HA_LASTKEY_ORDER |
HA_LONGLONG_KEYS | HA_NULL_KEY | HA_HAVE_KEY_READ_ONLY | HA_LONGLONG_KEYS | HA_NULL_KEY | HA_HAVE_KEY_READ_ONLY |
HA_BLOB_KEY | HA_NOT_EXACT_COUNT | HA_BLOB_KEY | HA_NOT_EXACT_COUNT |
HA_PRIMARY_KEY_IN_READ_INDEX | HA_DROP_BEFORE_CREATE), HA_PRIMARY_KEY_IN_READ_INDEX | HA_DROP_BEFORE_CREATE |
last_dup_key((uint) -1) HA_AUTO_PART_KEY),
last_dup_key((uint) -1),version(0)
{ {
} }
~ha_berkeley() {} ~ha_berkeley() {}
...@@ -123,6 +128,10 @@ class ha_berkeley: public handler ...@@ -123,6 +128,10 @@ class ha_berkeley: public handler
int reset(void); int reset(void);
int external_lock(THD *thd, int lock_type); int external_lock(THD *thd, int lock_type);
void position(byte *record); void position(byte *record);
int analyze(THD* thd,HA_CHECK_OPT* check_opt);
int optimize(THD* thd, HA_CHECK_OPT* check_opt);
int check(THD* thd, HA_CHECK_OPT* check_opt);
ha_rows records_in_range(int inx, ha_rows records_in_range(int inx,
const byte *start_key,uint start_key_len, const byte *start_key,uint start_key_len,
enum ha_rkey_function start_search_flag, enum ha_rkey_function start_search_flag,
...@@ -135,7 +144,7 @@ class ha_berkeley: public handler ...@@ -135,7 +144,7 @@ class ha_berkeley: public handler
THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to, THR_LOCK_DATA **store_lock(THD *thd, THR_LOCK_DATA **to,
enum thr_lock_type lock_type); enum thr_lock_type lock_type);
void update_auto_primary_key(); void get_status();
inline void get_auto_primary_key(byte *to) inline void get_auto_primary_key(byte *to)
{ {
ulonglong tmp; ulonglong tmp;
...@@ -144,11 +153,12 @@ class ha_berkeley: public handler ...@@ -144,11 +153,12 @@ class ha_berkeley: public handler
int5store(to,share->auto_ident); int5store(to,share->auto_ident);
pthread_mutex_unlock(&share->mutex); pthread_mutex_unlock(&share->mutex);
} }
longlong ha_berkeley::get_auto_increment();
}; };
extern bool berkeley_skip; extern bool berkeley_skip, berkeley_shared_data;
extern u_int32_t berkeley_init_flags,berkeley_lock_type,berkeley_lock_types[]; extern u_int32_t berkeley_init_flags,berkeley_lock_type,berkeley_lock_types[];
extern ulong berkeley_cache_size, berkeley_lock_max; extern ulong berkeley_cache_size, berkeley_max_lock;
extern char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir; extern char *berkeley_home, *berkeley_tmpdir, *berkeley_logdir;
extern long berkeley_lock_scan_time; extern long berkeley_lock_scan_time;
extern TYPELIB berkeley_lock_typelib; extern TYPELIB berkeley_lock_typelib;
......
@@ -33,7 +33,8 @@ const char **ha_heap::bas_ext() const
int ha_heap::open(const char *name, int mode, uint test_if_locked)
{
- uint key,part,parts;
+ uint key,part,parts,mem_per_row=0;
+ ulong max_rows;
  HP_KEYDEF *keydef;
  HP_KEYSEG *seg;
@@ -47,6 +48,7 @@ int ha_heap::open(const char *name, int mode, uint test_if_locked)
  for (key=0 ; key < table->keys ; key++)
  {
    KEY *pos=table->key_info+key;
+   mem_per_row += (pos->key_length + (sizeof(char*) * 2));
    keydef[key].keysegs=(uint) pos->key_parts;
    keydef[key].flag = (pos->flags & HA_NOSAME);
@@ -66,9 +68,13 @@ int ha_heap::open(const char *name, int mode, uint test_if_locked)
      seg++;
    }
  }
+ mem_per_row += MY_ALIGN(table->reclength+1, sizeof(char*));
+ max_rows = (ulong) (max_heap_table_size / mem_per_row);
  file=heap_open(table->path,mode,
                 table->keys,keydef,
-                table->reclength,table->max_rows,
+                table->reclength,
+                ((table->max_rows < max_rows && table->max_rows) ?
+                 table->max_rows : max_rows),
                 table->min_rows);
  my_free((gptr) keydef,MYF(0));
  info(HA_STATUS_NO_LOCK | HA_STATUS_CONST | HA_STATUS_VARIABLE);
...
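The ha_heap hunk above caps an in-memory table by dividing `max_heap_table_size` by an estimated per-row cost: each key costs its key length plus two pointers per row, and the record itself is rounded up to a pointer boundary. A stand-alone sketch of that arithmetic (function and parameter names here are illustrative, not the server's):

```c
#include <assert.h>

/* Round a up to the next multiple of n, as MY_ALIGN does */
#define MY_ALIGN(a, n) (((a) + (n) - 1) / (n) * (n))

unsigned long heap_max_rows(unsigned long max_heap_table_size,
                            unsigned reclength,
                            const unsigned *key_lengths, unsigned keys,
                            unsigned long wanted_max_rows)
{
  unsigned long mem_per_row = 0;
  /* Each key: key bytes plus two pointers of tree overhead per row */
  for (unsigned key = 0; key < keys; key++)
    mem_per_row += key_lengths[key] + sizeof(char*) * 2;
  /* The record itself, aligned up to a pointer boundary */
  mem_per_row += MY_ALIGN(reclength + 1, sizeof(char*));
  unsigned long max_rows = max_heap_table_size / mem_per_row;
  /* A smaller, non-zero user-specified limit wins */
  return (wanted_max_rows && wanted_max_rows < max_rows)
         ? wanted_max_rows : max_rows;
}
```

The effect is that a HEAP table can never allocate more than roughly `max_heap_table_size` bytes, even when the user asked for an unbounded row count.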
@@ -191,8 +191,7 @@ int ha_autocommit_or_rollback(THD *thd, int error)
{
  DBUG_ENTER("ha_autocommit_or_rollback");
#ifdef USING_TRANSACTIONS
- if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)) &&
-     !thd->locked_tables)
+ if (!(thd->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)))
  {
    if (!error)
    {
@@ -211,6 +210,16 @@ int ha_commit_trans(THD *thd, THD_TRANS* trans)
{
  int error=0;
  DBUG_ENTER("ha_commit");
+#ifdef USING_TRANSACTIONS
+ /* Update the binary log if we have cached some queries */
+ if (trans == &thd->transaction.all && mysql_bin_log.is_open() &&
+     my_b_tell(&thd->transaction.trans_log))
+ {
+   mysql_bin_log.write(&thd->transaction.trans_log);
+   reinit_io_cache(&thd->transaction.trans_log,
+                   WRITE_CACHE, (my_off_t) 0, 0, 1);
+   thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
+ }
#ifdef HAVE_BERKELEY_DB
  if (trans->bdb_tid)
  {
@@ -224,13 +233,16 @@ int ha_commit_trans(THD *thd, THD_TRANS* trans)
#endif
#ifdef HAVE_INNOBASE_DB
  {
-   if ((error=innobase_commit(thd,trans->innobase_tid))
+   if ((error=innobase_commit(thd,trans->innobase_tid)))
    {
      my_error(ER_ERROR_DURING_COMMIT, MYF(0), error);
      error=1;
    }
    trans->innobase_tid=0;
  }
+#endif
+ if (error && trans == &thd->transaction.all && mysql_bin_log.is_open())
+   sql_print_error("Error: Got error during commit; Binlog is not up to date!");
#endif
  DBUG_RETURN(error);
}
@@ -260,6 +272,12 @@ int ha_rollback_trans(THD *thd, THD_TRANS *trans)
    }
    trans->innobase_tid=0;
  }
+#endif
+#ifdef USING_TRANSACTIONS
+ if (trans == &thd->transaction.all)
+   reinit_io_cache(&thd->transaction.trans_log,
+                   WRITE_CACHE, (my_off_t) 0, 0, 1);
+ thd->transaction.trans_log.end_of_file= max_binlog_cache_size;
#endif
  DBUG_RETURN(error);
}
...
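The ha_commit_trans/ha_rollback_trans hunks above introduce the per-transaction binlog cache: statements issued inside a transaction are buffered, the whole buffer is appended to the binary log in one piece at commit, and a rollback simply resets the buffer so nothing reaches the log. A minimal model of that life cycle (the structs and sizes are hypothetical, not the server's THD/IO_CACHE):

```c
#include <assert.h>
#include <string.h>

typedef struct { char buf[256];  unsigned len; } trans_cache; /* per-thread */
typedef struct { char buf[1024]; unsigned len; } binlog;      /* shared log */

/* Statement in a transaction: goes to the cache, not the log */
void cache_stmt(trans_cache *c, const char *stmt)
{
  unsigned n = (unsigned) strlen(stmt);
  memcpy(c->buf + c->len, stmt, n);
  c->len += n;
}

/* Commit: append the whole cache to the log in one piece, then reset it */
void commit_trans(binlog *log, trans_cache *c)
{
  memcpy(log->buf + log->len, c->buf, c->len);
  log->len += c->len;
  c->len = 0;                     /* reinit, like reinit_io_cache() above */
}

/* Rollback: cached statements never reach the log */
void rollback_trans(trans_cache *c)
{
  c->len = 0;
}
```

Because the cache is written under one lock acquisition, a transaction's statements appear contiguously in the binary log, which is what a replication slave needs to replay them atomically.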
@@ -180,20 +180,21 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
  VOID(pthread_mutex_lock(&hostname_cache->lock));
  if (!(hp=gethostbyaddr((char*) in,sizeof(*in), AF_INET)))
  {
-   DBUG_PRINT("error",("gethostbyaddr returned %d",errno));
    VOID(pthread_mutex_unlock(&hostname_cache->lock));
-   add_wrong_ip(in);
-   DBUG_RETURN(0);
+   DBUG_PRINT("error",("gethostbyaddr returned %d",errno));
+   goto err;
  }
- if (!hp->h_name[0])
+ if (!hp->h_name[0])				// Don't allow empty hostnames
  {
    VOID(pthread_mutex_unlock(&hostname_cache->lock));
    DBUG_PRINT("error",("Got an empty hostname"));
-   add_wrong_ip(in);
-   DBUG_RETURN(0);				// Don't allow empty hostnames
+   goto err;
  }
  if (!(name=my_strdup(hp->h_name,MYF(0))))
+ {
+   VOID(pthread_mutex_unlock(&hostname_cache->lock));
    DBUG_RETURN(0);				// out of memory
+ }
  check=gethostbyname(name);
  VOID(pthread_mutex_unlock(&hostname_cache->lock));
  if (!check)
@@ -214,8 +215,7 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
  {
    DBUG_PRINT("error",("mysqld doesn't accept hostnames that starts with a number followed by a '.'"));
    my_free(name,MYF(0));
-   add_wrong_ip(in);
-   DBUG_RETURN(0);
+   goto err;
  }
}
@@ -230,6 +230,8 @@ my_string ip_to_hostname(struct in_addr *in, uint *errors)
  }
  DBUG_PRINT("error",("Couldn't verify hostname with gethostbyname"));
  my_free(name,MYF(0));
+err:
  add_wrong_ip(in);
  DBUG_RETURN(0);
}
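The ip_to_hostname hunks above replace four separate `add_wrong_ip(in); DBUG_RETURN(0);` exits with a single `err:` label, so the failure bookkeeping lives in exactly one place. A stand-alone sketch of the same goto-based error funnel (names and failure conditions are illustrative):

```c
#include <assert.h>
#include <stddef.h>

static int wrong_ip_count;   /* stands in for add_wrong_ip() side effects */

const char *resolve(int fail_lookup, int empty_name)
{
  if (fail_lookup)
    goto err;                /* reverse lookup failed */
  if (empty_name)
    goto err;                /* don't allow empty hostnames */
  return "host.example";     /* success path */
err:
  wrong_ip_count++;          /* shared failure bookkeeping, done once */
  return NULL;
}
```

The payoff is that adding a new failure path (or a new cleanup step) touches one label instead of every return site, which is exactly the maintenance problem the original repeated `add_wrong_ip()` calls had.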
@@ -16,6 +16,7 @@
/* logging of commands */
+/* TODO: Abort logging when we get an error in reading or writing log files */
#include "mysql_priv.h"
#include "sql_acl.h"
@@ -523,14 +524,12 @@ void MYSQL_LOG::new_file()
}
-void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
+bool MYSQL_LOG::write(THD *thd,enum enum_server_command command,
                      const char *format,...)
{
  if (is_open() && (what_to_log & (1L << (uint) command)))
  {
-   va_list args;
-   va_start(args,format);
-   char buff[32];
+   int error=0;
    VOID(pthread_mutex_lock(&LOCK_log));
    /* Test if someone closed after the is_open test */
@@ -538,14 +537,17 @@ void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
    {
      time_t skr;
      ulong id;
-     int error=0;
+     va_list args;
+     va_start(args,format);
+     char buff[32];
      if (thd)
      {						// Normal thread
        if ((thd->options & OPTION_LOG_OFF) &&
            (thd->master_access & PROCESS_ACL))
        {
          VOID(pthread_mutex_unlock(&LOCK_log));
-         return;				// No logging
+         return 0;				// No logging
        }
        id=thd->thread_id;
        if (thd->user_time || !(skr=thd->query_start()))
@@ -593,48 +595,47 @@ void MYSQL_LOG::write(THD *thd,enum enum_server_command command,
        write_error=1;
        sql_print_error(ER(ER_ERROR_ON_WRITE),name,error);
      }
      }
      va_end(args);
      VOID(pthread_mutex_unlock(&LOCK_log));
    }
    VOID(pthread_mutex_unlock(&LOCK_log));
    return error != 0;
  }
  return 0;
}
/* Write to binary log in a format to be used for replication */
-void MYSQL_LOG::write(Query_log_event* event_info)
+bool MYSQL_LOG::write(Query_log_event* event_info)
{
- if (is_open())
- {
+ /* In most cases this is only called if 'is_open()' is true */
+ bool error=1;
  VOID(pthread_mutex_lock(&LOCK_log));
  if (is_open())
  {
    THD *thd=event_info->thd;
+   IO_CACHE *file = (event_info->cache_stmt ? &thd->transaction.trans_log :
+                     &log_file);
    if ((!(thd->options & OPTION_BIN_LOG) &&
         thd->master_access & PROCESS_ACL) ||
        !db_ok(event_info->db, binlog_do_db, binlog_ignore_db))
    {
      VOID(pthread_mutex_unlock(&LOCK_log));
-     return;
+     return 0;
    }
    if (thd->last_insert_id_used)
    {
      Intvar_log_event e((uchar)LAST_INSERT_ID_EVENT, thd->last_insert_id);
-     if (e.write(&log_file))
-     {
-       sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+     if (e.write(file))
        goto err;
-     }
    }
    if (thd->insert_id_used)
    {
      Intvar_log_event e((uchar)INSERT_ID_EVENT, thd->last_insert_id);
-     if (e.write(&log_file))
-     {
-       sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+     if (e.write(file))
        goto err;
-     }
    }
    if (thd->convert_set)
    {
      char buf[1024] = "SET CHARACTER SET ";
@@ -644,28 +645,93 @@ void MYSQL_LOG::write(Query_log_event* event_info)
      // just in case somebody wants it later
      thd->query_length = (uint)(p - buf);
      Query_log_event e(thd, buf);
-     if (e.write(&log_file))
-     {
-       sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+     if (e.write(file))
        goto err;
-     }
      thd->query_length = save_query_length;	// clean up
    }
-   if (event_info->write(&log_file) || flush_io_cache(&log_file))
-   {
-     sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
-   }
+   if (event_info->write(file) ||
+       file == &log_file && flush_io_cache(file))
+     goto err;
+   error=0;
+err:
+   if (error)
+   {
+     if (my_errno == EFBIG)
+       my_error(ER_TRANS_CACHE_FULL, MYF(0));
+     else
+       my_error(ER_ERROR_ON_WRITE, MYF(0), name, errno);
+     write_error=1;
+   }
-err:
+   if (file == &log_file)
      VOID(pthread_cond_broadcast(&COND_binlog_update));
  }
+ else
+   error=0;
  VOID(pthread_mutex_unlock(&LOCK_log));
-}
+ return error;
}
-void MYSQL_LOG::write(Load_log_event* event_info)
+/*
+  Write a cached log entry to the binary log
+  We only come here if there is something in the cache.
+  'cache' needs to be reinitialized after this functions returns.
+*/
+bool MYSQL_LOG::write(IO_CACHE *cache)
{
+ VOID(pthread_mutex_lock(&LOCK_log));
+ bool error=1;
  if (is_open())
  {
+   uint length;
+   my_off_t start_pos=my_b_tell(&log_file);
+   if (reinit_io_cache(cache, WRITE_CACHE, 0, 0, 0))
+   {
+     if (!write_error)
+       sql_print_error(ER(ER_ERROR_ON_WRITE), cache->file_name, errno);
+     goto err;
+   }
+   while ((length=my_b_fill(cache)))
+   {
+     if (my_b_write(&log_file, cache->rc_pos, length))
+     {
+       if (!write_error)
+         sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+       goto err;
+     }
+     cache->rc_pos=cache->rc_end;		// Mark buffer used up
+   }
+   if (flush_io_cache(&log_file))
+   {
+     if (!write_error)
+       sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+     goto err;
+   }
+   if (cache->error)				// Error on read
+   {
+     if (!write_error)
+       sql_print_error(ER(ER_ERROR_ON_READ), cache->file_name, errno);
+     goto err;
+   }
+ }
+ error=0;
+err:
+ if (error)
+   write_error=1;
+ else
+   VOID(pthread_cond_broadcast(&COND_binlog_update));
+ VOID(pthread_mutex_unlock(&LOCK_log));
+ return error;
+}
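The new `MYSQL_LOG::write(IO_CACHE*)` above drains the transaction cache into the log in buffer-sized chunks: fill the cache buffer, write it to the log, mark the buffer consumed, repeat until the cache is empty. A simplified model of that loop (fixed chunk size and plain memory buffers instead of the real IO_CACHE machinery):

```c
#include <assert.h>
#include <string.h>

enum { CHUNK = 8 };  /* stands in for the IO_CACHE buffer size */

typedef struct { const char *data; unsigned len, pos; } cache;

/* Like my_b_fill(): expose the next chunk, return 0 at end of cache */
unsigned cache_fill(cache *c, char *out)
{
  unsigned n = c->len - c->pos;
  if (n > CHUNK)
    n = CHUNK;
  memcpy(out, c->data + c->pos, n);
  c->pos += n;                     /* mark buffer used up */
  return n;
}

/* Like the while-loop above: copy every chunk into the log */
unsigned copy_cache_to_log(cache *c, char *log)
{
  char chunk[CHUNK];
  unsigned n, total = 0;
  while ((n = cache_fill(c, chunk)))
  {
    memcpy(log + total, chunk, n);
    total += n;
  }
  return total;
}
```

Chunked copying keeps memory use bounded by the cache buffer size no matter how large the cached transaction is, at the cost of one extra memcpy per chunk.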
+bool MYSQL_LOG::write(Load_log_event* event_info)
+{
+ bool error=0;
  VOID(pthread_mutex_lock(&LOCK_log));
  if (is_open())
  {
@@ -674,34 +740,39 @@ void MYSQL_LOG::write(Load_log_event* event_info)
        !(thd->master_access & PROCESS_ACL))
    {
      if (event_info->write(&log_file) || flush_io_cache(&log_file))
+     {
+       if (!write_error)
          sql_print_error(ER(ER_ERROR_ON_WRITE), name, errno);
+       error=write_error=1;
+     }
      VOID(pthread_cond_broadcast(&COND_binlog_update));
    }
  }
  VOID(pthread_mutex_unlock(&LOCK_log));
- }
+ return error;
}
/* Write update log in a format suitable for incremental backup */
-void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
+bool MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                      time_t query_start)
{
+ bool error=0;
  if (is_open())
  {
    time_t current_time;
    VOID(pthread_mutex_lock(&LOCK_log));
    if (is_open())
    {						// Safety agains reopen
-     int error=0;
+     int tmp_errno=0;
      char buff[80],*end;
      end=buff;
      if (!(thd->options & OPTION_UPDATE_LOG) &&
          (thd->master_access & PROCESS_ACL))
      {
        VOID(pthread_mutex_unlock(&LOCK_log));
-       return;
+       return 0;
      }
      if ((specialflag & SPECIAL_LONG_LOG_FORMAT) || query_start)
      {
@@ -722,14 +793,14 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                        start->tm_min,
                        start->tm_sec);
        if (my_b_write(&log_file, (byte*) buff,24))
-         error=errno;
+         tmp_errno=errno;
      }
      if (my_b_printf(&log_file, "# User@Host: %s[%s] @ %s [%s]\n",
                      thd->priv_user,
                      thd->user,
                      thd->host ? thd->host : "",
                      thd->ip ? thd->ip : "") == (uint) -1)
-       error=errno;
+       tmp_errno=errno;
    }
    if (query_start)
    {
@@ -739,12 +810,12 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
                      (ulong) (current_time - query_start),
                      (ulong) (thd->time_after_lock - query_start),
                      (ulong) thd->sent_row_count) == (uint) -1)
-       error=errno;
+       tmp_errno=errno;
    }
    if (thd->db && strcmp(thd->db,db))
    {						// Database changed
      if (my_b_printf(&log_file,"use %s;\n",thd->db) == (uint) -1)
-       error=errno;
+       tmp_errno=errno;
      strmov(db,thd->db);
    }
    if (thd->last_insert_id_used)
@@ -777,7 +848,7 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
      *end=0;
      if (my_b_write(&log_file, (byte*) "SET ",4) ||
          my_b_write(&log_file, (byte*) buff+1,(uint) (end-buff)-1))
-       error=errno;
+       tmp_errno=errno;
    }
    if (!query)
    {
@@ -787,28 +858,21 @@ void MYSQL_LOG::write(THD *thd,const char *query, uint query_length,
    if (my_b_write(&log_file, (byte*) query,query_length) ||
        my_b_write(&log_file, (byte*) ";\n",2) ||
        flush_io_cache(&log_file))
-     error=errno;
-   if (error && ! write_error)
+     tmp_errno=errno;
+   if (tmp_errno)
+   {
+     error=1;
+     if (! write_error)
      {
        write_error=1;
        sql_print_error(ER(ER_ERROR_ON_WRITE),name,error);
      }
+   }
-   VOID(pthread_mutex_unlock(&LOCK_log));
  }
- }
+ VOID(pthread_mutex_unlock(&LOCK_log));
-#ifdef TO_BE_REMOVED
-void MYSQL_LOG::flush()
-{
-  if (is_open())
-    if (flush_io_cache(log_file) && ! write_error)
-    {
-      write_error=1;
-      sql_print_error(ER(ER_ERROR_ON_WRITE),name,errno);
-    }
  }
+ return error;
}
-#endif
void MYSQL_LOG::close(bool exiting)
...
@@ -118,16 +118,18 @@ public:
  ulong thread_id;
#if !defined(MYSQL_CLIENT)
  THD* thd;
- Query_log_event(THD* thd_arg, const char* query_arg):
-   Log_event(thd_arg->start_time,0,0,thd_arg->server_id), data_buf(0),
+ bool cache_stmt;
+ Query_log_event(THD* thd_arg, const char* query_arg, bool using_trans=0):
+   Log_event(thd_arg->start_time,0,1,thd_arg->server_id), data_buf(0),
    query(query_arg), db(thd_arg->db), q_len(thd_arg->query_length),
    error_code(thd_arg->net.last_errno),
-   thread_id(thd_arg->thread_id), thd(thd_arg)
+   thread_id(thd_arg->thread_id), thd(thd_arg),
+   cache_stmt(using_trans &&
+              (thd_arg->options & (OPTION_NOT_AUTO_COMMIT | OPTION_BEGIN)))
  {
    time_t end_time;
    time(&end_time);
    exec_time = (ulong) (end_time - thd->start_time);
-   valid_exec_time = 1;
    db_len = (db) ? (uint32) strlen(db) : 0;
  }
#endif
...
@@ -121,7 +121,7 @@ int init_io_cache(IO_CACHE *info, File file, uint cachesize,
  }
  /* end_of_file may be changed by user later */
  info->end_of_file= ((type == READ_NET || type == READ_FIFO ) ? 0
-                    : MY_FILEPOS_ERROR);
+                    : ~(my_off_t) 0);
  info->type=type;
  info->error=0;
  info->read_function=(type == READ_NET) ? _my_b_net_read : _my_b_read; /* net | file */
@@ -176,6 +176,8 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
  DBUG_ENTER("reinit_io_cache");
  info->seek_not_done= test(info->file >= 0);	/* Seek not done */
+ /* If the whole file is in memory, avoid flushing to disk */
  if (! clear_cache &&
      seek_offset >= info->pos_in_file &&
      seek_offset <= info->pos_in_file +
@@ -186,8 +188,12 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
      info->rc_end=info->rc_pos;
    info->end_of_file=my_b_tell(info);
  }
- else if (info->type == READ_CACHE && type == WRITE_CACHE)
+ else if (type == WRITE_CACHE)
+ {
+   if (info->type == READ_CACHE)
      info->rc_end=info->buffer+info->buffer_length;
+   info->end_of_file = ~(my_off_t) 0;
+ }
  info->rc_pos=info->rc_request_pos+(seek_offset-info->pos_in_file);
#ifdef HAVE_AIOWAIT
  my_aiowait(&info->aio_result);		/* Wait for outstanding req */
@@ -195,11 +201,20 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
  }
  else
  {
+   /*
+     If we change from WRITE_CACHE to READ_CACHE, assume that everything
+     after the current positions should be ignored
+   */
    if (info->type == WRITE_CACHE && type == READ_CACHE)
      info->end_of_file=my_b_tell(info);
-   if (flush_io_cache(info))
+   /* No need to flush cache if we want to reuse it */
+   if ((type != WRITE_CACHE || !clear_cache) && flush_io_cache(info))
      DBUG_RETURN(1);
+   if (info->pos_in_file != seek_offset)
+   {
      info->pos_in_file=seek_offset;
+     info->seek_not_done=1;
+   }
    info->rc_request_pos=info->rc_pos=info->buffer;
    if (type == READ_CACHE || type == READ_NET || type == READ_FIFO)
    {
@@ -210,7 +225,7 @@ my_bool reinit_io_cache(IO_CACHE *info, enum cache_type type,
      info->rc_end=info->buffer+info->buffer_length-
                   (seek_offset & (IO_SIZE-1));
      info->end_of_file= ((type == READ_NET || type == READ_FIFO) ? 0 :
-                        MY_FILEPOS_ERROR);
+                        ~(my_off_t) 0);
    }
  }
  info->type=type;
@@ -536,6 +551,11 @@ int _my_b_write(register IO_CACHE *info, const byte *Buffer, uint Count)
  Buffer+=rest_length;
  Count-=rest_length;
  info->rc_pos+=rest_length;
+ if (info->pos_in_file+info->buffer_length > info->end_of_file)
+ {
+   my_errno=errno=EFBIG;
+   return info->error = -1;
+ }
  if (flush_io_cache(info))
    return 1;
  if (Count >= IO_SIZE)
...
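The `_my_b_write` hunk above is what turns `max_binlog_cache_size` into a hard limit: a write cache whose `end_of_file` is set refuses to flush past it and fails with `EFBIG`, which the binlog code then reports as `ER_TRANS_CACHE_FULL`. A simplified model of that bounds check (a hypothetical struct with no real buffering, just the position accounting):

```c
#include <assert.h>
#include <errno.h>

typedef struct
{
  unsigned long pos;          /* bytes written so far */
  unsigned long end_of_file;  /* hard cap, e.g. max_binlog_cache_size */
  int error;
} io_cache;

/* Accept the write only if it stays within end_of_file */
int cache_write(io_cache *info, unsigned long count)
{
  if (info->pos + count > info->end_of_file)
  {
    errno = EFBIG;            /* caller maps this to ER_TRANS_CACHE_FULL */
    return info->error = -1;
  }
  info->pos += count;
  return 0;
}
```

Failing early with a distinct errno lets the caller tell "transaction too big for the cache" apart from a genuine disk write error, which is exactly the distinction the `my_errno == EFBIG` test in MYSQL_LOG::write makes.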
@@ -507,9 +507,9 @@ extern ulong keybuff_size,sortbuff_size,max_item_sort_length,table_cache_size,
            net_read_timeout,net_write_timeout,
            what_to_log,flush_time,
            max_tmp_tables,max_heap_table_size,query_buff_size,
-           lower_case_table_names,thread_stack,thread_stack_min;
-extern ulong specialflag;
-extern ulong current_pid;
+           lower_case_table_names,thread_stack,thread_stack_min,
+           binlog_cache_size, max_binlog_cache_size;
+extern ulong specialflag, current_pid;
extern bool low_priority_updates;
extern bool opt_sql_bin_update;
extern char language[LIBLEN],reg_ext[FN_EXTLEN],blob_newline;
...
@@ -201,10 +201,10 @@ ulong keybuff_size,sortbuff_size,max_item_sort_length,table_cache_size,
      query_buff_size, lower_case_table_names, mysqld_net_retry_count,
      net_interactive_timeout, slow_launch_time = 2L,
      net_read_timeout,net_write_timeout,slave_open_temp_tables=0;
-ulong thread_cache_size=0;
+ulong thread_cache_size=0, binlog_cache_size=0, max_binlog_cache_size=0;
volatile ulong cached_thread_count=0;
-// replication parameters, if master_host is not NULL, we are slaving off the master
+// replication parameters, if master_host is not NULL, we are a slave
my_string master_user = (char*) "test", master_password = 0, master_host=0,
          master_info_file = (char*) "master.info";
const char *localhost=LOCAL_HOST;
@@ -1496,9 +1496,10 @@ int main(int argc, char **argv)
  if (opt_update_log)
    open_log(&mysql_update_log, glob_hostname, opt_update_logname, "",
             LOG_NEW);
+ if (!server_id)
+   server_id= !master_host ? 1 : 2;
  if (opt_bin_log)
- {
-   if(server_id)
  {
    if (!opt_bin_logname)
    {
@@ -1511,9 +1512,6 @@ int main(int argc, char **argv)
    open_log(&mysql_bin_log, glob_hostname, opt_bin_logname, "-bin",
             LOG_BIN);
  }
- else
-   sql_print_error("Server id is not set - binary logging disabled");
- }
  if (opt_slow_log)
    open_log(&mysql_slow_log, glob_hostname, opt_slow_logname, "-slow.log",
@@ -1620,9 +1618,7 @@ int main(int argc, char **argv)
  }
  // slave thread
- if(master_host)
- {
-   if(server_id)
+ if (master_host)
  {
    pthread_t hThread;
    if(!opt_skip_slave_start &&
@@ -1631,9 +1627,6 @@ int main(int argc, char **argv)
    else if(opt_skip_slave_start)
      init_master_info(&glob_mi);
  }
- else
-   sql_print_error("Server id is not set, slave thread will not be started");
- }
  printf(ER(ER_READY),my_progname,server_version,"");
  fflush(stdout);
@@ -2205,7 +2198,8 @@ enum options {
  OPT_BDB_HOME, OPT_BDB_LOG,
  OPT_BDB_TMP, OPT_BDB_NOSYNC,
  OPT_BDB_LOCK, OPT_BDB_SKIP,
- OPT_BDB_RECOVER, OPT_MASTER_HOST,
+ OPT_BDB_RECOVER, OPT_BDB_SHARED,
+ OPT_MASTER_HOST,
  OPT_MASTER_USER, OPT_MASTER_PASSWORD,
  OPT_MASTER_PORT, OPT_MASTER_INFO_FILE,
  OPT_MASTER_CONNECT_RETRY, OPT_SQL_BIN_UPDATE_SAME,
@@ -2233,6 +2227,7 @@ static struct option long_options[] = {
  {"bdb-logdir", required_argument, 0, (int) OPT_BDB_LOG},
  {"bdb-recover", no_argument, 0, (int) OPT_BDB_RECOVER},
  {"bdb-no-sync", no_argument, 0, (int) OPT_BDB_NOSYNC},
+ {"bdb-shared-data", required_argument, 0, (int) OPT_BDB_SHARED},
  {"bdb-tmpdir", required_argument, 0, (int) OPT_BDB_TMP},
#endif
  {"big-tables", no_argument, 0, (int) OPT_BIG_TABLES},
@@ -2323,7 +2318,7 @@ static struct option long_options[] = {
   (int) OPT_REPLICATE_REWRITE_DB},
  {"safe-mode", no_argument, 0, (int) OPT_SAFE},
  {"socket", required_argument, 0, (int) OPT_SOCKET},
- {"server-id", required_argument, 0, (int)OPT_SERVER_ID},
+ {"server-id", required_argument, 0, (int) OPT_SERVER_ID},
  {"set-variable", required_argument, 0, 'O'},
#ifdef HAVE_BERKELEY_DB
  {"skip-bdb", no_argument, 0, (int) OPT_BDB_SKIP},
@@ -2363,9 +2358,14 @@ CHANGEABLE_VAR changeable_vars[] = {
#ifdef HAVE_BERKELEY_DB
  { "bdb_cache_size", (long*) &berkeley_cache_size,
    KEY_CACHE_SIZE, 20*1024, (long) ~0, 0, IO_SIZE },
- { "bdb_lock_max", (long*) &berkeley_lock_max,
+ { "bdb_max_lock", (long*) &berkeley_max_lock,
+   1000, 0, (long) ~0, 0, 1 },
+ /* QQ: The following should be removed soon! */
+ { "bdb_lock_max", (long*) &berkeley_max_lock,
    1000, 0, (long) ~0, 0, 1 },
#endif
+ { "binlog_cache_size", (long*) &binlog_cache_size,
+   32*1024L, IO_SIZE, ~0L, 0, IO_SIZE },
  { "connect_timeout", (long*) &connect_timeout,
    CONNECT_TIMEOUT, 2, 65535, 0, 1 },
  { "delayed_insert_timeout", (long*) &delayed_insert_timeout,
@@ -2390,7 +2390,7 @@ CHANGEABLE_VAR changeable_vars[] = {
  {"innobase_buffer_pool_size",
   (long*) &innobase_buffer_pool_size, 8*1024*1024L, 1024*1024L,
   ~0L, 0, 1024*1024L},
- {"innobase_additional_mem_pool_size_mb",
+ {"innobase_additional_mem_pool_size",
   (long*) &innobase_additional_mem_pool_size, 1*1024*1024L, 512*1024L,
   ~0L, 0, 1024},
  {"innobase_file_io_threads",
@@ -2408,6 +2408,8 @@ CHANGEABLE_VAR changeable_vars[] = {
   IF_WIN(1,0), 0, 1, 0, 1 },
  { "max_allowed_packet", (long*) &max_allowed_packet,
    1024*1024L, 80, 17*1024*1024L, MALLOC_OVERHEAD, 1024 },
+ { "max_binlog_cache_size", (long*) &max_binlog_cache_size,
+   ~0L, IO_SIZE, ~0L, 0, IO_SIZE },
  { "max_connections", (long*) &max_connections,
    100, 1, 16384, 0, 1 },
  { "max_connect_errors", (long*) &max_connect_errors,
@@ -2465,10 +2467,12 @@ struct show_var_st init_vars[]= {
#ifdef HAVE_BERKELEY_DB
  {"bdb_cache_size", (char*) &berkeley_cache_size, SHOW_LONG},
  {"bdb_home", (char*) &berkeley_home, SHOW_CHAR_PTR},
- {"bdb_lock_max", (char*) &berkeley_lock_max, SHOW_LONG},
+ {"bdb_max_lock", (char*) &berkeley_max_lock, SHOW_LONG},
  {"bdb_logdir", (char*) &berkeley_logdir, SHOW_CHAR_PTR},
+ {"bdb_shared_data", (char*) &berkeley_shared_data, SHOW_BOOL},
  {"bdb_tmpdir", (char*) &berkeley_tmpdir, SHOW_CHAR_PTR},
#endif
+ {"binlog_cache_size", (char*) &binlog_cache_size, SHOW_LONG},
  {"character_set", default_charset, SHOW_CHAR},
  {"character_sets", (char*) &charsets_list, SHOW_CHAR_PTR},
  {"concurrent_insert", (char*) &myisam_concurrent_insert, SHOW_MY_BOOL},
@@ -2497,6 +2501,7 @@ struct show_var_st init_vars[]= {
  {"low_priority_updates", (char*) &low_priority_updates, SHOW_BOOL},
  {"lower_case_table_names", (char*) &lower_case_table_names, SHOW_LONG},
  {"max_allowed_packet", (char*) &max_allowed_packet, SHOW_LONG},
+ {"max_binlog_cache_size", (char*) &max_binlog_cache_size, SHOW_LONG},
  {"max_connections", (char*) &max_connections, SHOW_LONG},
  {"max_connect_errors", (char*) &max_connect_errors, SHOW_LONG},
  {"max_delayed_threads", (char*) &max_insert_delayed_threads, SHOW_LONG},
@@ -2711,8 +2716,9 @@ static void usage(void)
--bdb-lock-detect=#	Berkeley lock detect\n\
			(DEFAULT, OLDEST, RANDOM or YOUNGEST, # sec)\n\
--bdb-logdir=directory	Berkeley DB log file directory\n\
---bdb-nosync		Don't synchronously flush logs\n\
+--bdb-no-sync		Don't synchronously flush logs\n\
--bdb-recover		Start Berkeley DB in recover mode\n\
+--bdb-shared-data	Start Berkeley DB in multi-process mode\n\
--bdb-tmpdir=directory	Berkeley DB tempfile name\n\
--skip-bdb		Don't use berkeley db (will save memory)\n\
");
@@ -3224,6 +3230,10 @@ static void get_options(int argc,char **argv)
} }
break; break;
} }
case OPT_BDB_SHARED:
berkeley_init_flags&= ~(DB_PRIVATE);
berkeley_shared_data=1;
break;
case OPT_BDB_SKIP: case OPT_BDB_SKIP:
berkeley_skip=1; berkeley_skip=1;
break; break;
......
...@@ -207,3 +207,4 @@ ...@@ -207,3 +207,4 @@
"Tabulka '%-.64s' je ozna-Bena jako poruen a mla by bt opravena",-A "Tabulka '%-.64s' je ozna-Bena jako poruen a mla by bt opravena",-A
"Tabulka '%-.64s' je ozna-Bena jako poruen a posledn (automatick?) oprava se nezdaila",-A "Tabulka '%-.64s' je ozna-Bena jako poruen a posledn (automatick?) oprava se nezdaila",-A
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -201,3 +201,4 @@ ...@@ -201,3 +201,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -202,3 +202,4 @@ ...@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -201,3 +201,4 @@ ...@@ -201,3 +201,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -200,3 +200,4 @@ ...@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"La tabella '%-.64s' e' segnalata come rovinata e deve essere riparata", "La tabella '%-.64s' e' segnalata come rovinata e deve essere riparata",
"La tabella '%-.64s' e' segnalata come rovinata e l'ultima ricostruzione (automatica?) e' fallita", "La tabella '%-.64s' e' segnalata come rovinata e l'ultima ricostruzione (automatica?) e' fallita",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -200,3 +200,4 @@ ...@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -200,3 +200,4 @@ ...@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -200,3 +200,4 @@ ...@@ -200,3 +200,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -202,3 +202,4 @@ ...@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -202,3 +202,4 @@ ...@@ -202,3 +202,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -201,3 +201,4 @@ ...@@ -201,3 +201,4 @@
" '%-.64s' ", " '%-.64s' ",
" '%-.64s' (?) ", " '%-.64s' (?) ",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -206,3 +206,4 @@ ...@@ -206,3 +206,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -199,3 +199,4 @@ ...@@ -199,3 +199,4 @@
"Table '%-.64s' is marked as crashed and should be repaired", "Table '%-.64s' is marked as crashed and should be repaired",
"Table '%-.64s' is marked as crashed and last (automatic?) repair failed", "Table '%-.64s' is marked as crashed and last (automatic?) repair failed",
"Warning: Some non-transactional changed tables couldn't be rolled back", "Warning: Some non-transactional changed tables couldn't be rolled back",
"Multi-statement transaction required more than 'max_binlog_cache_size' bytes of storage. Increase this mysqld variable and try again',
...@@ -198,3 +198,5 @@ ...@@ -198,3 +198,5 @@
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE", "Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades", "Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK", "Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
#ER_TRANS_CACHE_FULL
"Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
...@@ -198,3 +198,4 @@ ...@@ -198,3 +198,4 @@
"Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE", "Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
"Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades", "Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
"Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK", "Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
"Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
...@@ -454,7 +454,7 @@ void close_temporary_tables(THD *thd) ...@@ -454,7 +454,7 @@ void close_temporary_tables(THD *thd)
next=table->next; next=table->next;
close_temporary(table); close_temporary(table);
} }
if(query && mysql_bin_log.is_open()) if (query && mysql_bin_log.is_open())
{ {
uint save_query_len = thd->query_length; uint save_query_len = thd->query_length;
*--p = 0; *--p = 0;
......
...@@ -121,8 +121,10 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0), ...@@ -121,8 +121,10 @@ THD::THD():user_time(0),fatal_error(0),last_insert_id_used(0),
#ifdef USING_TRANSACTIONS #ifdef USING_TRANSACTIONS
bzero((char*) &transaction,sizeof(transaction)); bzero((char*) &transaction,sizeof(transaction));
if (open_cached_file(&transaction.trans_log, if (open_cached_file(&transaction.trans_log,
mysql_tmpdir,LOG_PREFIX,0,MYF(MY_WME))) mysql_tmpdir, LOG_PREFIX, binlog_cache_size,
MYF(MY_WME)))
killed=1; killed=1;
transaction.trans_log.end_of_file= max_binlog_cache_size;
#endif #endif
#ifdef __WIN__ #ifdef __WIN__
......
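The sql_class.cc hunk above opens the per-connection transaction log with an initial size of binlog_cache_size and caps its end_of_file at max_binlog_cache_size, which is what later triggers the new ER_TRANS_CACHE_FULL message. A minimal sketch of that size-capped cache, with invented names (TransCache is not a server class), could look like:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical simplified model of the transaction cache: it starts at a
// soft initial size and refuses any write that would push it past a hard
// cap, mirroring how trans_log.end_of_file is set to max_binlog_cache_size.
class TransCache {
public:
  TransCache(size_t initial, size_t max_size) : max_size_(max_size) {
    buf_.reserve(initial);            // soft pre-allocation (binlog_cache_size)
  }
  // Returns false when the write would exceed the cap, the condition the
  // ER_TRANS_CACHE_FULL error reports to the client.
  bool write(const char *data, size_t len) {
    if (buf_.size() + len > max_size_)
      return false;
    buf_.insert(buf_.end(), data, data + len);
    return true;
  }
  size_t size() const { return buf_.size(); }
private:
  std::vector<char> buf_;
  size_t max_size_;
};
```

In the real server the cache spills to a temporary file rather than living in one vector; the point is only the hard upper bound on a multi-statement transaction's buffered binlog data.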
...@@ -74,12 +74,12 @@ public: ...@@ -74,12 +74,12 @@ public:
void open(const char *log_name,enum_log_type log_type, void open(const char *log_name,enum_log_type log_type,
const char *new_name=0); const char *new_name=0);
void new_file(void); void new_file(void);
void write(THD *thd, enum enum_server_command command,const char *format,...); bool write(THD *thd, enum enum_server_command command,const char *format,...);
void write(THD *thd, const char *query, uint query_length, bool write(THD *thd, const char *query, uint query_length,
time_t query_start=0); time_t query_start=0);
void write(Query_log_event* event_info); // binary log write bool write(Query_log_event* event_info); // binary log write
void write(Load_log_event* event_info); bool write(Load_log_event* event_info);
bool write(IO_CACHE *cache);
int generate_new_name(char *new_name,const char *old_name); int generate_new_name(char *new_name,const char *old_name);
void make_log_name(char* buf, const char* log_ident); void make_log_name(char* buf, const char* log_ident);
bool is_active(const char* log_file_name); bool is_active(const char* log_file_name);
......
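The log.h hunk changes the write() methods from void to bool so callers can react to a failed binary-log write. The pattern the later sql_delete.cc/sql_insert.cc/sql_update.cc hunks adopt can be sketched as follows, with stand-in types (FakeBinLog and finish_statement are illustrative, not server code):

```cpp
#include <cassert>

// Why write() now returns bool: for a transactional table, a failed
// binary-log write must become a statement error so the transaction rolls
// back instead of silently diverging from the binlog.
struct FakeBinLog {
  bool fail_next;                     // simulate a full or failed log write
  bool write() { return fail_next; }  // true == error, as in the patch
};

// Mirrors: if (mysql_bin_log.write(&qinfo) && using_transactions) error=1;
int finish_statement(FakeBinLog &log, bool using_transactions) {
  int error = 0;
  if (log.write() && using_transactions)
    error = 1;                        // forces the rollback path
  return error;
}
```

For non-transactional tables the error is ignored, preserving the old behavior, since there is nothing left to roll back.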
...@@ -158,14 +158,14 @@ exit: ...@@ -158,14 +158,14 @@ exit:
are 2 digits (raid directories). are 2 digits (raid directories).
*/ */
static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path, static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *org_path,
uint level) uint level)
{ {
long deleted=0; long deleted=0;
ulong found_other_files=0; ulong found_other_files=0;
char filePath[FN_REFLEN]; char filePath[FN_REFLEN];
DBUG_ENTER("mysql_rm_known_files"); DBUG_ENTER("mysql_rm_known_files");
DBUG_PRINT("enter",("path: %s", path)); DBUG_PRINT("enter",("path: %s", org_path));
/* remove all files with known extensions */ /* remove all files with known extensions */
for (uint idx=2 ; for (uint idx=2 ;
...@@ -181,7 +181,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path, ...@@ -181,7 +181,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
{ {
char newpath[FN_REFLEN]; char newpath[FN_REFLEN];
MY_DIR *new_dirp; MY_DIR *new_dirp;
strxmov(newpath,path,"/",file->name,NullS); strxmov(newpath,org_path,"/",file->name,NullS);
unpack_filename(newpath,newpath); unpack_filename(newpath,newpath);
if ((new_dirp = my_dir(newpath,MYF(MY_DONT_SORT)))) if ((new_dirp = my_dir(newpath,MYF(MY_DONT_SORT))))
{ {
...@@ -199,7 +199,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path, ...@@ -199,7 +199,7 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
found_other_files++; found_other_files++;
continue; continue;
} }
strxmov(filePath,path,"/",file->name,NullS); strxmov(filePath,org_path,"/",file->name,NullS);
unpack_filename(filePath,filePath); unpack_filename(filePath,filePath);
if (my_delete(filePath,MYF(MY_WME))) if (my_delete(filePath,MYF(MY_WME)))
{ {
...@@ -224,9 +224,9 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path, ...@@ -224,9 +224,9 @@ static long mysql_rm_known_files(THD *thd, MY_DIR *dirp, const char *path,
*/ */
if (!found_other_files) if (!found_other_files)
{ {
#ifdef HAVE_READLINK
char tmp_path[FN_REFLEN]; char tmp_path[FN_REFLEN];
path=unpack_filename(tmp_path,path); char *path=unpack_filename(tmp_path,org_path);
#ifdef HAVE_READLINK
int linkcount = readlink(path,filePath,sizeof(filePath)-1); int linkcount = readlink(path,filePath,sizeof(filePath)-1);
if (linkcount > 0) // If the path was a symbolic link if (linkcount > 0) // If the path was a symbolic link
{ {
......
...@@ -126,7 +126,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit, ...@@ -126,7 +126,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
SQL_SELECT *select; SQL_SELECT *select;
READ_RECORD info; READ_RECORD info;
bool using_limit=limit != HA_POS_ERROR; bool using_limit=limit != HA_POS_ERROR;
bool use_generate_table; bool use_generate_table,using_transactions;
DBUG_ENTER("mysql_delete"); DBUG_ENTER("mysql_delete");
if (!table_list->db) if (!table_list->db)
...@@ -214,18 +214,20 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit, ...@@ -214,18 +214,20 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
(void) table->file->extra(HA_EXTRA_READCHECK); (void) table->file->extra(HA_EXTRA_READCHECK);
if (options & OPTION_QUICK) if (options & OPTION_QUICK)
(void) table->file->extra(HA_EXTRA_NORMAL); (void) table->file->extra(HA_EXTRA_NORMAL);
if (deleted) using_transactions=table->file->has_transactions();
if (deleted && (error == 0 || !using_transactions))
{ {
mysql_update_log.write(thd,thd->query, thd->query_length); mysql_update_log.write(thd,thd->query, thd->query_length);
if (mysql_bin_log.is_open()) if (mysql_bin_log.is_open())
{ {
Query_log_event qinfo(thd, thd->query); Query_log_event qinfo(thd, thd->query, using_transactions);
mysql_bin_log.write(&qinfo); if (mysql_bin_log.write(&qinfo) && using_transactions)
error=1;
} }
if (!table->file->has_transactions()) if (!using_transactions)
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE; thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
} }
if (ha_autocommit_or_rollback(thd,error >= 0)) if (using_transactions && ha_autocommit_or_rollback(thd,error >= 0))
error=1; error=1;
if (thd->lock) if (thd->lock)
{ {
......
...@@ -102,6 +102,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields, ...@@ -102,6 +102,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
int error; int error;
bool log_on= ((thd->options & OPTION_UPDATE_LOG) || bool log_on= ((thd->options & OPTION_UPDATE_LOG) ||
!(thd->master_access & PROCESS_ACL)); !(thd->master_access & PROCESS_ACL));
bool using_transactions;
uint value_count; uint value_count;
uint save_time_stamp; uint save_time_stamp;
ulong counter = 1; ulong counter = 1;
...@@ -254,17 +255,20 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields, ...@@ -254,17 +255,20 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
thd->insert_id(id); // For update log thd->insert_id(id); // For update log
else if (table->next_number_field) else if (table->next_number_field)
id=table->next_number_field->val_int(); // Return auto_increment value id=table->next_number_field->val_int(); // Return auto_increment value
if (info.copied || info.deleted) using_transactions=table->file->has_transactions();
if ((info.copied || info.deleted) && (error == 0 || !using_transactions))
{ {
mysql_update_log.write(thd, thd->query, thd->query_length); mysql_update_log.write(thd, thd->query, thd->query_length);
if (mysql_bin_log.is_open()) if (mysql_bin_log.is_open())
{ {
Query_log_event qinfo(thd, thd->query); Query_log_event qinfo(thd, thd->query, using_transactions);
mysql_bin_log.write(&qinfo); if (mysql_bin_log.write(&qinfo) && using_transactions)
error=1;
} }
if (!table->file->has_transactions()) if (!using_transactions)
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE; thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
} }
if (using_transactions)
error=ha_autocommit_or_rollback(thd,error); error=ha_autocommit_or_rollback(thd,error);
if (thd->lock) if (thd->lock)
{ {
...@@ -1265,7 +1269,8 @@ bool select_insert::send_eof() ...@@ -1265,7 +1269,8 @@ bool select_insert::send_eof()
mysql_update_log.write(thd,thd->query,thd->query_length); mysql_update_log.write(thd,thd->query,thd->query_length);
if (mysql_bin_log.is_open()) if (mysql_bin_log.is_open())
{ {
Query_log_event qinfo(thd, thd->query); Query_log_event qinfo(thd, thd->query,
table->file->has_transactions());
mysql_bin_log.write(&qinfo); mysql_bin_log.write(&qinfo);
} }
return 0; return 0;
......
...@@ -245,10 +245,11 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list, ...@@ -245,10 +245,11 @@ int mysql_load(THD *thd,sql_exchange *ex,TABLE_LIST *table_list,
if (!table->file->has_transactions()) if (!table->file->has_transactions())
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE; thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
if (!read_file_from_client) if (!read_file_from_client && mysql_bin_log.is_open())
{ {
ex->skip_lines = save_skip_lines; ex->skip_lines = save_skip_lines;
Load_log_event qinfo(thd, ex, table->table_name, fields, handle_duplicates); Load_log_event qinfo(thd, ex, table->table_name, fields,
handle_duplicates);
mysql_bin_log.write(&qinfo); mysql_bin_log.write(&qinfo);
} }
DBUG_RETURN(0); DBUG_RETURN(0);
......
...@@ -172,7 +172,7 @@ check_connections(THD *thd) ...@@ -172,7 +172,7 @@ check_connections(THD *thd)
vio_description(net->vio))); vio_description(net->vio)));
if (!thd->host) // If TCP/IP connection if (!thd->host) // If TCP/IP connection
{ {
char ip[17]; char ip[30];
if (vio_peer_addr(net->vio,ip)) if (vio_peer_addr(net->vio,ip))
return (ER_BAD_HOST_ERROR); return (ER_BAD_HOST_ERROR);
...@@ -718,7 +718,7 @@ bool do_command(THD *thd) ...@@ -718,7 +718,7 @@ bool do_command(THD *thd)
case COM_DROP_DB: case COM_DROP_DB:
{ {
char *db=thd->strdup(packet+1); char *db=thd->strdup(packet+1);
if (check_access(thd,DROP_ACL,db,0,1)) if (check_access(thd,DROP_ACL,db,0,1) || end_active_trans(thd))
break; break;
mysql_log.write(thd,command,db); mysql_log.write(thd,command,db);
mysql_rm_db(thd,db,0); mysql_rm_db(thd,db,0);
...@@ -1136,6 +1136,9 @@ mysql_execute_command(void) ...@@ -1136,6 +1136,9 @@ mysql_execute_command(void)
goto error; /* purecov: inspected */ goto error; /* purecov: inspected */
if (grant_option && check_grant(thd,INDEX_ACL,tables)) if (grant_option && check_grant(thd,INDEX_ACL,tables))
goto error; goto error;
if (end_active_trans(thd))
res= -1;
else
res = mysql_create_index(thd, tables, lex->key_list); res = mysql_create_index(thd, tables, lex->key_list);
break; break;
...@@ -1224,7 +1227,9 @@ mysql_execute_command(void) ...@@ -1224,7 +1227,9 @@ mysql_execute_command(void)
goto error; goto error;
} }
} }
if (mysql_rename_tables(thd,tables)) if (end_active_trans(thd))
res= -1;
else if (mysql_rename_tables(thd,tables))
res= -1; res= -1;
break; break;
} }
...@@ -1428,6 +1433,9 @@ mysql_execute_command(void) ...@@ -1428,6 +1433,9 @@ mysql_execute_command(void)
{ {
if (check_table_access(thd,DROP_ACL,tables)) if (check_table_access(thd,DROP_ACL,tables))
goto error; /* purecov: inspected */ goto error; /* purecov: inspected */
if (end_active_trans(thd))
res= -1;
else
res = mysql_rm_table(thd,tables,lex->drop_if_exists); res = mysql_rm_table(thd,tables,lex->drop_if_exists);
} }
break; break;
...@@ -1438,6 +1446,9 @@ mysql_execute_command(void) ...@@ -1438,6 +1446,9 @@ mysql_execute_command(void)
goto error; /* purecov: inspected */ goto error; /* purecov: inspected */
if (grant_option && check_grant(thd,INDEX_ACL,tables)) if (grant_option && check_grant(thd,INDEX_ACL,tables))
goto error; goto error;
if (end_active_trans(thd))
res= -1;
else
res = mysql_drop_index(thd, tables, lex->drop_list); res = mysql_drop_index(thd, tables, lex->drop_list);
break; break;
case SQLCOM_SHOW_DATABASES: case SQLCOM_SHOW_DATABASES:
...@@ -1643,7 +1654,8 @@ mysql_execute_command(void) ...@@ -1643,7 +1654,8 @@ mysql_execute_command(void)
} }
case SQLCOM_DROP_DB: case SQLCOM_DROP_DB:
{ {
if (check_access(thd,DROP_ACL,lex->name,0,1)) if (check_access(thd,DROP_ACL,lex->name,0,1) ||
end_active_trans(thd))
break; break;
mysql_rm_db(thd,lex->name,lex->drop_if_exists); mysql_rm_db(thd,lex->name,lex->drop_if_exists);
break; break;
......
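The sql_parse.cc hunks all apply one rule: DDL statements (DROP DATABASE, DROP TABLE, CREATE/DROP INDEX, RENAME) first end any active transaction via end_active_trans(), and the statement is abandoned if that implicit commit fails. A toy sketch of the control flow, with a made-up Session struct in place of THD:

```cpp
#include <cassert>

// Illustrative stand-in for THD; not a server type.
struct Session {
  bool in_transaction;
  bool commit_fails;                  // simulate a commit error
};

// Returns non-zero on failure, like end_active_trans(thd) in the patch.
int end_active_trans(Session &s) {
  if (s.in_transaction) {
    if (s.commit_fails)
      return 1;
    s.in_transaction = false;         // implicit COMMIT before the DDL
  }
  return 0;
}

// res = -1 mirrors the error value used in mysql_execute_command().
int run_ddl(Session &s) {
  if (end_active_trans(s))
    return -1;
  return 0;                           // proceed with the DDL itself
}
```

This is why, after this change, a DDL statement inside a transaction implicitly commits the preceding work before touching the schema.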
...@@ -1427,6 +1427,17 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name, ...@@ -1427,6 +1427,17 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
thd->count_cuted_fields=0; /* Don`t calc cuted fields */ thd->count_cuted_fields=0; /* Don`t calc cuted fields */
new_table->time_stamp=save_time_stamp; new_table->time_stamp=save_time_stamp;
#if defined( __WIN__) || defined( __EMX__)
/*
We must do the COMMIT here so that we can close and rename the
temporary table (as windows can't rename open tables)
*/
if (ha_commit_stmt(thd))
error=1;
if (ha_commit(thd))
error=1;
#endif
if (table->tmp_table) if (table->tmp_table)
{ {
/* We changed a temporary table */ /* We changed a temporary table */
...@@ -1544,6 +1555,8 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name, ...@@ -1544,6 +1555,8 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
goto err; goto err;
} }
} }
#if !(defined( __WIN__) || defined( __EMX__))
/* The ALTER TABLE is always in its own transaction */ /* The ALTER TABLE is always in its own transaction */
error = ha_commit_stmt(thd); error = ha_commit_stmt(thd);
if (ha_commit(thd)) if (ha_commit(thd))
...@@ -1554,6 +1567,7 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name, ...@@ -1554,6 +1567,7 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
VOID(pthread_mutex_unlock(&LOCK_open)); VOID(pthread_mutex_unlock(&LOCK_open));
goto err; goto err;
} }
#endif
thd->proc_info="end"; thd->proc_info="end";
mysql_update_log.write(thd, thd->query,thd->query_length); mysql_update_log.write(thd, thd->query,thd->query_length);
......
...@@ -49,7 +49,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields, ...@@ -49,7 +49,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
thr_lock_type lock_type) thr_lock_type lock_type)
{ {
bool using_limit=limit != HA_POS_ERROR; bool using_limit=limit != HA_POS_ERROR;
bool used_key_is_modified; bool used_key_is_modified, using_transactions;
int error=0; int error=0;
uint save_time_stamp, used_index; uint save_time_stamp, used_index;
key_map old_used_keys; key_map old_used_keys;
...@@ -237,18 +237,20 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields, ...@@ -237,18 +237,20 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
thd->proc_info="end"; thd->proc_info="end";
VOID(table->file->extra(HA_EXTRA_READCHECK)); VOID(table->file->extra(HA_EXTRA_READCHECK));
table->time_stamp=save_time_stamp; // Restore auto timestamp pointer table->time_stamp=save_time_stamp; // Restore auto timestamp pointer
if (updated) using_transactions=table->file->has_transactions();
if (updated && (error == 0 || !using_transactions))
{ {
mysql_update_log.write(thd,thd->query,thd->query_length); mysql_update_log.write(thd,thd->query,thd->query_length);
if (mysql_bin_log.is_open()) if (mysql_bin_log.is_open())
{ {
Query_log_event qinfo(thd, thd->query); Query_log_event qinfo(thd, thd->query, using_transactions);
mysql_bin_log.write(&qinfo); if (mysql_bin_log.write(&qinfo) && using_transactions)
error=1;
} }
if (!table->file->has_transactions()) if (!using_transactions)
thd->options|=OPTION_STATUS_NO_TRANS_UPDATE; thd->options|=OPTION_STATUS_NO_TRANS_UPDATE;
} }
if (ha_autocommit_or_rollback(thd, error >= 0)) if (using_transactions && ha_autocommit_or_rollback(thd, error >= 0))
error=1; error=1;
if (thd->lock) if (thd->lock)
{ {
......
...@@ -2451,6 +2451,7 @@ user: ...@@ -2451,6 +2451,7 @@ user:
keyword: keyword:
ACTION {} ACTION {}
| AFTER_SYM {} | AFTER_SYM {}
| AGAINST {}
| AGGREGATE_SYM {} | AGGREGATE_SYM {}
| AUTOCOMMIT {} | AUTOCOMMIT {}
| AVG_ROW_LENGTH {} | AVG_ROW_LENGTH {}
......
@@ -34,10 +34,12 @@ set-variable = record_buffer=2M
 set-variable = thread_cache=8
 set-variable = thread_concurrency=8	# Try number of CPU's*2
 set-variable = myisam_sort_buffer_size=64M
-log-update
+log-bin
+server-id = 1

 # Uncomment the following if you are using BDB tables
 #set-variable = bdb_cache_size=384M
+#set-variable = bdb_max_lock=100000

 # Point the following paths to different dedicated disks
 #tmpdir = /tmp/
...
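In each of the sample configuration files touched by this commit, the deprecated update log is replaced by the binary log, and a server ID is added, since the binary log and replication require one. The resulting fragment of the first sample file would read (a sketch of the post-patch state):

```
set-variable = myisam_sort_buffer_size=64M
log-bin
server-id = 1
```

The same substitution of `log-update` with `log-bin` plus a `server-id = 1` line recurs in each of the remaining sample files below.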
@@ -34,10 +34,12 @@ set-variable = record_buffer=1M
 set-variable = myisam_sort_buffer_size=64M
 set-variable = thread_cache=8
 set-variable = thread_concurrency=8	# Try number of CPU's*2
-log-update
+log-bin
+server-id = 1

 # Uncomment the following if you are using BDB tables
 #set-variable = bdb_cache_size=64M
+#set-variable = bdb_max_lock=100000

 # Point the following paths to different dedicated disks
 #tmpdir = /tmp/
...
@@ -33,10 +33,12 @@ set-variable = table_cache=64
 set-variable = sort_buffer=512K
 set-variable = net_buffer_length=8K
 set-variable = myisam_sort_buffer_size=8M
-log-update
+log-bin
+server-id = 1

 # Uncomment the following if you are using BDB tables
 #set-variable = bdb_cache_size=4M
+#set-variable = bdb_max_lock=10000

 # Point the following paths to different dedicated disks
 #tmpdir = /tmp/
...
@@ -33,12 +33,13 @@ set-variable = thread_stack=64K
 set-variable = table_cache=4
 set-variable = sort_buffer=64K
 set-variable = net_buffer_length=2K
+server-id = 1

 # Uncomment the following if you are NOT using BDB tables
 #skip-bdb

 # Uncomment the following if you want to log updates
-#log-update
+#log-bin

 [mysqldump]
 quick
...