Commit fec268ed authored by unknown

Lots of fixes for BDB tables

Change DROP TABLE to first drop the data, then the .frm file


Docs/manual.texi:
  Updated TODO and Changelog
include/Makefile.am:
  Portability fix
mysql-test/misc/select.res:
  ***MISSING WEAVE***
mysys/mf_iocache2.c:
  cleanup
scripts/mysqlhotcopy.sh:
  Fixed --noindices
sql-bench/Results/ATIS-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/RUN-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/alter-table-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/big-tables-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/connect-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/create-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/insert-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/select-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/Results/wisconsin-pg-Linux_2.2.14_my_SMP_i686-cmp-mysql,pg:
  Updated benchmarks
sql-bench/limits/pg.cfg:
  Updated to new crash-me
sql-bench/server-cfg.sh:
  Fixes for pg 7.0.2
sql/ha_berkeley.cc:
  Lots of BDB table fixes
sql/ha_berkeley.h:
  Lots of BDB table fixes
sql/handler.cc:
  Change DROP TABLE to first drop the data, then the .frm file
sql/hostname.cc:
  Fixes for empty hostnames
sql/log.cc:
  Fixed transaction logging
sql/share/swedish/errmsg.OLD:
  cleanup
sql/sql_delete.cc:
  Fixes for logging
sql/sql_insert.cc:
  Fixes for logging
sql/sql_select.cc:
  Fixes for BDB tables
sql/sql_table.cc:
  Change DROP TABLE to first drop the data, then the .frm file
sql/sql_update.cc:
  Fixes for logging
BitKeeper/etc/ignore:
  Added scripts/mysqld_multi to the ignore list
BitKeeper/etc/logging_ok:
  Logging to logging@openlogging.org accepted
parent 99b3680b
...@@ -425,3 +425,4 @@ mysql-test/var/slave-data/mysql-bin.012
mysql-test/var/slave-data/mysql-bin.013
mysql-test/var/slave-data/mysql-bin.014
mysql-test/var/slave-data/mysql-bin.index
scripts/mysqld_multi
jani@prima.mysql.com
monty@donna.mysql.com
sasha@mysql.sashanet.com
sasha@work.mysql.com
serg@serg.mysql.com
jani@prima.mysql.fi
...@@ -31688,9 +31688,9 @@ for a similar query to get the correct row count.
@cindex Borland Builder 4 program
@item Borland Builder 4
When you start a query you can use the property @code{Active} or use the
method @code{Open}. Note that @code{Active} will start by automatically
issuing a @code{SELECT * FROM ...} query, which may not be a good thing if
your tables are big!
@item ColdFusion (On Unix)
The following information is taken from the ColdFusion documentation:
...@@ -31702,11 +31702,16 @@ newer version should also work.) You can download @strong{MyODBC} at
@uref{http://www.mysql.com/downloads/api-myodbc.html}
@cindex ColdFusion program
ColdFusion Version 4.5.1 allows you to use the ColdFusion Administrator
to add the @strong{MySQL} data source. However, the driver is not
included with ColdFusion Version 4.5.1. Before the @strong{MySQL} driver
will appear in the ODBC datasources drop-down list, you must build and
copy the @strong{MyODBC} driver to
@file{/opt/coldfusion/lib/libmyodbc.so}.

The Contrib directory contains the program mydsn-xxx.zip which allows
you to build and remove the DSN registry file for the MyODBC driver
on ColdFusion applications.
@cindex DataJunction
@item DataJunction
...@@ -38643,13 +38648,18 @@ databases. By Hal Roberts.
Interface for Stk. Stk is the Tk widgets with Scheme underneath instead of Tcl.
By Terry Jones.
@item @uref{http://www.mysql.com/Downloads/Contrib/eiffel-wrapper-1.0.tar.gz,eiffel-wrapper-1.0.tar.gz}
Eiffel wrapper by Michael Ravits.
@item @uref{http://www.mysql.com/Downloads/Contrib/SQLmy0.06.tgz,SQLmy0.06.tgz}
FlagShip Replaceable Database Driver (RDD) for MySQL. By Alejandro
Fernandez Herrero.
@uref{http://www.fship.com/rdds.html, Flagship RDD home page}
@item @uref{http://www.mysql.com/Downloads/Contrib/mydsn-1.0.zip,mydsn-1.0.zip}
Binary and source for @code{mydsn.dll}. mydsn should be used to build
and remove the DSN registry file for the MyODBC driver in ColdFusion
applications. By Miguel Angel Solórzano.
@end itemize
@appendixsec Clients
...@@ -39603,36 +39613,49 @@ though, so Version 3.23 is not released as a stable version yet.
@appendixsubsec Changes in release 3.23.29
@itemize @bullet
@item
Changed drop table to first drop the tables and then the @code{.frm} file.
@item
Fixed a bug in the hostname cache which caused @code{mysqld} to report the
hostname as '' in some error messages.
@item
Fixed a bug with @code{HEAP} type tables; the variable
@code{max_heap_table_size} wasn't used. Now either @code{MAX_ROWS} or
@code{max_heap_table_size} can be used to limit the size of a @code{HEAP}
type table.
@item
Changed the default server-id to 1 for masters and 2 for slaves
to make it easier to use the binary log.
@item
Renamed variable @code{bdb_lock_max} to @code{bdb_max_lock}.
@item
Added support for @code{auto_increment} on sub fields for BDB tables.
@item
Added @code{ANALYZE} of BDB tables.
@item
BDB tables now store the number of rows; this helps to optimize queries
when we need an approximation of the row count.
@item
If we get an error in a multi-row statement, we now only roll back the
last statement, not the entire transaction.
@item
If you do a @code{ROLLBACK} when you have updated a non-transactional table,
you will get an error as a warning.
@item
Added option @code{--bdb-shared-data} to @code{mysqld}.
@item
Added status variable @code{Slave_open_temp_tables}.
@item
Added variables @code{binlog_cache_size} and @code{max_binlog_cache_size} to
@code{mysqld}.
@item
@code{DROP TABLE}, @code{RENAME TABLE}, @code{CREATE INDEX} and
@code{DROP INDEX} are now transaction endpoints.
@item
If you do a @code{DROP DATABASE} on a symbolically linked database, both
the link and the original database are deleted.
@item
Fixed @code{DROP DATABASE} to work on OS/2.
@item
New client @code{mysqld_multi}. @xref{mysqld_multi}.
@item
Fixed a bug when doing a @code{SELECT DISTINCT ... table1 LEFT JOIN
table2..} when table2 was empty.
@item
...@@ -39640,13 +39663,13 @@ Added @code{--abort-slave-event-count} and
@code{--disconnect-slave-event-count} options to @code{mysqld} for
debugging and testing of replication.
@item
Fixed replication of temporary tables. Handles everything except
slave server restart.
@item
@code{SHOW KEYS} now shows whether or not the key is @code{FULLTEXT}.
@item
Added new script, @file{mysql-multi.server.sh}. Thanks to
Tim Bunce @email{Tim.Bunce@@ig.co.uk} for modifying @file{mysql.server} to
easily handle hosts running many @code{mysqld} processes.
...@@ -39682,12 +39705,6 @@ with FrontBase.
Allow @code{RESTRICT} and @code{CASCADE} after @code{DROP TABLE} to make
porting easier.
@item
Reset a status variable that could cause problems if one used @code{--slow-log}.
@item
Added variable @code{connect_timeout} to @code{mysql} and @code{mysqladmin}.
...@@ -44053,6 +44070,32 @@ Fixed @code{DISTINCT} with calculated columns.
@node Bugs, TODO, News, Top
@appendix Known errors and design deficiencies in MySQL

The following problems are known and have a very high priority to get
fixed:

@itemize @bullet
@item
@code{ANALYZE TABLE} on a BDB table may in some cases make the table
unusable until one has restarted @code{mysqld}. When this happens you will
see errors like the following in the @strong{MySQL} error file:

@example
001207 22:07:56 bdb: log_flush: LSN past current end-of-log
@end example

@item
Don't execute @code{ALTER TABLE} on a @code{BDB} table on which you are
running uncompleted multi-statement transactions. (The transaction
will probably be ignored.)
@item
Doing a @code{LOCK TABLE ..} and @code{FLUSH TABLES ..} doesn't
guarantee that there isn't a half-finished transaction in progress on the
table.
@end itemize

The following problems are known and will be fixed in due time:

@itemize @bullet
@item
@code{mysqldump} on a @code{MERGE} table doesn't include the current
...@@ -44120,7 +44163,7 @@ you a nice speed increase as it allows @strong{MySQL} to do some
optimizations that otherwise would be very hard to do.
If you set a column to a wrong value, @strong{MySQL} will, instead of doing
a rollback, store the @code{best possible value} in the column:
@itemize @bullet
@item
...@@ -44144,6 +44187,7 @@ If the date is totally wrong, @strong{MySQL} will store the special
If you set an @code{enum} to an unsupported value, it will be set to
the error value 'empty string', with numeric value 0.
@end itemize
@item
If you execute a @code{PROCEDURE} on a query that returns an empty set,
in some cases the @code{PROCEDURE} will not transform the columns.
...@@ -51,7 +51,7 @@ my_global.h: global.h
# These files should not be included in distributions since they are
# generated by configure from the .h.in files
dist-hook:
	$(RM) -f $(distdir)/mysql_version.h $(distdir)/my_config.h

# Don't update the files from bitkeeper
%::SCCS/s.%
...@@ -32,20 +32,19 @@
void my_b_seek(IO_CACHE *info,my_off_t pos)
{
  if (info->type == READ_CACHE)
  {
    info->rc_pos=info->rc_end=info->buffer;
  }
  else if (info->type == WRITE_CACHE)
  {
    byte* try_rc_pos;
    try_rc_pos = info->rc_pos + (pos - info->pos_in_file);
    if (try_rc_pos >= info->buffer && try_rc_pos <= info->rc_end)
      info->rc_pos = try_rc_pos;
    else
      flush_io_cache(info);
  }
  info->pos_in_file=pos;
  info->seek_not_done=1;
}
......
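The write-cache branch of `my_b_seek` above avoids a flush when the seek target still falls inside the buffered window. A standalone sketch of that idea (toy structure and names, not the real `IO_CACHE`):

```c
#include <assert.h>
#include <stddef.h>

/* Toy write cache: bytes [file_pos, file_pos + used) of the "file"
   are currently buffered. flushes counts how often we had to give
   up the buffered data. */
struct toy_cache {
    char   buf[64];
    size_t used;       /* bytes currently buffered */
    long   file_pos;   /* file offset of buf[0]    */
    int    flushes;
};

/* Seek: if the target offset lies within the buffered window,
   just truncate the buffer to that point; otherwise flush. */
static void toy_seek(struct toy_cache *c, long pos)
{
    long off = pos - c->file_pos;
    if (off >= 0 && (size_t)off <= c->used)
        c->used = (size_t)off;       /* cheap: stay in the buffer */
    else {
        c->flushes++;                /* expensive path            */
        c->used = 0;
        c->file_pos = pos;
    }
}
```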
...@@ -37,10 +37,12 @@ WARNING: THIS IS VERY MUCH A FIRST-CUT ALPHA. Comments/patches welcome.
# Documentation continued at end of file

my $VERSION = "1.9";
my $opt_tmpdir = $main::ENV{TMPDIR};

my $OPTIONS = <<"_OPTIONS";
$0 Ver $VERSION

Usage: $0 db_name [new_db_name | directory]

  -?, --help           display this help screen and exit
...@@ -115,6 +117,8 @@ GetOptions( \%opt,
my @db_desc = ();
my $tgt_name = undef;

usage("") if ($opt{help});

if ( $opt{regexp} || $opt{suffix} || @ARGV > 2 ) {
    $tgt_name = pop @ARGV unless ( exists $opt{suffix} );
    @db_desc = map { s{^([^\.]+)\./(.+)/$}{$1}; { 'src' => $_, 't_regex' => ( $2 ? $2 : '.*' ) } } @ARGV;
...@@ -133,10 +137,9 @@ else {
    }
}

my %mysqld_vars;
my $start_time = time;
my $opt_tmpdir = $opt{tmpdir} ? $opt{tmpdir} : $main::ENV{TMPDIR};
$0 = $1 if $0 =~ m:/([^/]+)$:;
$opt{quiet} = 0 if $opt{debug};
$opt{allowold} = 1 if $opt{keepold};
...@@ -310,15 +313,19 @@ print Dumper( \@db_desc ) if ( $opt{debug} );
die "No tables to hot-copy" unless ( length $hc_locks );

# --- create target directories if we are using 'cp' ---
my @existing = ();
if ($opt{method} =~ /^cp\b/)
{
    foreach my $rdb ( @db_desc ) {
	push @existing, $rdb->{target} if ( -d $rdb->{target} );
    }
    die "Can't hotcopy to '", join( "','", @existing ), "' because they already exist and the --allowold option was not given.\n"
	if ( @existing && !$opt{allowold} );
}

retire_directory( @existing ) if ( @existing );
...@@ -385,54 +392,11 @@ foreach my $rdb ( @db_desc )
    push @failed, "$rdb->{src} -> $rdb->{target} failed: $@"
	if ( $@ );

    @files = @{$rdb->{index}};
    if ($rdb->{index})
    {
	copy_index($opt{method}, \@files,
		   "$datadir/$rdb->{src}", $rdb->{target} );
    }

    if ( $opt{checkpoint} ) {
...@@ -534,9 +498,62 @@ sub copy_files {
    safe_system (@cmd);
}

#
# Copy only the header of the index file
#

sub copy_index
{
    my ($method, $files, $source, $target) = @_;
    my $tmpfile = "$opt_tmpdir/mysqlhotcopy$$";

    print "Copying indices for " . @$files . " files...\n" unless $opt{quiet};
    foreach my $file (@$files)
    {
	my $from = "$source/$file";
	my $to   = "$target/$file";
	my $buff;
	open(INPUT, "<$from") || die "Can't open file $from: $!\n";
	my $length = read INPUT, $buff, 2048;
	die "Can't read index header from $from\n" if ($length < 1024);
	close INPUT;

	if ( $opt{dryrun} )
	{
	    print "$opt{method}-header $from $to\n";
	}
	elsif ($opt{method} eq 'cp')
	{
	    open(OUTPUT, ">$to") || die "Can't create file $to: $!\n";
	    if (syswrite(OUTPUT, $buff) != length($buff))
	    {
		die "Error when writing data to $to: $!\n";
	    }
	    close OUTPUT || die "Error on close of $to: $!\n";
	}
	elsif ($opt{method} eq 'scp')
	{
	    my $tmp = $tmpfile;
	    open(OUTPUT, ">$tmp") || die "Can't create file $tmp: $!\n";
	    if (syswrite(OUTPUT, $buff) != length($buff))
	    {
		die "Error when writing data to $tmp: $!\n";
	    }
	    close OUTPUT || die "Error on close of $tmp: $!\n";
	    safe_system("scp $tmp $to");
	}
	else
	{
	    die "Can't use unsupported method '$opt{method}'\n";
	}
    }
    unlink $tmpfile if ($opt{method} eq 'scp');
}
sub safe_system
{
    my @cmd = @_;

    if ( $opt{dryrun} )
    {

...@@ -546,7 +563,7 @@ sub safe_system
    ## for some reason system fails but backticks works ok for scp...
    print "Executing '@cmd'\n" if $opt{debug};
    my $cp_status = system "@cmd > /dev/null";
    if ($cp_status != 0) {
	warn "Burp ('scuse me). Trying backtick execution...\n" if $opt{debug}; #'
	## try something else
...@@ -680,7 +697,9 @@ UNIX domain socket to use when connecting to local server

=item --noindices

Don't include index files in the copy. Only up to the first 2048 bytes
of each index file are copied; you can restore the indexes with
isamchk -r or myisamchk -r on the backup.
=item --method=# =item --method=#
...@@ -689,9 +708,10 @@ method for copy (only "cp" currently supported). Alpha support for
will vary with your ability to understand how scp works. 'man scp'
and 'man ssh' are your friends.

The destination directory _must exist_ on the target machine when using the
scp method. --keepold and --allowold are meaningless with scp.
Liberal use of the --debug option will help you figure out what's
really going on when you do an scp.

Note that using scp will lock your tables for a _long_ time unless
your network connection is _fast_. If this is unacceptable to you,

...@@ -755,3 +775,4 @@ Ralph Corderoy - added synonyms for commands
Scott Wiersdorf - added table regex and scp support
Monty - working --noindex (copy only first 2048 bytes of index file)
        Fixes for --method=scp
Testing server 'PostgreSQL version ???' at 2000-12-05  5:18:45

ATIS table test

Creating tables
Time for create_table (28): 0 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Inserting data
Time to insert (9768): 9 wallclock secs ( 2.88 usr 0.35 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Retrieving data
Time for select_simple_join (500): 3 wallclock secs ( 0.69 usr 0.04 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_join (200): 14 wallclock secs ( 5.18 usr 0.20 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_distinct (800): 17 wallclock secs ( 2.21 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (2600): 45 wallclock secs ( 1.73 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Removing tables
Time to drop_table (28): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

Total time: 89 wallclock secs (12.72 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Benchmark DBD suite: 2.10
Date of test:        2000-12-05  5:18:45
Running tests on:    Linux 2.2.14-my-SMP i686
Arguments:
Comments:            Intel Xeon, 2x550 Mhz 500 Mb, pg started with -o -F
Limits from:         mysql,pg
Server version:      PostgreSQL version ???

ATIS: Total time: 89 wallclock secs (12.72 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
alter-table: Total time: 29 wallclock secs ( 0.71 usr 0.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
big-tables: Total time: 1248 wallclock secs ( 9.27 usr 0.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
connect: Total time: 472 wallclock secs (48.80 usr 17.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
create: Total time: 8968 wallclock secs (35.76 usr 5.26 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
insert: Estimated total time: 110214 wallclock secs (659.27 usr 91.88 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
select: Estimated total time: 8255 wallclock secs (54.76 usr 6.93 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
wisconsin: Total time: 813 wallclock secs (12.05 usr 2.14 sys + 0.00 cusr 0.00 csys = 0.00 CPU)

All 8 test executed successfully
Tests with estimated time have a + at end of line
Totals per operation:

Operation                           seconds     usr     sys     cpu    tests
alter_table_add                       28.00    0.41    0.03    0.00      992
connect                              125.00    9.11    3.79    0.00    10000
connect+select_1_row                 173.00   12.56    5.56    0.00    10000
connect+select_simple                140.00   12.15    5.74    0.00    10000
count                                130.00    0.01    0.03    0.00      100
count_distinct                       235.00    0.76    0.12    0.00     2000
count_distinct_big                   200.00    8.26    0.30    0.00      120
count_distinct_group                 271.00    1.27    0.10    0.00     1000
count_distinct_group_on_key          174.00    0.44    0.11    0.00     1000
count_distinct_group_on_key_parts    270.00    1.43    0.07    0.00     1000
count_group_on_key_parts             242.00    1.19    0.05    0.00     1000
count_on_key                        2544.00   16.73    2.42    0.00    50100 +
create+drop                         2954.00   11.24    1.81    0.00    10000
create_MANY_tables                   448.00    7.42    0.95    0.00    10000
create_index                           1.00    0.00    0.00    0.00        8
create_key+drop                     4055.00   10.98    1.30    0.00    10000
create_table                           1.00    0.03    0.01    0.00       31
delete_all                           341.00    0.00    0.00    0.00       12
delete_all_many_keys                  31.00    0.07    0.00    0.00        1
delete_big                             0.00    0.00    0.00    0.00        1
delete_big_many_keys                  30.00    0.07    0.00    0.00      128
delete_key                           283.00    2.91    0.52    0.00    10000
drop_index                             0.00    0.00    0.00    0.00        8
drop_table                             0.00    0.00    0.00    0.00       28
drop_table_when_MANY_tables         1324.00    3.41    0.51    0.00    10000
insert                              8542.00  109.96   19.42    0.00   350768
insert_duplicates                   3055.00   60.75    8.53    0.00   100000
insert_key                          3693.00   33.29    5.64    0.00   100000
insert_many_fields                   357.00    1.18    0.13    0.00     2000
insert_select_1_key                   49.00    0.00    0.00    0.00        1
insert_select_2_keys                  43.00    0.00    0.00    0.00        1
min_max                               58.00    0.02    0.01    0.00       60
min_max_on_key                     11172.00   24.56    3.60    0.00    85000 ++
order_by_big                         121.00   21.92    0.67    0.00       10
order_by_big_key                     115.00   22.06    0.67    0.00       10
order_by_big_key2                    118.00   22.07    0.53    0.00       10
order_by_big_key_desc                116.00   22.15    0.66    0.00       10
order_by_big_key_diff                126.00   22.20    0.79    0.00       10
order_by_key                          15.00    1.09    0.06    0.00      500
order_by_key2_diff                    19.00    2.00    0.06    0.00      500
order_by_range                        16.00    1.21    0.02    0.00      500
select_1_row                           7.00    3.10    0.50    0.00    10000
select_2_rows                          6.00    2.75    0.54    0.00    10000
select_big                            64.00   25.86    1.65    0.00    10080
select_column+column                   9.00    2.41    0.31    0.00    10000
select_diff_key                       13.00    0.24    0.01    0.00      500
select_distinct                       17.00    2.21    0.07    0.00      800
select_group                         285.00    1.76    0.11    0.00     2711
select_group_when_MANY_tables        187.00    2.71    0.68    0.00    10000
select_join                           14.00    5.18    0.20    0.00      200
select_key                          4967.00   68.44   12.65    0.00   200000 +
select_key2                         4933.00   67.48   11.08    0.00   200000 +
select_key_prefix                   4938.00   67.63   10.85    0.00   200000 +
select_many_fields                   891.00    8.07    0.66    0.00     2000
select_range                          35.00    0.87    0.02    0.00      410
select_range_key2                  26862.00    7.62    1.08    0.00    25000 ++
select_range_prefix                24419.00    9.69    0.80    0.00    25000 ++
select_simple                          4.00    2.96    0.45    0.00    10000
select_simple_join                     3.00    0.69    0.04    0.00      500
update_big                          1894.00    0.02    0.00    0.00       10
update_of_key                       3624.00   15.41    3.10    0.00    50256
update_of_key_big                    444.00    0.20    0.00    0.00      501
update_with_key                    14806.00   89.73   16.29    0.00   300000
wisc_benchmark                        18.00    3.04    0.25    0.00      114
TOTALS                            130055.00  832.98  125.55    0.00  1844991 ++++++++++
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 1:58:36 Testing server 'PostgreSQL version ???' at 2000-12-05 5:20:15
Testing of ALTER TABLE
Testing with 1000 columns and 1000 rows in 20 steps
Insert data into the table
Time for insert (1000) 1 wallclock secs ( 0.35 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for insert (1000) 0 wallclock secs ( 0.28 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for alter_table_add (992): 46 wallclock secs ( 0.32 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for alter_table_add (992): 28 wallclock secs ( 0.41 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_index (8): 1 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for drop_index (8): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 50 wallclock secs ( 0.67 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Total time: 29 wallclock secs ( 0.71 usr 0.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 1:59:26 Testing server 'PostgreSQL version ???' at 2000-12-05 5:20:45
Testing of some unusual tables
All tests are done 1000 times with 1000 fields
Testing table with 1000 fields
Testing select * from table with 1 record
Time to select_many_fields(1000): 389 wallclock secs ( 3.71 usr 0.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_many_fields(1000): 402 wallclock secs ( 3.75 usr 0.32 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select all_fields from table with 1 record
Time to select_many_fields(1000): 497 wallclock secs ( 4.04 usr 0.23 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_many_fields(1000): 489 wallclock secs ( 4.32 usr 0.34 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert VALUES()
Time to insert_many_fields(1000): 143 wallclock secs ( 0.43 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to insert_many_fields(1000): 144 wallclock secs ( 0.38 usr 0.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert (all_fields) VALUES()
Time to insert_many_fields(1000): 214 wallclock secs ( 0.57 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to insert_many_fields(1000): 213 wallclock secs ( 0.80 usr 0.05 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 1244 wallclock secs ( 8.76 usr 0.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Total time: 1248 wallclock secs ( 9.27 usr 0.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-15 17:01:48 Testing server 'PostgreSQL version ???' at 2000-12-05 5:41:34
Testing the speed of connecting to the server and sending of data
All tests are done 10000 times
Testing connection/disconnect
Time to connect (10000): 129 wallclock secs ( 8.57 usr 4.58 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to connect (10000): 125 wallclock secs ( 9.11 usr 3.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test connect/simple select/disconnect
Time for connect+select_simple (10000): 142 wallclock secs (11.34 usr 5.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for connect+select_simple (10000): 140 wallclock secs (12.15 usr 5.74 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test simple select
Time for select_simple (10000): 5 wallclock secs ( 2.71 usr 0.49 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_simple (10000): 4 wallclock secs ( 2.96 usr 0.45 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing connect/select 1 row from table/disconnect
Time to connect+select_1_row (10000): 176 wallclock secs (11.82 usr 5.48 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to connect+select_1_row (10000): 173 wallclock secs (12.56 usr 5.56 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select 1 row from table
Time to select_1_row (10000): 7 wallclock secs ( 2.56 usr 0.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_1_row (10000): 7 wallclock secs ( 3.10 usr 0.50 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing select 2 rows from table
Time to select_2_rows (10000): 7 wallclock secs ( 2.76 usr 0.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_2_rows (10000): 6 wallclock secs ( 2.75 usr 0.54 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test select with aritmetic (+)
Time for select_column+column (10000): 8 wallclock secs ( 2.28 usr 0.49 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_column+column (10000): 9 wallclock secs ( 2.41 usr 0.31 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing retrieval of big records (7000 bytes)
Time to select_big (10000): 8 wallclock secs ( 3.76 usr 0.68 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_big (10000): 8 wallclock secs ( 3.74 usr 0.88 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 482 wallclock secs (45.81 usr 18.33 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Total time: 472 wallclock secs (48.80 usr 17.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-15 17:09:50 Testing server 'PostgreSQL version ???' at 2000-12-05 5:49:26
Testing the speed of creating and droping tables
Testing with 10000 tables and 10000 loop count
Testing create of tables
Time for create_MANY_tables (10000): 455 wallclock secs ( 8.09 usr 1.12 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for create_MANY_tables (10000): 448 wallclock secs ( 7.42 usr 0.95 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Accessing tables
Time to select_group_when_MANY_tables (10000): 188 wallclock secs ( 3.03 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to select_group_when_MANY_tables (10000): 187 wallclock secs ( 2.71 usr 0.68 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing drop
Time for drop_table_when_MANY_tables (10000): 1328 wallclock secs ( 2.91 usr 0.56 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for drop_table_when_MANY_tables (10000): 1324 wallclock secs ( 3.41 usr 0.51 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing create+drop
Time for create+drop (10000): 3022 wallclock secs (10.18 usr 1.71 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for create+drop (10000): 2954 wallclock secs (11.24 usr 1.81 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for create_key+drop (10000): 3752 wallclock secs ( 8.40 usr 1.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for create_key+drop (10000): 4055 wallclock secs (10.98 usr 1.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 8745 wallclock secs (32.62 usr 4.94 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Total time: 8968 wallclock secs (35.76 usr 5.26 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 2:20:11 Testing server 'PostgreSQL version ???' at 2000-12-05 8:18:54
Testing the speed of inserting data into 1 table and do some selects on it.
The tests are done with a table that has 100000 rows.
@@ -8,73 +8,91 @@ Creating tables
Inserting 100000 rows in order
Inserting 100000 rows in reverse order
Inserting 100000 rows in random order
Time for insert (300000): 7729 wallclock secs (94.80 usr 16.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for insert (300000): 7486 wallclock secs (94.98 usr 16.58 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing insert of duplicates
Time for insert_duplicates (300000): 55 wallclock secs (29.54 usr 3.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for insert_duplicates (100000): 3055 wallclock secs (60.75 usr 8.53 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Retrieving data from the table
Time for select_big (10:3000000): 53 wallclock secs (22.20 usr 0.75 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_big (10:3000000): 54 wallclock secs (21.95 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key (10:3000000): 118 wallclock secs (22.03 usr 0.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for order_by_big_key (10:3000000): 115 wallclock secs (22.06 usr 0.67 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by (10:3000000): 103 wallclock secs (22.05 usr 0.77 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for order_by_big_key_desc (10:3000000): 116 wallclock secs (22.15 usr 0.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_diff_key (500:1000): 13 wallclock secs ( 0.17 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for order_by_big_key2 (10:3000000): 118 wallclock secs (22.07 usr 0.53 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big_key_diff (10:3000000): 126 wallclock secs (22.20 usr 0.79 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_big (10:3000000): 121 wallclock secs (21.92 usr 0.67 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_range (500:125750): 16 wallclock secs ( 1.21 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key (500:125750): 15 wallclock secs ( 1.09 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for order_by_key2_diff (500:250500): 19 wallclock secs ( 2.00 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_diff_key (500:1000): 13 wallclock secs ( 0.24 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
165 queries in 165 loops of 5000 loops took 605 seconds 180 queries in 180 loops of 5000 loops took 653 seconds
Estimated time for select_range_prefix (5000:1386): 18333 wallclock secs ( 3.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_range_prefix (5000:1512): 18138 wallclock secs ( 5.00 usr 0.28 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
165 queries in 165 loops of 5000 loops took 603 seconds 165 queries in 165 loops of 5000 loops took 614 seconds
Estimated time for select_range (5000:1386): 18272 wallclock secs ( 5.45 usr 0.91 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_range_key2 (5000:1386): 18606 wallclock secs ( 3.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
23746 queries in 11873 loops of 100000 loops took 601 seconds 24340 queries in 12170 loops of 100000 loops took 601 seconds
Estimated time for select_key_prefix (200000): 5061 wallclock secs (67.04 usr 11.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_key_prefix (200000): 4938 wallclock secs (67.63 usr 10.85 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
23796 queries in 11898 loops of 100000 loops took 601 seconds 24198 queries in 12099 loops of 100000 loops took 601 seconds
Estimated time for select_key (200000): 5051 wallclock secs (66.15 usr 11.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_key (200000): 4967 wallclock secs (68.44 usr 12.65 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
24362 queries in 12181 loops of 100000 loops took 601 seconds
Estimated time for select_key2 (200000): 4933 wallclock secs (67.48 usr 11.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Test of compares with simple ranges
Note: Query took longer then time-limit: 600
Estimating end time based on:
2000 queries in 50 loops of 500 loops took 605 seconds 1920 queries in 48 loops of 500 loops took 603 seconds
Estimated time for select_range_prefix (20000:4350): 6050 wallclock secs ( 3.50 usr 0.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_range_prefix (20000:4176): 6281 wallclock secs ( 4.69 usr 0.52 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
2000 queries in 50 loops of 500 loops took 603 seconds 1480 queries in 37 loops of 500 loops took 611 seconds
Estimated time for select_range (20000:4350): 6030 wallclock secs ( 4.30 usr 0.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for select_range_key2 (20000:3219): 8256 wallclock secs ( 4.59 usr 1.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_group (111): 233 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_group (111): 240 wallclock secs ( 0.03 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
1362 queries in 227 loops of 2500 loops took 601 seconds 1314 queries in 219 loops of 2500 loops took 603 seconds
Estimated time for min_max_on_key (15000): 6618 wallclock secs ( 5.40 usr 0.33 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for min_max_on_key (15000): 6883 wallclock secs ( 4.00 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for min_max (60): 55 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for min_max (60): 58 wallclock secs ( 0.02 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_on_key (100): 116 wallclock secs ( 0.04 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_on_key (100): 120 wallclock secs ( 0.03 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count (100): 121 wallclock secs ( 0.03 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count (100): 130 wallclock secs ( 0.01 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (20): 139 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct_big (20): 143 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of keys with functions
Time for update_of_key (500): 2520 wallclock secs (13.97 usr 2.44 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for update_of_key (50000): 2460 wallclock secs (15.33 usr 3.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for update_of_key_big (501): 249 wallclock secs ( 0.12 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for update_of_key_big (501): 444 wallclock secs ( 0.20 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update with key
Time for update_with_key (100000): 15050 wallclock secs (85.10 usr 15.69 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for update_with_key (300000): 14806 wallclock secs (89.73 usr 16.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of all rows
Time for update_big (500): 2330 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for update_big (10): 1894 wallclock secs ( 0.02 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing INSERT INTO ... SELECT
Time for insert_select_1_key (1): 49 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for insert_select_2_keys (1): 43 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for drop table(2): 20 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing delete
Time for delete_key (10000): 256 wallclock secs ( 3.10 usr 0.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for delete_key (10000): 283 wallclock secs ( 2.91 usr 0.52 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for delete_big (12): 1914 wallclock secs ( 0.00 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for delete_all (12): 341 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Insert into table with 16 keys and with a primary key with 16 parts
Time for insert_key (100000): 3825 wallclock secs (33.55 usr 6.09 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for insert_key (100000): 3693 wallclock secs (33.29 usr 5.64 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing update of keys
Time for update_of_key (256): 2218 wallclock secs ( 0.12 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for update_of_key (256): 1164 wallclock secs ( 0.08 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Deleting rows from the table
Time for delete_big_many_keys (128): 30 wallclock secs ( 0.07 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Deleting everything from table
Time for delete_big_many_keys (2): 10 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for delete_all_many_keys (1): 31 wallclock secs ( 0.07 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 102579 wallclock secs (481.81 usr 72.29 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated total time: 110214 wallclock secs (659.27 usr 91.88 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 13:49:53 Testing server 'PostgreSQL version ???' at 2000-12-05 20:00:31
Testing the speed of selecting on keys that consist of many parts
The test-table has 10000 rows and the test is done with 12 ranges.
Creating table
Inserting 10000 rows
Time to insert (10000): 254 wallclock secs ( 3.38 usr 0.46 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to insert (10000): 254 wallclock secs ( 3.11 usr 0.60 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing big selects on the table
Time for select_big (70:17207): 3 wallclock secs ( 0.14 usr 0.01 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_big (70:17207): 2 wallclock secs ( 0.17 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for select_range (410:75949): 34 wallclock secs ( 0.85 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for select_range (410:75949): 35 wallclock secs ( 0.87 usr 0.02 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
10094 queries in 1442 loops of 10000 loops took 601 seconds 9807 queries in 1401 loops of 10000 loops took 601 seconds
Estimated time for min_max_on_key (70000): 4167 wallclock secs (20.87 usr 4.65 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for min_max_on_key (70000): 4289 wallclock secs (20.56 usr 3.14 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Note: Query took longer then time-limit: 600
Estimating end time based on:
12580 queries in 2516 loops of 10000 loops took 601 seconds 12395 queries in 2479 loops of 10000 loops took 601 seconds
Estimated time for count_on_key (50000): 2388 wallclock secs (13.00 usr 3.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated time for count_on_key (50000): 2424 wallclock secs (16.70 usr 2.42 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_group_on_key_parts (1000:0): 238 wallclock secs ( 1.01 usr 0.03 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_group_on_key_parts (1000:100000): 242 wallclock secs ( 1.19 usr 0.05 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing count(distinct) on the table
Time for count_distinct (1000:2000): 232 wallclock secs ( 0.39 usr 0.08 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct (2000:2000): 235 wallclock secs ( 0.76 usr 0.12 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key (1000:6000): 169 wallclock secs ( 0.37 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct_group_on_key (1000:6000): 174 wallclock secs ( 0.44 usr 0.11 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group_on_key_parts (1000:100000): 267 wallclock secs ( 1.11 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct_group_on_key_parts (1000:100000): 270 wallclock secs ( 1.43 usr 0.07 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_group (1000:100000): 268 wallclock secs ( 1.09 usr 0.06 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct_group (1000:100000): 271 wallclock secs ( 1.27 usr 0.10 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time for count_distinct_big (1000:10000000): 552 wallclock secs (82.22 usr 2.83 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for count_distinct_big (100:1000000): 57 wallclock secs ( 8.24 usr 0.30 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Estimated total time: 8574 wallclock secs (124.45 usr 11.39 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Estimated total time: 8255 wallclock secs (54.76 usr 6.93 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Testing server 'PostgreSQL version 7.0.2' at 2000-08-16 14:43:33 Testing server 'PostgreSQL version ???' at 2000-12-05 20:46:15
Wisconsin benchmark test
Time for create_table (3): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for create_table (3): 1 wallclock secs ( 0.01 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Inserting data
Time to insert (31000): 791 wallclock secs ( 9.20 usr 1.66 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to insert (31000): 793 wallclock secs ( 8.99 usr 1.89 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Time to delete_big (1): 1 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time to delete_big (1): 0 wallclock secs ( 0.00 usr 0.00 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Running actual benchmark
Time for wisc_benchmark (114): 16 wallclock secs ( 3.11 usr 0.27 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Time for wisc_benchmark (114): 18 wallclock secs ( 3.04 usr 0.25 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
Total time: 810 wallclock secs (12.32 usr 1.94 sys + 0.00 cusr 0.00 csys = 0.00 CPU) Total time: 813 wallclock secs (12.05 usr 2.14 sys + 0.00 cusr 0.00 csys = 0.00 CPU)
#This file is automaticly generated by crash-me 1.45 #This file is automaticly generated by crash-me 1.54
NEG=yes # update of column= -column
Need_cast_for_null=no # Need to cast NULL for arithmetic
@@ -18,40 +18,44 @@ alter_drop_unique=no # Alter table drop unique
alter_modify_col=no # Alter table modify column
alter_rename_table=yes # Alter table rename table
atomic_updates=no # atomic updates
automatic_rowid=no # Automatic rowid automatic_rowid=no # Automatic row id
binary_numbers=no # binary numbers (0b1001)
binary_strings=yes # binary strings (b'0110')
case_insensitive_strings=no # case insensitive compare case_insensitive_strings=no # Case insensitive compare
char_is_space_filled=yes # char are space filled
column_alias=yes # Column alias
columns_in_group_by=+64 # number of columns in group by
columns_in_order_by=+64 # number of columns in order by
comment_#=no # # as comment
comment_--=yes # -- as comment comment_--=yes # -- as comment (ANSI)
comment_/**/=yes # /* */ as comment
comment_//=no # // as comment comment_//=no # // as comment (ANSI)
compute=no # Compute
connections=32 # Simultaneous connections (installation default)
constraint_check=yes # Column constraints
constraint_check_table=yes # Table constraints
constraint_null=yes # NULL constraint (SyBase style)
crash_me_safe=yes # crash me safe
crash_me_version=1.45 # crash me version crash_me_version=1.54 # crash me version
create_default=yes # default value for column
create_default_func=no # default value function for column
create_if_not_exists=no # create table if not exists
create_index=yes # create index
create_schema=no # Create SCHEMA
create_table_select=no # create table from select create_table_select=with AS # create table from select
cross_join=yes # cross join (same as from a,b)
date_infinity=no # Supports 'infinity dates
date_last=yes # Supports 9999-12-31 dates
date_one=yes # Supports 0001-01-01 dates
date_with_YY=yes # Supports YY-MM-DD 2000 compilant dates
date_zero=no # Supports 0000-00-00 dates
domains=no # Domains (ANSI SQL)
dont_require_cast_to_float=no # No need to cast from integer to float
double_quotes=yes # Double '' as ' in strings
drop_if_exists=no # drop table if exists
drop_index=yes # drop index
drop_requires_cascade=no # drop table require cascade/restrict
drop_restrict=no # drop table with cascade/restrict
end_colon=yes # allows end ';'
except=yes # except
except_all=no # except all
@@ -158,6 +162,7 @@ func_extra_version=yes # Function VERSION
func_extra_weekday=no # Function WEEKDAY
func_extra_|=no # Function | (bitwise or)
func_extra_||=no # Function OR as '||'
func_extra_~*=yes # Function ~* (case insensitive compare)
func_odbc_abs=yes # Function ABS
func_odbc_acos=yes # Function ACOS
func_odbc_ascii=yes # Function ASCII
@@ -178,7 +183,7 @@ func_odbc_dayofweek=no # Function DAYOFWEEK
func_odbc_dayofyear=no # Function DAYOFYEAR
func_odbc_degrees=yes # Function DEGREES
func_odbc_difference=no # Function DIFFERENCE()
func_odbc_exp=no # Function EXP func_odbc_exp=yes # Function EXP
func_odbc_floor=yes # Function FLOOR
func_odbc_fn_left=no # Function ODBC syntax LEFT & RIGHT
func_odbc_hour=no # Function HOUR
@@ -240,7 +245,8 @@ func_sql_extract_sql=yes # Function EXTRACT
func_sql_localtime=no # Function LOCALTIME
func_sql_localtimestamp=no # Function LOCALTIMESTAMP
func_sql_lower=yes # Function LOWER
func_sql_nullif=no # Function NULLIF func_sql_nullif_num=yes # Function NULLIF with numbers
func_sql_nullif_string=no # Function NULLIF with strings
func_sql_octet_length=no # Function OCTET_LENGTH
func_sql_position=yes # Function POSITION
func_sql_searched_case=yes # Function searched CASE
@@ -270,7 +276,7 @@ func_where_unique=no # Function UNIQUE
functions=yes # Functions
group_by=yes # Group by
group_by_alias=yes # Group by alias
group_by_null=yes # group on column with null values group_by_null=yes # Group on column with null values
group_by_position=yes # Group by position
group_distinct_functions=yes # Group functions with distinct
group_func_extra_bit_and=no # Group function BIT_AND
@@ -279,28 +285,33 @@ group_func_extra_count_distinct_list=no # Group function COUNT(DISTINCT expr,exp
group_func_extra_std=no # Group function STD
group_func_extra_stddev=no # Group function STDDEV
group_func_extra_variance=no # Group function VARIANCE
group_func_sql_any=no # Group function ANY
group_func_sql_avg=yes # Group function AVG
group_func_sql_count_*=yes # Group function COUNT (*)
group_func_sql_count_column=yes # Group function COUNT column name
group_func_sql_count_distinct=yes # Group function COUNT(DISTINCT expr)
group_func_sql_every=no # Group function EVERY
group_func_sql_max=yes # Group function MAX on numbers group_func_sql_max=yes # Group function MAX on numbers
group_func_sql_max_str=yes # Group function MAX on strings group_func_sql_max_str=yes # Group function MAX on strings
group_func_sql_min=yes # Group function MIN on numbers group_func_sql_min=yes # Group function MIN on numbers
group_func_sql_min_str=yes # Group function MIN on strings group_func_sql_min_str=yes # Group function MIN on strings
group_func_sql_some=no # Group function SOME
group_func_sql_sum=yes # Group function SUM group_func_sql_sum=yes # Group function SUM
group_functions=yes # Group functions group_functions=yes # Group functions
group_on_unused=yes # Group on unused column
has_true_false=yes # TRUE and FALSE has_true_false=yes # TRUE and FALSE
having=yes # Having having=yes # Having
having_with_alias=no # Having on alias having_with_alias=no # Having on alias
having_with_group=yes # Having with group function having_with_group=yes # Having with group function
hex_numbers=no # hex numbers (0x41) hex_numbers=no # hex numbers (0x41)
hex_strings=yes # hex strings (x'1ace') hex_strings=yes # hex strings (x'1ace')
ignore_end_space=yes # ignore end space in compare ignore_end_space=yes # Ignore end space in compare
index_in_create=no # index in create table index_in_create=no # index in create table
index_namespace=no # different namespace for index index_namespace=no # different namespace for index
index_parts=no # index on column part (extension) index_parts=no # index on column part (extension)
inner_join=no # inner join inner_join=yes # inner join
insert_empty_string=yes # insert empty string insert_empty_string=yes # insert empty string
insert_multi_value=no # INSERT with Value lists
insert_select=yes # insert INTO ... SELECT ... insert_select=yes # insert INTO ... SELECT ...
insert_with_set=no # INSERT with set syntax insert_with_set=no # INSERT with set syntax
intersect=yes # intersect intersect=yes # intersect
...@@ -343,7 +354,6 @@ multi_null_in_unique=yes # null in unique index ...@@ -343,7 +354,6 @@ multi_null_in_unique=yes # null in unique index
multi_strings=yes # Multiple line strings multi_strings=yes # Multiple line strings
multi_table_delete=no # DELETE FROM table1,table2... multi_table_delete=no # DELETE FROM table1,table2...
multi_table_update=no # Update with many tables multi_table_update=no # Update with many tables
insert_multi_value=no # Value lists in INSERT
natural_join=yes # natural join natural_join=yes # natural join
natural_join_incompat=yes # natural join (incompatible lists) natural_join_incompat=yes # natural join (incompatible lists)
natural_left_outer_join=no # natural left outer join natural_left_outer_join=no # natural left outer join
...@@ -352,6 +362,7 @@ null_concat_expr=yes # Is 'a' || NULL = NULL ...@@ -352,6 +362,7 @@ null_concat_expr=yes # Is 'a' || NULL = NULL
null_in_index=yes # null in index null_in_index=yes # null in index
null_in_unique=yes # null in unique index null_in_unique=yes # null in unique index
null_num_expr=yes # Is 1+NULL = NULL null_num_expr=yes # Is 1+NULL = NULL
nulls_in_unique=yes # null combination in unique index
odbc_left_outer_join=no # left outer join odbc style odbc_left_outer_join=no # left outer join odbc style
operating_system=Linux 2.2.14-5.0 i686 # crash-me tested on operating_system=Linux 2.2.14-5.0 i686 # crash-me tested on
order_by=yes # Order by order_by=yes # Order by
...@@ -359,6 +370,7 @@ order_by_alias=yes # Order by alias ...@@ -359,6 +370,7 @@ order_by_alias=yes # Order by alias
order_by_function=yes # Order by function order_by_function=yes # Order by function
order_by_position=yes # Order by position order_by_position=yes # Order by position
order_by_remember_desc=no # Order by DESC is remembered order_by_remember_desc=no # Order by DESC is remembered
order_on_unused=yes # Order by on unused column
primary_key_in_create=yes # primary key in create table primary_key_in_create=yes # primary key in create table
psm_functions=no # PSM functions (ANSI SQL) psm_functions=no # PSM functions (ANSI SQL)
psm_modules=no # PSM modules (ANSI SQL) psm_modules=no # PSM modules (ANSI SQL)
...@@ -372,6 +384,7 @@ quote_with_"=no # Allows ' and " as string markers ...@@ -372,6 +384,7 @@ quote_with_"=no # Allows ' and " as string markers
recursive_subqueries=+64 # recursive subqueries recursive_subqueries=+64 # recursive subqueries
remember_end_space=no # Remembers end space in char() remember_end_space=no # Remembers end space in char()
remember_end_space_varchar=yes # Remembers end space in varchar() remember_end_space_varchar=yes # Remembers end space in varchar()
rename_table=no # rename table
repeat_string_size=+8000000 # return string size from function repeat_string_size=+8000000 # return string size from function
right_outer_join=no # right outer join right_outer_join=no # right outer join
rowid=oid # Type for row id rowid=oid # Type for row id
...@@ -381,15 +394,16 @@ select_limit2=yes # SELECT with LIMIT #,# ...@@ -381,15 +394,16 @@ select_limit2=yes # SELECT with LIMIT #,#
select_string_size=16777207 # constant string size in SELECT select_string_size=16777207 # constant string size in SELECT
select_table_update=yes # Update with sub select select_table_update=yes # Update with sub select
select_without_from=yes # SELECT without FROM select_without_from=yes # SELECT without FROM
server_version=PostgreSQL 7.0 # server version server_version=PostgreSQL version 7.0.2 # server version
simple_joins=yes # ANSI SQL simple joins simple_joins=yes # ANSI SQL simple joins
storage_of_float=round # Storage of float values storage_of_float=round # Storage of float values
subqueries=yes # subqueries subqueries=yes # subqueries
table_alias=yes # Table alias table_alias=yes # Table alias
table_name_case=yes # case independent table names table_name_case=yes # case independent table names
table_wildcard=yes # Select table_name.* table_wildcard=yes # Select table_name.*
tempoary_table=yes # temporary tables temporary_table=yes # temporary tables
transactions=yes # transactions transactions=yes # transactions
truncate_table=yes # truncate
type_extra_abstime=yes # Type abstime type_extra_abstime=yes # Type abstime
type_extra_bfile=no # Type bfile type_extra_bfile=no # Type bfile
type_extra_blob=no # Type blob type_extra_blob=no # Type blob
...@@ -397,6 +411,7 @@ type_extra_bool=yes # Type bool ...@@ -397,6 +411,7 @@ type_extra_bool=yes # Type bool
type_extra_box=yes # Type box type_extra_box=yes # Type box
type_extra_byte=no # Type byte type_extra_byte=no # Type byte
type_extra_char(1_arg)_binary=no # Type char(1 arg) binary type_extra_char(1_arg)_binary=no # Type char(1 arg) binary
type_extra_cidr=yes # Type cidr
type_extra_circle=yes # Type circle type_extra_circle=yes # Type circle
type_extra_clob=no # Type clob type_extra_clob=no # Type clob
type_extra_datetime=yes # Type datetime type_extra_datetime=yes # Type datetime
...@@ -406,6 +421,7 @@ type_extra_float(2_arg)=no # Type float(2 arg) ...@@ -406,6 +421,7 @@ type_extra_float(2_arg)=no # Type float(2 arg)
type_extra_float4=yes # Type float4 type_extra_float4=yes # Type float4
type_extra_float8=yes # Type float8 type_extra_float8=yes # Type float8
type_extra_image=no # Type image type_extra_image=no # Type image
type_extra_inet=yes # Type inet
type_extra_int(1_arg)_zerofill=no # Type int(1 arg) zerofill type_extra_int(1_arg)_zerofill=no # Type int(1 arg) zerofill
type_extra_int1=no # Type int1 type_extra_int1=no # Type int1
type_extra_int2=yes # Type int2 type_extra_int2=yes # Type int2
...@@ -422,6 +438,7 @@ type_extra_long_raw=no # Type long raw ...@@ -422,6 +438,7 @@ type_extra_long_raw=no # Type long raw
type_extra_long_varbinary=no # Type long varbinary type_extra_long_varbinary=no # Type long varbinary
type_extra_long_varchar(1_arg)=no # Type long varchar(1 arg) type_extra_long_varchar(1_arg)=no # Type long varchar(1 arg)
type_extra_lseg=yes # Type lseg type_extra_lseg=yes # Type lseg
type_extra_macaddr=yes # Type macaddr
type_extra_mediumint=no # Type mediumint type_extra_mediumint=no # Type mediumint
type_extra_mediumtext=no # Type mediumtext type_extra_mediumtext=no # Type mediumtext
type_extra_middleint=no # Type middleint type_extra_middleint=no # Type middleint
...@@ -457,6 +474,7 @@ type_odbc_varbinary(1_arg)=no # Type varbinary(1 arg) ...@@ -457,6 +474,7 @@ type_odbc_varbinary(1_arg)=no # Type varbinary(1 arg)
type_sql_bit=yes # Type bit type_sql_bit=yes # Type bit
type_sql_bit(1_arg)=yes # Type bit(1 arg) type_sql_bit(1_arg)=yes # Type bit(1 arg)
type_sql_bit_varying(1_arg)=yes # Type bit varying(1 arg) type_sql_bit_varying(1_arg)=yes # Type bit varying(1 arg)
type_sql_boolean=yes # Type boolean
type_sql_char(1_arg)=yes # Type char(1 arg) type_sql_char(1_arg)=yes # Type char(1 arg)
type_sql_char_varying(1_arg)=yes # Type char varying(1 arg) type_sql_char_varying(1_arg)=yes # Type char varying(1 arg)
type_sql_character(1_arg)=yes # Type character(1 arg) type_sql_character(1_arg)=yes # Type character(1 arg)
......
--- sql-bench/server-cfg.sh
+++ sql-bench/server-cfg.sh
@@ -581,7 +581,7 @@ sub new
     $limits{'table_wildcard'}	= 1;
     $limits{'max_column_name'}	= 32;	# Is this true
     $limits{'max_columns'}	= 1000;	# 500 crashes pg 6.3
-    $limits{'max_tables'}	= 65000;	# Should be big enough
+    $limits{'max_tables'}	= 5000;	# 10000 crashes pg 7.0.2
     $limits{'max_conditions'}	= 30;	# This makes Pg real slow
     $limits{'max_index'}	= 64;	# Is this true ?
     $limits{'max_index_parts'}	= 16;	# Is this true ?
...
--- sql/ha_berkeley.cc
+++ sql/ha_berkeley.cc
@@ -99,6 +99,7 @@ static byte* bdb_get_key(BDB_SHARE *share,uint *length,
                          my_bool not_used __attribute__((unused)));
 static BDB_SHARE *get_share(const char *table_name, TABLE *table);
 static void free_share(BDB_SHARE *share, TABLE *table);
+static int write_status(DB *status_block, char *buff, uint length);
 static void update_status(BDB_SHARE *share, TABLE *table);
 static void berkeley_noticecall(DB_ENV *db_env, db_notices notice);
@@ -433,7 +434,6 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
     DBUG_RETURN(1);
   }
-  info(HA_STATUS_NO_LOCK | HA_STATUS_VARIABLE | HA_STATUS_CONST);
   transaction=0;
   cursor=0;
   key_read=0;
@@ -485,6 +485,7 @@ int ha_berkeley::open(const char *name, int mode, uint test_if_locked)
     share->status|=STATUS_PRIMARY_KEY_INIT;
   }
   get_status();
+  info(HA_STATUS_NO_LOCK | HA_STATUS_VARIABLE | HA_STATUS_CONST);
   DBUG_RETURN(0);
 }
@@ -1106,7 +1107,7 @@ int ha_berkeley::index_init(uint keynr)
   dbug_assert(cursor == 0);
   if ((error=file->cursor(key_file[keynr], transaction, &cursor,
                           table->reginfo.lock_type > TL_WRITE_ALLOW_READ ?
-                          DB_RMW : 0)))
+                          0 : 0)))
     cursor=0;				// Safety
   bzero((char*) &last_key,sizeof(last_key));
   DBUG_RETURN(error);
@@ -1539,6 +1540,7 @@ int ha_berkeley::create(const char *name, register TABLE *form,
   char name_buff[FN_REFLEN];
   char part[7];
   uint index=1;
+  int error=1;
   DBUG_ENTER("ha_berkeley::create");

   fn_format(name_buff,name,"", ha_berkeley_ext,2 | 4);
@@ -1563,9 +1565,22 @@ int ha_berkeley::create(const char *name, register TABLE *form,
   /* Create the status block to save information from last status command */
   /* Is DB_BTREE the best option here ? (QUEUE can't be used in sub tables) */
-  if (create_sub_table(name_buff,"status",DB_BTREE,0))
-    DBUG_RETURN(1);
-  DBUG_RETURN(0);
+  DB *status_block;
+  if (!db_create(&status_block, db_env, 0))
+  {
+    if (!status_block->open(status_block, name_buff,
+                            "status", DB_BTREE, DB_CREATE, 0))
+    {
+      char rec_buff[4+MAX_KEY*4];
+      uint length= 4+ table->keys*4;
+      bzero(rec_buff, length);
+      if (!write_status(status_block, rec_buff, length))
+        error=0;
+      status_block->close(status_block,0);
+    }
+  }
+  DBUG_RETURN(error);
 }
@@ -1574,11 +1589,8 @@ int ha_berkeley::delete_table(const char *name)
   int error;
   char name_buff[FN_REFLEN];
   if ((error=db_create(&file, db_env, 0)))
-  {
     my_errno=error;
-    file=0;
-    return 1;
-  }
+  else
     error=file->remove(file,fn_format(name_buff,name,"",ha_berkeley_ext,2 | 4),
                        NULL,0);
   file=0;				// Safety
@@ -1659,23 +1671,22 @@ longlong ha_berkeley::get_auto_increment()
                      table->next_number_key_offset);
     /* Store for compare */
     memcpy(key_buff2, key_buff, (key_len=last_key.size));
-    key_info->handler.bdb_return_if_eq= -1;
-    error=read_row(cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE),
-                   table->record[1], active_index, &row, (DBT*) 0, 0);
+    /* Modify the compare so that we will find the next key */
+    key_info->handler.bdb_return_if_eq= 1;
+    /* We lock the next key as the new key will probl. be on the same page */
+    error=cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE | DB_RMW),
     key_info->handler.bdb_return_if_eq= 0;
-    if (!error && !berkeley_key_cmp(table, key_info, key_buff2, key_len))
+    if (!error || error == DB_NOTFOUND)
     {
       /*
-        Found matching key; Now search after next key, go one step back
-        and then we should have found the biggest key with the given
-        prefix
+        Now search go one step back and then we should have found the
+        biggest key with the given prefix
       */
-      (void) read_row(cursor->c_get(cursor, &last_key, &row, DB_NEXT_NODUP),
-                      table->record[1], active_index, &row, (DBT*) 0, 0);
-      if (read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV),
+      if (read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV | DB_RMW),
                    table->record[1], active_index, &row, (DBT*) 0, 0) ||
           berkeley_key_cmp(table, key_info, key_buff2, key_len))
-        error=1;			// Something went wrong
+        error=1;			// Something went wrong or no such key
     }
   }
   nr=(longlong)
@@ -1718,25 +1729,47 @@ static void print_msg(THD *thd, const char *table_name, const char *op_name,
 int ha_berkeley::analyze(THD* thd, HA_CHECK_OPT* check_opt)
 {
-  DB_BTREE_STAT stat;
+  DB_BTREE_STAT *stat=0;
   uint i;

   for (i=0 ; i < table->keys ; i++)
   {
-    file->stat(key_file[i], (void*) &stat, 0, 0);
-    share->rec_per_key[i]= stat.bt_ndata / stat.bt_nkeys;
+    if (stat)
+    {
+      free(stat);
+      stat=0;
+    }
+    if (file->stat(key_file[i], (void*) &stat, 0, 0))
+      goto err;
+    share->rec_per_key[i]= (stat->bt_ndata /
+                            (stat->bt_nkeys ? stat->bt_nkeys : 1));
   }
-  /* If hidden primary key */
+  /* A hidden primary key is not in key_file[] */
   if (hidden_primary_key)
-    file->stat(file, (void*) &stat, 0, 0);
+  {
+    if (stat)
+    {
+      free(stat);
+      stat=0;
+    }
+    if (file->stat(file, (void*) &stat, 0, 0))
+      goto err;
+  }
   pthread_mutex_lock(&share->mutex);
-  share->rows=stat.bt_ndata;
+  share->rows=stat->bt_ndata;
   share->status|=STATUS_BDB_ANALYZE;	// Save status on close
   share->version++;			// Update stat in table
   pthread_mutex_unlock(&share->mutex);
   update_status(share,table);		// Write status to file
+  if (stat)
+    free(stat);
   return ((share->status & STATUS_BDB_ANALYZE) ? HA_ADMIN_FAILED :
           HA_ADMIN_OK);
+
+err:
+  if (stat)
+    free(stat);
+  return HA_ADMIN_FAILED;
 }

 int ha_berkeley::optimize(THD* thd, HA_CHECK_OPT* check_opt)
@@ -1749,25 +1782,65 @@ int ha_berkeley::check(THD* thd, HA_CHECK_OPT* check_opt)
 {
   char name_buff[FN_REFLEN];
   int error;
+  DB *tmp_file;
+  DBUG_ENTER("ha_berkeley::check");
+
+  DBUG_RETURN(HA_ADMIN_NOT_IMPLEMENTED);
+
+#ifdef NOT_YET
+  /*
+    To get this to work we need to ensure that no running transaction is
+    using the table. We also need to create a new environment without
+    locking for this.
+  */
+
+  /* We must open the file again to be able to check it! */
+  if ((error=db_create(&tmp_file, db_env, 0)))
+  {
+    print_msg(thd, table->real_name, "check", "error",
+              "Got error %d creating environment",error);
+    DBUG_RETURN(HA_ADMIN_FAILED);
+  }
+
+  /* Compare the overall structure */
+  tmp_file->set_bt_compare(tmp_file,
+                           (hidden_primary_key ? berkeley_cmp_hidden_key :
+                            berkeley_cmp_packed_key));
+  file->app_private= (void*) (table->key_info+table->primary_key);
   fn_format(name_buff,share->table_name,"", ha_berkeley_ext, 2 | 4);
-  if ((error=file->verify(file, name_buff, NullS, (FILE*) 0,
-                          hidden_primary_key ? 0 : DB_NOORDERCHK)))
+  if ((error=tmp_file->verify(tmp_file, name_buff, NullS, (FILE*) 0,
                              hidden_primary_key ? 0 : DB_NOORDERCHK)))
   {
     print_msg(thd, table->real_name, "check", "error",
               "Got error %d checking file structure",error);
-    return HA_ADMIN_CORRUPT;
+    tmp_file->close(tmp_file,0);
+    DBUG_RETURN(HA_ADMIN_CORRUPT);
   }
-  for (uint i=0 ; i < table->keys ; i++)
+
+  /* Check each index */
+  tmp_file->set_bt_compare(tmp_file, berkeley_cmp_packed_key);
+  for (uint index=0,i=0 ; i < table->keys ; i++)
   {
-    if ((error=file->verify(key_file[i], name_buff, NullS, (FILE*) 0,
+    char part[7];
+    if (i == primary_key)
+      strmov(part,"main");
+    else
+      sprintf(part,"key%02d",++index);
+    tmp_file->app_private= (void*) (table->key_info+i);
+    if ((error=tmp_file->verify(tmp_file, name_buff, part, (FILE*) 0,
                                 DB_ORDERCHKONLY)))
     {
       print_msg(thd, table->real_name, "check", "error",
-                "Key %d was not in order",error);
-      return HA_ADMIN_CORRUPT;
+                "Key %d was not in order (Error: %d)",
+                index+ test(i >= primary_key),
+                error);
+      tmp_file->close(tmp_file,0);
+      DBUG_RETURN(HA_ADMIN_CORRUPT);
     }
   }
-  return HA_ADMIN_OK;
+  tmp_file->close(tmp_file,0);
+  DBUG_RETURN(HA_ADMIN_OK);
+#endif
 }

 /****************************************************************************
@@ -1856,7 +1929,7 @@ void ha_berkeley::get_status()
     fn_format(name_buff, share->table_name,"", ha_berkeley_ext, 2 | 4);
     if (!db_create(&share->status_block, db_env, 0))
     {
-      if (!share->status_block->open(share->status_block, name_buff,
+      if (share->status_block->open(share->status_block, name_buff,
                                     "status", DB_BTREE, open_mode, 0))
       {
         share->status_block->close(share->status_block, 0);
@@ -1871,15 +1944,16 @@ void ha_berkeley::get_status()
       if (!file->cursor(share->status_block, 0, &cursor, 0))
       {
         DBT row;
-        char rec_buff[64],*pos=rec_buff;
+        char rec_buff[64];
         bzero((char*) &row,sizeof(row));
         bzero((char*) &last_key,sizeof(last_key));
         row.data=rec_buff;
-        row.size=sizeof(rec_buff);
+        row.ulen=sizeof(rec_buff);
         row.flags=DB_DBT_USERMEM;
         if (!cursor->c_get(cursor, &last_key, &row, DB_FIRST))
         {
           uint i;
+          uchar *pos=(uchar*) row.data;
           share->org_rows=share->rows=uint4korr(pos); pos+=4;
           for (i=0 ; i < table->keys ; i++)
           {
@@ -1896,6 +1970,24 @@ void ha_berkeley::get_status()
 }

+static int write_status(DB *status_block, char *buff, uint length)
+{
+  DB_TXN *trans;
+  DBT row,key;
+  int error;
+  const char *key_buff="status";
+
+  bzero((char*) &row,sizeof(row));
+  bzero((char*) &key,sizeof(key));
+  row.data=buff;
+  key.data=(void*) key_buff;
+  key.size=sizeof(key_buff);
+  row.size=length;
+  error=status_block->put(status_block, 0, &key, &row, 0);
+  return error;
+}
+
 static void update_status(BDB_SHARE *share, TABLE *table)
 {
   DBUG_ENTER("update_status");
@@ -1922,25 +2014,18 @@ static void update_status(BDB_SHARE *share, TABLE *table)
       goto end;
     }
   {
-    uint i;
-    DBT row,key;
-    char rec_buff[4+MAX_KEY*sizeof(ulong)], *pos=rec_buff;
+    char rec_buff[4+MAX_KEY*4], *pos=rec_buff;
     const char *key_buff="status";
-    bzero((char*) &row,sizeof(row));
-    bzero((char*) &key,sizeof(key));
-    row.data=rec_buff;
-    key.data=(void*) key_buff;
-    key.size=sizeof(key_buff);
-    row.flags=key.flags=DB_DBT_USERMEM;
     int4store(pos,share->rows); pos+=4;
-    for (i=0 ; i < table->keys ; i++)
+    for (uint i=0 ; i < table->keys ; i++)
    {
      int4store(pos,share->rec_per_key[i]); pos+=4;
    }
-    row.size=(uint) (pos-rec_buff);
-    (void) share->status_block->put(share->status_block, 0, &key, &row, 0);
+    DBUG_PRINT("info",("updating status for %s",share->table_name));
+    (void) write_status(share->status_block, rec_buff,
+                        (uint) (pos-rec_buff));
     share->status&= ~STATUS_BDB_ANALYZE;
+    share->org_rows=share->rows;
   }
 end:
   pthread_mutex_unlock(&share->mutex);
...
--- sql/handler.cc
+++ sql/handler.cc
@@ -297,12 +297,16 @@ bool ha_flush_logs()
   return result;
 }

+/*
+  This should return ENOENT if the file doesn't exists.
+  The .frm file will be deleted only if we return 0 or ENOENT
+*/
+
 int ha_delete_table(enum db_type table_type, const char *path)
 {
   handler *file=get_new_handler((TABLE*) 0, table_type);
   if (!file)
-    return -1;
+    return ENOENT;
   int error=file->delete_table(path);
   delete file;
   return error;
@@ -620,12 +624,16 @@ uint handler::get_dup_key(int error)
 int handler::delete_table(const char *name)
 {
+  int error=0;
   for (const char **ext=bas_ext(); *ext ; ext++)
   {
     if (delete_file(name,*ext,2))
-      return my_errno;
+    {
+      if ((error=errno) != ENOENT)
+        break;
+    }
   }
-  return 0;
+  return error;
 }
...
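The contract the new `handler::delete_table()` establishes: try every data-file extension, treat ENOENT as "already gone", and stop only on a real error, so that the caller can safely delete the `.frm` whenever the result is 0 or ENOENT. A standalone sketch of that loop, assuming a hypothetical extension list rather than the real `bas_ext()`:

```cpp
#include <cerrno>
#include <cstdio>
#include <string>
#include <vector>

// Sketch of the new handler::delete_table() contract: delete each data file,
// remember ENOENT (file already missing) but keep going, and break out only
// on a real error. Returning 0 or ENOENT tells the caller it is safe to
// remove the .frm definition file afterwards.
int delete_table_files(const std::string &base,
                       const std::vector<std::string> &exts) {
    int error = 0;
    for (const std::string &ext : exts) {
        std::string path = base + ext;
        if (std::remove(path.c_str()) != 0) {   // like delete_file(name,ext,2)
            if ((error = errno) != ENOENT)
                break;                          // real error: stop and report
        }
    }
    return error;                               // 0, ENOENT, or a real errno
}
```

The old code returned on the first failure, so a handler whose first extension file was already missing would abort the whole drop; the new version distinguishes "nothing to delete" from "could not delete".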
--- sql/hostname.cc
+++ sql/hostname.cc
@@ -81,10 +81,12 @@ static void add_hostname(struct in_addr *in,const char *name)
   if ((entry=(host_entry*) malloc(sizeof(host_entry)+length+1)))
   {
-    char *new_name= (char *) (entry+1);
+    char *new_name;
     memcpy_fixed(&entry->ip, &in->s_addr, sizeof(in->s_addr));
-    memcpy(new_name, name, length);	// Should work even if name == NULL
-    new_name[length]=0;			// End of string
+    if (length)
+      memcpy(new_name= (char *) (entry+1), name, length+1);
+    else
+      new_name=0;
     entry->hostname=new_name;
     entry->errors=0;
     (void) hostname_cache->add(entry);
...
--- sql/log.cc
+++ sql/log.cc
@@ -686,9 +686,8 @@ bool MYSQL_LOG::write(IO_CACHE *cache)
     uint length;
     my_off_t start_pos=my_b_tell(&log_file);

-    if (reinit_io_cache(cache, WRITE_CACHE, 0, 0, 0))
+    if (reinit_io_cache(cache, READ_CACHE, 0, 0, 0))
     {
-      if (!write_error)
       sql_print_error(ER(ER_ERROR_ON_WRITE), cache->file_name, errno);
       goto err;
     }
@@ -710,7 +709,6 @@ bool MYSQL_LOG::write(IO_CACHE *cache)
     }
     if (cache->error)			// Error on read
     {
-      if (!write_error)
       sql_print_error(ER(ER_ERROR_ON_READ), cache->file_name, errno);
       goto err;
     }
...
--- sql/share/swedish/errmsg.OLD
+++ sql/share/swedish/errmsg.OLD
@@ -198,5 +198,4 @@
 "Tabell '%-.64s' är crashad och bör repareras med REPAIR TABLE",
 "Tabell '%-.64s' är crashad och senast (automatiska?) reparation misslyckades",
 "Warning: Några icke transaktionella tabeller kunde inte återställas vid ROLLBACK",
-#ER_TRANS_CACHE_FULL
 "Transaktionen krävde mera än 'max_binlog_cache_size' minne. Utöka denna mysqld variabel och försök på nytt",
--- sql/sql_delete.cc
+++ sql/sql_delete.cc
@@ -215,7 +215,7 @@ int mysql_delete(THD *thd,TABLE_LIST *table_list,COND *conds,ha_rows limit,
   if (options & OPTION_QUICK)
     (void) table->file->extra(HA_EXTRA_NORMAL);
   using_transactions=table->file->has_transactions();
-  if (deleted && (error == 0 || !using_transactions))
+  if (deleted && (error <= 0 || !using_transactions))
   {
     mysql_update_log.write(thd,thd->query, thd->query_length);
     if (mysql_bin_log.is_open())
...
--- sql/sql_insert.cc
+++ sql/sql_insert.cc
@@ -256,7 +256,7 @@ int mysql_insert(THD *thd,TABLE_LIST *table_list, List<Item> &fields,
   else if (table->next_number_field)
     id=table->next_number_field->val_int();	// Return auto_increment value
   using_transactions=table->file->has_transactions();
-  if ((info.copied || info.deleted) && (error == 0 || !using_transactions))
+  if ((info.copied || info.deleted) && (error <= 0 || !using_transactions))
   {
     mysql_update_log.write(thd, thd->query, thd->query_length);
     if (mysql_bin_log.is_open())
...
--- sql/sql_select.cc
+++ sql/sql_select.cc
@@ -863,7 +863,8 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
     else
       s->dependent=(table_map) 0;
     s->key_dependent=(table_map) 0;
-    if ((table->system || table->file->records <= 1L) && ! s->dependent)
+    if ((table->system || table->file->records <= 1) && ! s->dependent &&
+	!(table->file->option_flag() & HA_NOT_EXACT_COUNT))
     {
       s->type=JT_SYSTEM;
       const_table_map|=table->map;
@@ -924,7 +925,8 @@ make_join_statistics(JOIN *join,TABLE_LIST *tables,COND *conds,
     {
       if (s->dependent & ~(const_table_map)) // All dep. must be constants
	 continue;
-      if (s->table->file->records <= 1L)
+      if (s->table->file->records <= 1L &&
+	  !(s->table->file->option_flag() & HA_NOT_EXACT_COUNT))
       {					// system table
	 s->type=JT_SYSTEM;
	 const_table_map|=s->table->map;
...
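The point of this fix: the optimizer may only promote a table to a const/system table when the handler's row count is exact. BDB (like other transactional engines) reports only an approximate count, so `records <= 1` cannot be trusted there. A self-contained sketch of the check (flag value and struct are illustrative, not the real MySQL definitions):

```cpp
#include <cstdint>

// Illustrative handler option bit, mirroring HA_NOT_EXACT_COUNT in spirit.
constexpr uint32_t HA_NOT_EXACT_COUNT_FLAG = 1u << 0;

struct HandlerInfo {
    uint64_t records;      // row count as reported by the handler
    uint32_t option_flags; // handler capability/option bits
};

// A table can be treated as a JT_SYSTEM "const" table only if it has at
// most one row, depends on nothing, AND the handler guarantees the row
// count is exact. An approximate count of 0 or 1 proves nothing.
bool is_const_table(const HandlerInfo &h, bool dependent) {
    return h.records <= 1 && !dependent &&
           !(h.option_flags & HA_NOT_EXACT_COUNT_FLAG);
}
```

Without the flag check, a BDB table whose approximate count happened to be 0 or 1 would be read once and treated as a constant, producing wrong join results.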
--- sql/sql_table.cc
+++ sql/sql_table.cc
@@ -110,24 +110,25 @@ int mysql_rm_table(THD *thd,TABLE_LIST *tables, my_bool if_exists)
     table_type=get_table_type(path);
-    if (my_delete(path,MYF(0)))	/* Delete the table definition file */
+    if (access(path,F_OK))
     {
-      if (errno != ENOENT || !if_exists)
-      {
+      if (!if_exists)
	 error=1;
-	if (errno != ENOENT)
-	{
-	  my_error(ER_CANT_DELETE_FILE,MYF(0),path,errno);
-	}
-      }
     }
     else
     {
-      some_tables_deleted=1;
-      *fn_ext(path)=0;			// Remove extension;
+      char *end;
+      *(end=fn_ext(path))=0;		// Remove extension
       error=ha_delete_table(table_type, path);
       if (error == ENOENT && if_exists)
	 error = 0;
+      if (!error || error == ENOENT)
+      {
+	/* Delete the table definition file */
+	strmov(end,reg_ext);
+	if (!(error=my_delete(path,MYF(MY_WME))))
+	  some_tables_deleted=1;
+      }
     }
     if (error)
     {
@@ -1427,17 +1428,6 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
   thd->count_cuted_fields=0;		/* Don`t calc cuted fields */
   new_table->time_stamp=save_time_stamp;

-#if defined( __WIN__) || defined( __EMX__)
-  /*
-    We must do the COMMIT here so that we can close and rename the
-    temporary table (as windows can't rename open tables)
-  */
-  if (ha_commit_stmt(thd))
-    error=1;
-  if (ha_commit(thd))
-    error=1;
-#endif
   if (table->tmp_table)
   {
     /* We changed a temporary table */
@@ -1556,7 +1546,6 @@ int mysql_alter_table(THD *thd,char *new_db, char *new_name,
     }
   }

-#if !(defined( __WIN__) || defined( __EMX__))
   /* The ALTER TABLE is always in it's own transaction */
   error = ha_commit_stmt(thd);
   if (ha_commit(thd))
@@ -1567,7 +1556,6 @@
     VOID(pthread_mutex_unlock(&LOCK_open));
     goto err;
   }
-#endif

   thd->proc_info="end";
   mysql_update_log.write(thd, thd->query,thd->query_length);
...
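The reordered `mysql_rm_table()` logic is the headline change of this commit: check that the `.frm` exists, ask the storage engine to drop the data first, and remove the `.frm` only when the data drop returned 0 or ENOENT. That way a failed data drop never leaves orphaned data files with no table definition. A sketch of the control flow with hypothetical callbacks standing in for the real MySQL functions:

```cpp
#include <cerrno>
#include <functional>
#include <string>

// Sketch of the new DROP TABLE order (names are illustrative, not the real
// MySQL API). The .frm definition file is removed only after the storage
// engine's data was dropped successfully (0) or was already gone (ENOENT).
int drop_table(const std::string &base,
               bool if_exists,
               const std::function<bool(const std::string&)> &frm_exists,
               const std::function<int(const std::string&)> &drop_data,
               const std::function<int(const std::string&)> &drop_frm) {
    std::string frm = base + ".frm";
    if (!frm_exists(frm))                 // like access(path, F_OK)
        return if_exists ? 0 : 1;         // missing table is ok with IF EXISTS
    int error = drop_data(base);          // 1) drop the engine's data first
    if (error == ENOENT && if_exists)
        error = 0;
    if (error == 0 || error == ENOENT)    // 2) only then remove the .frm
        error = drop_frm(frm);
    return error;
}
```

The previous code deleted the `.frm` first and then the data, so an engine-level failure left a table whose data files still existed but whose definition was gone.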
--- sql/sql_update.cc
+++ sql/sql_update.cc
@@ -238,7 +238,7 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
   VOID(table->file->extra(HA_EXTRA_READCHECK));
   table->time_stamp=save_time_stamp;	// Restore auto timestamp pointer
   using_transactions=table->file->has_transactions();
-  if (updated && (error == 0 || !using_transactions))
+  if (updated && (error <= 0 || !using_transactions))
   {
     mysql_update_log.write(thd,thd->query,thd->query_length);
     if (mysql_bin_log.is_open())
...