Kirill Smelkov / mariadb

Commit 71c02e76
authored Dec 10, 2000 by unknown
Merge work:/my/mysql into donna.mysql.com:/home/my/bk/mysql
Docs/manual.texi: Auto merged
parents 8beb4350 b6f23087
Showing 4 changed files with 98 additions and 45 deletions (+98 -45)
Docs/manual.texi    +16  -5
sql/ha_berkeley.cc  +30 -19
sql/sql_table.cc    +42 -20
sql/sql_update.cc   +10  -1
Docs/manual.texi

@@ -17897,6 +17897,11 @@ tables as one. This only works with MERGE tables. @xref{MERGE}.
 For the moment you need to have @code{SELECT}, @code{UPDATE}, and
 @code{DELETE} privileges on the tables you map to a @code{MERGE} table.
 All mapped tables must be in the same database as the @code{MERGE} table.
+@item
+In the created table the @code{PRIMARY} key will be placed first, followed
+by all @code{UNIQUE} keys and then the normal keys. This helps the
+@strong{MySQL} optimizer to prioritize which key to use and also more quickly
+detect duplicated @code{UNIQUE} keys.
 @end itemize
 
 @cindex silent column changes

@@ -22598,7 +22603,7 @@ You may also want to change @code{binlog_cache_size} and
 @itemize @bullet
 @item
 @strong{MySQL} requires a @code{PRIMARY KEY} in each BDB table to be
-able to refer to previously read rows. If you don't create on,
+able to refer to previously read rows. If you don't create one,
 @strong{MySQL} will create an maintain a hidden @code{PRIMARY KEY} for
 you. The hidden key has a length of 5 bytes and is incremented for each
 insert attempt.

@@ -22618,8 +22623,6 @@ you don't use @code{LOCK TABLE}, @strong{MYSQL} will issue an internal
 multiple-write lock on the table to ensure that the table will be
 properly locked if another thread issues a table lock.
 @item
-@code{ALTER TABLE} doesn't yet work on @code{BDB} tables.
-@item
 Internal locking in @code{BDB} tables is done on page level.
 @item
 @code{SELECT COUNT(*) FROM table_name} is slow as @code{BDB} tables doesn't

@@ -22637,8 +22640,8 @@ tables. In other words, the key information will take a little more
 space in @code{BDB} tables compared to MyISAM tables which don't use
 @code{PACK_KEYS=0}.
 @item
 There is often holes in the BDB table to allow you to insert new rows
-in between different keys. This makes BDB tables somewhat larger than
+in the middle of the key tree. This makes BDB tables somewhat larger than
 MyISAM tables.
 @item
 @strong{MySQL} performs a checkpoint each time a new Berkeley DB log

@@ -39762,6 +39765,14 @@ though, so Version 3.23 is not released as a stable version yet.
 @appendixsubsec Changes in release 3.23.29
 @itemize @bullet
 @item
+When creating a table, put @code{PRIMARY} keys first, followed by
+@code{UNIQUE} keys.
+@item
+Fixed a bug in @code{UPDATE} involving multi-part keys where one
+specified all key parts both in the update and the @code{WHERE} part. In
+this case @strong{MySQL} could try to update a record that didn't match
+the whole @code{WHERE} part.
+@item
 Changed drop table to first drop the tables and then the @code{.frm} file.
 @item
 Fixed a bug in the hostname cache which caused @code{mysqld} to report the
sql/ha_berkeley.cc

@@ -1660,7 +1660,8 @@ longlong ha_berkeley::get_auto_increment()
   }
   else
   {
-    DBT row;
+    DBT row,old_key;
+    DBC *auto_cursor;
     bzero((char*) &row,sizeof(row));
     uint key_len;
     KEY *key_info= &table->key_info[active_index];

@@ -1670,25 +1671,35 @@ longlong ha_berkeley::get_auto_increment()
                             key_buff, table->record[0],
                             table->next_number_key_offset);
     /* Store for compare */
-    memcpy(key_buff2, key_buff, (key_len=last_key.size));
-    /* Modify the compare so that we will find the next key */
-    key_info->handler.bdb_return_if_eq= 1;
-    /* We lock the next key as the new key will probl. be on the same page */
-    error=cursor->c_get(cursor, &last_key, &row, DB_SET_RANGE | DB_RMW),
-    key_info->handler.bdb_return_if_eq= 0;
-    if (!error || error == DB_NOTFOUND)
-    {
-      /*
-        Now search go one step back and then we should have found the
-        biggest key with the given prefix
-      */
-      if (read_row(cursor->c_get(cursor, &last_key, &row, DB_PREV | DB_RMW),
-                   table->record[1], active_index, &row, (DBT*) 0, 0) ||
-          berkeley_key_cmp(table, key_info, key_buff2, key_len))
-        error=1;                                // Something went wrong or no such key
-    }
-  }
+    memcpy(old_key.data=key_buff2, key_buff, (old_key.size=last_key.size));
+    error=1;
+    if (!(file->cursor(key_file[active_index], transaction, &auto_cursor, 0)))
+    {
+      /* Modify the compare so that we will find the next key */
+      key_info->handler.bdb_return_if_eq= 1;
+      /* We lock the next key as the new key will probl. be on the same page */
+      error=auto_cursor->c_get(auto_cursor, &last_key, &row,
+                               DB_SET_RANGE | DB_RMW);
+      key_info->handler.bdb_return_if_eq= 0;
+      if (!error || error == DB_NOTFOUND)
+      {
+        /*
+          Now search go one step back and then we should have found the
+          biggest key with the given prefix
+        */
+        error=1;
+        if (!auto_cursor->c_get(auto_cursor, &last_key, &row, DB_PREV | DB_RMW) &&
+            !berkeley_cmp_packed_key(key_file[active_index], &old_key,
+                                     &last_key))
+        {
+          error=0;                              // Found value
+          unpack_key(table->record[1], &last_key, active_index);
+        }
+      }
+      auto_cursor->c_close(auto_cursor);
+    }
+  }
   if (!error)
     nr=(longlong)
       table->next_number_field->val_int_offset(table->rec_buff_length)+1;
   ha_berkeley::index_end();
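The rewritten branch above opens its own Berkeley DB cursor (auto_cursor), seeks past the stored key prefix with DB_SET_RANGE, and then steps back with DB_PREV so the cursor lands on the largest existing key for that prefix, comparing packed keys to confirm the prefix still matches. Below is a minimal, self-contained sketch of that cursor pattern against the classic Berkeley DB C API, not code from the MySQL tree: the handle, transaction, and function names are illustrative, the prefix is assumed not to end in 0xff, and the prefix is bumped by one byte instead of using the server's bdb_return_if_eq comparator hook.

/*
 * Sketch only (illustrative names, not from the MySQL sources):
 * position a Berkeley DB cursor on the last key that starts with a
 * given byte prefix, using the same DB_SET_RANGE + DB_PREV pattern
 * as the new get_auto_increment() code.  Assumes the default
 * lexicographic key order and a prefix whose last byte is not 0xff.
 */
#include <string.h>
#include <db.h>

static int last_key_with_prefix(DB *db, DB_TXN *txn,
                                unsigned char *prefix, u_int32_t prefix_len)
{
  DBC *cursor;
  DBT key, row;
  int error;

  memset(&key, 0, sizeof(key));
  memset(&row, 0, sizeof(row));

  if ((error = db->cursor(db, txn, &cursor, 0)))
    return error;

  /* Seek to the first key that sorts after every key sharing the prefix. */
  prefix[prefix_len - 1]++;
  key.data = prefix;
  key.size = prefix_len;
  error = cursor->c_get(cursor, &key, &row, DB_SET_RANGE);
  prefix[prefix_len - 1]--;                     /* restore caller's prefix */

  if (!error || error == DB_NOTFOUND)
  {
    /*
     * Step one entry back.  If DB_SET_RANGE ran off the end the cursor is
     * unpositioned and DB_PREV then behaves like DB_LAST, which is what we
     * want in that case.
     */
    error = cursor->c_get(cursor, &key, &row, DB_PREV);
    if (!error &&
        (key.size < prefix_len || memcmp(key.data, prefix, prefix_len)))
      error = DB_NOTFOUND;                      /* nothing with this prefix */
    /*
     * On success the cursor now sits on the last key with the prefix;
     * real code (like get_auto_increment()) would unpack the row here
     * before closing the cursor.
     */
  }
  cursor->c_close(cursor);
  return error;
}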
sql/sql_table.cc

@@ -327,18 +327,28 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name,
   }
 
   /* Create keys */
   List_iterator<Key> key_iterator(keys);
   uint key_parts=0,key_count=keys.elements;
-  bool primary_key=0,unique_key=0;
+  List<Key> keys_in_order;                      // Add new keys here
+  Key *primary_key=0;
+  bool unique_key=0;
   Key *key;
   uint tmp;
   tmp=min(file->max_keys(), MAX_KEY);
   if (key_count > tmp)
   {
     my_error(ER_TOO_MANY_KEYS,MYF(0),tmp);
     DBUG_RETURN(-1);
   }
+
+  /*
+    Check keys;
+    Put PRIMARY KEY first, then UNIQUE keys and other keys last
+    This will make checking for duplicated keys faster and ensure that
+    primary keys are prioritized.
+  */
   while ((key=key_iterator++))
   {
     tmp=max(file->max_key_parts(),MAX_REF_PARTS);

@@ -353,17 +363,6 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name,
       DBUG_RETURN(-1);
     }
     key_parts+=key->columns.elements;
-  }
-  key_info_buffer=key_info=(KEY*) sql_calloc(sizeof(KEY)*key_count);
-  key_part_info=(KEY_PART_INFO*) sql_calloc(sizeof(KEY_PART_INFO)*key_parts);
-  if (!key_info_buffer || ! key_part_info)
-    DBUG_RETURN(-1);                            // Out of memory
-  key_iterator.rewind();
-  for (; (key=key_iterator++) ; key_info++)
-  {
-    uint key_length=0;
-    key_part_spec *column;
-
     if (key->type == Key::PRIMARY)
     {
       if (primary_key)

@@ -371,10 +370,39 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name,
        my_error(ER_MULTIPLE_PRI_KEY,MYF(0));
        DBUG_RETURN(-1);
       }
-      primary_key=1;
+      primary_key=key;
     }
     else if (key->type == Key::UNIQUE)
+    {
       unique_key=1;
+      if (keys_in_order.push_front(key))
+        DBUG_RETURN(-1);
+    }
+    else if (keys_in_order.push_back(key))
+      DBUG_RETURN(-1);
+  }
+  if (primary_key)
+  {
+    if (keys_in_order.push_front(primary_key))
+      DBUG_RETURN(-1);
+  }
+  else if (!unique_key &&
+           (file->option_flag() & HA_REQUIRE_PRIMARY_KEY))
+  {
+    my_error(ER_REQUIRES_PRIMARY_KEY,MYF(0));
+    DBUG_RETURN(-1);
+  }
+  key_info_buffer=key_info=(KEY*) sql_calloc(sizeof(KEY)*key_count);
+  key_part_info=(KEY_PART_INFO*) sql_calloc(sizeof(KEY_PART_INFO)*key_parts);
+  if (!key_info_buffer || ! key_part_info)
+    DBUG_RETURN(-1);                            // Out of memory
+
+  List_iterator<Key> key_iterator_in_order(keys_in_order);
+  for (; (key=key_iterator_in_order++) ; key_info++)
+  {
+    uint key_length=0;
+    key_part_spec *column;
+
     key_info->flags= (key->type == Key::MULTIPLE) ? 0 :
                      (key->type == Key::FULLTEXT) ? HA_FULLTEXT : HA_NOSAME;
     key_info->key_parts=(uint8) key->columns.elements;

@@ -508,12 +536,6 @@ int mysql_create_table(THD *thd,const char *db, const char *table_name,
       my_error(ER_WRONG_AUTO_KEY,MYF(0));
       DBUG_RETURN(-1);
     }
-  if (!primary_key && !unique_key &&
-      (file->option_flag() & HA_REQUIRE_PRIMARY_KEY))
-  {
-    my_error(ER_REQUIRES_PRIMARY_KEY,MYF(0));
-    DBUG_RETURN(-1);
-  }
 
   /* Check if table exists */
   if (create_info->options & HA_LEX_CREATE_TMP_TABLE)
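The change above stops writing KEY structures in declaration order. Keys are first collected into keys_in_order, with UNIQUE keys pushed to the front, other keys appended, and the PRIMARY key (if any) pushed to the very front afterwards; key_info is then built from that list, which is how PRIMARY and UNIQUE keys end up first in the created table. Below is a rough illustration of the same reordering idea using a plain standard-library list; the Key type and its fields here are stand-ins, not the server's classes.

// Sketch only: reorder key definitions so that the PRIMARY key comes first,
// then UNIQUE keys, then ordinary keys, mirroring the new
// mysql_create_table() logic.  Types and names are illustrative stand-ins.
#include <iostream>
#include <list>
#include <string>

enum class KeyType { PRIMARY, UNIQUE, MULTIPLE };

struct Key {
  KeyType type;
  std::string name;
};

std::list<Key> order_keys(const std::list<Key> &declared)
{
  std::list<Key> in_order;
  const Key *primary = nullptr;

  for (const Key &key : declared) {
    if (key.type == KeyType::PRIMARY)
      primary = &key;                   // remember it, prepend it at the end
    else if (key.type == KeyType::UNIQUE)
      in_order.push_front(key);         // UNIQUE keys go before ordinary keys
    else
      in_order.push_back(key);          // ordinary keys keep declaration order
  }
  if (primary)
    in_order.push_front(*primary);      // PRIMARY always ends up first
  return in_order;
}

int main()
{
  std::list<Key> declared = {
    {KeyType::MULTIPLE, "idx_a"},
    {KeyType::UNIQUE,   "uq_b"},
    {KeyType::PRIMARY,  "PRIMARY"},
    {KeyType::MULTIPLE, "idx_c"},
  };
  for (const Key &key : order_keys(declared))
    std::cout << key.name << '\n';      // PRIMARY, uq_b, idx_a, idx_c
}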
sql/sql_update.cc

@@ -75,8 +75,16 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
   if (table->timestamp_field &&                 // Don't set timestamp if used
       table->timestamp_field->query_id == thd->query_id)
     table->time_stamp=0;
+  /* Reset the query_id string so that ->used_keys is based on the WHERE */
   table->used_keys=table->keys_in_use;
   table->quick_keys=0;
+
+  reg2 Item *item;
+  List_iterator<Item> it(fields);
+  ulong query_id=thd->query_id-1;
+  while ((item=it++))
+    ((Item_field*) item)->field->query_id=query_id;
+
   if (setup_fields(thd,table_list,values,0,0) ||
       setup_conds(thd,table_list,&conds))
   {

@@ -84,7 +92,8 @@ int mysql_update(THD *thd,TABLE_LIST *table_list,List<Item> &fields,
     DBUG_RETURN(-1);                            /* purecov: inspected */
   }
   old_used_keys=table->used_keys;
-  table->used_keys=0;                           // Can't use 'only index'
+  // Don't count on usage of 'only index' when calculating which key to use
+  table->used_keys=0;
   select=make_select(table,0,0,conds,&error);
   if (error ||
       (select && select->check_quick(test(thd->options & SQL_SAFE_UPDATES),
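The fix above tags every column named in the SET list with a stale query id (thd->query_id - 1) before the WHERE clause is resolved, so that only columns the WHERE clause actually reads end up carrying the current id and table->used_keys is derived from the WHERE part alone. The toy model below illustrates just that tagging step and its observable effect; the real interplay with used_keys and field->part_of_key in the server is more involved, and all types and names here are simplified stand-ins, not the server's structures.

// Sketch only: toy model of the "stale query id" trick from the UPDATE fix.
// SET columns get query_id-1 before the WHERE is resolved; afterwards a
// column carries the current id only if the WHERE clause referenced it.
#include <iostream>
#include <string>
#include <vector>

struct Field {
  std::string name;
  unsigned long query_id = 0;
};

// Stand-in for setup_conds(): resolving the WHERE clause marks each
// referenced column with the current query id.
void resolve_where(std::vector<Field *> &where_columns, unsigned long query_id)
{
  for (Field *field : where_columns)
    field->query_id = query_id;
}

int main()
{
  unsigned long query_id = 42;                       // current statement's id

  Field a{"a"}, b{"b"}, c{"c"};
  std::vector<Field *> set_columns   = {&a, &b};     // UPDATE t SET a=..., b=...
  std::vector<Field *> where_columns = {&a, &c};     // ... WHERE a=... AND c=...

  // The fix: mark every SET column stale *before* the WHERE is resolved.
  for (Field *field : set_columns)
    field->query_id = query_id - 1;

  resolve_where(where_columns, query_id);

  // Only columns still carrying the current id were read by the WHERE clause.
  for (const Field *field : {&a, &b, &c})
    std::cout << field->name
              << (field->query_id == query_id ? ": used by WHERE\n"
                                              : ": only assigned\n");
}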