- 29 Nov, 2012 4 commits
-
-
Tor Didriksen authored
frac is the number of decimal digits after the point. For each multiplication in the expression, decimal_mul() does this:

    to->frac= from1->frac + from2->frac; /* store size in digits */

which will eventually overflow. The code for handling the overflow will then truncate the two digits in "1.75" to "1".

Solution: truncate to 31 significant fractional digits when doing decimal multiplication.
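A minimal sketch of the capping idea, using a made-up Dec type; this is not the server's decimal_mul(), only the 31-digit limit mirrors the fix:

    #include <algorithm>
    #include <cstdio>

    // Illustrative stand-in for the server's decimal type: only the
    // fractional-digit bookkeeping the fix is about.
    struct Dec {
      int frac;  // number of decimal digits after the point
    };

    static const int kMaxFrac = 31;  // cap mirroring the fix

    Dec mul(const Dec &a, const Dec &b) {
      Dec to;
      // Unbounded accumulation (a.frac + b.frac) overflows across a long
      // chain of multiplications; clamp instead of letting it grow.
      to.frac = std::min(a.frac + b.frac, kMaxFrac);
      return to;
    }

    int main() {
      Dec x{2};                      // e.g. 1.75 has frac == 2
      Dec acc = x;
      for (int i = 0; i < 100; ++i)  // long product chain
        acc = mul(acc, x);
      std::printf("frac = %d\n", acc.frac);  // stays at 31
    }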
-
Venkatesh Duggirala authored
LESS COLUMNS THAN MASTER

Problem:
========
If a DML operation requires a conversion at the slave and the slave table contains fewer columns than the master's, the slave crashes.

Fix:
====
When the slave applies any DML operation, it checks whether any of the columns requires conversion. If yes, it creates a conversion table. While creating the conversion table, it should look at the actual number of columns required to create the table instead of taking the number of columns from the master (size()). Columns may have been dropped or added on the slave, so the value should be min(columns@master, columns@slave).
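A minimal sketch of the column-count choice, under made-up types and names (not the server's conversion-table code):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct ColumnDef { int type; };  // hypothetical column descriptor

    // Build the conversion column list using at most as many columns as
    // both sides actually have.
    std::vector<ColumnDef> make_conversion_columns(
        const std::vector<ColumnDef> &master_cols,
        const std::vector<ColumnDef> &slave_cols) {
      // Do not trust the master's column count alone: the slave may have
      // dropped (or added) columns, so use min(columns@master, columns@slave).
      std::size_t n = std::min(master_cols.size(), slave_cols.size());
      return std::vector<ColumnDef>(master_cols.begin(),
                                    master_cols.begin() + n);
    }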
-
Harin Vadodaria authored
Description: Null merge.
-
Harin Vadodaria authored
Description: A very large database name causes a buffer overflow in the functions acl_get() and check_grant_db() in sql_acl.cc. It happens due to an unguarded string copy operation. This patch adds the required sanity checks before copying the db string to the destination buffer.
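A minimal sketch of such a guarded copy, with a made-up helper name and buffer size (not the actual sql_acl.cc change):

    #include <cstring>

    // Copy a database name only if it fits the destination buffer.
    bool copy_db_name(char *dst, std::size_t dst_size, const char *db) {
      std::size_t len = std::strlen(db);
      if (len >= dst_size)  // sanity check: reject names that would overflow
        return false;
      std::memcpy(dst, db, len + 1);  // safe: length already validated
      return true;
    }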
-
- 28 Nov, 2012 5 commits
-
-
mysql-builder@oracle.com authored
No commit message
-
Yasufumi Kinoshita authored
The converted implicit lock should wait for the prior conflicting lock if found. rb://1437 approved by Marko
-
Yasufumi Kinoshita authored
The converted implicit lock should wait for the prior conflicting lock if found. rb://1437 approved by Marko
-
Marko Mäkelä authored
-
Marko Mäkelä authored
BUF_PAGE_GET_GEN REDUNDANT?

buf_page_get_gen(): When decompressing a compressed page that had already been accessed in the buffer pool, do not attempt to merge buffered changes.

rb:1602 approved by Inaam Rana
-
- 26 Nov, 2012 6 commits
-
-
Bjorn Munch authored
-
sayantan.dutta@oracle.com authored
-
sayantan.dutta@oracle.com authored
-
Venkata Sidagam authored
-
Yasufumi Kinoshita authored
trx_undo_prev_version_build() should confirm the existence of inherited (not-own) external pages.

Bug #14676084 : ROW_UNDO_MOD_UPD_DEL_SEC() DOESN'T NEED UNDO_ROW AND UNDO_EXT INITIALIZED
An mtr script could hit the assertion error !bpage->file_page_was_freed using this path, so that is also fixed.

rb://1337 approved by Marko Makela.
-
Yasufumi Kinoshita authored
trx_undo_prev_version_build() should confirm the existence of inherited (not-own) external pages.

Bug #14676084 : ROW_UNDO_MOD_UPD_DEL_SEC() DOESN'T NEED UNDO_ROW AND UNDO_EXT INITIALIZED
An mtr script could hit the assertion error !bpage->file_page_was_freed using this path, so that is also fixed.

rb://1337 approved by Marko Makela.
-
- 21 Nov, 2012 2 commits
-
-
Harin Vadodaria authored
Description: Null merge from 5.1 to 5.5
-
Harin Vadodaria authored
Description: Updated yassl to version 2.2.2
-
- 20 Nov, 2012 4 commits
-
-
Nuno Carvalho authored
When a binlog is replayed into a server, e.g.:

    $ mysqlbinlog binlog.000001 | mysql

it sets a pseudo slave mode on the client connection so that the server is able to read binlog events; that is, a format description event is needed to correctly read the following events. This pseudo slave mode also applies to the current connection the replication rules that are needed to correctly apply binlog events.

If a binlog dump is sourced on a connection, this pseudo slave mode remains set afterwards, which, from the customer's perspective, applies unexpected rules to the following commands.

Added a new SET statement to the binlog dump that unsets the pseudo slave mode at the end of the dump file.
-
sayantan.dutta@oracle.com authored
-
Vamsikrishna Bhagi authored
MYSQLDUMP OUTPUT

A patch was pushed on this bug. A result mismatch occurred for the test main.ddl_i18n_utf8 in the x86_64 gcov build of Linux in pb2. This commit modifies ddl_i18n_utf8.result to match the changes made for the bug.
-
Vamsikrishna Bhagi authored
MYSQLDUMP OUTPUT

A patch was pushed on this bug. A result mismatch occurred for the test main.ddl_i18n_koi8r in the x86_64 gcov build of Linux in pb2. This commit modifies ddl_i18n_koi8r.result to match the changes made for the bug.
-
- 19 Nov, 2012 2 commits
-
-
Vamsikrishna Bhagi authored
MYSQLDUMP OUTPUT

Problem: mysqldump, when used with the --routines option, dumps all the routines of the specified database into the output. The statements in this output are written so that they are version safe, using C-style version commenting (of the format /*!<version num> <sql statement>*/). If a semicolon is present right before the closing of the comment in the dump output, it results in a syntax error while importing.

Solution: Version comments for dumped routines specifically protect versions older than 5.0. When the import is done on 5.0 or later versions, the entire CREATE statement gets executed, as all the check conditions at the beginning of the comments are cleared. Since the trade-off is between the performance of newer versions, which are more in use, and the protection of very old versions, which are no longer supported, it is proposed that these comments be removed altogether to maintain the stability of the supported versions.
-
Satya Bodapati authored
This bug is fixed by Bug#14251529. Only the testcase from the contribution is used.
-
- 16 Nov, 2012 9 commits
-
-
magnus.blaudd@oracle.com authored
-
magnus.blaudd@oracle.com authored
-
Inaam Rana authored
-
Inaam Rana authored
rb://1546 approved by: Sunny Bains and Marko Makela

Our handling of the buf_page_t::access_time flag is inaccurate:
* If LRU eviction has not started, we don't set the access_time.
* If LRU eviction has started, we set it only if the block is not 'too old'.
* Not a correctness issue, but we hold buf_pool::mutex when setting the flag.

This patch fixes this by:
* Setting the flag unconditionally whenever the first page access happens.
* Using the buf_page_t mutex to protect writes to the flag.
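A minimal sketch of the "set on first access, under the per-page mutex" idea, with made-up types (not the InnoDB buffer pool code):

    #include <chrono>
    #include <mutex>

    struct Page {
      std::mutex mtx;            // stand-in for the per-block mutex
      unsigned access_time = 0;  // 0 means "never accessed"
    };

    static unsigned now_ms() {
      using namespace std::chrono;
      return static_cast<unsigned>(duration_cast<milliseconds>(
          steady_clock::now().time_since_epoch()).count());
    }

    void page_accessed(Page &page) {
      std::lock_guard<std::mutex> guard(page.mtx);  // not a pool-wide mutex
      if (page.access_time == 0)  // set unconditionally on the first access
        page.access_time = now_ms();
    }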
-
magnus.blaudd@oracle.com authored
-
Mattias Jonsson authored
-
Mattias Jonsson authored
-
Harin Vadodaria authored
INC_HOST_ERRORS() IS CALLED.

Description: Null merge from 5.1 to 5.5
-
mysql-builder@oracle.com authored
No commit message
-
- 15 Nov, 2012 6 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Remove a bogus debug assertion.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
CHAR(n) in ROW_FORMAT=REDUNDANT tables is always fixed-length (n*mbmaxlen bytes), but in the temporary file it is variable-length (n*mbminlen to n*mbmaxlen bytes) for variable-length character sets, such as UTF-8.

The temporary file format used during index creation and online ALTER TABLE is based on ROW_FORMAT=COMPACT. Thus, it should use the variable-length encoding even if the base table is in ROW_FORMAT=REDUNDANT.

dtype_get_fixed_size_low(): Replace an assertion-like check with a debug assertion.

rec_init_offsets_comp_ordinary(), rec_convert_dtuple_to_rec_comp(): Make this an inline function. Replace 'ulint extra' with 'bool temp'.

rec_get_converted_size_comp_prefix_low(): Renamed from rec_get_converted_size_comp_prefix(), and made inline. Add the parameter 'bool temp'. If temp=true, do not add REC_N_NEW_EXTRA_BYTES.

rec_get_converted_size_comp_prefix(): Remove the comment about dict_table_is_comp(). This function is only to be called for other than ROW_FORMAT=REDUNDANT records.

rec_get_converted_size_temp(): New function for computing temporary file record size. Omit REC_N_NEW_EXTRA_BYTES from the sizes.

rec_init_offsets_temp(), rec_convert_dtuple_to_temp(): New functions, for operating on temporary file records.

rb:1559 approved by Jimmy Yang
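A minimal sketch of the size difference described in the first paragraph, with made-up function names; mbminlen/mbmaxlen stand for the character set's per-character byte range (e.g. 1 and 3 for a 3-byte UTF-8):

    #include <cstdio>

    // ROW_FORMAT=REDUNDANT: CHAR(n) always occupies the maximum size.
    unsigned redundant_char_size(unsigned n, unsigned mbmaxlen) {
      return n * mbmaxlen;  // fixed-length
    }

    // Temporary file (COMPACT-based): variable-length, anywhere in this
    // range depending on the characters actually stored.
    void temp_file_char_bounds(unsigned n, unsigned mbminlen,
                               unsigned mbmaxlen,
                               unsigned *min_bytes, unsigned *max_bytes) {
      *min_bytes = n * mbminlen;
      *max_bytes = n * mbmaxlen;
    }

    int main() {
      unsigned lo, hi;
      temp_file_char_bounds(10, 1, 3, &lo, &hi);
      std::printf("REDUNDANT CHAR(10), utf8: %u bytes (fixed)\n",
                  redundant_char_size(10, 3));
      std::printf("temp file CHAR(10), utf8: %u..%u bytes (variable)\n",
                  lo, hi);
    }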
-
magnus.blaudd@oracle.com authored
- No need to use --skip-ndb in collections files anymore; this has long been the case, and the logic is clearer after the recent mtr.pl fixes: ndb tests are never run in MySQL Server unless explicitly requested.
- Remove sys_vars.ndb_log_update_as_write_basic.test and sys_vars.ndb_log_updated_only_basic.result since MySQL Server does not have those options.
- Only sys_vars.have_ndbcluster_basic is left since MySQL Server has that variable hardcoded.
-
mysql-builder@oracle.com authored
No commit message
-
- 14 Nov, 2012 2 commits
-
-
Nuno Carvalho authored
When the master and slave have different schemas, in particular different AUTO_INCREMENT columns, INSERT_ID events logged for a given table on the master may be applied to a different table on the slave under SBR. E.g.: the master has one table (t1) with an auto-inc column and another table (t2) without one; on the slave, t1 does not have an auto-inc column (despite having the same columns) and t2 does have one. The INSERT_ID intended for t1 is then used on t2, since t1 on the slave doesn't have an auto-inc column, causing consistency problems.

To fix this incorrect behaviour, auto-inc interval allocation via INSERT_ID is effectively terminated at the end of top-level statements on the slave and during binlog replay.
-
Venkata Sidagam authored
Problem description: Incorrect key file; the key file appears corrupted while reading the keys from the file. The problem here is that keyseg->start (which should point to the beginning of a field) points beyond the total record length.

Fix: If keyseg->start is greater than the total record length, return an error.
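A minimal sketch of the added bounds check, with made-up names (not the actual storage engine code):

    #include <cstdint>

    // Decide whether a key segment's start offset is sane for a record of
    // total_rec_length bytes; a start beyond the record means the key file
    // is corrupted and the caller should return an error.
    bool keyseg_start_valid(std::uint32_t keyseg_start,
                            std::uint32_t total_rec_length) {
      return keyseg_start <= total_rec_length;
    }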
-