- 22 Jan, 2018 4 commits
-
-
Sergei Golubchik authored
mark freed memory as not accessible, not merely undefined
-
Sergei Golubchik authored
TRASH() was mapped to TRASH_FREE() and was supposed to be used for memory that should not be accessed anymore, while TRASH_ALLOC() is to be used for uninitialized but to-be-used memory. Sometimes, however, TRASH() was used in the latter sense. Remove the TRASH() macro; always use the explicit TRASH_ALLOC() or TRASH_FREE().
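A minimal sketch of how such macros are commonly built on the Valgrind client API (this approximates the intent described above; the exact MariaDB definitions may differ):

```cpp
// Sketch only: approximates the intent described above; the exact
// MariaDB definitions may differ. Requires the Valgrind headers; the
// client macros compile to no-ops when not running under Valgrind.
#include <string.h>
#include <valgrind/memcheck.h>

// To-be-used memory: fill with a recognizable pattern, then mark it
// "undefined" so Valgrind flags reads that happen before initialization.
#define TRASH_ALLOC(p, len)                \
  do {                                     \
    memset(p, 0xA5, len);                  \
    VALGRIND_MAKE_MEM_UNDEFINED(p, len);   \
  } while (0)

// Freed memory: fill, then mark it "not accessible" so Valgrind flags
// any later access at all, not merely reads of undefined bytes.
#define TRASH_FREE(p, len)                 \
  do {                                     \
    memset(p, 0x8F, len);                  \
    VALGRIND_MAKE_MEM_NOACCESS(p, len);    \
  } while (0)
```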
-
Sergei Golubchik authored
-
Marko Mäkelä authored
InnoDB limited the maximum number of bytes per character to 4. But the filename character set that was introduced in MySQL 5.1 uses up to 5 bytes per character. To allow InnoDB tables to be created with wider characters, let us split the mbminmaxlen fields into mbminlen and mbmaxlen, and increase the limit to 7 bytes per character. This will increase the payload size of dtype_t and dict_col_t by one bit. The storage size will be unchanged (54 bits and 77 bits will use the same number of bytes as the previous sizes of 53 and 76 bits).
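A minimal sketch of the bit-field split, assuming the old code packed both lengths into a single small field (the real dtype_t and dict_col_t carry many more members):

```cpp
// Sketch only: illustrates the field split described above, not the real
// InnoDB definitions. Assumption: the old code packed both lengths into
// one small field, which capped mbmaxlen at 4 bytes per character.
#include <cassert>

struct dtype_old {          // before: one combined field
  unsigned mbminmaxlen : 5; // encodes both min and max bytes per character
};

struct dtype_new {          // after: two fields, one extra payload bit
  unsigned mbminlen : 3;    // minimum bytes per character, now up to 7
  unsigned mbmaxlen : 3;    // maximum bytes per character, now up to 7
};

int main() {
  dtype_new t;
  t.mbminlen = 1;
  t.mbmaxlen = 5;           // e.g. the filename character set
  assert(t.mbminlen == 1 && t.mbmaxlen == 5); // would not fit the old cap
}
```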
-
- 19 Jan, 2018 4 commits
-
-
Vicențiu Ciorbaru authored
-
Daniel Bartholomew authored
-
Vicențiu Ciorbaru authored
Resolving a stacktrace that includes functions in dynamic libraries requires us to look inside the libraries for the symbols. addr2line needs to be started with the correct binary for each address on the stack. To do this, figure out which library the address belongs to using dladdr(); then, if the addr2line binary was started with a different binary, fork it again with the correct one. We only have one addr2line process running at any point during the stacktrace resolving step. The maximum number of forks for addr2line should generally be around 6: one for server stacktrace code, one for plugin code, one when going back into server code, one for the pthread library, one for libc, and one for the _start function in the server. More can come up if a plugin calls a server function which goes back into a plugin, etc.
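A hedged sketch of the per-frame lookup described above; the names current_binary and spawn_addr2line are hypothetical:

```cpp
// Sketch only: the dladdr() lookup step described above. The names
// current_binary and spawn_addr2line are hypothetical; the real code
// also keeps the single addr2line child's stdin/stdout pipes around.
#include <dlfcn.h>
#include <string>

static std::string current_binary;   // object addr2line was started with

void resolve_frame(void *addr) {
  Dl_info info;
  if (!dladdr(addr, &info) || !info.dli_fname)
    return;                          // address not in any loaded object
  if (current_binary != info.dli_fname) {
    // The frame lives in a different binary/library: fork a fresh
    // addr2line with the correct object (at most one child at a time).
    // spawn_addr2line(info.dli_fname);   // hypothetical helper
    current_binary = info.dli_fname;
  }
  // Shared objects are mapped at dli_fbase; addr2line wants the offset
  // within the file, not the runtime address.
  void *file_offset = (void *)((char *)addr - (char *)info.dli_fbase);
  (void)file_offset;                 // would be written to addr2line's stdin
}
```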
-
Varun Gupta authored
In this case we were using the derived_with_keys optimization, but we could not create a key because the length of the key was greater than the maximum allowed (MI_MAX_KEY_LENGTH). To do the join we needed to create a hash join key instead, but the EXPLAIN output still showed that we were referring to derived keys which were created but not used.
-
- 18 Jan, 2018 8 commits
-
-
Igor Babaev authored
In the function JOIN::shrink_join_buffers the iteration over joined tables was organized incorrectly. This could cause a crash if the optimizer chose to materialize a semi-join that used join caches whose sizes had to be adjusted.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Vladislav Vaintroub authored
Fix WinUIDialogBmp.jpg to use correct dimensions
-
Vladislav Vaintroub authored
Use new grey logo.
-
- 17 Jan, 2018 1 commit
-
-
Sergei Golubchik authored
-
- 16 Jan, 2018 4 commits
-
-
Sergei Golubchik authored
The MEMORY engine needs the record length to be at least sizeof(void*), because it stores a pointer there (linking deleted records into a list). So when the reclength is less than sizeof(void*), it is set to sizeof(void*). That is done inside heap_create(), and the upper layer doesn't know that the engine writes beyond share->reclength. While this is usually safe (the in-memory record size is rounded up to sizeof(double), so even if share->reclength is too small, share->rec_buff_len is not), it could cause problems in code that copies records and expects them to fit in share->reclength, e.g. in partitioning.
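A minimal sketch of the adjustment heap_create() performs, as described above (the function name effective_reclength is hypothetical):

```cpp
// Sketch only: the adjustment described above, with a hypothetical name.
// The MEMORY engine threads deleted records into a free list by storing a
// pointer inside the record itself, so every record must be at least
// pointer-sized even when the table definition says otherwise.
#include <algorithm>
#include <cstddef>

size_t effective_reclength(size_t reclength) {
  // heap_create() silently bumps too-small record lengths; the upper
  // layer keeps seeing the original share->reclength.
  return std::max(reclength, sizeof(void *));
}
```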
-
Sergei Golubchik authored
* get_rec_bits() always read two bytes, even if the bit field consisted of only one byte
* In various places the code used field->pack_length() bytes starting from field->ptr, while it should use field->pack_length_in_rec()
* Field_bit::key_cmp and Field_bit::cmp_max passed field_length as an argument to memcmp(), but field_length is the number of bits!
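A sketch in the spirit of the first fix: read the second byte only when the requested bits actually cross the byte boundary (this approximates, not reproduces, the MariaDB implementation):

```cpp
// Sketch only: approximates the fixed read, not the exact MariaDB code.
// The second byte is touched only when the requested bits spill into it;
// always reading two bytes could run past the end of the record buffer
// for bit fields that fit in a single byte.
#include <cstdint>

inline uint8_t get_rec_bits(const uint8_t *ptr, unsigned ofs, unsigned len) {
  uint16_t val = ptr[0];
  if (ofs + len > 8)                    // bits cross into the next byte
    val |= (uint16_t)(ptr[1]) << 8;
  return (val >> ofs) & ((1u << len) - 1);
}
```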
-
Sergei Golubchik authored
-
Sergei Golubchik authored
If the property is not found, set it to the empty string; otherwise it will show up as libmysql_link_flags-NOTFOUND on the linker command line, and the linker won't like it. Also, don't specify LINK_FLAG_NO_UNDEFINED twice; MERGE_LIBRARIES already puts it into LINK_FLAGS.
-
- 15 Jan, 2018 7 commits
-
-
Igor Babaev authored
For DATE and DATETIME columns defined as NOT NULL, "date_notnull IS NULL" has to be modified to "date_notnull IS NULL OR date_notnull == 0" if date_notnull comes from an inner table of an outer join, and to "date_notnull == 0" otherwise. This must hold for such columns of mergeable views and derived tables as well. So far the code did the above rewriting only for columns of base tables and temporary tables.
-
Sergei Golubchik authored
1. Test readdir_r() availability under -Werror.
2. Don't protect readdir() with mutexes; it's not needed for the way we use readdir().
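A small sketch of why point 2 holds; scan_dir is a hypothetical stand-in for the server's directory-listing code:

```cpp
// Sketch only: scan_dir is a hypothetical stand-in. POSIX forbids
// concurrent readdir() only on the *same* DIR stream; with one stream
// per thread, no mutex is needed.
#include <dirent.h>
#include <thread>

void scan_dir(const char *path) {
  DIR *dir = opendir(path);             // private, per-thread stream
  if (!dir)
    return;
  while (struct dirent *entry = readdir(dir))
    (void)entry;                        // process one directory entry
  closedir(dir);
}

int main() {
  std::thread t1(scan_dir, "/tmp");     // two threads, two DIR streams:
  std::thread t2(scan_dir, "/tmp");     // safe without any locking
  t1.join();
  t2.join();
}
```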
-
Sergei Golubchik authored
-
Sergei Golubchik authored
gcc 6 issues a warning about a suspicious construct: while(0); { some code }
-
Sergey Vojtovich authored
Enumerate plugins that use password field.
-
Daniel Black authored
-
Alexander Barkov authored
The function trans_rollback_to_savepoint(), unlike trans_savepoint(), did not allow xa_state=XA_ACTIVE, so an attempt to do ROLLBACK TO SAVEPOINT inside an XA transaction incorrectly returned the error "...command cannot be executed ... in the ACTIVE state...". Partially merging a MySQL patch: 7fb5c47390311d9b1b5367f97cb8fedd4102dd05 This is WL#7193 (Decouple THD and st_transactions)... The currently merged part includes these changes:
- Introducing st_xid_state::check_has_uncommitted_xa()
- Reusing it in both trans_rollback_to_savepoint() and trans_savepoint(), so now both allow XA_ACTIVE.
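A hedged sketch of the shape of the shared check; the enum values mirror the XA states named above, but the member names are illustrative, not the exact MySQL/MariaDB definitions:

```cpp
// Sketch only: illustrative names, not the exact MySQL/MariaDB code.
// SAVEPOINT and ROLLBACK TO SAVEPOINT are legal while the XA transaction
// is still ACTIVE; only the states where work has ended but is not yet
// committed must be rejected.
enum xa_states { XA_NOTR, XA_ACTIVE, XA_IDLE, XA_PREPARED };

struct st_xid_state {
  xa_states xa_state;

  // true = error: there is an ended-but-uncommitted XA transaction
  bool check_has_uncommitted_xa() const {
    return xa_state == XA_IDLE || xa_state == XA_PREPARED;
  }
};
```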
-
- 14 Jan, 2018 1 commit
-
-
Oleksandr Byelkin authored
The problem was in this scenario:
T1 - starts registering a query and locks the QC
T2 - starts disabling the QC and waits for the unlock
T1 - unlocks the QC
T2 - disables the QC and destroys the signals without waiting for the query unlock
T1 - a) has not yet unlocked the query in the QC and crashes on the attempt to unlock, because the QC signals are already destroyed; b) if the unlock happened before the destruction, it executes end_of_result a first time on exit after try_lock(), which sees the QC disabled and returns TRUE, but it does not reset query_cache_tls->first_query_block, which leads to a second call of end_of_result when the diagnostics arena already has an inappropriate status (not is_eof()).
The fix is:
1) wait for all queries to be unlocked, by locking and unlocking, before destroying them
2) remove query_cache_tls->first_query_block if the QC is disabled
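A minimal sketch of fix (1), the lock-then-unlock drain, in generic C++ with hypothetical names (the real code uses the query cache's own lock/signal primitives, not std::mutex):

```cpp
// Sketch only: fix (1) as a generic drain pattern with std::mutex; the
// real code uses the query cache's own lock/signal primitives.
#include <mutex>
#include <vector>

void drain_and_destroy(std::vector<std::mutex *> &query_locks) {
  for (std::mutex *m : query_locks) {
    m->lock();    // blocks until the query holding it has unlocked
    m->unlock();  // we only needed to observe the release
  }
  // Every query has provably let go; destruction is now safe.
  for (std::mutex *m : query_locks)
    delete m;
  query_locks.clear();
}
```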
-
- 13 Jan, 2018 1 commit
-
-
Sergey Vojtovich authored
Regression after 5ea28015.
-
- 12 Jan, 2018 3 commits
-
-
Igor Babaev authored
A bug in get_sort_by_table() could mislead the function setup_semijoin_dups_elimination(). As a result the optimizer could produce invalid execution plans for queries with joins, subqueries and ORDER BY when subquery predicates could be converted to semi-joins (semijoin=on).
-
Oleksandr Byelkin authored
Remove non-prepared FT functions (those belonging to removed clauses) from the list. In a later version this will be fixed by building the list during preparation.
-
Daniel Black authored
Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
-
- 11 Jan, 2018 4 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
Fix the call to correspond to the protocol of the pagecache call. Fix misleading variable names.
-
Monty authored
This bug happens when locking the same Aria "transactional" table (page format) more than once with LOCK TABLES and inserting into one of them with INSERT ... SELECT when the table is empty. Fixed by ensuring we don't use fast bulk insert if the table is opened twice with LOCK TABLES (as this changes table->s->state).
Code changes:
- Added use_count to MARIA_USED_TABLES to be able to check if a table is opened twice for a statement/LOCK TABLES
- Don't clear history or reset info->start_state if we don't have versioning. One reason for the bug was that info->start_state was set to point to different states for the two tables. If there is no versioning, info->start_state should always point to info->s->state.common.
Other things:
- Fixed some typos that were noticed while scanning the code
- More DBUG_PRINT
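A hedged sketch of the use_count idea, with hypothetical field and function names (the real MARIA_USED_TABLES structure carries more state):

```cpp
// Sketch only: hypothetical names; the real MARIA_USED_TABLES carries
// more state. Fast bulk insert mutates shared table state
// (table->s->state), so it is only safe when the statement holds the
// table exactly once.
struct maria_used_tables {
  unsigned use_count = 0;   // how often this statement/LOCK TABLES
};                          // opened the table

bool can_use_fast_bulk_insert(const maria_used_tables &t) {
  return t.use_count == 1;  // opened twice: fall back to normal insert
}
```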
-
Marko Mäkelä authored
The warning was originally added in commit c6766305 (MySQL 4.1.12, 5.0.3) to trace claimed undo log corruption that was analyzed in https://lists.mysql.com/mysql/176250 on November 9, 2004. Originally, the limit was 20,000 undo log headers or transactions, but in commit 9d6d1902 in MySQL 5.5.11 it was increased to 2,000,000. The message can be triggered when the progress of purge is prevented by a long-running transaction (or just an idle transaction whose read view was started a long time ago), by running many transactions that UPDATE or DELETE some records, then starting another transaction with a read view, and finally by executing more than 2,000,000 transactions that UPDATE or DELETE records in InnoDB tables. Finally, when the oldest long-running transaction is completed, purge would run up to the next-oldest transaction, and there would still be more than 2,000,000 transactions to purge. Because the message can be triggered when the database is obviously not corrupted, it should be removed. Heavy users of InnoDB should be monitoring the "History list length" in SHOW ENGINE INNODB STATUS; there is no need to spam the error log.
-
- 10 Jan, 2018 3 commits
-
-
Oleksandr Byelkin authored
Roll back to the most general duplicate-removing strategy in case of different strategies for one position.
-
Marko Mäkelä authored
Backport the fix from 10.0.33 to 5.5, in case someone compiles XtraDB with -DUNIV_LOG_ARCHIVE
-
Marko Mäkelä authored
The XtraDB option innodb_track_changed_pages causes the function log_group_read_log_seg() to be invoked even when recv_sys==NULL, leading to a SIGSEGV. This regression was caused by MDEV-11027 (InnoDB log recovery is too noisy).
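A hedged sketch of the guard implied by the description; the caller name track_changed_pages_step is hypothetical, and the real fix may instead avoid scheduling the call at all:

```cpp
// Sketch only: the shape of the guard implied above, with a hypothetical
// caller name; the real fix may instead avoid scheduling the call.
struct recv_sys_t { /* recovery state, opaque in this sketch */ };
recv_sys_t *recv_sys = nullptr;  // NULL once recovery has finished

void track_changed_pages_step() {
  if (recv_sys == nullptr)       // without this check: SIGSEGV inside
    return;                      // log_group_read_log_seg()
  // ... safe to read the log segment here ...
}
```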
-