- 10 Jun, 2020 3 commits
-
-
Vicențiu Ciorbaru authored
On large hard disks (> 2 TB), the plugin won't function correctly, always showing 2 TB of available space due to integer overflow. Upgrade table fields to bigint to resolve this problem.
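The 2 TB ceiling is what a signed 32-bit counter of kilobyte-sized blocks would produce; a quick arithmetic sanity check (assuming sizes are tracked in KiB, as the overflow point suggests):
```sql
-- 2^31 - 1 KiB expressed in TiB: the point where a 32-bit INT saturates.
SELECT 2147483647 * 1024 / POW(1024, 4) AS max_tib_with_32bit_int;  -- ~2.0
```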
-
Oleksandr Byelkin authored
The real problem was that the attempt to roll back changes after running out of memory in the query cache was made incorrectly and led to using uninitialized memory. (The bug has nothing to do with the resize operation; it is just a lack-of-resources error processed incorrectly.)
-
Oleksandr Byelkin authored
This reverts commit 44339123.
-
- 09 Jun, 2020 1 commit
-
-
rucha174 authored
In the case of a SELECT without tables which returns either 0 or 1 rows, JOIN::exec_inner() did not check whether the flag representing SQL_CALC_FOUND_ROWS was set, and send_records was directly assigned 0, so SELECT FOUND_ROWS() returned 0. Now it checks the flag: if it is set, send_records=1, otherwise 0. Here 1 is the number of rows that could have been sent to the client if the SELECT query had SQL_CALC_FOUND_ROWS; it is 0 when no rows were sent because the SELECT query did not have SQL_CALC_FOUND_ROWS.
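A minimal sketch of the affected pattern (the literal is arbitrary; before this fix the second statement returned 0):
```sql
SELECT SQL_CALC_FOUND_ROWS 'a' LIMIT 0;  -- table-less SELECT, no rows sent
SELECT FOUND_ROWS();                     -- expected: 1
```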
-
- 08 Jun, 2020 2 commits
-
-
Sujatha authored
MDEV-22717: Conditional jump or move depends on uninitialised value(s) in find_uniq_filename(char*, unsigned long)

Fix:
===
Initialize the 'number' variable to '0'.
-
Ian Gilfillan authored
-
- 06 Jun, 2020 1 commit
-
-
Varun Gupta authored
MDEV-22728: SIGFPE in Unique::get_cost_calc_buff_size from prepare_search_best_index_intersect on optimized builds

For a low sort_buffer_size, in the cost calculation of using the Unique object, the number of elements in the tree was evaluated to 0; make sure there is at least 1 element in the Unique tree. Also, for the function Unique::get, allocate memory for at least MERGEBUFF2+1 keys.
-
- 04 Jun, 2020 1 commit
-
-
Varun Gupta authored
For the DECIMAL[(M[,D])] data type, max_sort_length was not being honoured, which led to a buffer overflow while making the sort key. The fix is to create sort keys for decimals with at most max_sort_length bytes.

Important: the minimum value of max_sort_length has been raised to 8 (previously 4), so fixed-size data types like DOUBLE and BIGINT are not truncated for lower values of max_sort_length.
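A sketch of the behaviour with the new bounds (the table and values are hypothetical):
```sql
SET SESSION max_sort_length = 8;   -- 8 is now the minimum (was 4)
CREATE TABLE t1 (d DECIMAL(38, 10));
INSERT INTO t1 VALUES (1.5), (2.5);
SELECT d FROM t1 ORDER BY d;       -- decimal sort keys are capped at max_sort_length bytes
```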
-
- 03 Jun, 2020 2 commits
-
-
Julius Goryavsky authored
-
sjaakola authored
Backported the support for aborting and replaying stored procedures, and the fix for trigger key assignments, from the 10.4 version. Also backported two mtr tests: wsrep_sp_bf_abort and MDEV-20225.
-
- 02 Jun, 2020 1 commit
-
-
Bernard Spil authored
Fix typo: "both both" -> "both"

Closes #1560
-
- 29 May, 2020 5 commits
-
-
Sergey Vojtovich authored
Part of MDEV-19061 - table_share used for reading statistical tables is not protected
-
Sergey Vojtovich authored
Previously multiple threads were allowed to load histograms concurrently. There were no known problems caused by this, but given the amount of data races in this code, one would happen sooner or later.

To avoid a scalability bottleneck, histogram loading is protected by a per-TABLE_SHARE atomic variable. Whenever histograms were loaded by a preceding statement (hot path), a scalable load-acquire check is performed. Whenever histograms have to be loaded anew, mutual exclusion between loaders is established by the atomic variable. If histograms are being loaded concurrently, the statement waits until the load is completed.

- Table_statistics::total_hist_size moved to TABLE_STATISTICS_CB: it is only meaningful within TABLE_SHARE (not used for collected stats).
- TABLE_STATISTICS_CB::histograms_can_be_read and TABLE_STATISTICS_CB::histograms_are_read are replaced with a tri-state atomic variable.
- Simplified away alloc_histograms_for_table_share().

Note: there is still likely a data race if a thread attempts to access histogram data after it failed to load it (because of a concurrent load). It was there previously and goes beyond the scope of this effort. One way of fixing it could be reviving TABLE::histograms_are_read and adding appropriate checks wherever needed.

Part of MDEV-19061 - table_share used for reading statistical tables is not protected
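For context, these are the engine-independent (EITS) histograms; a sketch of how they are collected and then read on the path this commit protects, using a hypothetical table t1 (exact behaviour also depends on use_stat_tables):
```sql
SET SESSION histogram_size = 254;                     -- enable histogram collection
ANALYZE TABLE t1 PERSISTENT FOR ALL;                  -- writes mysql.column_stats
SET SESSION optimizer_use_condition_selectivity = 4;  -- make the optimizer use histograms
SELECT * FROM t1 WHERE a < 10;                        -- exercises the histogram load path
```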
-
Sergey Vojtovich authored
Previously multiple threads were allowed to load statistics concurrently. There were no known problems caused by this, but given the amount of data races in this code, one would happen sooner or later.

To avoid a scalability bottleneck, statistics loading is protected by a per-TABLE_SHARE atomic variable. Whenever statistics were loaded by a preceding statement (hot path), a scalable load-acquire check is performed. Whenever statistics have to be loaded anew, mutual exclusion between loaders is established by the atomic variable. If statistics are being loaded concurrently, the statement waits until the load is completed.

TABLE_STATISTICS_CB::stats_can_be_read and TABLE_STATISTICS_CB::stats_is_read are replaced with a tri-state atomic variable.

Part of MDEV-19061 - table_share used for reading statistical tables is not protected
-
Sergey Vojtovich authored
Removed redundant loops and integrated the logic into the caller instead. Unified the condition in read_statistics_for_tables(): fewer "table_share != NULL" checks, no more potential "table_share == NULL" dereferencing.

Part of MDEV-19061 - table_share used for reading statistical tables is not protected
-
Alexander Barkov authored
MDEV-22744 *SAN: sql/item_xmlfunc.cc:791:43: runtime error: downcast of address ... which does not point to an object of type 'Item_func' note: object is of type 'Item_bool' (on optimized builds)

In Item_nodeset_func_predicate::val_nodeset, args[1] is not necessarily an Item_func descendant. It can be Item_bool. Removing the wrong cast; it was not really needed anyway.
-
- 28 May, 2020 2 commits
-
-
Anel Husakovic authored
Pre-definitions are allowed for non-embedded only. Failure caught with:
```
cmake ../../10.1 -DCMAKE_BUILD_TYPE=Debug -DCMAKE_CXX_COMPILER=g++-9 -DCMAKE_C_COMPILER=gcc-9 -DWITH_EMBEDDED_SERVER=ON -DCMAKE_BUILD_TYPE=Debug -DPLUGIN_{ARCHIVE,TOKUDB,MROONGA,OQGRAPH,ROCKSDB,PERFSCHEMA,SPIDER,SPHINX}=N -DMYSQL_MAINTAINER_MODE=ON -DNOT_FOR_DISTRIBUTION=ON
```
An alternative fix would be:
```
--- a/sql/sql_acl.cc
+++ b/sql/sql_acl.cc
@@ -201,8 +201,10 @@
 LEX_STRING current_user= { C_STRING_WITH_LEN("*current_user") };
 LEX_STRING current_role= { C_STRING_WITH_LEN("*current_role") };
 LEX_STRING current_user_and_current_role= { C_STRING_WITH_LEN("*current_user_and_current_role") };
+#ifndef EMBEDDED_LIBRARY
 class ACL_USER;
 static ACL_USER *find_user_or_anon(const char *host, const char *user, const char *ip);
+#endif
```
-
Anel Husakovic authored
- `SET DEFAULT ROLE xxx [FOR yyy]` should say: "User yyy has not been granted a role xxx" if:
  - The current user (not the user `yyy` in the FOR clause) can see the role xxx. It can see the role if:
    * the role exists in `mysql.roles_mapping` (traverse the graph), or
    * the current user has read access on the `mysql.user` table - in that case it can see all roles, granted or not.
- Otherwise it should be "Invalid role specification".

In other words, it should not be possible to use `SET DEFAULT ROLE` to discover whether a specific role exists or not.
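A sketch of the intended behaviour from an unprivileged session (role and user names here are hypothetical):
```sql
SET DEFAULT ROLE secret_role;           -- not visible to me: "Invalid role specification"
SET DEFAULT ROLE granted_role;          -- granted to me: OK
SET DEFAULT ROLE granted_role FOR bob;  -- visible to me but not granted to bob:
                                        -- "User bob has not been granted a role granted_role"
```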
-
- 26 May, 2020 3 commits
-
-
Andrei Elkin authored
The immediate bug was caused by a failure to recognize the correct position to stop the slave applier run in optimistic parallel mode. The analysis unveiled the following set of issues:

1. an incorrect estimate for the event binlog position passed to is_until_satisfied;
2. the driver thread's wait for workers to complete did not account for non-group events that could be left unprocessed and thus mix up the last executed binlog group's file and position: the file remained old while the position related to the new rotated file;
3. an incorrect 'slave reached file:pos' report by the parallel slave in the error log;
4. relay-log UNTIL missed out the parallel slave branch in is_until_satisfied.

The patch addresses all of them to simplify the logic of log change notification in both the master and relay-log UNTIL cases. P.1 is addressed by passing the event into is_until_satisfied() for proper analysis by the function. P.2 is fixed by changes in handle_queued_pos_update(). P.4 required removing relay-log change notification by workers; instead the driver thread updates the notion of the current relay-log fully itself with the aid of the introduced bool Relay_log_info::until_relay_log_names_defer.

An extra printout of the requested until file:pos is arranged with --log-warnings=3.
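The affected setup, sketched (file name and position are hypothetical; run with the slave stopped):
```sql
SET GLOBAL slave_parallel_threads = 4;
SET GLOBAL slave_parallel_mode = 'optimistic';
START SLAVE UNTIL MASTER_LOG_FILE = 'master-bin.000002', MASTER_LOG_POS = 4;
```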
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Do not blindly disconnect the connection that is in WAIT_FOR, because it could happen that neither the disconnect nor the SIGNAL would be processed before RESET discarded the signal.
-
- 25 May, 2020 1 commit
-
-
Varun Gupta authored
Initialize the parameter PARAM::max_key_part when we iterate over the ranges to get estimates from EITS.
-
- 22 May, 2020 1 commit
-
-
Alexander Barkov authored
MDEV-22111 ERROR 1064 & 1033 and SIGSEGV on CREATE TABLE w/ various charsets on 10.4/5 optimized builds | Assertion `(uint) (table_check_constraints - share->check_constraints) == (uint) (share->table_check_constraints - share->field_check_constraints)' failed

The code incorrectly assumed in multiple places that TYPELIB values cannot have 0x00 bytes inside. In fact they can:

  CREATE TABLE t1 (a ENUM(0x61, 0x0062) CHARACTER SET BINARY);

Note, the TYPELIB value encoding used in FRM is ambiguous about 0x00, so this fix is partial. It fixes 0x00 bytes in many (but not all) places:

- In the middle or at the end of a value:
  CREATE TABLE t1 (a ENUM(0x6100) ...);
  CREATE TABLE t1 (a ENUM(0x610062) ...);
- At the beginning of the first value:
  CREATE TABLE t1 (a ENUM(0x0061));
  CREATE TABLE t1 (a ENUM(0x0061), b ENUM('b'));
- At the beginning of the second (and following) value of the *last* ENUM/SET in the table:
  CREATE TABLE t1 (a ENUM('a',0x0061));
  CREATE TABLE t1 (a ENUM('a'), b ENUM('b',0x0061));

However, it does not fix 0x00 when the byte is at the beginning of a value of a non-last ENUM/SET; this causes an error:

  CREATE TABLE t1 (a ENUM('a',0x0061), b ENUM('b'));
  ERROR 1033 (HY000): Incorrect information in file: './test/t1.frm'

This is an ambiguous case and will be fixed separately; we need a new TYPELIB encoding to fix this.

Details:

- unireg.cc: the function pack_header() incorrectly used strlen() to detect a TYPELIB value length. Adding a new function typelib_values_packed_length(), which uses TYPELIB::type_lengths[n] to detect the n-th value length, and reusing the new function in pack_header() and packed_fields_length().
- table.cc: fix_type_pointers() assumed in multiple places that values cannot have 0x00 inside and used strlen(TYPELIB::type_names[n]) to set the corresponding TYPELIB::type_lengths[n]. Also, fix_type_pointers() did not check the encoded data for consistency. Rewriting the fix_type_pointers() code to populate TYPELIB::type_names[n] and TYPELIB::type_lengths[n] at the same time, so no additional loop with strlen() is needed any more. Adding many data consistency tests. Fixing the main loop in fix_type_pointers() to use memchr() instead of strchr() to handle 0x00 properly. Fixing create_key_infos() to return the result in a LEX_STRING rather than in a char*.
-
- 20 May, 2020 3 commits
-
-
Sujatha authored
MDEV-22451: SIGSEGV in __memmove_avx_unaligned_erms/memcpy from _my_b_write on CREATE after RESET MASTER

Analysis:
========
The RESET MASTER TO # command deletes all binary log files listed in the index file, resets the binary log index file to be empty, and creates a new binary log with number #. When the user-provided binary log number is greater than the maximum allowed value '2147483647', the server fails to generate a new binary log. The RESET MASTER statement marks the binlog closure status as 'LOG_CLOSE_TO_BE_OPENED' and exits. Statements which follow RESET MASTER and try to write to the binary log find log_state != LOG_CLOSED, proceed to write to the binary log cache, and this results in a crash.

Fix:
===
During MYSQL_BIN_LOG open, if generation of the new binary log name fails, the "log_state" needs to be marked as "LOG_CLOSED". With this, further statements will find the binary log closed and will skip writing to it.
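A sketch of the failing sequence (any binlogged statement after the failed RESET could crash before the fix):
```sql
RESET MASTER TO 2147483648;  -- above the 2147483647 limit: no new binlog is created
CREATE TABLE t1 (a INT);     -- previously crashed while writing to the binlog cache
```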
-
Rasmus Johansson authored
-
Marko Mäkelä authored
For no good reason, innodb_encryption_threads was limited to 4,294,967,295. Unsurprisingly, the server would crash if such an insane value was specified. Let us limit the maximum to 255. The encryption threads are not doing much useful work: they are basically only dirtying pages by performing dummy writes via the redo log. The encryption key rotation or the in-place addition or removal of encryption will take place in the page cleaner. In a quick test on a 20-core CPU (40 threads in total), the sweet spot on an otherwise idle server seemed to be innodb_encryption_threads=16 for the test encryption.encrypt_and_grep. The new limit of 255 should be more than enough for even bigger servers.
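The variable is dynamic, so the new cap is easy to observe (a minimal sketch; out-of-range values are adjusted with a truncation warning):
```sql
SET GLOBAL innodb_encryption_threads = 16;   -- the sweet spot observed above
SET GLOBAL innodb_encryption_threads = 300;  -- adjusted down to the new maximum
SELECT @@innodb_encryption_threads;          -- 255
```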
-
- 19 May, 2020 3 commits
-
-
Andrei Elkin authored
This is a new test from upstream that did not expect the correct value of the command slot of the Dump thread when the latter gets killed. The test is made to expect the "Killed" string as the command in SHOW PROCESSLIST, as it is supposed to be when a thread gets killed.
-
Rasmus Johansson authored
-
Rasmus Johansson authored
-
- 18 May, 2020 2 commits
-
-
Andrei Elkin authored
The assert was caused by early cleanup of a user variable participating in BINLOG @var,@var, where it plays twice. That was unexpected by the base code and cleared its value prematurely. Fixed by relocating the user variable destruction to after the operations with its value are over.
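The pattern in question, sketched with a hypothetical fragment variable (the base64 payload is elided; mysqlbinlog emits large events as fragments executed via a two-argument BINLOG statement):
```sql
SET @frag = '...';    -- base64-encoded event data (elided)
BINLOG @frag, @frag;  -- the same variable passed twice: it must not be
                      -- destroyed after its first use
```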
-
Daniel Black authored
localhost, depending on the platform, can resolve to any 127.0.0.0/8 address.
-
- 15 May, 2020 1 commit
-
-
Alexander Barkov authored
The code erroneously allowed both:

  INSERT INTO t1 (vcol) VALUES (DEFAULT);
  INSERT INTO t1 (vcol) VALUES (DEFAULT(non_virtual_column));

The former is OK, but the latter is not. Adding a new virtual method in Item:

  virtual bool vcol_assignment_allowed_value() const { return false; }

Item_null, Item_param and Item_default_value override it. Item_default_value overrides it in the way to:
- allow DEFAULT
- disallow DEFAULT(col)
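A runnable sketch of the two statements, assuming a hypothetical table with a virtual column:
```sql
CREATE TABLE t1 (a INT, vcol INT GENERATED ALWAYS AS (a + 1) VIRTUAL);
INSERT INTO t1 (vcol) VALUES (DEFAULT);     -- OK: keeps the generated value
INSERT INTO t1 (vcol) VALUES (DEFAULT(a));  -- now rejected
```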
-
- 14 May, 2020 4 commits
-
-
Varun Gupta authored
For the case when the optimizer does the IN-EXISTS transformation, the equality condition is injected into the WHERE or HAVING clause of the subquery. If the select list of the subquery has a reference to the parent select, make sure to use the reference and not the original item.
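For context, the shape of the transformation (a conceptual sketch with hypothetical tables; with a GROUP BY the injected equality goes into HAVING):
```sql
SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2 GROUP BY t2.b);
-- is rewritten roughly as:
SELECT * FROM t1 WHERE EXISTS
  (SELECT 1 FROM t2 GROUP BY t2.b HAVING t1.a = t2.b);
```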
-
Marko Mäkelä authored
On a checksum failure of a ROW_FORMAT=COMPRESSED page, buf_LRU_free_one_page() would invoke buf_LRU_block_remove_hashed(), which would read the uncompressed page frame even though it was not initialized. With bad enough luck, fil_page_get_type(page) could return an unrecognized value and cause the server to abort.

buf_page_io_complete(): On the corruption of a ROW_FORMAT=COMPRESSED page, zerofill the uncompressed page frame.
-
Alexander Barkov authored
TRUNCATE(decimal_5_5) erroneously tried to create a DECIMAL(0,0) column. Create a DECIMAL(1,0) column instead.
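A sketch of the affected shape (hypothetical table; the derived column type is what matters):
```sql
CREATE TABLE t1 (d DECIMAL(5,5));  -- all five digits are fractional
CREATE TABLE t2 AS SELECT TRUNCATE(d, 0) AS td FROM t1;
-- td is created as DECIMAL(1,0) instead of the invalid DECIMAL(0,0)
```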
-
Alexander Barkov authored
MDEV-22503 MDB limits DECIMAL column precision to 16 doing CTAS with floor/ceil over DECIMAL(X,Y) where X > 16

The DECIMAL data type branch in Item_func_int_val::fix_length_and_dec() incorrectly used DOUBLE-style length calculation, which resulted in a smaller data type than the actual result of FLOOR()/CEIL() needs.
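A sketch of the CTAS shape from the report (table and column names are hypothetical):
```sql
CREATE TABLE t1 (d DECIMAL(30, 6));
CREATE TABLE t2 AS SELECT FLOOR(d) AS fd FROM t1;
-- fd must keep all 24 integer digits, not be capped at the ~16 digits
-- that a DOUBLE-style length calculation yields
```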
-
- 11 May, 2020 3 commits
-
-
Oleksandr Byelkin authored
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
- 08 May, 2020 1 commit
-
-
Marko Mäkelä authored
Let us limit the maximum value of the debug parameter innodb_data_file_size to 256 MiB. It is only being used in the test innodb.log_data_file_size, and the size of the system tablespace should never exceed some 70 MiB in ./mtr. Thus, 256 MiB should be a reasonable limit. The fact that negative values that are passed to unsigned parameters wrap around to the maximum value appears to be a regression due to commit 18ef02b0 and has been filed as bug MDEV-22219.
-