- 05 Oct, 2023 3 commits
-
-
Vlad Lesin authored
MDEV-30658 lock_row_lock_current_waits counter in information_schema.innodb_metrics may become negative The MONITOR_OVLD_ROW_LOCK_CURRENT_WAIT monitor should have the MONITOR_DISPLAY_CURRENT flag set in its definition, as it shows the current state and does not accumulate anything. Reviewed by: Marko Mäkelä
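A minimal SQL sketch of how the gauge can be observed; the metric name follows the MDEV title, and enabling it via innodb_monitor_enable is an assumption about the test setup:

    SET GLOBAL innodb_monitor_enable = 'lock_row_lock_current_waits';
    -- The COUNT column reflects the current number of waiting row-lock requests;
    -- as a current-state value it must never go negative.
    SELECT name, `count`
    FROM information_schema.innodb_metrics
    WHERE name = 'lock_row_lock_current_waits';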
-
Sergei Petrunia authored
Fix order_by_optimizer_innodb and order_by_innodb tests. The problem was that the query could be run before InnoDB was ready to provide a realistic statistic for #records in the table. It provided a number that was too low, which caused the optimizer to decide that the range access plan wasn't advantageous and discard it.
-
Alexander Barkov authored
-
- 04 Oct, 2023 2 commits
-
-
Alexander Barkov authored
MDEV-32275 getting error 'Illegal parameter data types row and bigint for operation '+' ' when using ITERATE in a FOR..DO

An "ITERATE innerLoop" did not work properly inside a WHILE loop, which itself is inside an outer FOR loop:

  outerLoop:
  FOR ...
    innerLoop:
    WHILE ...
      ITERATE innerLoop;
      ...
    END WHILE;
    ...
  END FOR;

It erroneously generated an integer increment code for the outer FOR loop. There were two problems:
1. "ITERATE innerLoop" worked like "ITERATE outerLoop".
2. It was always an integer increment, even in case of FOR cursor loops.

Background:
- A FOR loop automatically creates a dedicated sp_pcontext stack entry, to put the iteration and bound variables on it.
- Other loop types (LOOP, WHILE, REPEAT) do not generate a dedicated stack entry.

The old code erroneously assumed that sp_pcontext::m_for_loop either describes the most inner loop (in case the inner loop is FOR), or is empty (in case the inner loop is not FOR). But in fact, sp_pcontext::m_for_loop is never empty inside a FOR loop: it describes the closest FOR loop, even if this FOR loop has nested non-FOR loops inside. So when we're near the ITERATE statement in the above script, sp_pcontext::m_for_loop is not empty - it stores information about the FOR loop labeled as "outerLoop:".

Fix:
- Adding a new member sp_pcontext::Lex_for_loop::m_start_label, to remember the explicit or the auto-generated label corresponding to the start of the FOR body. It's used during generation of "ITERATE loop_label" code to check whether "loop_label" belongs to the current FOR loop pointed to by sp_pcontext::m_for_loop, or belongs to a non-FOR nested loop.
- Adding LEX methods sp_for_loop_intrange_iterate() and sp_for_loop_cursor_iterate() to reuse the code between the methods handling:
  * ITERATE
  * END FOR
- Adding a test for Lex_for_loop::is_for_loop_cursor() and generating code for either a cursor fetch or an integer increment. Before this change, it always erroneously generated the integer increment version.
- Cleanup: Initialize Lex_for_loop_st::m_cursor_offset inside Lex_for_loop_st::init(), to avoid uninitialized members.
- Cleanup: Removing a redundant method Lex_for_loop_st::init(const Lex_for_loop_st &other). Using Lex_for_loop_st::operator=(const Lex_for_loop_st &other) instead.
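A minimal, runnable sketch of the nested-loop shape described above (procedure and variable names are illustrative, not from the patch); the ITERATE must continue the inner WHILE loop rather than advance the outer FOR counter:

    DELIMITER $$
    CREATE PROCEDURE p1()
    BEGIN
      DECLARE v INT DEFAULT 0;
      outerLoop:
      FOR i IN 1..3 DO
        innerLoop:
        WHILE v < 10 DO
          SET v = v + 1;
          IF v MOD 2 = 0 THEN
            ITERATE innerLoop;  -- must jump to the next WHILE iteration, not increment i
          END IF;
        END WHILE innerLoop;
      END FOR outerLoop;
      SELECT v;
    END$$
    DELIMITER ;
    CALL p1();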
-
Alexander Barkov authored
Problem:

Item_func_date_format::val_str() and make_date_time() did not take into account that the format string and the result string (separately or at the same time) can be of a tricky character set like UCS2, UTF16, UTF32. As a result, DATE_FORMAT() could generate an ill-formed result which crashed on DBUG_ASSERTs testing well-formedness in other parts of the code.

Fix:

1. class String changes

Removing String::append_with_prefill(). It was not compatible with tricky character sets. Also it was inconvenient to use and required too much duplicate code on the caller side. Adding String::append_zerofill() instead. It's compatible with tricky character sets and is easier to use.

Adding helper methods Static_binary_string::q_append_wc() and String::append_wc(), to append a single wide character (a Unicode code point in my_wc_t).

2. storage/spider changes

Removing spider_string::append_with_prefill(). It used String::append_with_prefill() inside, but it was unused itself.

3. Changing tricky-charset-incompatible code pieces in make_date_time() to compatible replacements:
- Fixing the loop scanning the format string to iterate in terms of Unicode code points (using mb_wc()) rather than in terms of "char" items.
- Using append_wc(my_wc_t) instead of append(char) to append a single character to the result string.
- Using append_zerofill() instead of append_with_prefill() to append date/time numeric components to the result string.
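A minimal SQL illustration of the tricky-charset scenario (the literal and the conversion are illustrative, not the exact test case): the format string is converted to UCS2, and DATE_FORMAT() must still produce a well-formed result.

    SET NAMES utf8mb4;
    -- Format string in UCS2 (a multi-byte-unit character set).
    SELECT DATE_FORMAT(TIMESTAMP'2023-01-01 10:20:30',
                       CONVERT('%W %M %Y %H:%i:%s' USING ucs2)) AS formatted;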
-
- 03 Oct, 2023 1 commit
-
-
Sergei Petrunia authored
Add FORCE INDEX and ANALYZE TABLE PERSISTENT FOR ALL to make the plans stable.
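A sketch of the stabilization pattern (table and index names are placeholders): collect persistent engine-independent statistics first, then pin the index choice so the plan cannot flip.

    ANALYZE TABLE t1 PERSISTENT FOR ALL;
    SELECT * FROM t1 FORCE INDEX (idx_a)
    WHERE a BETWEEN 10 AND 20
    ORDER BY a;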
-
- 29 Sep, 2023 3 commits
-
-
Jan Lindström authored
At the moment we cannot support wsrep_forced_binlog_format=[MIXED|STATEMENT] during CREATE TABLE AS SELECT. The statement will use ROW instead and give a warning. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
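A hedged sketch of the behaviour described (variable value and table names are illustrative, and a Galera-enabled server is assumed): with a forced STATEMENT format, CTAS falls back to ROW and emits a warning.

    SET GLOBAL wsrep_forced_binlog_format = STATEMENT;
    CREATE TABLE t2 AS SELECT * FROM t1;
    SHOW WARNINGS;  -- expected: a note that ROW format was used for this statement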
-
Sergei Golubchik authored
otherwise, e.g. ./mtr main.mysql_install_db_win_admin,innodb results in a sporadic "mysql-test-run: *** ERROR: Could not run main.mysql_install_db_win_admin with 'innodb' combination(s)", depending on whether it processes the skip (not Windows admin) or the innodb.combinations first (if the skip is processed first, the innodb combination wasn't, making the later code think that the test doesn't have an innodb combination).
-
Vladislav Vaintroub authored
The error was specific to the threadpool/compressed protocol. set_thd_idle() set the socket state to idle twice, causing an assert failure. This happens if there is unread compressed data on the connection after the query has finished. On the protocol level, this means a single compression packet contains multiple command packets.
-
- 28 Sep, 2023 3 commits
-
-
Jan Lindström authored
state() == s_prepared || state() == s_must_abort || state() == s_aborting || state() == s_cert_failed || state() == s_must_replay' failed When the applier tries to execute a write rows event, it finds out in table_def::compatible_with that the value is not compatible, sets the error and thd->is_slave_error, but thd->is_error() is false. Later, in rpl_group_info::slave_close_thread_tables, we commit the statement. This is bad for Galera because later in apply_write_set we notice that the event apply was not successful and try to roll back the transaction, but the wsrep transaction is already in s_committed state. This is fixed in rpl_group_info::slave_close_thread_tables so that in the Galera case we roll back the statement if thd->is_slave_error or thd->is_error() is set. Then later we can roll back the wsrep transaction. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Jan Lindström authored
The problem is that the mysql.galera_slave_pos table is replicated, thus it should be InnoDB to allow rolling back in case of replay. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Oleksandr Byelkin authored
Fix Item_func_match to avoid removing Item_direct_ref_to_item from item tree via real_item() call.
-
- 27 Sep, 2023 5 commits
-
-
Igor Babaev authored
The function setup_windows() called at the prepare phase of processing a select builds a list of all window specifications used in the select. This list is built on the statement memory and it must be done only once. Approved by Oleksandr Byelkin <sanja@mariadb.com>
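A hedged illustration of why the "only once" matters (query and table names are assumptions): on prepared-statement re-execution the window specification list built at the first prepare must be reused, not rebuilt on statement memory.

    PREPARE stmt FROM
      'SELECT a, SUM(b) OVER w AS s FROM t1 WINDOW w AS (PARTITION BY a ORDER BY b)';
    EXECUTE stmt;
    EXECUTE stmt;  -- second execution must not rebuild the window specification list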
-
Sergei Golubchik authored
and make it print both versions it compares, to be able to see if it starts misbehaving again
-
Oleksandr Byelkin authored
Pass the name separately for the sequence check, because a sequence can also be created with CREATE TABLE (see https://mariadb.com/kb/en/create-table/#sequence ).
-
Oleksandr Byelkin authored
Disable view protocol for the MDEV-31742 test because it makes the statistics differ or be wrong (without a service connection).
-
Lena Startseva authored
MDEV-31455: main.events_stress or events.events_stress fails with view-protocol
MDEV-31457: main.delete_use_source fails (hangs) with view-protocol
Fixed tests:
main.sum_distinct-big, main.delete_use_source - disabled view-protocol for some cases because they use transactions without autocommit
main.events_stress, main.merge-big - disabled the service connection for some queries since it is necessary that the SELECT query run in the same session
-
- 26 Sep, 2023 6 commits
-
-
Igor Babaev authored
With this patch st_select_lex::ref_pointer_array is never re-allocated. Approved by Oleksandr Byelkin <sanja@mariadb.com>
-
Oleksandr Byelkin authored
Do not manipulate an empty dynamic column; just return an empty dynamic column from the beginning (it is also an optimisation).
-
Daniel Black authored
.snapshot exists as a directory on NetApp storage and should not be copied during the sst process. Thanks Daniel Czadek for the bug report. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Jan Lindström authored
MDEV-31651 : Assertion wsrep_thd_is_applying(thd) && !wsrep_thd_is_local_toi(thd) in wsrep_ignored_error_code The problem was that with the BINLOG statement you can execute binlog events on the master as well (not only in the applier). The fix removes the too strict part, wsrep_thd_is_applying, from the assertion. Note that the actual event in the test is intentionally corrupted, to test whether this error is ignored. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
Jan Lindström authored
|| state() == s_prepared || state() == s_committing || state() == s_must_abort || state() == s_replaying' failed. CACHE INDEX and LOAD INDEX INTO CACHE are local operations. Therefore, do not replicate them with Galera. Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
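For reference, a minimal sketch of the statements in question (key cache and table names are placeholders); they act only on the local server, which is why Galera should not replicate them.

    SET GLOBAL hot_cache.key_buffer_size = 16*1024*1024;
    CACHE INDEX t1 IN hot_cache;      -- assign t1's indexes to a named key cache (MyISAM/Aria)
    LOAD INDEX INTO CACHE t1;         -- preload t1's indexes into their key cache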
-
Thirunarayanan Balathandayuthapani authored
Problem:
=========
During commit, the server calls prepare_commit_versioned to determine whether the transaction modified system-versioned data. Due to the binlog_do_db option, we disable the binlog for the statement. But prepare_commit_versioned() is being called only when the binlog is enabled for the statement.

Fix:
===
prepare_commit_versioned() should happen irrespective of the binlog state. So if the server has any read-write operation, then we should call prepare_commit_versioned().
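A hedged sketch of the scenario (database, table, and option values are assumptions): with --binlog-do-db pointing at a different database, the statement is not binlogged, yet the versioned-commit bookkeeping must still run.

    -- server started with --log-bin --binlog-do-db=other_db (assumption)
    CREATE TABLE db1.t1 (a INT) WITH SYSTEM VERSIONING;
    INSERT INTO db1.t1 VALUES (1);
    UPDATE db1.t1 SET a = 2;  -- not written to the binlog, but still needs prepare_commit_versioned()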
-
- 25 Sep, 2023 3 commits
-
-
Daniel Black authored
There are many filesystem-related errors that can occur with MariaBackup. These are already output to stderr with a good description of the error. Many of these are permission or resource (file descriptor) limit errors, where the assertion and resulting core crash don't offer developers anything more than the log message. To the user, assertions and core crashes come across as poor error handling. As such, we return an error and handle it all the way up the stack.
-
Vlad Lesin authored
The fix is to return a 3-state value from the Range_rowid_filter::build() call:
1. The filter was built successfully;
2. The filter was not built, but the error was not fatal, i.e. there is no need to roll back the transaction. For example, if the size of the container to store row ids is not enough;
3. The filter was not built because of a fatal error, for example, a deadlock or a lock wait timeout from the storage engine. In this case we should stop query plan execution and roll back the transaction.
Reviewed by: Sergey Petrunya
-
Yuchen Pei authored
Spider is part of the server, and there's no need to check the version. All spider plugins are uninstalled in clean_up_spider.inc. DROP SERVER IF EXISTS makes things easier.
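A small illustration of the DROP SERVER IF EXISTS convenience in test cleanup (server name and options are placeholders):

    DROP SERVER IF EXISTS srv;   -- no error even if an earlier test never created it
    CREATE SERVER srv FOREIGN DATA WRAPPER mysql
      OPTIONS (HOST '127.0.0.1', DATABASE 'test', USER 'root', PORT 3306);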
-
- 24 Sep, 2023 1 commit
-
-
Igor Babaev authored
Memory for type holders of the columns of a table value constructor must be allocated only once. Approved by Oleksandr Byelkin <sanja@mariadb.com>
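A hedged illustration of the "only once" requirement (the statement text is an assumption): a table value constructor re-executed as a prepared statement must reuse the type holders allocated the first time.

    PREPARE stmt FROM 'VALUES (1, ''a''), (2, ''bb'')';
    EXECUTE stmt;
    EXECUTE stmt;   -- re-execution must not allocate the column type holders again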
-
- 22 Sep, 2023 3 commits
-
-
Vladislav Vaintroub authored
is_file_on_ssd() is more expensive than it should be. It caches the results by volume name, but still calls GetVolumePathName() every time, which, as procmon shows, opens multiple directories in the filesystem hierarchy (db directory, datadir, and all ancestors). The fix is to cache SSD status by volume serial ID, which is cheap to retrieve with GetFileInformationByHandleEx().
-
Oleksandr Byelkin authored
The counter is global, so we do not need to add the backup to it if we do not zero it after taking the backup.
-
Oleksandr Byelkin authored
Fix row counters to be able to get any possible value.
-
- 20 Sep, 2023 2 commits
-
-
Oleksandr Byelkin authored
-
Oleg Smirnov authored
MDEV-29731 Assertion failure when HAVING in a correlated subquery references columns in the outer query
When resolving a column from the HAVING clause, a new Item_field object may be created inside Item_ref::fix_fields(). But the object is created with an empty name resolution context, which then leads to a debug assertion failure during Item_field::fix_fields(). The solution is to pass the correct name resolution context when creating the Item_field object. Reviewer: Oleksandr Byelkin (sanja@mariadb.com)
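A hedged sketch of the query shape described (tables and columns are placeholders, not the original test case): the HAVING clause of the correlated subquery references a column of the outer query.

    CREATE TABLE t1 (a INT, b INT);
    CREATE TABLE t2 (c INT, d INT);
    SELECT a FROM t1
    WHERE a IN (SELECT c FROM t2 GROUP BY c HAVING t1.b > MAX(t2.d));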
-
- 19 Sep, 2023 5 commits
-
-
Daniel Black authored
The table structure from MySQL-5.1.14 is:

CREATE TABLE `slow_log` (
  `start_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `query_time` time NOT NULL,
  `lock_time` time NOT NULL,
  `rows_sent` int(11) NOT NULL,
  `rows_examined` int(11) NOT NULL,
  `db` varchar(512) DEFAULT NULL,
  `last_insert_id` int(11) DEFAULT NULL,
  `insert_id` int(11) DEFAULT NULL,
  `server_id` int(11) DEFAULT NULL,
  `sql_text` mediumtext NOT NULL
) ENGINE=CSV DEFAULT CHARSET=utf8 COMMENT='Slow log'

Even as far back as MySQL-5.5.40 this table could be created with NULLs, which are not permitted in the CSV table type, but it seems they were allowed at some time. As the first part of mariadb-upgrade adds the column thread_id without correcting the NULLable status of the existing columns, it fails.

We reorder the sql statements in the upgrade as follows:

ALTER TABLE slow_log MODIFY {columns} {new types} NOT NULL, ...

As thread_id doesn't exist in the above statement, it was removed from the first ALTER TABLE statement to prevent failure. The previous ALTER TABLE slow_log statements were moved later, appending thread_id and rows_affected, and they also enforce the type of thread_id if it was incorrect, like the now-first ALTER TABLE slow_log statement used to do.
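A rough sketch of the reordering idea, not the exact statements in the upgrade script (the column list is abridged and the types are illustrative): first fix the NULLability of the pre-existing columns, then add or fix thread_id and rows_affected in a later statement.

    -- step 1: make the pre-5.5 columns NOT NULL (thread_id intentionally absent here)
    ALTER TABLE mysql.slow_log
      MODIFY db varchar(512) NOT NULL,
      MODIFY last_insert_id int(11) NOT NULL,
      MODIFY insert_id int(11) NOT NULL,
      MODIFY server_id int(10) unsigned NOT NULL;
    -- step 2 (later in the script): add the newer columns and enforce thread_id's type
    ALTER TABLE mysql.slow_log
      ADD COLUMN IF NOT EXISTS thread_id bigint(21) unsigned NOT NULL,
      ADD COLUMN IF NOT EXISTS rows_affected int(11) NOT NULL;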
-
Daniel Black authored
-
Marko Mäkelä authored
According to commit ea568419 the stack normally grows downwards, except on HP PA-RISC where it grows upwards. Because determining the stack direction is not possible in a portable way, let us determine the default STACK_DIRECTION in CMake based on the ISA. On clang 16.0.6 running on and targeting AMD64, STACK_DIRECTION=1 is being incorrectly detected, causing the failure of a number of tests.
-
Marko Mäkelä authored
row_vers_vc_matches_cluster(): Invoke dtype_get_at_most_n_mbchars() to extract the correct number of bytes corresponding to the number of characters in a virtual column prefix index, just like we do in row_sel_sec_rec_is_for_clust_rec(). The test case would occasionally reproduce the failure when this fix is not present.
-
Dmitry Shulga authored
On creation of a VIEW that depends on a stored routine, an instance of the class Item_func_sp is allocated on a memory root of the SP statement. It happens since mysql_make_view() calls the method THD::activate_stmt_arena_if_needed() before parsing the definition of the view.

On the other hand, when sp_head's rcontext is created, an instance of the class Field referenced by the data member Item_func_sp::result_field is allocated on the Item_func_sp's Query_arena (call arena) that is set up inside the method Item_sp::execute_impl just before calling the method sp_head::execute_function().

On return from the method sp_head::execute_function(), all items allocated on the Item_func_sp's Query_arena are released and its memory root is freed (see the implementation of the method Item_sp::execute_impl). As a consequence, the pointer Item_func_sp::result_field references deallocated memory. Later, when the method sp_head::execute cleans up items allocated for the just executed SP instruction, the method Item_func_sp::cleanup is invoked and tries to delete an object referenced by the data member Item_func_sp::result_field that points to already deallocated memory, which results in abnormal server termination.

To fix the issue, the current active arena shouldn't be switched to a statement arena inside the function mysql_make_view() that is invoked indirectly by the method sp_head::rcontext_create. It is implemented by introducing the new Query_arena state STMT_SP_QUERY_ARGUMENTS, which is set when an explicit Query_arena is created for placing SP arguments and other caller-side items used during SP execution. Then the method THD::activate_stmt_arena_if_needed() checks the Query_arena's state and returns immediately without switching to the statement's arena.
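A hedged sketch of one way (an assumption, not the original test case) a view that depends on a stored function can be opened while another routine's runtime context is being created; all names are placeholders.

    CREATE FUNCTION f1() RETURNS INT DETERMINISTIC RETURN 1;
    CREATE VIEW v1 AS SELECT f1() AS c;
    DELIMITER $$
    CREATE FUNCTION f2() RETURNS INT READS SQL DATA
    BEGIN
      DECLARE r ROW TYPE OF v1;  -- resolving the anchored type opens v1 during rcontext creation
      RETURN (SELECT MAX(c) FROM v1);
    END$$
    DELIMITER ;
    SELECT f2();
    SELECT f2();  -- a later invocation is where a dangling result_field could be touched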
-
- 18 Sep, 2023 1 commit
-
-
Daniel Black authored
mariadb-install-db --auth-root-authentication-method=normal created 4 root accounts by default, but only two of these had the PROXY privilege granted. mariadb-install-db (default option --auth-root-authentication-method=socket) as a non-root user also didn't grant the PROXY privilege to the created nonroot@localhost user.

To fix this, in mysql_system_tables_data.sql, we re-use tmp_user_nopasswd as this contains the list of all root users.

REPLACE INTO tmp_proxies_priv SELECT @current_hostname, IFNULL(@auth_root_socket, 'root') creates the $user@$current_host but will not error if @auth_root_socket is null. Note: @current_hostname lines are filtered out with --cross-bootstrap in mariadb-install-db, so it was needed to include this expression for consistency. Likewise, the existing mysql_system_tables.sql is used to create the $user@localhost proxies_priv.

Test cases roles.acl_statistics and perfschema.privilege_table_io depend on the number of proxy users.

After:

--auth-root-authentication-method=normal:

MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615} |
| bark | root | {"access":18446744073709551615} |
| 127.0.0.1 | root | {"access":18446744073709551615} |
| ::1 | root | {"access":18446744073709551615} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------+
7 rows in set (0.001 sec)

MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:12:24 |
| 127.0.0.1 | root | | | 1 | | 2023-07-10 12:12:24 |
| ::1 | root | | | 1 | | 2023-07-10 12:12:24 |
| bark | root | | | 1 | | 2023-07-10 12:12:24 |
+-----------+------+--------------+--------------+------------+---------+---------------------+

--auth-root-authentication-method=socket:

MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:11:55 |
| localhost | dan | | | 1 | | 2023-07-10 12:11:55 |
| bark | dan | | | 1 | | 2023-07-10 12:11:55 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
3 rows in set (0.017 sec)

MariaDB [mysql]> select * from global_priv;
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| Host | User | Priv |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | dan | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+--------------------------------------------------------------------------------------------------------------------------------------------+
5 rows in set (0.000 sec)

MariaDB [mysql]> show grants;
+----------------------------------------------------------------------------------------------------------------------------------------+
| Grants for dan@localhost |
+----------------------------------------------------------------------------------------------------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket WITH GRANT OPTION |
| GRANT PROXY ON ''@'%' TO 'dan'@'localhost' WITH GRANT OPTION |
+----------------------------------------------------------------------------------------------------------------------------------------+
-
- 15 Sep, 2023 2 commits
-
-
Yuchen Pei authored
- Removed some redundant hint-related string literals from spd_db_conn.cc
- Clean up SPIDER_PARAM_*_[CHAR]LEN[S]
- Adding tests covering monitoring_kind=2. What it does is that it reads from mysql.spider_link_mon_servers with matching db_name, table_name, link_id, and does not do anything about that... How monitoring_* can be useful: in the deprecated spider high availability feature, when one remote fails, spider will try another remote, which apparently makes use of these table parameters.
- A test covering the query_cache_sync table param. Some further tests on some spider table params.
- Wrapper should be case insensitive.
- Code documentation on spider priority binary tree.
- Add an assertion that static_key_cardinality is always -1.
All tests pass still.
-
Yuchen Pei authored
This helps eliminate "server exists" failures. Also, spider/bugfix.mdev_29676, when enabled after MDEV-29525 is pushed, will fail because we have not --recorded the result. But the failure will only emerge when working on MDEV-31138, where we manually re-enable this test, so let's worry about that then.
-