- 03 Oct, 2023 3 commits
-
-
Monty authored
- hostnames in hostname_cache added
- Some Galera (WSREP) allocations
- Table caches
-
Monty authored
This makes it easier to see how much memory the MariaDB server has allocated (for all memory allocations that go through mysys).
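As a rough illustration of the accounting idea (hypothetical names, not the actual mysys code), every allocation that goes through a common wrapper can also bump one global counter that the server can later report:

```cpp
#include <atomic>
#include <cstdlib>

// Hypothetical sketch of per-process allocation accounting; the real code
// lives in mysys and is more involved (per-thread deltas, flags, etc.).
static std::atomic<long long> total_allocated{0};

void *tracked_malloc(std::size_t size)
{
  void *ptr = std::malloc(size);
  if (ptr)
    total_allocated.fetch_add(static_cast<long long>(size),
                              std::memory_order_relaxed);
  return ptr;
}

long long tracked_total_bytes()
{
  return total_allocated.load(std::memory_order_relaxed);
}
```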
-
Monty authored
Example of what causes the problem:
T1: ANALYZE TABLE starts to collect statistics
T2: ALTER TABLE starts by deleting statistics for all changed fields, then creates a temp table and copies data to it.
T1: ANALYZE ends and writes to the statistics tables.
T2: ALTER TABLE renames the temp table in place of the old table.
Now the statistics from ANALYZE match the old, deleted table.

Fixed by waiting to delete old statistics until ALTER TABLE is the only one using the old table, and by ensuring that rename of columns can handle swapping of column names.

rename_columns_in_stat_table() (formerly rename_column_in_stat_tables()) now takes a list of columns to rename. It uses the following algorithm to update column_stats so that it can handle circular renames:

- While there are columns to be renamed, and this is the first loop or the last rename loop changed something:
  - Loop over all columns to be renamed:
    - Change the column name in column_stat.
    - If this fails because of a duplicate key:
      - If this is the first change attempt for this column:
        - Change the column name to a temporary column name.
        - If there was a conflicting row, replace it with the current row.
      - else:
        - Remove the entry from the column list.
- Loop over all remaining columns in the list:
  - Remove the conflicting row.
  - Change the column from the temporary name to the final name in column_stat.

Other things:
- Don't flush tables for every operation. Only flush when all updates are done.
- Rename of columns was not handled in case of ALGORITHM=copy (old bug).
- Fixed that we do not collect statistics for hidden hash columns used by UNIQUE constraints on long values.
- Fixed that we do not collect statistics for blob columns referred to by generated virtual columns. This was achieved by storing the fields for which we want to have statistics in table->has_value_set instead of in table->read_set.
- Rename of indexes was not handled for persistent statistics. This is now handled similarly to rename of columns: renamed indexes are now stored in 'rename_stat_indexes' and handled in Alter_info::delete_statistics() together with dropped indexes.
- ALTER TABLE .. ADD INDEX may, instead of creating a new index, rename an existing generated foreign key index. This was not reflected in the index_stats table because it was handled in mysql_prepare_create_table() instead of in the mysql_alter() code. Fixed by adding a call in mysql_prepare_create_table() to drop the changed index. I also had to change the code that 'marked the index' to be ignored so that it does not destroy the original index name.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
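A toy model of the rename loop described above, assuming a plain map stands in for the column_stats table and an already-existing key plays the role of the duplicate-key error (a sketch only, not the server code):

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Rename { std::string from, to; bool parked = false; };

static std::string tmp_name(const std::string &s) { return "#tmp#" + s; }

// Rename columns in "stats"; circular renames (e.g. swapping two names) are
// handled by parking a conflicting rename under a temporary name and fixing
// it up in a final pass.
void rename_columns(std::map<std::string, std::string> &stats,
                    std::vector<Rename> renames)
{
  bool first = true, changed = true;
  while (!renames.empty() && (first || changed))
  {
    first = false;
    changed = false;
    for (auto it = renames.begin(); it != renames.end(); )
    {
      const std::string cur = it->parked ? tmp_name(it->from) : it->from;
      if (!stats.count(it->to))             // no "duplicate key": rename works
      {
        stats[it->to] = stats[cur];
        stats.erase(cur);
        it = renames.erase(it);
        changed = true;
      }
      else if (!it->parked)                 // first failed attempt: park it
      {
        stats[tmp_name(it->from)] = stats[it->from];
        stats.erase(it->from);
        it->parked = true;
        ++it;
        changed = true;
      }
      else
        ++it;                               // still blocked, retry next loop
    }
  }
  // Leftovers could never get their final name: the target row is a stale
  // conflict, so remove it and move the parked row into place.
  for (const Rename &r : renames)
  {
    stats.erase(r.to);
    stats[r.to] = stats[tmp_name(r.from)];
    stats.erase(tmp_name(r.from));
  }
}

int main()
{
  std::map<std::string, std::string> stats{{"a", "stats of a"},
                                           {"b", "stats of b"}};
  rename_columns(stats, {{"a", "b"}, {"b", "a"}});  // circular: swap a and b
  for (const auto &kv : stats)
    std::cout << kv.first << ": " << kv.second << '\n';
}
```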
-
- 29 Sep, 2023 1 commit
-
-
Andrei authored
In `pseudo_slave_mode=1`, aka "pseudo-slave" mode, any prepared XA transaction disconnects from the user session, as if the user connection had dropped. The xid of such a transaction remains in the server, and if the prepared transaction is read-only, it is marked as such. The marking makes sure that a later termination of the read-only transaction ends up with ER_XA_RBROLLBACK. This did not actually take place for `pseudo_slave_mode=1` read-only transactions. Fixed by checking the read-only status of a prepared transaction at the time it disconnects from the `pseudo_slave_mode=1` session, and marking its xid when that is the case.
-
- 27 Sep, 2023 2 commits
-
-
Vladislav Vaintroub authored
Fix test
-
Vladislav Vaintroub authored
It is enough to do this just once, during the connection phase.
-
- 25 Sep, 2023 1 commit
-
-
Jan Lindström authored
MDEV-30217: Assertion `mode_ == m_local || transaction_.is_streaming()' failed in int wsrep::client_state::bf_abort(wsrep::seqno)

The problem was that a brute force (BF) thread requested a conflicting lock and was trying to kill the victim transaction, but this victim was also a brute force thread. However, the victim was not actually holding a conflicting lock; instead, both the brute force transaction and the victim transaction held insert intention locks. We should not kill a brute force victim transaction if the requesting lock does not need to wait.

Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
-
- 22 Sep, 2023 3 commits
-
-
Vlad Lesin authored
-
Dmitry Shulga authored
Follow-up to fix an issue with access to a possibly uninitialized mutex/condition variable.

The constructor of the class st_debug_sync_globals was changed to initialize the data members dsp_hits, dsp_executed and dsp_max_active with zero. Formerly, these data members were filled with zeroes by the C runtime, since the variable debug_sync_global was declared static and, according to the C rules, static variables are initialized with zero bytes. For the same reason, the data members debug_sync_global->ds_mutex and debug_sync_global->ds_cond were initialized with zeros before the patch for MDEV-31871. After that patch, the synchronization primitives debug_sync_global->ds_mutex and debug_sync_global->ds_cond are initialized explicitly by calling the functions mysql_mutex_init/mysql_cond_init, so these primitives should only be accessed after such initialization has completed.

Guarded access to these synchronization primitives has been added to the function debug_sync_end_thread(), which is called on clean-up, since that was the single problem place detected by MSAN. Potential problem places in the function debug_sync_execute() were not protected with a similar check, since it is not obvious that the variables debug_sync_global->ds_mutex and debug_sync_global->ds_cond can be uninitialized in the use cases where debug_sync_execute() is called. Additional study is required to conclude whether such a check is needed there.
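A minimal sketch of the guarded-cleanup pattern described above, using plain pthread primitives and simplified names (the real code uses the mysql_mutex_*/mysql_cond_* wrappers):

```cpp
#include <pthread.h>

struct sync_globals
{
  pthread_mutex_t mutex;
  pthread_cond_t  cond;
  bool            initialized;
};

// Static storage is zero-initialized, so "initialized" starts out false.
static sync_globals globals;

void sync_init()            // explicit init, analogous to mysql_mutex_init/mysql_cond_init
{
  pthread_mutex_init(&globals.mutex, nullptr);
  pthread_cond_init(&globals.cond, nullptr);
  globals.initialized = true;
}

void sync_end_thread()      // clean-up path that may run before sync_init()
{
  if (!globals.initialized) // the guard: never touch uninitialized primitives
    return;
  pthread_mutex_lock(&globals.mutex);
  /* ... per-thread clean-up that needs the mutex ... */
  pthread_mutex_unlock(&globals.mutex);
}
```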
-
Dmitry Shulga authored
mariadb-install-db crashes on start when the static variable debug_sync_global of the class st_debug_sync_globals is initialized by its constructor.

The definition of the class st_debug_sync_globals has a data member of the type Hash_set whose implementation depends on thread-specific data associated with the key THR_KEY_mysys. This dependency results from the constructor of the class Hash_set, which runs my_hash_init2(), which in turn invokes my_malloc(). The thread-specific data value associated with the key THR_KEY_mysys is used by the function sf_malloc() to get the id of the current thread. The key THR_KEY_mysys is defined as a static variable in my_thr_init.c, initialized with the value -1. Proper initialization of the key THR_KEY_mysys is done with the library call pthread_key_create(), but that happens in my_init(), which is called much later, after the key has already been used for the first time. According to the Single UNIX Specification, the effect of calling pthread_setspecific() or pthread_getspecific() with a key value not obtained from pthread_key_create() is undefined. That is the reason why mariadb-install-db crashes on macOS.

To fix the issue, the static variable debug_sync_global is converted to a pointer to the class st_debug_sync_globals, and its instantiation is done explicitly in the function debug_sync_init(), which is called at the right time.

This is a follow-up patch to the commits 8885225d and f6ecadfe, which introduced a statically instantiated object debug_sync_global of the structure st_debug_sync_globals and initialized the key THR_KEY_mysys for this thread-specific data with the value -1.
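A simplified sketch of that conversion (the "_sketch" names are illustrative stand-ins; only the technique comes from the commit message):

```cpp
struct st_debug_sync_globals_sketch
{
  st_debug_sync_globals_sketch();   // may allocate, i.e. needs mysys to be ready
};

st_debug_sync_globals_sketch::st_debug_sync_globals_sketch() {}

// Before: a statically constructed object, whose constructor could run
// before my_init() had created the thread-specific key:
//   static st_debug_sync_globals_sketch debug_sync_global;
// After: only a pointer is static; the object is created explicitly once
// initialization has reached a safe point.
static st_debug_sync_globals_sketch *debug_sync_global = nullptr;

void debug_sync_init_sketch()        // called after my_init()
{
  debug_sync_global = new st_debug_sync_globals_sketch();
}

void debug_sync_end_sketch()
{
  delete debug_sync_global;
  debug_sync_global = nullptr;
}
```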
-
- 21 Sep, 2023 1 commit
-
-
Vlad Lesin authored
trx_t::set_skip_lock_inheritance() must be invoked at the very beginning of lock_release_on_prepare(). Currently trx_t::set_skip_lock_inheritance() is invoked at the end of lock_release_on_prepare(), when lock_sys and trx are released, and there can be a case where the locks on prepare have been released but the "do not inherit gap locks" bit has not yet been set, so a page split inherits a lock to the supremum record.

Also reset the supremum bit and rebuild the waiting queue when the XA transaction is prepared.

Reviewed by: Marko Mäkelä
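A sketch of the ordering fix only (simplified, hypothetical types; not the InnoDB code): the flag that forbids gap-lock inheritance has to be visible before any lock is released.

```cpp
struct trx_sketch
{
  bool skip_lock_inheritance = false;
  void set_skip_lock_inheritance() { skip_lock_inheritance = true; }
};

void lock_release_on_prepare_sketch(trx_sketch &trx)
{
  trx.set_skip_lock_inheritance();   // moved to the very beginning
  /* ... release the record and table locks held by the transaction ... */
  // Before the fix the call sat here, after the locks were already released,
  // leaving a window in which a page split could still inherit a lock to the
  // supremum record.
}
```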
-
- 20 Sep, 2023 3 commits
-
-
Marko Mäkelä authored
This fixes up commit ed20e5b1, which fixed up the merge commit 202316a3.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 19 Sep, 2023 11 commits
-
-
Monty authored
Constant failures with: "InnoDB: tried to purge non-delete-marked record in index"
-
Monty authored
Fixed by copying the solution from 11.0
-
Monty authored
-
Monty authored
Updated ha_mroonga::storage_check_if_supported_inplace_alter to support new ALTER TABLE flags. This fixes the failing tests:
mroonga/storage.alter_table_add_index_unique_duplicated
mroonga/storage.alter_table_add_index_unique_multiple_column_duplicated
-
Marko Mäkelä authored
This fixes up commit 56f6dab1
-
Thirunarayanan Balathandayuthapani authored
Problem:
=======
- There is a race condition between purge and rollback of an ALTER operation. The rollback of the ALTER marks the index as corrupted. At the same time, purge is working on the same index, and this leads to an assertion failure. This is caused by commit 7c0b9c60 (MDEV-15250).

Solution:
========
- After MDEV-15250, InnoDB logs the operation only at the end of transaction commit and applies the log in ha_innobase::commit_inplace_alter_table() and also via the DML thread. So there is no need for purge to work on an uncommitted index.

The assertion would fail in the test innodb.innodb-index-online when the following call is added to the start of the function row_purge_remove_sec_if_poss_leaf():

  if (!index->is_committed()) sleep(5);
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
According to commit ea568419 the stack normally grows downwards, except on HP PA-RISC where it grows upwards. Because determining the stack direction is not possible in a portable way, let us determine the default STACK_DIRECTION in CMake based on the ISA. On clang 16.0.6 running on and targeting AMD64, STACK_DIRECTION=1 is being incorrectly detected, causing the failure of a number of tests.
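For context, a sketch of the classic runtime probe that such detection is based on, and why it is unreliable: comparing addresses of unrelated objects is not well-defined, and if the callee is inlined both locals share a frame, which is exactly the kind of thing that can make a toolchain report the wrong direction, as described above for clang.

```cpp
#include <iostream>

// Classic configure-style probe: compare the address of a local variable in
// a callee with one in the caller.  Shown for illustration only; it is the
// kind of check that optimization and inlining can render meaningless.
static int stack_grows_up(char *caller_local)
{
  char callee_local;
  return &callee_local > caller_local;
}

int main()
{
  char caller_local;
  std::cout << (stack_grows_up(&caller_local)
                ? "STACK_DIRECTION=1 (upwards)\n"
                : "STACK_DIRECTION=-1 (downwards)\n");
}
```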
-
Marko Mäkelä authored
row_vers_vc_matches_cluster(): Invoke dtype_get_at_most_n_mbchars() to extract the correct number of bytes corresponding to the number of characters in a virtual column prefix index, just like we do in row_sel_sec_rec_is_for_clust_rec(). The test case would occasionally reproduce the failure when this fix is not present.
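A sketch of the character-boundary idea, assuming UTF-8 and a hypothetical helper (the server uses charset-aware functions such as dtype_get_at_most_n_mbchars()): a prefix index stores at most N characters, so the value computed from the clustered record must be cut at the same character boundary before it is compared with the stored prefix.

```cpp
#include <cstddef>
#include <iostream>
#include <string>

// Return the number of bytes occupied by at most n_chars UTF-8 characters at
// the start of s.  Hypothetical stand-in for a charset-aware truncation.
static std::size_t at_most_n_utf8_chars(const std::string &s, std::size_t n_chars)
{
  std::size_t bytes = 0, chars = 0;
  while (bytes < s.size() && chars < n_chars)
  {
    bytes++;                                            // lead byte
    while (bytes < s.size() &&
           (static_cast<unsigned char>(s[bytes]) & 0xC0) == 0x80)
      bytes++;                                          // continuation bytes
    chars++;
  }
  return bytes;
}

int main()
{
  const std::string value = "h\xC3\xA9llo w\xC3\xB6rld";   // "héllo wörld"
  // Cutting by characters keeps "héllo" intact; cutting the first 5 *bytes*
  // would split the two-byte "é" and make the value differ from a stored
  // 5-character prefix.
  std::cout << value.substr(0, at_most_n_utf8_chars(value, 5)) << '\n';
}
```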
-
Dmitry Shulga authored
On creation of a VIEW that depends on a stored routine, an instance of the class Item_func_sp is allocated on the memory root of the SP statement. This happens because mysql_make_view() calls the method THD::activate_stmt_arena_if_needed() before parsing the definition of the view.

On the other hand, when sp_head's rcontext is created, an instance of the class Field referenced by the data member Item_func_sp::result_field is allocated on the Item_func_sp's Query_arena (the call arena) that is set up inside the method Item_sp::execute_impl just before calling the method sp_head::execute_function(). On return from sp_head::execute_function(), all items allocated on the Item_func_sp's Query_arena are released and its memory root is freed (see the implementation of Item_sp::execute_impl). As a consequence, the pointer Item_func_sp::result_field references deallocated memory. Later, when the method sp_head::execute cleans up items allocated for the just-executed SP instruction, the method Item_func_sp::cleanup is invoked and tries to delete an object referenced by the data member Item_func_sp::result_field, which points to already deallocated memory; this results in abnormal server termination.

To fix the issue, the current active arena should not be switched to a statement arena inside the function mysql_make_view() when it is invoked indirectly by the method sp_head::rcontext_create. This is implemented by introducing the new Query_arena state STMT_SP_QUERY_ARGUMENTS, which is set when an explicit Query_arena is created for placing SP arguments and other caller's-side items used during SP execution. The method THD::activate_stmt_arena_if_needed() then checks the Query_arena's state and returns immediately without switching to the statement's arena.
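A deliberately simplified model of the early-return idea (the class layout here is hypothetical; only the state name and method name come from the commit message):

```cpp
struct Query_arena
{
  enum enum_state { STMT_CONVENTIONAL_EXECUTION, STMT_SP_QUERY_ARGUMENTS };
  enum_state state;
  explicit Query_arena(enum_state s) : state(s) {}
};

struct THD
{
  Query_arena *active_arena;   // arena new items are currently allocated on
  Query_arena *stmt_arena;     // longer-lived, per-statement arena

  // Returns the previously active arena when a switch happened, nullptr
  // otherwise.  The new check: if the active arena only holds SP call
  // arguments, stay on it, so items created while parsing a view definition
  // do not outlive the call arena they were allocated on.
  Query_arena *activate_stmt_arena_if_needed()
  {
    if (active_arena->state == Query_arena::STMT_SP_QUERY_ARGUMENTS)
      return nullptr;
    Query_arena *backup = active_arena;
    active_arena = stmt_arena;
    return backup;
  }
};
```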
-
- 18 Sep, 2023 2 commits
-
-
Daniel Black authored
mariadb-install-db --auth-root-authentication-method=normal created 4 root accounts by default, but only two of these had the PROXY privilege granted. mariadb-install-db (with the default option --auth-root-authentication-method=socket), run as a non-root user, also didn't grant the PROXY privilege to the created nonroot@localhost user.

To fix this, in mysql_system_tables_data.sql we re-use tmp_user_nopasswd, as this contains the list of all root users. REPLACE INTO tmp_proxies_priv SELECT @current_hostname, IFNULL(@auth_root_socket, 'root') creates the $user@$current_host entry but will not error if @auth_root_socket is null. Note that @current_hostname lines are filtered out with --cross-bootstrap in mariadb-install-db, so it was necessary to include this expression for consistency. This mirrors how the existing mysql_system_tables.sql creates the $user@localhost proxies_priv entry.

The test cases roles.acl_statistics and perfschema.privilege_table_io depend on the number of proxy users.

After:

--auth-root-authentication-method=normal:

MariaDB [mysql]> select * from global_priv;
+-----------+-------------+------+
| Host | User | Priv |
+-----------+-------------+------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615} |
| bark | root | {"access":18446744073709551615} |
| 127.0.0.1 | root | {"access":18446744073709551615} |
| ::1 | root | {"access":18446744073709551615} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+------+
7 rows in set (0.001 sec)

MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:12:24 |
| 127.0.0.1 | root | | | 1 | | 2023-07-10 12:12:24 |
| ::1 | root | | | 1 | | 2023-07-10 12:12:24 |
| bark | root | | | 1 | | 2023-07-10 12:12:24 |
+-----------+------+--------------+--------------+------------+---------+---------------------+

--auth-root-authentication-method=socket:

MariaDB [mysql]> select * from proxies_priv;
+-----------+------+--------------+--------------+------------+---------+---------------------+
| Host | User | Proxied_host | Proxied_user | With_grant | Grantor | Timestamp |
+-----------+------+--------------+--------------+------------+---------+---------------------+
| localhost | root | | | 1 | | 2023-07-10 12:11:55 |
| localhost | dan | | | 1 | | 2023-07-10 12:11:55 |
| bark | dan | | | 1 | | 2023-07-10 12:11:55 |
+-----------+------+--------------+--------------+------------+---------+---------------------+
3 rows in set (0.017 sec)

MariaDB [mysql]> select * from global_priv;
+-----------+-------------+------+
| Host | User | Priv |
+-----------+-------------+------+
| localhost | mariadb.sys | {"access":0,"plugin":"mysql_native_password","authentication_string":"","account_locked":true,"password_last_changed":0} |
| localhost | root | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | dan | {"access":18446744073709551615,"plugin":"mysql_native_password","authentication_string":"invalid","auth_or":[{},{"plugin":"unix_socket"}]} |
| localhost | | {} |
| bark | | {} |
+-----------+-------------+------+
5 rows in set (0.000 sec)

MariaDB [mysql]> show grants;
+------+
| Grants for dan@localhost |
+------+
| GRANT ALL PRIVILEGES ON *.* TO `dan`@`localhost` IDENTIFIED VIA mysql_native_password USING 'invalid' OR unix_socket WITH GRANT OPTION |
| GRANT PROXY ON ''@'%' TO 'dan'@'localhost' WITH GRANT OPTION |
+------+
-
Thirunarayanan Balathandayuthapani authored
- InnoDB fails to mark the page status as FREED when freeing a page of the temporary tablespace. This behaviour affects scrubbing: all-zero bytes are not written to the file even though the pages are freed.

mtr_t::free(): Mark the page as freed for the temporary tablespace as well.
-
- 15 Sep, 2023 10 commits
-
-
Lena Startseva authored
Fixed tests: main.order_by_pack_big - disabled view-protocol for some queries because the view is created with a wrong column name if the column name is longer than 64 characters.
-
Yuchen Pei authored
-
Yuchen Pei authored
-
Yuchen Pei authored
- Removed some redundant hint-related string literals from spd_db_conn.cc.
- Clean up SPIDER_PARAM_*_[CHAR]LEN[S].
- Adding tests covering monitoring_kind=2. What it does is read from mysql.spider_link_mon_servers with matching db_name, table_name and link_id, and not do anything about that... How monitoring_* can be useful: in the deprecated spider high availability feature, when one remote fails, spider will try another remote, which apparently makes use of these table parameters.
- A test covering the query_cache_sync table param.
- Some further tests on some spider table params.
- The wrapper should be case insensitive.
- Code documentation on the spider priority binary tree.
- Add an assertion that static_key_cardinality is always -1.

All tests still pass.
-
Yuchen Pei authored
This helps eliminate "server exists" failures.

Also, spider/bugfix.mdev_29676, when enabled after MDEV-29525 is pushed, will fail because we have not --recorded the result. But the failure will only emerge when working on MDEV-31138, where we manually re-enable this test, so let's worry about it then.
-
Yuchen Pei authored
-
Yuchen Pei authored
The direct aggregate (DA) mechanism seems to be only intended to work when otherwise a full table scan query would be executed from the spider node and the aggregation done at the spider node too. Typically this happens in sub_select(). In the test spider.direct_aggregate_part, direct aggregate allows sending COUNT statements directly to the data nodes and adding up the results at the spider node, instead of iterating over the rows one by one at the spider node.

By contrast, the group by handler (GBH) typically sends aggregated queries directly to the data nodes, in which case DA does not improve the situation. That is why we should fix it by disabling DA when the GBH is used.

There are other reasons supporting this change. First, the creation of the GBH results in a call to change_to_use_tmp_fields() (as opposed to setup_copy_fields()), which causes the spider DA function spider_db_fetch_for_item_sum_funcs() to work on wrong items. Second, the spider DA function only calls direct_add() on the items, and the follow-up add() needs to be called by the SQL layer code. In do_select(), after executing the query with the GBH, it seems that the required add() would not necessarily be called. Disabling DA when the GBH is used does fix the bug.

There are a few other things included in this commit to improve the situation with spider DA:

1. Add a session variable that allows the user to disable DA completely; this will help as a temporary measure if/when further bugs with DA emerge.
2. Move the increment of direct_aggregate_count to the spider DA function. Currently this is done in rather bizarre and random locations.
3. Fix the spider_db_mbase_row creation so that the last of its row fields (the sentinel) is NULL. The code is already doing a null check, but somehow the sentinel field is at an invalid address, causing the segfaults. With a correct implementation of the row creation, we can avoid such segfaults.
-
Yuchen Pei authored
-
Yuchen Pei authored
Also:
- clean up spider_check_and_get_casual_read_conn() and spider_check_and_set_autocommit()
- remove a couple of commented-out code blocks
-
Yuchen Pei authored
-
- 14 Sep, 2023 3 commits
-
-
Anel Husakovic authored
- Reviewer: <knielsen@knielsen-hq.org> <brandon.nesterenko@mariadb.com>
-
Anel Husakovic authored
- Remove extra connections in the form of `server_number_1` for the same server during initialization of servers in the `rpl_init.inc` file.
- Remove disconnecting and reconnecting to the same connections, since they are not used by the test.
- Update comments about the above.
- Reviewer: <knielsen@knielsen-hq.org> <brandon.nesterenko@mariadb.com>
-
Anel Husakovic authored
- Fix the calling of the assertion condition when the `rpl_check_server_ids` parameter is used.
- Fix comments regarding the default usage and the configuration file extension in this case.
- Reviewer: <knielsen@knielsen-hq.org> <brandon.nesterenko@mariadb.com>
-