- 17 Oct, 2022 1 commit
-
-
Dmitry Shulga authored
For some queries that involve tables with different but convertible character sets for columns taking part in the query, repeated execution of such queries in PS mode or as part of a stored routine would result in abnormal server termination. For example:

    CREATE TABLE t1 (a2 varchar(10));
    CREATE TABLE t2 (u1 varchar(10) CHARACTER SET utf8);
    CREATE TABLE t3 (u2 varchar(10) CHARACTER SET utf8);
    PREPARE stmt FROM
      "SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
       WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = t1.a2))";
    EXECUTE stmt;
    EXECUTE stmt; <== Running the prepared statement the second time crashes the server.

The reason for the crash is that an instance of the class Item_func_conv_charset, created to convert a column from one character set to another, is allocated on the execution memory root, while a pointer to this instance is stored in an item placed on the prepared statement memory root.

Below is the call trace to the place where the instance of Item_func_conv_charset is created:
    setup_conds
    Item_func::fix_fields
    Item_bool_rowready_func2::fix_length_and_dec
    Item_func::setup_args_and_comparator
    Item_func_or_sum::agg_arg_charsets_for_comparison
    Item_func_or_sum::agg_arg_charsets
    Item_func_or_sum::agg_item_set_converter
    Item::safe_charset_converter

The following trace shows the place where a pointer to the instance of Item_func_conv_charset is passed to the class Item_func_eq, which is created on the memory root of the prepared statement:
    Prepared_statement::execute
    mysql_execute_command
    execute_sqlcom_select
    handle_select
    mysql_select
    JOIN::optimize
    JOIN::optimize_inner
    convert_join_subqueries_to_semijoins
    convert_subq_to_sj

To fix the issue, switch to the prepared statement memory root before calling Item_func::setup_args_and_comparator, so that any created Items are placed on the permanent memory root. It may seem that such an approach would leak memory when a parameter marker '?' is used in the query, as in

    PREPARE stmt FROM
      "SELECT t1.* FROM (t1 JOIN t2 ON (t2.u1 = t1.a2))
       WHERE (EXISTS (SELECT 1 FROM t3 WHERE t3.u2 = ?))";
    EXECUTE stmt USING convert('A' using latin1);

but it does not: in that case every parameter marker is treated as a constant and no subquery-to-semijoin optimization is performed.
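A minimal standalone sketch of the memory-root discipline involved (the type and method names below are illustrative stand-ins, not the server's actual THD/Query_arena API): anything created while setting up the comparator must be allocated on the arena that lives as long as the prepared statement.

    #include <cassert>

    struct Arena { const char *name; };            // stand-in for a memory root

    struct THD_like {                              // stand-in for THD
      Arena *active_arena;                         // where new Items are allocated
      Arena *stmt_arena;                           // lives as long as the PS
      Arena *activate_stmt_arena(Arena **backup) { // switch allocation to the PS root
        *backup = active_arena;
        active_arena = stmt_arena;
        return *backup;
      }
      void restore_arena(Arena *saved) { active_arena = saved; }
    };

    // The fix, in spirit: anything created by setup_args_and_comparator() (such as
    // an Item_func_conv_charset) must land on the statement arena, because the
    // Item_func_eq that keeps a pointer to it lives there too.
    void setup_comparator(THD_like *thd) {
      Arena *backup = nullptr;
      thd->activate_stmt_arena(&backup);
      // ... Item_func::setup_args_and_comparator() would run here ...
      thd->restore_arena(backup);
    }

    int main() {
      Arena exec{"execution"}, stmt{"statement"};
      THD_like thd{&exec, &stmt};
      setup_comparator(&thd);
      assert(thd.active_arena == &exec);           // per-execution root is restored
      return 0;
    }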
-
- 16 Oct, 2022 3 commits
-
-
Oleksandr Byelkin authored
-
Anel authored
* ODBC Connect cosmetic fixes
  - Update the connection command for the default `peer` authentication for user `postgres` (unless changed in `pg_hba.conf`).
  - Update the privilege-granting command to be more verbose.
  - Update the path of the `.sql` file.
  - Update the instructions for the `pg_hba.conf` file to use the unix socket (`local`) type as well as the TCP/IP type `host`.
  - Update the instruction about using a user DSN (data source file) over a system DSN.
  - Update the path of the `odbc-postgresql` driver in a comment.
* Connect SE: update ODBC result file
-
Brad Smith authored
-
- 15 Oct, 2022 1 commit
-
-
Sergei Golubchik authored
should be the same behavior as for virtual columns:
* a warning on every inserted row
* silently ignored in a trigger
-
- 14 Oct, 2022 3 commits
-
-
Marko Mäkelä authored
This fixes up 3d9b350a
-
Marko Mäkelä authored
-
Oleksandr Byelkin authored
-
- 13 Oct, 2022 1 commit
-
-
Marko Mäkelä authored
WITH_EMBEDDED_SERVER compiles the SQL parsers separately. Thanks to Vladislav Vaintroub for helping with this. Fixes up commit e05ab0cf
-
- 12 Oct, 2022 2 commits
-
-
Nikita Malyavin authored
See also commits aa8a31da and 64678c for a Bug #22990029 fix. In this scenario INSERT chose to check whether delete-unmarking is available for a just-deleted record. To build an update vector, it needed to calculate the vcols as well. Since this INSERT was not IGNORE-flagged, the recalculation failed. Solution: temporarily set abort_on_warning=true while calculating the column for a delete-unmarked insert.
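A minimal sketch of the save/set/restore pattern described above, with a stand-in THD type (the real change is in the vcol computation path):

    #include <cassert>

    struct THD_like { bool abort_on_warning = false; };  // stand-in for THD

    // Compute the virtual column needed for the delete-unmarked insert with the
    // flag forced as described above, and always restore the caller's setting.
    void compute_vcol_for_delete_unmark(THD_like *thd) {
      const bool saved = thd->abort_on_warning;
      thd->abort_on_warning = true;
      // ... calculate the virtual column for the update vector here ...
      thd->abort_on_warning = saved;
    }

    int main() {
      THD_like thd;
      compute_vcol_for_delete_unmark(&thd);
      assert(!thd.abort_on_warning);   // the caller's value is preserved
      return 0;
    }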
-
Nikita Malyavin authored
As of now, InnoDB does not store a trx_id for each record in a secondary index. The idea behind this is the following: store only a per-page max_trx_id, and delete-mark records when they are deleted or updated.

When a read starts, it remembers the lowest id of the currently active transactions. InnoDB refers to it as trx->read_view->m_up_limit_id; see also ReadView::open. When a page is fetched, its max_trx_id is compared to m_up_limit_id. If the value is lower, and the secondary index record is not delete-marked, then this page is safe to read as is. Else, a clustered index access could be needed. See the page_get_max_trx_id call in row_search_mvcc, and the corresponding switch (row_search_idx_cond_check(...)) below it.

Virtual columns are required to be updated in case the record was delete-marked. The motivation behind this is documented in Row_sel_get_clust_rec_for_mysql::operator() near the row_sel_sec_rec_is_for_clust_rec call.

This was basically a description of why virtual column computation can normally happen during SELECT and, more generally, during a vcol index access.

Sometimes stats tables are updated by InnoDB. This starts a new transaction, and it can happen that it has not finished by the moment of the SELECT execution, forcing virtual column recomputation. If the result is something that normally produces a warning, like division by zero, then the warning could be emitted in a racy manner.

The solution is to suppress the warnings when a column is computed for the described purpose. An ignore_warnings argument is added to innobase_get_computed_value. Currently, it is true only for the call from row_sel_sec_rec_is_for_clust_rec.
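A standalone model of the page-level visibility shortcut described above (the names are stand-ins for the InnoDB structures):

    #include <cassert>
    #include <cstdint>

    struct ReadViewLike {
      std::uint64_t m_up_limit_id;  // lowest id of transactions active when the read started
    };

    // If every change on the page committed before the read view was opened and the
    // secondary index record is not delete-marked, the record can be read as is;
    // otherwise a clustered index access (and possibly vcol computation) is needed.
    bool sec_rec_visible_without_clust(std::uint64_t page_max_trx_id,
                                       bool rec_delete_marked,
                                       const ReadViewLike &view) {
      return page_max_trx_id < view.m_up_limit_id && !rec_delete_marked;
    }

    int main() {
      ReadViewLike view{100};
      assert(sec_rec_visible_without_clust(99, false, view));    // safe to read as is
      assert(!sec_rec_visible_without_clust(150, false, view));  // newer changes: go to the clustered index
      assert(!sec_rec_visible_without_clust(99, true, view));    // delete-marked: go to the clustered index
      return 0;
    }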
-
- 11 Oct, 2022 4 commits
-
-
Vladislav Vaintroub authored
MDEV-19243 introduced a regression on Windows. In the (supposedly rare) case where the environment variable TZ is set, @@system_time_zone no longer derives from TZ. Instead, it incorrectly refers to the system default time zone, even though UTC time conversion takes TZ into account. The fix is to restore the TZ-aware handling (the time zone name derives from tzname) when TZ is set.
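A minimal POSIX-style sketch of the restored behaviour, for illustration only (the server's Windows-specific code differs):

    #include <cstdlib>
    #include <ctime>
    #include <iostream>

    int main() {
      const char *tz_env = std::getenv("TZ");
      tzset();  // fills tzname[] taking TZ into account when it is set
      // When TZ is set, report the name derived from tzname instead of the
      // system default time zone.
      const char *zone = tz_env ? tzname[0] : "system default time zone";
      std::cout << "system_time_zone: " << zone << '\n';
      return 0;
    }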
-
Marko Mäkelä authored
-
Zhibo Zhang authored
The statement 'Verify checksum binlog events.' is confusing. Fix word order to make it clear.
-
Julius Goryavsky authored
The problem is related to performing operations without switching wsrep off; this commit fixes that and re-enables the disabled tests.
-
- 10 Oct, 2022 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
A previous fix in commit efd8af53 failed to cover ALTER TABLE. PageBulk::isSpaceAvailable(): Check for record heap number overflow.
-
- 09 Oct, 2022 1 commit
-
-
Anel Husakovic authored
Reviewer: <wlad@mariadb.com>
-
- 06 Oct, 2022 2 commits
-
-
Jan Lindström authored
-
Jan Lindström authored
Do not allow setting wsrep_on=ON if no provider is set.
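A sketch of the kind of guard this describes, for illustration only (the actual check lives in the wsrep system-variable handling; 'none' is assumed here as the placeholder for an unset provider):

    #include <cassert>
    #include <string>

    // Refuse to turn wsrep_on ON while no provider library is configured.
    bool wsrep_on_update_allowed(bool new_value, const std::string &wsrep_provider) {
      if (new_value && (wsrep_provider.empty() || wsrep_provider == "none"))
        return false;  // the server would report an error to the client instead
      return true;
    }

    int main() {
      assert(!wsrep_on_update_allowed(true, "none"));   // rejected: no provider set
      assert(wsrep_on_update_allowed(true, "/usr/lib/galera/libgalera_smm.so"));
      assert(wsrep_on_update_allowed(false, "none"));   // turning it off is always fine
      return 0;
    }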
-
- 05 Oct, 2022 9 commits
-
-
Aleksey Midenkov authored
upon CREATE OR REPLACE causing ER_UPDATE_TABLE_USED. Setting the return status to 1 was missed.
-
Aleksey Midenkov authored
When, for example, a table is partitioned by HASH(a) and we rename column `a' to `b', the partitioning filter stays unchanged: HASH(a). That is the wrong behavior. The patch updates the partitioning filter in accordance with the new column names. That includes the partition/subpartition expression and the partition/subpartition field list.
-
Aleksey Midenkov authored
For "const char *" replace() and after() accepted const as "T *" and passed forward "void *". This cannot be cast implicitly, so we better use "const void *" instead of "void *" in the input interface. This way we avoid problems with using List for any const type.
-
Vlad Lesin authored
MDEV-27927 row_sel_try_search_shortcut_for_mysql() does not latch a page, violating read view isolation

btr_search_guess_on_hash() would only acquire an index page latch if it is invoked with ahi_latch=NULL. If it is invoked from row_sel_try_search_shortcut_for_mysql() with ahi_latch!=NULL, the page will not be latched, and row_search_mvcc() will get a pointer to a record which can be changed by some other transaction before the record is stored in the result buffer by the row_sel_store_mysql_rec() call.

The ahi_latch argument of btr_cur_search_to_nth_level_func() and btr_pcur_open_with_no_init_func() is used only for row_sel_try_search_shortcut_for_mysql(). btr_cur_search_to_nth_level_func(..., ahi_latch != 0, ...) is invoked only from btr_pcur_open_with_no_init_func(..., ahi_latch != 0, ...), which, in turn, is invoked only from row_sel_try_search_shortcut_for_mysql().

I suppose that the separate case with ahi_latch!=0 was intentionally implemented to protect the row_sel_store_mysql_rec() call in row_search_mvcc() just after the row_sel_try_search_shortcut_for_mysql() call. After the ahi_latch was moved from row_search_mvcc() to row_sel_try_search_shortcut_for_mysql(), there is no need for it at all if btr_search_guess_on_hash() latches the page unconditionally. And if btr_search_guess_on_hash() latched the page, any access to the record in row_sel_try_search_shortcut_for_mysql() after the btr_pcur_open_with_no_init() call will be protected by the page latch.

The fix is to remove the ahi_latch argument from btr_pcur_open_with_no_init_func(), btr_cur_search_to_nth_level_func() and btr_search_guess_on_hash().

There will be no test, as testing it would require freezing some SELECT execution between the row_sel_try_search_shortcut_for_mysql() and row_sel_store_mysql_rec() calls in row_search_mvcc(), and changing the record in some other transaction to let row_sel_store_mysql_rec() store the changed record in the result buffer. But we can't do this with the fix, as the page will be latched in the btr_search_guess_on_hash() call.
-
Marko Mäkelä authored
Let us disable Valgrind on tests that would fail because a server shutdown or a STOP SLAVE command would take longer, causing the test harness to forcibly and silently kill the server due to an exceeded timeout.
-
Marko Mäkelä authored
Under Valgrind, this test may occasionally fail because the sleep-based timeouts of less than 1 second could be exceeded.
-
Marko Mäkelä authored
The test could emit some I/O error when run under Valgrind.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
row_purge_get_partial(): Replaces trx_undo_rec_get_partial_row(). Also copy the purge_node_t::ref to the purge_node_t::row. In this way, the clustered index key fields will always be available, even if, thanks to commit d384ead0 (MDEV-14799), they are no longer repeated in the remaining part of the undo log record.
-
- 04 Oct, 2022 1 commit
-
-
Julius Goryavsky authored
This commit adds automation that reduces the possibility of user errors when customizing wsrep_notify.sh (in particular, errors caused by user-specified parameters). Now all leading and trailing spaces are removed from the user-specified parameters, and automatic port and host address substitution has been added to the scripts, as well as automatic password substitution on the client command line (only if the password is specified in wsrep_notify.sh and is not an empty string). Also added support for automatic substitution of all SSL-related parameters, and improved parsing of ipv6 addresses (to allow the "[...]" notation for ipv6 addresses). Also added a test to check that the wsrep notify script works with SSL.
-
- 03 Oct, 2022 1 commit
-
-
Vlad Lesin authored
MDEV-29575 Access to innodb_trx, innodb_locks and innodb_lock_waits along with detached XA's can cause SIGSEGV

trx->mysql_thd can be zeroed out between the thd_get_thread_id() and thd_query_safe() calls in fill_trx_row(). trx_disconnect_prepared() zeroes out trx->mysql_thd, and this can cause a null pointer dereference in fill_trx_row(). fill_trx_row() is invoked from fetch_data_into_cache() under trx_sys.mutex.

The bug fix is to reset trx_t::mysql_thd in trx_disconnect_prepared() under trx_sys.mutex as well.

An MTR test case can't be created for the fix: we would need to wait in fill_trx_row() for trx_t::mysql_thd to be reset after it has been checked for null while trx_sys.mutex is held, but trx_t::mysql_thd must be reset in trx_disconnect_prepared() under trx_sys.mutex, so there would be a deadlock.
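A standalone model of the race and of the fix, with stand-in types (trx_sys_mutex plays the role of trx_sys.mutex):

    #include <mutex>

    struct THD_like { int thread_id = 42; };

    struct TrxLike { THD_like *mysql_thd = nullptr; };

    std::mutex trx_sys_mutex;

    // fill_trx_row() analogue: runs while the mutex is held, so the pointer
    // cannot be cleared between the null check and the dereference.
    int read_thread_id(const TrxLike &trx) {
      std::lock_guard<std::mutex> guard(trx_sys_mutex);
      return trx.mysql_thd ? trx.mysql_thd->thread_id : 0;
    }

    // trx_disconnect_prepared() analogue after the fix: clear under the same mutex.
    void disconnect_prepared(TrxLike &trx) {
      std::lock_guard<std::mutex> guard(trx_sys_mutex);
      trx.mysql_thd = nullptr;
    }

    int main() {
      THD_like thd;
      TrxLike trx;
      trx.mysql_thd = &thd;
      int id = read_thread_id(trx);   // 42: connection still attached
      disconnect_prepared(trx);
      return (read_thread_id(trx) == 0 && id == 42) ? 0 : 1;
    }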
-
- 01 Oct, 2022 2 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
extended initializers are only allowed since C++11
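For instance, code like the following is accepted only with -std=c++11 or later; older GCC modes report an "extended initializer lists" diagnostic for it:

    struct Point { int x; int y; };

    int main() {
      Point p{1, 2};           // direct-list-initialization: C++11 and later
      int counts[]{3, 4, 5};   // same for arrays initialized without '='
      return p.x + counts[0] - 4;
    }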
-
- 30 Sep, 2022 3 commits
-
-
Oleksandr Byelkin authored
MDEV-17124: mariadb 10.1.34, views and prepared statements: ERROR 1615 (HY000): Prepared statement needs to be re-prepared

The problem is that if the table definition cache (TDC) is full of real tables which are in the table cache, a view definition cannot stay there, so it will be evicted by its own underlying tables. In the situation above, the old mechanism for detecting whether the definition cached in the PS matches the current version always requires a reprepare and so prevents executing the PS.

One workaround is to increase the TDC; the other is to improve the version check for views/triggers (which is done here). Now in suspicious cases we check:
- the timestamp (in microseconds) of the view, to be sure that the version has really changed;
- the creation time (in microseconds) of a trigger, relative to the time (in microseconds) of statement preparation.
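A sketch of the improved check in spirit, with illustrative names (not the server's actual fields):

    #include <cstdint>

    // A cached statement only needs a reprepare if the view/trigger definition was
    // (re)created after the statement was prepared; comparing microsecond
    // timestamps avoids spurious reprepares when the TDC entry was merely evicted.
    bool needs_reprepare(std::uint64_t definition_created_usec,
                         std::uint64_t stmt_prepared_usec) {
      return definition_created_usec > stmt_prepared_usec;
    }

    int main() {
      // View created at t=1000us, statement prepared at t=2000us: no reprepare needed.
      return needs_reprepare(1000, 2000) ? 1 : 0;
    }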
-
Oleksandr Byelkin authored
-
Anel Husakovic authored
- Added missing information about the database of the corresponding table for various types of commands
- Fixed some typos
- Reviewed by: <vicentiu@mariadb.org>
-
- 29 Sep, 2022 2 commits
-
-
Sergei Golubchik authored
KILL QUERY ID 0 was sometimes finding con3, which was still in the process of disconnecting and had query_id==0 (as it had not run any queries).
-
Igor Babaev authored
This patch resolves the problem of improper name resolution of table references to embedded CTEs for some queries. This improper binding could lead to
- an infinite sequence of calls of recursive functions
- crashes due to dereferencing null pointers
- wrong result sets returned by queries
- bogus error messages

If the definition of a CTE contains with clauses, then such a CTE is called an embedding CTE, while the CTEs from its with clauses are called embedded CTEs. If a table reference used in the definition of an embedded CTE cannot be resolved within the unit that contains this reference, it still may be resolved against a CTE definition from the with clause of one of the embedding CTEs.

A table reference can be resolved against a CTE definition if it is used in the scope of this definition and it refers to the name of the CTE. A table reference t is in the scope of the CTE definition of CTE cte if
- the definition of cte is an element of a with clause declared as RECURSIVE, and the reference t belongs either to the unit to which this with clause is attached or to one of the elements of this clause;
- the definition of cte is an element of a with clause without the RECURSIVE specifier, and the reference t belongs either to the unit to which this with clause is attached or to one of the elements from this clause that are placed before the definition of cte.

If a table reference can be resolved against several CTE definitions, then it is bound to the most embedded one.

The code before this patch did not always resolve table references used in embedded CTEs according to the above rules.

Approved by Oleksandr Byelkin <sanja@mariadb.com>
-
- 28 Sep, 2022 1 commit
-
-
Mikhail Chalov authored
Continue with similar changes as done in 19af1890 to replace sprintf(buf, ...) with snprintf(buf, sizeof(buf), ...), specifically in the "easy" cases where buf is allocated with a size known at compile time. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
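The pattern being applied, shown in a minimal standalone form:

    #include <cstdio>

    int main() {
      char buf[32];                       // size known at compile time
      const char *table = "db1.t1";
      // Before: sprintf(buf, "error on %s", table);          // can overflow buf
      std::snprintf(buf, sizeof(buf), "error on %s", table);  // bounded, truncates safely
      std::puts(buf);
      return 0;
    }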
-
- 27 Sep, 2022 1 commit
-
-
Alexey Botchkov authored
When a partitioned table is cloned, the handlers for the partitions that were not opened should still be created (but not opened).
-