- 26 Jul, 2021 1 commit
-
-
Rucha Deodhar authored
failed for TokuDB engine CREATE TABLE
Analysis: The assertion failure happens because the database in which the table is to be created does not exist, but ha_tokudb::create() still returns false, so the error is not reported.
Fix: Store the error state and report the error.
-
- 24 Jul, 2021 1 commit
-
-
Elena Stepanova authored
Set tests to non-valgrind:
oqgraph.social
encryption.innodb-page_encryption
binlog_encryption.encrypted_master
innodb.innodb-page_compression_lz4
main.lock_multi_bug38499
main.lock_multi_bug38691
-
- 23 Jul, 2021 3 commits
-
-
Marko Mäkelä authored
In commit 83d2e084 (MDEV-24041) we failed to notice that in addition to the bug with DELETE and ON DELETE CASCADE, there is another bug with UPDATE and ON UPDATE CASCADE. row_ins_foreign_fill_virtual(): Use the correct memory heap for everything that will be reachable from the cascade->update that we return to the caller. Note: It is correct to use the shorter-lived cascade->heap for rec_get_offsets(), because that memory will be abandoned when row_ins_foreign_fill_virtual() returns.
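A sketch of the kind of schema this concerns, assuming a cascading foreign key on a table with an indexed virtual column (table and column names are illustrative):

  CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=InnoDB;
  CREATE TABLE child (
    id INT PRIMARY KEY,
    pid INT,
    v INT AS (pid + 1) VIRTUAL,
    KEY (v),
    FOREIGN KEY (pid) REFERENCES parent (id) ON UPDATE CASCADE
  ) ENGINE=InnoDB;
  INSERT INTO parent VALUES (1);
  INSERT INTO child (id, pid) VALUES (1, 1);
  -- The cascaded update recomputes the virtual column v in the child row;
  -- everything reachable from cascade->update must outlive this operation.
  UPDATE parent SET id = 2;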
-
Julius Goryavsky authored
MDEV-26080 fixup: fixed .result file for galera_roles test (one word must be enclosed in single quotes).
-
Igor Babaev authored
This bug was fixed by the patch for bug MDEV-26025. Only a new test case is added here.
-
- 22 Jul, 2021 7 commits
-
-
Marko Mäkelä authored
-
Sachin Agarwal authored
Problem: The server throws an OOM error when we execute a Twitter load with SELECT ... FOR UPDATE + UPDATEs, and SELECT queries, on tables with a full-text index.
FTS cache->total_memory stores the total memory allocated to the FTS cache (for all fulltext indexes on a table). For each word in the FTS cache we store the doc-id & word position in node->ilist. We increment cache->total_memory by the size of the doc-id & word position, whereas we allocate ilist in chunks of 16, 32, 64 bytes or 1.2 times the last size. When we insert a huge amount of data into the FTS aux index tables, collectively these small chunks for each token become a huge amount of unaccounted memory allocated for the FTS cache.
Fix: Increment cache->total_memory by the size of the chunk allocated to node->ilist.
RB: 25286 Reviewed by: Rahul Agarkar <rahul.agarkar@oracle.com> mysql/mysql-server@7ab5707f1c4482ed050ed9fa739e9ec0e2fc0ffa
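For context, a minimal sketch of the kind of table the fix concerns (schema is illustrative); before the fix, heavy inserts into such a table could grow the FTS cache well beyond what cache->total_memory, and hence the configured FTS cache size limit, reflected:

  CREATE TABLE articles (
    id INT AUTO_INCREMENT PRIMARY KEY,
    body TEXT,
    FULLTEXT KEY ft_body (body)
  ) ENGINE=InnoDB;
  -- Bulk inserts here populate the FTS cache; each token's doc-id/position
  -- data lives in node->ilist chunks that are now counted in full.
  INSERT INTO articles (body) VALUES ('some text'), ('more text');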
-
Jakub Łopuszański authored
This patch changes it so that we do not free the old BP `page_hash`, but rather modify its parameters during resize. RB: 26084 Reviewed-by: Marcin Babij <marcin.babij@oracle.com> Reviewed-by: Yasufumi Kinoshita <yasufumi.kinoshita@oracle.com> mysql/mysql-server@ea3adc6a1192e1bca4b4894fd7037e29fbcf0bd0
-
Marko Mäkelä authored
ha_innobase::prepare_inplace_alter_table(): Unless the table is being rebuilt, determine the maximum column length based on the current ROW_FORMAT of the table. When TABLE_SHARE (and the .frm file) contains no explicit ROW_FORMAT, InnoDB table creation or rebuild will use innodb_default_row_format. Based on mysql/mysql-server@3287d33acdc4260806a2a407ca15e9d1e04dddcb
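A hedged sketch of the case this governs (names and lengths are illustrative; the exact error or warning depends on innodb_strict_mode):

  SET GLOBAL innodb_default_row_format = dynamic;
  CREATE TABLE t (c VARCHAR(500) CHARACTER SET utf8mb4)
    ENGINE=InnoDB ROW_FORMAT=COMPACT;
  -- The table's own ROW_FORMAT=COMPACT caps the index key prefix at 767 bytes,
  -- so this roughly 2000-byte key must be judged against that limit, not the
  -- 3072 bytes that innodb_default_row_format=dynamic would allow:
  ALTER TABLE t ADD INDEX (c);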
-
Marko Mäkelä authored
-
Marko Mäkelä authored
InnoDB tablespace identifiers and page numbers are 32-bit numbers. Let us use a 32-bit type for them in innochecksum. The changes in commit 1918bdf3 broke the build on 32-bit Windows. Thanks to Vicențiu Ciorbaru for an initial version of this fixup.
-
Anel Husakovic authored
-
- 21 Jul, 2021 4 commits
-
-
Igor Babaev authored
SQL processor failed to catch references to unknown columns and other errors of the semantic analysis phase in the specification of a hanging recursive CTE. This happened because the function With_clause::prepare_unreferenced_elements() failed to detect a CTE as a hanging CTE if the CTE was recursive. Fixing this problem in the code of the mentioned function opened another problem: EXPLAIN started including the lines for the specifications of hanging recursive CTEs in its output. This problem was also fixed in this patch. Approved by Dmitry Shulga <dmitry.shulga@mariadb.com>
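A minimal sketch of the affected situation (table and column names are illustrative): the recursive CTE r below is hanging because the main query never uses it, yet its specification references an unknown column and must now be reported:

  CREATE TABLE t1 (a INT);
  WITH RECURSIVE r AS (SELECT a FROM t1 UNION SELECT b + 1 FROM r)
  SELECT * FROM t1;
  -- After the fix this statement fails with an "Unknown column 'b'" error
  -- instead of silently ignoring the hanging CTE.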
-
Hollow Man authored
-
Heinz Wiesinger authored
This gives a short overview of found/missing dependencies as well as enabled/disabled features.
Initial author: Heinz Wiesinger <heinz@m2mobi.com>
Additions by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
* Report all plugins enabled via MYSQL_ADD_PLUGIN
* Simplify code. Eliminate duplication by making use of WITH_xxx variable values to set feature "ON" / "OFF" state.
Reviewed by: wlad@mariadb.com (code details)
             serg@mariadb.com (the idea)
-
Vicențiu Ciorbaru authored
-
- 20 Jul, 2021 9 commits
-
-
Igor Babaev authored
from view
A crash of the server happened when executing a stored procedure whose only query calculated window functions over a mergeable view specified as a select from a non-mergeable view. The crash could be reproduced if the window specifications of the window functions were identical and both contained PARTITION lists and ORDER BY lists. A crash also happened on the second execution of the prepared statement created for such a query. If derived tables or CTEs are used instead of views, the problem still manifests itself, crashing the server.
When optimizing the window specifications of a window function the server can substitute the partition lists and the order lists with the corresponding lists from another window specification when the lists are identical. This substitution is not permanent and should be rolled back before the second execution. It was not done, and this ultimately led to a crash when resolving the column names at the second execution of the SP/PS.
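A sketch of the query shape described above (names are illustrative; v1 is non-mergeable because of GROUP BY, v2 is a mergeable view over it, and the two window specifications are identical):

  CREATE TABLE t1 (a INT, b INT);
  CREATE VIEW v1 AS SELECT a, SUM(b) AS s FROM t1 GROUP BY a;
  CREATE VIEW v2 AS SELECT * FROM v1;
  CREATE PROCEDURE p1()
    SELECT SUM(s) OVER (PARTITION BY a ORDER BY s) AS w1,
           AVG(s) OVER (PARTITION BY a ORDER BY s) AS w2
    FROM v2;
  CALL p1();
  CALL p1();  -- the second execution used to crash before the fix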
-
Igor Babaev authored
This bug appeared after the patch for bug MDEV-23886. Due to this bug, execution of queries that used the same CTE at least twice, via prepared statements or within stored procedures, caused crashes of the server. It happened because the select created for any usage of a CTE other than the first one was erroneously not included into all_selects_list. This patch corrects the patch applied to fix bug MDEV-26108. Approved by Oleksandr Byelkin <sanja@mariadb.com>
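A minimal sketch of the affected pattern, a CTE referenced twice inside a prepared statement (names are illustrative):

  CREATE TABLE t1 (a INT);
  PREPARE stmt FROM
    'WITH c AS (SELECT a FROM t1) SELECT * FROM c, c AS c2';
  EXECUTE stmt;
  EXECUTE stmt;  -- the repeated execution used to crash before the fix
  DEALLOCATE PREPARE stmt;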
-
Eugene Kosov authored
-
Eugene Kosov authored
Always store and maintain xdes pages, and do not verify checksums for freed pages. innochecksum can only work with the first file of a multi-file tablespace. Report this and abort when a file other than the first one is specified.
-
Jagdeep Sidhu authored
In commit 2e814d47 on MariaDB 10.2 the switch-case statement in trx_flush_log_if_needed_low() regressed. Since 10.2 this code was refactored to have the switch cases in descending order, so the value 3 of innodb_flush_log_at_trx_commit behaves the same as the value 2, that is, no fsync is enforced during the COMMIT phase. The case should however not be empty, and cases 2 and 3 should not have identical contents. As per the documentation, setting innodb_flush_log_at_trx_commit to 3 should fsync to disk on commit. This fixes the regression so that the switch statement again does what users expect the setting to do.
All new code of the whole pull request, including one or several files that are either new files or modified ones, is contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
Vladislav Vaintroub authored
-
Sergei Golubchik authored
When building with `make`, gcov files use full path names; when building with `ninja`, gcov files use paths relative to the source root. In gcov_one_file() the current directory is somewhere under CMakeFiles/, so if a file exists at the specified location, that location must have been a full path name.
-
Sergei Golubchik authored
For every file.gcda file, gcov <7.x created file.cc.gcda.gcov, while gcov 7.x and 8.x create file.cc.gcov, and sometimes otherfile.h.gcov or otherfile.ic.gcov for included files (gcov 9.x+ creates .json.gz files, see MDEV-26102). So we use `gcov -l`, which will create file.cc.gcda##file.cc.gcov, file.cc.gcda##otherfile.h.gcov, etc., and we search and parse all those file.cc.gcda*.gcov files.
-
Vladislav Vaintroub authored
-
- 19 Jul, 2021 3 commits
-
-
Igor Babaev authored
The bug affected execution of queries with WITH clauses containing so-called hanging recursive CTEs in PREPARE mode. A CTE is hanging if it is not used in the query. Preparation of a prepared statement from a query with a hanging CTE caused a leak in the server, and execution of this prepared statement led to an assertion failure in a server built in debug mode. This happened because the units specifying recursive CTEs erroneously were not cleaned up if those CTEs were hanging. The patch enforces cleanup of hanging recursive CTEs in the same way as for other hanging CTEs. Approved by dmitry.shulga@mariadb.com
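A minimal sketch of the PREPARE-mode case (names are illustrative); the recursive CTE r is hanging because the main query does not use it:

  CREATE TABLE t1 (a INT);
  PREPARE stmt FROM
    'WITH RECURSIVE r AS (SELECT a FROM t1 UNION SELECT a + 1 FROM r WHERE a < 3)
     SELECT * FROM t1';
  EXECUTE stmt;           -- used to hit the debug assertion before the fix
  DEALLOCATE PREPARE stmt;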
-
Sergei Golubchik authored
put defaults* options first (and together). list --defaults-group-suffix too
-
Dmitry Shulga authored
Test cases like the following one produce different result sets when run with and without the option --ps-protocol:

  CREATE TABLE t1(a INT);
  --enable_metadata
  (SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
  --disable_metadata
  DROP TABLE t1;

The result sets differ in the metadata for the query
  (SELECT MAX(a) FROM t1) UNION (SELECT MAX(a) FROM t1);
The reason for the different content of the query metadata is that for queries with UNION the items created during the JOIN preparation phase are placed into the item_list from SELECT_LEX_UNIT, whereas for queries without UNION the item_list from SELECT_LEX is used instead.
-
- 16 Jul, 2021 1 commit
-
-
Nikita Malyavin authored
The columns that are part of a DEFAULT expression were not read-marked in statements like UPDATE...SET b=DEFAULT.
The problem is that an `F(DEFAULT)` expression depends on the left-hand side of the assignment. However, setup_fields() accepts only the right-hand side value, and neither does Item::fix_fields(). Thus, b=DEFAULT(b) works fine, because Item_default_value has the information on which field it is the default of:

  if (thd->mark_used_columns != MARK_COLUMNS_NONE)
    def_field->default_value->expr->update_used_tables();

in Item_default_value::fix_fields().
It is not reasonable to pass a left-hand side to Item::fix_fields(), because the case is rare, so the rewrite b= F(DEFAULT) -> b= F(DEFAULT(b)) is made instead.
Both UPDATE and multi-UPDATE are affected, however any form of INSERT is not: it marks all the fields in DEFAULT expressions for read in TABLE::mark_default_fields_for_write().
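A minimal sketch of the affected statement shape, assuming a DEFAULT expression that references another column (names are illustrative):

  CREATE TABLE t1 (a INT, b INT DEFAULT (a + 1));
  INSERT INTO t1 (a, b) VALUES (1, 10);
  -- F(DEFAULT): the DEFAULT of b depends on column a, so a must be marked for
  -- read; internally this is rewritten to b = COALESCE(DEFAULT(b), 0).
  UPDATE t1 SET b = COALESCE(DEFAULT, 0);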
-
- 15 Jul, 2021 2 commits
-
-
Sergei Petrunia authored
If test_if_skip_sort_order() decides to use an index to produce required ordering, it should disable "Range Checked for each record" optimization. This is because Range-Checked-for-each-record may decide to use an index (or an index_merge) which will not produce the required ordering.
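A hedged sketch of a query shape where the two mechanisms can collide (names are illustrative, and the plan depends on statistics): the per-row range conditions on two different keys of t2 may trigger "Range checked for each record", while the ORDER BY may tempt test_if_skip_sort_order() to read t2 through the index on t2.a:

  CREATE TABLE t1 (a INT, b INT);
  CREATE TABLE t2 (a INT, b INT, KEY(a), KEY(b));
  SELECT * FROM t1, t2
  WHERE t2.a < t1.a OR t2.b < t1.b
  ORDER BY t2.a LIMIT 10;
  -- If the ordering index on t2.a is chosen, the per-record index choice
  -- must be disabled, or the required ordering could be broken.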
-
Oli Sennhauser authored
Thanks to @shinguz for helping with this. This is a backport commit from 10.7.
-
- 14 Jul, 2021 1 commit
-
-
Nayuta Yanagisawa authored
The function spider_db_append_key_where_internal() converts HA_READ_AFTER_KEY to '>'. The conversion seems to be correct for single-column indexes because HA_READ_AFTER_KEY means "read the key after the provided value." However, what about multi-column indexes?
Assume that there is a multi-column index on c1 and c2 and we search with the condition 'c1 >= 100 AND c2 > 200'. The key_range.flag corresponding to the search condition could be HA_READ_AFTER_KEY. In such a case, we cannot simply convert HA_READ_AFTER_KEY to '>'. The correct conversion is to convert HA_READ_AFTER_KEY to '>' only for the last column in key_part_map and to '>=' for the other columns.
A similar discussion also applies to the conversion from key_range.flag to a sign of inequality.
-
- 13 Jul, 2021 1 commit
-
-
Alexey Bychko authored
changed rpm db query to output only version for libsepol and not release/buildnumber
-
- 12 Jul, 2021 6 commits
-
-
Nikita Malyavin authored
The problem is the same as in MDEV-18166: columns in a virtual field expression are not marked for read, while the field itself is. field->register_field_in_read_map() should be called to read-mark all fields. The test is reproducible only in 10.4+, however the fix is applicable to 10.2+.
-
Nikita Malyavin authored
Reformulate the mark_columns_used_by_index* function family in a more laconic way:

mark_columns_used_by_index                   -> mark_index_columns
mark_columns_used_by_index_for_read_no_reset -> mark_index_columns_for_read
mark_columns_used_by_index_no_reset          -> mark_index_columns_no_reset
static mark_index_columns                    -> do_mark_index_columns
-
Nikita Malyavin authored
Several different test cases were failing for the same reason: the fields in a vcol expression were not marked when marking for read the columns of a key containing a virtual column.
Fix: make marking the columns of a key for read a special case where register_field_in_read_map() is done instead of a plain bitmap_set_bit(). Some test cases are only reproducible in 10.4+, but the fix is applicable to 10.2+.
-
Nikita Malyavin authored
This is the 10.2+ part of a JIRA task. The following bugs regarding virtual column marking have been fixed:
1. UPDATE of a partitioned table, where the optimizer has chosen a secondary index to make a filesort;
2. INSERT into a table with a non-blob field generated from a blob, with binlog enabled and binlog_row_image=noblob;
3. DELETE from a view on a table with a virtual column (see the sketch below).
Generally the assertion happens from the update_virtual_fields() call. These bugs are root-caused by missing field marking for the dependent fields of a virtual column. Therefore the fix is: mark all the fields involved in the vcol expression by calling field->register_field_in_read_map() instead of just setting a single bit.
Case 3 was reproducible only on 10.4+, however the problem might have just been invisible in the earlier versions. The fix is applicable to 10.2-10.3 as well.
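A minimal sketch of case 3 above (names are illustrative): the DELETE through the view evaluates the virtual column v, so its base column a must be marked for read:

  CREATE TABLE t1 (a INT, v INT AS (a + 1) VIRTUAL);
  CREATE VIEW v1 AS SELECT * FROM t1;
  INSERT INTO t1 (a) VALUES (1), (2);
  DELETE FROM v1 WHERE v = 3;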
-
Nikita Malyavin authored
The reason for the failure was inconsistent truncation rules: the value of the virtual column could sometimes have been evaluated to '2000' instead of '0000' for the value 'a'. The reason why `c YEAR AS ('aaaa')` was not evaluated the same way is that len=4 is a special case inside Field_year::store.
The correct fix is: always evaluate a bad value to 0000 instead of 2000. The truncated values should be evaluated as usual.
$support_virtual_index is finally changed to 1 in gcol.gcol_ins_upd_innodb, which is also enough for testing. The test from the original bug report is also added.
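A minimal sketch of the behaviour described above, assuming a non-strict sql_mode so that the bad value only produces a warning (names are illustrative):

  SET sql_mode = '';
  CREATE TABLE t1 (a INT, c YEAR AS ('aaaa') VIRTUAL);
  INSERT INTO t1 (a) VALUES (1);
  SELECT a, c FROM t1;  -- after the fix c consistently evaluates to 0000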
-
Robert Bindar authored
mysql_config should never return -lzlib as a linking flag. Clients that use the output of mysql_config to link against MariaDB experience linking errors because we return -lzlib while we don't ship any libzlib.a in our binary tarballs; the bundled zlib static library is already linked in.
-
- 09 Jul, 2021 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
- Set the innodb_encrypt_tables variable before the timeout happens. It will add the pending tablespace to the default encrypt list and do the encryption/decryption based on innodb_encrypt_tables.
-