- 02 Aug, 2018 3 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Marko Mäkelä authored
-
- 01 Aug, 2018 4 commits
-
-
Vladislav Vaintroub authored
Compile on Windows MSVC with -DHAVE_SSE2 and -DHAVE_PCLMUL. It is safe, since the code will also do runtime checks via cpuid() before using the instructions, and will fall back to slower versions if the instructions are not available.
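As a rough illustration only (not code from the MariaDB tree), the sketch below shows the runtime-detection-with-fallback pattern the message describes: the fast path is compiled in unconditionally, but it is only taken after a cpuid() check. The helper name have_pclmul() is invented for this sketch.

    // Illustrative standalone sketch, not the actual checksum/crc32 code:
    // the build may enable the fast code path unconditionally because the
    // decision to use it is still made at runtime from cpuid().
    #include <cstdio>
    #if defined(_MSC_VER)
    # include <intrin.h>
    #elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    # include <cpuid.h>
    #endif

    static bool have_pclmul() {
    #if defined(_MSC_VER)
        int regs[4];
        __cpuid(regs, 1);
        return (regs[2] & (1 << 1)) != 0;        // CPUID.1:ECX bit 1 = PCLMULQDQ
    #elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
        unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
            return false;
        return (ecx & (1u << 1)) != 0;           // CPUID.1:ECX bit 1 = PCLMULQDQ
    #else
        return false;                            // non-x86: always use the fallback
    #endif
    }

    int main() {
        if (have_pclmul())
            std::printf("using the PCLMUL-accelerated path\n");
        else
            std::printf("falling back to the slower generic path\n");
    }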
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The functions fts_ast_visit() and fts_query() inside InnoDB FULLTEXT INDEX query processing are not checking for THD::killed (trx_is_interrupted()), like anything that potentially takes a long time should do. This is a port of the following change from MySQL 5.7.23, with a completely rewritten test case. commit c58c6f8f66ddd0357ecd0c99646aa6bf1dae49c8 Author: Aakanksha Verma <aakanksha.verma@oracle.com> Date: Fri May 4 15:53:13 2018 +0530 Bug #27155294 MAX_EXECUTION_TIME NOT INTERUPTED WITH FULLTEXT SEARCH USING MECAB
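A minimal standalone sketch of the pattern this port adds (not the actual fts_ast_visit()/fts_query() code): a long traversal that polls an interrupt flag on every step and bails out early. The std::atomic flag stands in for THD::killed / trx_is_interrupted().

    #include <atomic>
    #include <cstdio>
    #include <vector>

    static std::atomic<bool> query_killed{false};    // stand-in for THD::killed

    // Visit every node of a (flattened) AST; return false if interrupted.
    static bool visit_ast(const std::vector<int>& nodes) {
        for (size_t i = 0; i < nodes.size(); i++) {
            if (query_killed.load(std::memory_order_relaxed))
                return false;                        // abort the FULLTEXT query early
            // ... process nodes[i] ...
        }
        return true;
    }

    int main() {
        std::vector<int> ast(1000000, 42);
        query_killed = true;                         // simulate KILL QUERY / timeout
        std::printf("completed=%d\n", (int) visit_ast(ast));
        return 0;
    }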
-
Marko Mäkelä authored
-
- 31 Jul, 2018 4 commits
-
-
Alexey Botchkov authored
Incorrect char sentence should be handled properly.
-
Marko Mäkelä authored
-
Elena Stepanova authored
-
Oleksandr Byelkin authored
-
- 30 Jul, 2018 11 commits
-
-
Marko Mäkelä authored
This is a backport of the following fix from MySQL 5.7.23. Some code refactoring has been omitted, and the test case has been adapted to MariaDB. commit 7a689acaa65e9d602575f7aa53fe36a64a07460f Author: Krzysztof Kapuścik <krzysztof.kapuscik@oracle.com> Date: Tue Mar 13 12:34:03 2018 +0100 Bug#27082268 Invalid FTS sync synchronization The fix closes two issues: Bug #27082268 - INNODB: FAILING ASSERTION: SYM_NODE->TABLE != NULL DURING FTS SYNC Bug #27095935 - DEADLOCK BETWEEN FTS_DROP_INDEX AND FTS_OPTIMIZE_SYNC_TABLE Both issues were related to an FTS cache sync being done during operations that performed DDL actions on internal FTS tables (ALTER TABLE, TRUNCATE). In some cases the FTS tables and/or internal cache structures could get removed while still being used to perform FTS synchronization, leading to crashes. In others, the sync operation could not finish because it was waiting for the dict lock, which was held by a thread waiting for the background sync to finish. The changes done include: - Stopping background operations during ALTER TABLE and TRUNCATE. - Removal of unused code in FTS. - Cleanup of FTS sync related code to make it more readable and easier to maintain. RB#18262
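To make the first item of that change list concrete, here is an illustrative standalone sketch (not the server code; all names are invented) of stopping a background worker for the duration of a DDL operation so it cannot touch internal tables that the DDL is about to drop or rebuild:

    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    static std::atomic<bool> ddl_in_progress{false};
    static std::atomic<bool> worker_busy{false};
    static std::atomic<bool> shutdown_worker{false};

    static void fts_optimize_worker() {
        while (!shutdown_worker) {
            if (!ddl_in_progress) {
                worker_busy = true;
                if (!ddl_in_progress) {      // re-check after claiming the work
                    // ... sync/optimize the FTS cache ...
                }
                worker_busy = false;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        }
    }

    static void alter_or_truncate_table() {
        ddl_in_progress = true;              // stop new background work
        while (worker_busy)                  // wait for in-flight work to finish
            std::this_thread::sleep_for(std::chrono::milliseconds(1));
        // ... drop/rebuild the internal FTS tables safely ...
        ddl_in_progress = false;
    }

    int main() {
        std::thread worker(fts_optimize_worker);
        alter_or_truncate_table();
        shutdown_worker = true;
        worker.join();
        std::printf("DDL finished with the background worker paused\n");
    }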
-
Marko Mäkelä authored
We did not merge Percona XtraDB 5.6.40-84.0 yet. The changes in it are mostly cosmetic, except for 2 bug fixes from Oracle MySQL 5.6.40, which could be security bugs. Those fixes were applied by taking the applicable parts of an earlier InnoDB commit to XtraDB: git diff 15ec8c2f{~,} storage/innobase| sed -e s+/innobase/+/xtradb/+|patch -p1
-
Marko Mäkelä authored
-
Karthik Kamath authored
No commit message
-
Marko Mäkelä authored
-
Sachin Agarwal authored
Problem: As part of the fix for bug #24938374, dict_operation_lock was not taken by fts_optimize_thread while syncing the FTS cache. Due to this change, an ALTER query is able to update SYS_TABLES rows concurrently. Now, when fts_optimize_thread goes to open an index table, it does not open the table if the record corresponding to that table is marked with REC_INFO_DELETED_FLAG in SYS_TABLES, and it hits an assert. Fix: If an FTS sync is already in progress, the ALTER query waits for the sync to complete before renaming the table. RB: #19604 Reviewed by: Jimmy.Yang@oracle.com
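As a rough standalone sketch of the fix's idea (illustrative only, not the InnoDB sources), the DDL thread waits for an in-progress FTS cache sync to finish before it renames the table:

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <thread>

    static std::mutex m;
    static std::condition_variable cv;
    static bool sync_in_progress = false;

    static void fts_sync_cache() {
        { std::lock_guard<std::mutex> lk(m); sync_in_progress = true; }
        // ... flush the in-memory FTS cache to the auxiliary index tables ...
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        { std::lock_guard<std::mutex> lk(m); sync_in_progress = false; }
        cv.notify_all();
    }

    static void rename_table() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !sync_in_progress; });  // the fix: wait for the sync
        // ... now it is safe to update SYS_TABLES and rename the table ...
    }

    int main() {
        std::thread syncer(fts_sync_cache);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // let the sync start
        std::thread ddl(rename_table);
        syncer.join();
        ddl.join();
        std::printf("rename performed after the sync finished\n");
    }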
-
Marko Mäkelä authored
-
mleich1 authored
Add the usual basic test for the variable innodb_log_optimize_ddl. Signed-off-by: mleich1 <Matthias.Leich@mariadb.com>
-
Marko Mäkelä authored
-
Marko Mäkelä authored
This is motivated by Oracle MySQL Bug #27542720 SCHEMA MISMATCH - TABLE FLAGS DON'T MATCH, BUT FLAGS ARE NUMBERS but using a different approach. row_import::match_schema(): In case of a mismatch, display the ROW_FORMAT and optionally KEY_BLOCK_SIZE of the .cfg file.
-
Marko Mäkelä authored
-
- 29 Jul, 2018 1 commit
-
-
Oleksandr Byelkin authored
We do not accept: 1. We did not have this problem (fixed earlier and better) d982e717 Bug#27510150: MYSQLDUMP FAILS FOR SPECIFIC --WHERE CLAUSES 2. We do not have such options (a DBUG_ASSERT added just in case) bbc2e37f Bug#27759871: BACKRONYM ISSUE IS STILL IN MYSQL 5.7 3. Serg fixed it in another way in this release: e48d775c Bug#27980823: HEAP OVERFLOW VULNERABILITIES IN MYSQL CLIENT LIBRARY
-
- 27 Jul, 2018 2 commits
-
-
Jan Lindström authored
The test case was not written correctly.
-
Varun Gupta authored
-
- 26 Jul, 2018 9 commits
-
-
Alexander Barkov authored
The problem happened in the derived condition pushdown code: - When Item_func_regex::build_clone() was called, it created a copy of the original Item_func_regex, and this copy got registered in free_list. Class-specific dynamic members (such as "re") were shallow-copied rather than deep-copied in the cloned Item_func_regex. As a result, the Regexp_processor_pcre::m_pcre of the cloned Item_func_regex and of the original Item_func_regex pointed to the same compiled regular expression. - On cleanup_items(), both the original and the cloned copy of Item_func_regex called re.cleanup(), which called pcre_free(m_pcre). So the same compiled regular expression was freed twice, which was noticed by ASAN. The same problem was repeatable for Item_func_regexp_instr. A similar problem happened for Item_func_sp, with its sp_result_field member. Both the original and the cloned copy of Item_func_sp pointed to the same Field instance and both deleted it on cleanup(). A possible solution would be to fix build_clone() to create deep (instead of shallow) copies of the dynamic members of the affected classes (Item_func_regex, Item_func_regexp_instr, Item_func_sp). However, this would be too complex. As agreed with Galina and Igor, this patch disallows using these affected classes in derived condition pushdown by overriding get_clone() to return NULL.
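A tiny standalone sketch of the chosen approach (class and method names here are simplified stand-ins, not the server's Item hierarchy): an item that owns non-trivially-copyable state refuses to be cloned, and the pushdown code treats a null clone as "do not push this condition down".

    #include <cstdio>
    #include <memory>

    struct Item {
        virtual ~Item() = default;
        virtual Item* build_clone() const { return new Item(*this); }
    };

    // Models Item_func_regex: owns a compiled pattern that must not be shared
    // between two owners that would both free it on cleanup().
    struct Item_regex_like : Item {
        std::unique_ptr<int> compiled{new int(42)};              // stand-in for m_pcre
        Item* build_clone() const override { return nullptr; }   // the fix: refuse to clone
    };

    static bool try_pushdown(const Item& cond) {
        std::unique_ptr<Item> clone(cond.build_clone());
        return clone != nullptr;       // no clone => keep the condition where it is
    }

    int main() {
        Item plain;
        Item_regex_like re;
        std::printf("plain item pushed down: %d\n", (int) try_pushdown(plain));
        std::printf("regex item pushed down: %d\n", (int) try_pushdown(re));
    }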
-
Igor Babaev authored
Assertion `used_tables_cache == 0' failed. This bug manifested itself when executing queries over materialized derived tables/views and with conjunctive always-true predicates containing inexpensive single-row subqueries. The bug disappeared after the patch for mdev-15035 had been applied.
-
Marko Mäkelä authored
Introduce the configuration option innodb_log_optimize_ddl for controlling whether native index creation or table-rebuild in InnoDB should keep optimizing the redo log (and writing MLOG_INDEX_LOAD records to ensure that concurrent backup would fail). By default, we have innodb_log_optimize_ddl=ON, that is, the default behaviour that was introduced in MariaDB 10.2.2 (with the merge of InnoDB from MySQL 5.7) will be unchanged. BtrBulk::m_trx: Replaces m_trx_id. We must be able to check for KILL QUERY even if !m_flush_observer (innodb_log_optimize_ddl=OFF). page_cur_insert_rec_write_log(): Declare globally, so that this can be called from PageBulk::insert(). row_merge_insert_index_tuples(): Remove the unused parameter trx_id. row_merge_build_indexes(): Enable or disable redo logging based on the innodb_log_optimize_ddl parameter. PageBulk::init(), PageBulk::insert(), PageBulk::finish(): Write redo log records if needed. For ROW_FORMAT=COMPRESSED, redo log will be written in PageBulk::compress() unless we called m_mtr.set_log_mode(MTR_LOG_NO_REDO).
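For illustration only (invented names, not the BtrBulk/PageBulk code), the sketch below shows the effect of the knob: with the optimization ON the bulk load skips per-record redo logging and only writes a marker that invalidates concurrent backups, while with the optimization OFF every insert is redo-logged as usual.

    #include <cstdio>
    #include <vector>

    struct RedoLog {
        void append(const char* rec) { std::printf("redo: %s\n", rec); }
    };

    static void bulk_build_index(bool log_optimize_ddl, RedoLog& log,
                                 const std::vector<int>& rows) {
        for (int row : rows) {
            (void) row;                        // ... add the row to the page being built ...
            if (!log_optimize_ddl)
                log.append("insert");          // normal redo logging of the page change
        }
        if (log_optimize_ddl) {
            // ... flush the built pages to disk before commit ...
            log.append("MLOG_INDEX_LOAD");     // marker that makes a concurrent backup fail
        }
    }

    int main() {
        RedoLog log;
        std::vector<int> rows{1, 2, 3};
        bulk_build_index(true, log, rows);     // innodb_log_optimize_ddl=ON (default)
        bulk_build_index(false, log, rows);    // innodb_log_optimize_ddl=OFF
    }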
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Native ALTER TABLE is never invoked on temporary tables.
-
Marko Mäkelä authored
-
Oleksandr Byelkin authored
-
- 25 Jul, 2018 6 commits
-
-
Oleksandr Byelkin authored
-
Igor Babaev authored
This patch fixes another problem introduced by the patch for mdev-4817. The latter changed Item_cond::fix_fields() in such a way that it could call the virtual method is_expensive(). On its first call the method saves the result in Item::is_expensive_cache, and all subsequent calls return the result from this cache. So if the item was once determined to be expensive, the method always returns true. For subqueries this is not good, because non-optimized subqueries are always considered expensive. It means that the cache should be invalidated after the call of optimize_constant_subqueries().
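A minimal standalone sketch of the caching issue and the proposed invalidation (simplified names, not the actual Item_subselect code):

    #include <cstdio>

    class Item_subselect_like {
        bool optimized = false;
        signed char is_expensive_cache = -1;      // -1 = not computed yet
    public:
        bool is_expensive() {
            if (is_expensive_cache < 0)
                is_expensive_cache = !optimized;  // an unoptimized subquery counts as expensive
            return is_expensive_cache != 0;
        }
        void optimize_constant_subquery() {
            optimized = true;
            is_expensive_cache = -1;              // the fix: invalidate the cached answer
        }
    };

    int main() {
        Item_subselect_like subq;
        std::printf("before optimization: expensive=%d\n", (int) subq.is_expensive());
        subq.optimize_constant_subquery();
        std::printf("after optimization:  expensive=%d\n", (int) subq.is_expensive());
    }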
-
Varun Gupta authored
In this case we are setting the field Item_func_eq::in_equality_no for the semi-join equalities. This helps us to remove these equalities, as the inner tables are not available during parent select execution while the outer tables are not available during the materialization phase. We only have it set for the equalities on the fields involved with the IN subquery and reset it for the equalities which do not belong to the IN subquery. For example, in the case of nested IN subqueries: SELECT t1.a FROM t1 WHERE t1.a IN (SELECT t2.a FROM t2 where t2.b IN (select t3.b from t3 where t3.c=27)) there are two equalities involving the fields of the IN subquery: 1) t2.b = t3.b: the field Item_func_eq::in_equality_no is set when we merge the grandchild select into the child select 2) t1.a = t2.a: the field Item_func_eq::in_equality_no is set when we merge the child select into the parent select But when we perform case 2) we should ensure that we reset the equalities in the child's WHERE clause.
-
Varun Gupta authored
MDEV-16751: Server crashes in st_join_table::cleanup or TABLE_LIST::is_with_table_recursive_reference with join_cache_level>2 During multiple equality propagation for a query in which we have an IN subquery, the items in the select list of the subquery may not be part of the multiple equality, because there might be another occurrence of the same field in the WHERE clause of the subquery. This breaks the keyuse_is_valid_for_access_in_chosen_plan function, which expects the items in the select list of the subquery to be the same as the ones in the multiple equality (through these multiple equalities we create the keyuse array). The solution is to expect the same field rather than the same Item, because when we have SEMI JOIN MATERIALIZATION SCAN, we use the copy-back technique to copy the materialized table fields back to the original fields of the base tables.
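A small standalone sketch of the idea (illustrative types only, not the server code): two distinct Item wrappers can refer to the same underlying field, so the validity check should compare fields, not Item pointers.

    #include <cstdio>

    struct Field { const char* name; };

    struct Item_field {
        const Field* field;
        explicit Item_field(const Field* f) : field(f) {}
    };

    // Old check: same Item object.  New check: same underlying Field.
    static bool same_item(const Item_field& a, const Item_field& b)  { return &a == &b; }
    static bool same_field(const Item_field& a, const Item_field& b) { return a.field == b.field; }

    int main() {
        Field t2_a{"t2.a"};
        Item_field in_select_list(&t2_a);   // item from the subquery's select list
        Item_field in_equality(&t2_a);      // copy participating in the multiple equality
        std::printf("same item:  %d\n", (int) same_item(in_select_list, in_equality));
        std::printf("same field: %d\n", (int) same_field(in_select_list, in_equality));
    }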
-
Thirunarayanan Balathandayuthapani authored
At most one transaction can be active at a time on a temporary table. There is no need to check previous versions of records for temporary tables.
-
Sachin authored
Actually, if we use the "set password for" command, this changes the checksum of the mysql.user table: in the row for root@localhost the plugin column changes from '' to mysql_native_password. In short, we replace '' with mysql_native_password, which makes the checksum different, and hence the test case check fails. So we use an UPDATE mysql.user command instead.
-