- 24 May, 2018 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
i_s_innodb_buffer_page_fill(), i_s_innodb_buf_page_lru_fill(): Only invoke Field::set_notnull() if the index was found.
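The pattern of the fix is: look the index up first, and only mark the column as non-NULL when the lookup succeeded. Below is a minimal, standalone C++ sketch of that guard; FakeField, fill_index_field and the map-based cache are illustrative stand-ins, not the real server/InnoDB types.
    #include <cstdio>
    #include <map>
    #include <string>

    // Stand-in for a result-set column; the real server class is Field.
    struct FakeField {
      bool is_null = true;
      long value   = 0;
      void set_notnull() { is_null = false; }   // models Field::set_notnull()
      void store(long v) { value = v; }
    };

    // Models filling one INFORMATION_SCHEMA row: only touch the field when the
    // index lookup succeeds; otherwise the column stays SQL NULL.
    static void fill_index_field(const std::map<long, std::string> &index_cache,
                                 long index_id, FakeField *field) {
      auto it = index_cache.find(index_id);
      if (it == index_cache.end())
        return;                // index not found: do NOT call set_notnull()
      field->set_notnull();    // previously this was done unconditionally
      field->store(index_id);
    }

    int main() {
      std::map<long, std::string> cache{{42, "PRIMARY"}};
      FakeField f;
      fill_index_field(cache, 7, &f);                 // missing index: stays NULL
      std::printf("is_null=%d\n", (int)f.is_null);    // prints 1
      return 0;
    }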
-
Monty authored
-
- 23 May, 2018 1 commit
-
-
Monty authored
MDEV-16123 ASAN heap-use-after-free handler::ha_index_or_rnd_end
MDEV-13828 Segmentation fault on RENAME TABLE
Problem was that the destructor called methods on an already closed table. Fixed by removing that code from the destructor.
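A hedged sketch of the class of bug described here: a destructor unconditionally calling scan-end methods after the table has already been closed. HandlerModel and TableRefModel below are invented stand-ins for the real handler/TABLE classes; the point is only that the fixed destructor no longer touches the handler.
    #include <cstdio>

    struct HandlerModel {
      bool open = false;
      void ha_index_or_rnd_end() {      // must only be called while the table is open
        std::printf("ending index/rnd scan\n");
      }
    };

    struct TableRefModel {
      HandlerModel *file;
      explicit TableRefModel(HandlerModel *h) : file(h) {}
      ~TableRefModel() {
        // Buggy version: called file->ha_index_or_rnd_end() unconditionally, even
        // after the table had been closed (heap-use-after-free under ASAN).
        // Fixed version, as in the commit: no handler calls from the destructor;
        // the scan is ended in the explicit close path while 'file' is still valid.
      }
    };

    int main() {
      HandlerModel h;
      h.open = true;
      h.ha_index_or_rnd_end();    // explicit close path does the cleanup
      h.open = false;
      { TableRefModel ref(&h); }  // destructor is now a no-op with regard to 'file'
      return 0;
    }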
-
- 22 May, 2018 5 commits
-
-
Monty authored
Problem was that handle_if_exists_options() didn't correct alter_info->flags when items were removed from the list.
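The underlying pattern: when IF EXISTS processing drops an item from the alter list, the matching bit in alter_info->flags must be cleared too, or later code still believes work is pending. A minimal sketch with invented names (MODEL_ALTER_DROP_COLUMN, Alter_info_model) standing in for the real structures:
    #include <cstdio>
    #include <string>
    #include <vector>

    enum : unsigned { MODEL_ALTER_DROP_COLUMN = 1u << 0 };  // invented bit, illustration only

    struct Alter_info_model {
      unsigned flags = 0;
      std::vector<std::string> drop_list;
    };

    // Models handle_if_exists_options(): removing entries must also correct the flags.
    static void drop_missing_columns(Alter_info_model *info) {
      info->drop_list.clear();                     // "IF EXISTS": nothing to drop after all
      if (info->drop_list.empty())
        info->flags &= ~MODEL_ALTER_DROP_COLUMN;   // the missing step the commit adds
    }

    int main() {
      Alter_info_model info;
      info.flags |= MODEL_ALTER_DROP_COLUMN;
      info.drop_list.push_back("no_such_column");
      drop_missing_columns(&info);
      std::printf("flags=%u\n", info.flags);       // 0: no pending DROP work remains
      return 0;
    }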
-
Monty authored
Problem was that detection of temporary tables was all wrong for RENAME TABLE. (Temporary tables were opened by a top-level call to open_temporary_tables(), which can't detect if a temporary table was renamed to something and then reused.) Fixed by adding proper parsing of the rename list to check against the current name of a table at each rename stage. Also changed do_rename_temporary() to check against the current state of temporary tables, not the state at the start of RENAME TABLE.
-
Monty authored
MDEV-10130 Assertion `share->in_trans == 0' failed in storage/maria/ma_close.c
MDEV-10378 Assertion `trn' failed in virtual int ha_maria::start_stmt
The problem was that maria_handler->trn was not properly reset at commit/rollback, and ha_maria::external_lock() could get confused because of this. There was some old code in ha_maria::implicit_commit() that tried to take care of this, but it was not bulletproof. Fixed by adding a list of all tables that are part of the Maria transaction to TRN. A nice side effect of the fix is that the loops in ha_maria::implicit_commit() became much simpler.
Other things:
- Fixed a bug in mysql_admin_table() where the argument open_for_modify was wrongly reset for the next table in the chain.
- Roll back the admin command also in case of a fatal error.
- Split _ma_set_trn_for_table() into three versions to simplify code and debugging.
- Several new asserts to detect the original problem (that the file was not properly removed from trn before calling ma_close()).
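The core of the fix is bookkeeping: the transaction keeps a list of every participating table handler, so commit/rollback can reset each handler's trn pointer in one place. A simplified model, assuming nothing about the real Aria structures beyond what the message says (TrnModel and MariaHandlerModel are stand-ins):
    #include <cstdio>
    #include <vector>

    struct MariaHandlerModel;

    struct TrnModel {
      std::vector<MariaHandlerModel*> tables;   // models the new per-TRN table list
    };

    struct MariaHandlerModel {
      TrnModel *trn = nullptr;                  // models maria_handler->trn
    };

    static void register_table(TrnModel *trn, MariaHandlerModel *h) {
      h->trn = trn;
      trn->tables.push_back(h);
    }

    // Models commit/rollback: every registered handler gets its trn pointer reset,
    // so external_lock() can never see a stale transaction afterwards.
    static void end_transaction(TrnModel *trn) {
      for (MariaHandlerModel *h : trn->tables)
        h->trn = nullptr;
      trn->tables.clear();
    }

    int main() {
      TrnModel trn;
      MariaHandlerModel t1, t2;
      register_table(&trn, &t1);
      register_table(&trn, &t2);
      end_transaction(&trn);
      std::printf("t1.trn=%p t2.trn=%p\n", (void*)t1.trn, (void*)t2.trn);
      return 0;
    }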
-
sachin authored
order with Galera and encrypt-tmp-files=1
Problem: If trans_cache (IO_CACHE) uses an encrypted tmp file, the server will crash on the next DML.
Case: Take a table t1 and do two inserts into it:
1. A really long insert, so that trans_cache has to use a temp file.
2. Just a small insert.
Analysis: The server actually crashes from inside the Galera library:
/lib64/libc.so.6(abort+0x175)[0x7fb5ba779dc5]
/usr/lib64/galera/libgalera_smm.so(_ZN6galera3FSMINS_9TrxHandle5State...
mysys/stacktrace.c:247(my_print_stacktrace)[0x7fb5a714940e]
sql/signal_handler.cc:160(handle_fatal_signal)[0x7fb5a715c1bd]
sql/wsrep_hton.cc:257(wsrep_rollback)[0x7fb5bcce923a]
sql/wsrep_hton.cc:268(wsrep_rollback)[0x7fb5bcce9368]
sql/handler.cc:1658(ha_rollback_trans(THD*, bool))[0x7fb5bcd4f41a]
sql/handler.cc:1483(ha_commit_trans(THD*, bool))[0x7fb5bcd4f804]
However, the actual issue is not in Galera but in MariaDB, because for the 2nd insert we should never call rollback. We call rollback because log_and_order fails; it fails because write_cache fails; and that fails because after reinit_io_cache(trans_cache), my_b_bytes_in_cache says 0, so we look into the tmp file for data, which is obviously wrong since the temp file was used for the previous insert and no longer exists. wsrep_write_cache_inc() reads the IO_CACHE in a loop, filling it with my_b_fill() until it returns "0 bytes read". Later MYSQL_BIN_LOG::write_cache() does the same. wsrep_write_cache_inc() assumes that reading zero bytes past EOF leaves the old data in the cache.
Solution: There are two issues in my_b_encr_read():
1. We should never set read_end equal to info->buffer; read_end should always point to the end of the buffer.
2. In most cases (apart from an async IO_CACHE), info->pos_in_file should correspond to the position of info->buffer in the temp file; since we are not changing info->buffer here, it should remain unchanged.
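The invariants the fix restores can be illustrated with a tiny buffer model: after a fill, read_end must point past the bytes actually read (never back to the start of the buffer), and the file position must stay consistent with what the buffer holds. This is a simplified model, not the real my_b_encr_read()/IO_CACHE code:
    #include <cstdio>
    #include <cstring>

    struct CacheModel {
      char   buffer[8];
      char  *read_pos;
      char  *read_end;       // must always be buffer + bytes_in_buffer
      long   pos_in_file;    // file offset the buffered data corresponds to
    };

    // Models a fill from the (encrypted) temp file; returns 0 at EOF and leaves
    // the previously buffered data untouched in that case.
    static size_t fill(CacheModel *c, const char *file, size_t file_len) {
      size_t avail = (c->pos_in_file < (long)file_len)
                         ? file_len - (size_t)c->pos_in_file : 0;
      size_t n = avail < sizeof(c->buffer) ? avail : sizeof(c->buffer);
      std::memcpy(c->buffer, file + c->pos_in_file, n);
      c->read_pos = c->buffer;
      c->read_end = c->buffer + n;   // 1st point: never collapse read_end to buffer start
      c->pos_in_file += (long)n;     // 2nd point: advance the offset in step with the buffer
      return n;
    }

    int main() {
      const char file[] = "0123456789";
      CacheModel c{};
      c.pos_in_file = 0;
      while (fill(&c, file, sizeof(file) - 1) > 0)
        std::printf("read up to offset %ld\n", c.pos_in_file);
      return 0;
    }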
-
Jacob Mathew authored
The failures with valgrind occur as a result of Spider sometimes using the wrong transaction for operations in background threads that send requests to the data nodes. The use of the wrong transaction caused the networking to the data nodes to use the wrong thread in some cases. Valgrind eventually detects this when such a thread is destroyed before the wrong transaction, when it is freed, uses the thread to disconnect from the data node. I have fixed the problem by correcting the transaction used in each of these cases.
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Cherry-Picked: Commit afe5a51c on branch 10.2
-
- 19 May, 2018 3 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Fix a typo that broke the main.view test. Follow-up for ef295c31.
-
- 18 May, 2018 2 commits
-
-
Vladislav Vaintroub authored
It should work OK on all Unixes, but on Windows it only worked by accident in the past, with the client not being Unicode safe. It stopped working with the Visual Studio 2017 15.7 update.
-
Jacob Mathew authored
The crash occurs when a thread that is closing its connection attempts to access Spider transaction information after another thread has freed that memory while processing Spider plugin deinit. This occurs because Spider does not adjust the plugin's reference count when it sets a transaction information pointer for the plugin. The fix I implemented changes the way Spider sets the transaction information pointer to use thd_set_ha_data(), so that Spider's plugin reference counter is adjusted as well.
Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged From: Commit ab9d420d on branch 10.2
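thd_set_ha_data() lets a storage engine attach per-connection data, and, as the message says, going through that call also adjusts the plugin's reference count so it cannot be unloaded while a THD still references the data. A standalone model of why that matters; PluginModel, ConnectionModel and the counter are stand-ins for the real plugin_ref machinery:
    #include <cstdio>

    struct PluginModel {
      int ref_count = 0;        // models the plugin lock taken via thd_set_ha_data()
      bool can_deinit() const { return ref_count == 0; }
    };

    struct ConnectionModel {
      void        *ha_data = nullptr;
      PluginModel *owner   = nullptr;
    };

    // Models thd_set_ha_data(): storing data for a plugin also adjusts its ref count,
    // so plugin deinit cannot free state that a connection thread still uses.
    static void set_ha_data(ConnectionModel *thd, PluginModel *plugin, void *data) {
      if (data && !thd->ha_data) plugin->ref_count++;
      if (!data && thd->ha_data) plugin->ref_count--;
      thd->ha_data = data;
      thd->owner   = plugin;
    }

    int main() {
      PluginModel spider;
      ConnectionModel thd;
      int trx_info = 42;                          // dummy per-connection transaction info
      set_ha_data(&thd, &spider, &trx_info);
      std::printf("can deinit while in use: %d\n", (int)spider.can_deinit());  // 0
      set_ha_data(&thd, &spider, nullptr);        // connection close releases the pin
      std::printf("can deinit after release: %d\n", (int)spider.can_deinit()); // 1
      return 0;
    }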
-
- 17 May, 2018 1 commit
-
-
Sergei Golubchik authored
When Item_insert_value needs a dummy field, use zero-length Field_string, not Field_null. The latter isn't compatible with CREATE ... SELECT.
-
- 16 May, 2018 3 commits
-
-
Monty authored
Fixed by extending unique_table() with a flag to disallow usage of the replaced table. Also cleaned up find_dup_table() to not use 'goto next', and added more comments to the code in find_dup_table().
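A sketch of the shape of the fix: the duplicate-table lookup takes an extra flag so the caller can say that even the table being replaced must be treated as a forbidden duplicate. The names (has_forbidden_duplicate, MODEL_CHECK_DUP_FORBID_REPLACE) are illustrative, not the real unique_table() signature:
    #include <cstdio>
    #include <string>
    #include <vector>

    enum : unsigned { MODEL_CHECK_DUP_FORBID_REPLACE = 1u << 0 };   // illustrative flag

    // Models unique_table() after the fix: with the flag set, even the table that is
    // being replaced counts as a forbidden duplicate.
    static bool has_forbidden_duplicate(const std::string &target,
                                        const std::string &replaced,
                                        const std::vector<std::string> &used,
                                        unsigned flags) {
      for (const std::string &t : used) {
        if (t != target)
          continue;
        if (t == replaced && !(flags & MODEL_CHECK_DUP_FORBID_REPLACE))
          continue;               // without the flag the replaced table is tolerated
        return true;
      }
      return false;
    }

    int main() {
      std::vector<std::string> used{"t1", "t2"};
      std::printf("%d\n", (int)has_forbidden_duplicate("t1", "t1", used, 0));
      std::printf("%d\n", (int)has_forbidden_duplicate("t1", "t1", used,
                                                       MODEL_CHECK_DUP_FORBID_REPLACE));
      return 0;
    }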
-
Sergey Vojtovich authored
Analyze the core independently of the max-save-datadir and max-save-core settings. Increment $num_saved_cores only if a core was actually saved. "Move any core files from e.g. mysqltest" independently of the max-save-datadir setting. Note: it may overwrite a core from mysqld, which might not be desired (it worked this way even before).
-
Monty authored
- Added missing test case for MyISAM
-
- 15 May, 2018 3 commits
-
-
Monty authored
Problem was that copy_data_between_tables() didn't do proper cleanup in case of failure:
- the copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on THD being correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
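The fix boils down to running every piece of cleanup on the error path as well as on success: free the copy object, end bulk insert, and restore THD->transaction.on. A compact standalone sketch of that pattern with invented names (ThdModel, CopyCtx, copy_rows):
    #include <cstdio>

    struct ThdModel { bool transaction_on = true; };    // models THD->transaction.on
    struct CopyCtx  { bool bulk_insert_started = false; };

    static bool copy_rows(bool inject_failure) { return !inject_failure; }

    // Models copy_data_between_tables(): every cleanup step runs on both the
    // success and the failure path.
    static bool copy_data(ThdModel *thd, bool inject_failure) {
      bool saved_transaction_on = thd->transaction_on;
      thd->transaction_on = false;       // models mysql_trans_prepare_alter_copy_data()
      CopyCtx copy;
      copy.bulk_insert_started = true;   // models start_bulk_insert()
      bool ok = copy_rows(inject_failure);
      // Cleanup that the buggy version skipped when copy_rows() failed:
      if (copy.bulk_insert_started)
        copy.bulk_insert_started = false;            // models end_bulk_insert()
      thd->transaction_on = saved_transaction_on;    // restore THD->transaction.on
      return ok;
    }

    int main() {
      ThdModel thd;
      copy_data(&thd, /*inject_failure=*/true);
      std::printf("transaction.on restored: %d\n", (int)thd.transaction_on);  // 1
      return 0;
    }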
-
Monty authored
MDEV-654 Assertion `share->now_transactional' failed in flush_log_for_bitmap on concurrent workload with Aria tables
Problem was that the bitmap needs to be flushed before disabling logging of redo entries, as writing the bitmap to disk by the background checkpoint may cause redo entries.
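The ordering constraint fits in a few lines: flush the bitmap while redo logging is still enabled, and only then turn logging off, so the background checkpoint can no longer generate redo entries afterwards. Stand-in names below; this is not the real Aria code:
    #include <cstdio>

    struct ShareModel {
      bool now_transactional = true;   // models share->now_transactional
      bool bitmap_dirty      = true;
    };

    static void flush_bitmap(ShareModel *s) {
      // Writing the bitmap may generate redo entries, so logging must still be on here.
      std::printf("flushing bitmap (logging=%d)\n", (int)s->now_transactional);
      s->bitmap_dirty = false;
    }

    static void disable_redo_logging(ShareModel *s) { s->now_transactional = false; }

    int main() {
      ShareModel share;
      flush_bitmap(&share);          // the fix: flush first ...
      disable_redo_logging(&share);  // ... then disable logging of redo entries
      return 0;
    }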
-
Oleksandr Byelkin authored
Make each lex point to the statement lex instead of using a global pointer in THD (no need to store and restore the global pointer and put it on the SP stack).
-
- 11 May, 2018 5 commits
-
-
Marko Mäkelä authored
-
Sachin Agarwal authored
Problem: When an FTS index is added to a table which doesn't have an 'FTS_DOC_ID' column, InnoDB rebuilds the table to add the 'FTS_DOC_ID' column. When this FTS index is dropped from the table, InnoDB does not rebuild the table to remove the 'FTS_DOC_ID' column; it deletes the FTS index auxiliary tables but not the FTS common auxiliary tables. Later, when the database containing this table is renamed, the FTS auxiliary tables are not renamed because the DICT_TF2_FTS flag in the table's flags2 (dict_table_t.flags2) was reset during the FTS index drop operation. When we then drop the old database, it leads to an assert.
Fix: During renaming of FTS auxiliary tables, OR in a condition that checks whether the table has the DICT_TF2_FTS_HAS_DOC_ID flag set.
RB: 18769
Reviewed by: Jimmy.Yang@oracle.com
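The fix itself is essentially one extra test in a condition: rename the FTS auxiliary tables not only when the FTS flag is still set in flags2, but also when the table carries the hidden FTS_DOC_ID column. A sketch with illustrative bit values (the real constants are InnoDB's DICT_TF2_FTS and DICT_TF2_FTS_HAS_DOC_ID):
    #include <cstdio>

    // Illustrative bit values; the real definitions are InnoDB's DICT_TF2_* flags.
    enum : unsigned {
      MODEL_TF2_FTS            = 1u << 0,
      MODEL_TF2_FTS_HAS_DOC_ID = 1u << 1
    };

    // Models the rename path: should the FTS auxiliary tables be renamed too?
    static bool must_rename_fts_aux(unsigned flags2) {
      // Before the fix only the first test existed; dropping the FTS index clears
      // the FTS bit, so common auxiliary tables were left behind on rename.
      return (flags2 & MODEL_TF2_FTS) || (flags2 & MODEL_TF2_FTS_HAS_DOC_ID);
    }

    int main() {
      unsigned flags2_after_index_drop = MODEL_TF2_FTS_HAS_DOC_ID;  // FTS bit already cleared
      std::printf("rename aux tables: %d\n", (int)must_rename_fts_aux(flags2_after_index_drop));
      return 0;
    }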
-
Thirunarayanan Balathandayuthapani authored
Problem: Multiple insert statements into a table that contains a FULLTEXT KEY and an FTS_DOC_ID column abort the server if the FTS_DOC_ID exceeds FTS_DOC_ID_MAX_STEP.
Solution: Remove the exception for the first committed insert statement.
Reviewed-by: Jimmy Yang <jimmy.yang@oracle.com>
RB: 18023
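The enforced limit is a simple distance check between a user-supplied FTS_DOC_ID and the last synced one; the commit removes the special case that exempted the first committed insert. An illustrative model, assuming the value 65536 for FTS_DOC_ID_MAX_STEP (everything else is stand-in code):
    #include <cstdio>
    #include <cstdint>

    // Assumption: FTS_DOC_ID_MAX_STEP is 65536 in InnoDB; used here for illustration only.
    static const uint64_t MODEL_FTS_DOC_ID_MAX_STEP = 65536;

    // Models the check on a user-supplied FTS_DOC_ID: reject values that jump too far
    // ahead of the last synced doc id, with no exception for the first insert.
    static bool doc_id_acceptable(uint64_t doc_id, uint64_t synced_doc_id) {
      return doc_id <= synced_doc_id + MODEL_FTS_DOC_ID_MAX_STEP;
    }

    int main() {
      std::printf("%d\n", (int)doc_id_acceptable(70000, 0));  // 0: report error, don't abort
      std::printf("%d\n", (int)doc_id_acceptable(100, 0));    // 1: within the allowed step
      return 0;
    }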
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When Oracle fixed MDEV-13899 in their own way, they moved the condition to the only caller of PageConverter::update_records(). Thus, the merge of 5.6.40 into MariaDB added a redundant condition. PageConverter::update_records(): Move the page_is_leaf() condition to the only caller, PageConverter::update_index_page().
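The refactoring is just hoisting a guard out of the callee into its only caller, so the page_is_leaf() test is not duplicated after the 5.6.40 merge. A schematic sketch with stand-in names (PageModel, update_records_model, update_index_page_model):
    #include <cstdio>

    struct PageModel { bool leaf; };

    static bool page_is_leaf_model(const PageModel &p) { return p.leaf; }

    // After the change, update_records() assumes it is only ever called for leaf pages.
    static void update_records_model(const PageModel &p) {
      std::printf("updating records on leaf page (leaf=%d)\n", (int)p.leaf);
    }

    // The only caller now owns the condition, so the merged duplicate test was dropped.
    static void update_index_page_model(const PageModel &p) {
      if (page_is_leaf_model(p))
        update_records_model(p);
    }

    int main() {
      update_index_page_model(PageModel{true});
      update_index_page_model(PageModel{false});
      return 0;
    }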
-
- 10 May, 2018 3 commits
-
-
Alexey Botchkov authored
QUERY_DML_NO_SELECT flag added.
-
Alexey Botchkov authored
QUERY_DML_NO_SELECT flag added.
-
Alexey Botchkov authored
QUERY_DML_NO_SELECT flag added.
-
- 09 May, 2018 5 commits
-
-
Daniel Bartholomew authored
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The problem is hard to repeat, and I failed to create a deterministic test case. Online index creation creates stubs for to-be-created indexes. If index creation fails, we could remove these stubs while locks exist in the indexes. (This would require that the index creation was completed, and a concurrent DML operation acquired a lock on a record in the uncommitted index. If a duplicate key error occurs in an uncommitted index, the error will be reported for the CREATE UNIQUE INDEX, not for the DML operation that tried to insert the duplicate.)
dict_table_try_drop_aborted(), row_merge_drop_indexes(): If transactional locks exist on the table, keep table->indexes intact.
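The essence of the fix: before tearing down uncommitted index stubs, check whether transactional locks still reference the table, and if so leave the index list alone. A simplified model; TableModel and drop_aborted_indexes() are invented, and the real logic lives in dict_table_try_drop_aborted() and row_merge_drop_indexes():
    #include <cstdio>
    #include <vector>
    #include <string>

    struct TableModel {
      std::vector<std::string> indexes;   // models table->indexes, including aborted stubs
      int n_record_locks = 0;             // models "transactional locks exist on the table"
    };

    // Models the fixed cleanup: only remove aborted index stubs when no locks remain.
    static void drop_aborted_indexes(TableModel *t) {
      if (t->n_record_locks > 0)
        return;                           // keep table->indexes intact for the lock owners
      t->indexes.erase(t->indexes.begin() + 1, t->indexes.end());  // drop the stubs
    }

    int main() {
      TableModel t{{"PRIMARY", "uncommitted_unique_stub"}, 1};
      drop_aborted_indexes(&t);
      std::printf("indexes kept: %zu\n", t.indexes.size());   // 2: locks still exist
      t.n_record_locks = 0;
      drop_aborted_indexes(&t);
      std::printf("indexes after: %zu\n", t.indexes.size());  // 1
      return 0;
    }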
-
Jan Lindström authored
Remove the setup_ports function call. This is related to https://github.com/MariaDB/server/pull/717 Thanks to Daniel Black and Bart S.
-
- 08 May, 2018 5 commits
-
-
Vladislav Vaintroub authored
The reason is the missing HAVE_OPENSSL define for mariabackup.
-
Vicențiu Ciorbaru authored
The following variables are used in this project, but they are set to NOTFOUND: LZ4_LIBS. The reason for the failure is that pkg_check_modules does not guarantee that the <prefix>_LIBRARY_DIRS variable is set, according to the documentation. When it is not set, we would force find_library to look in an empty path and thus fail to correctly find LZ4_LIBS, although pkg_check_modules did previously discover that the library is installed. To fix the problem and still keep the logic of first following LIBLZ4_LIBRARY_DIRS and *then* looking at other paths, we call find_library twice. This is the recommended approach, according to the CMake 3.11 documentation.
-
Sergei Golubchik authored
-
Sergey Vojtovich authored
-fno-tree-loop-vectorize is only supported by gcc versions >5.
-
Sergei Golubchik authored
MDEV-15216 Assertion `! is_set() || m_can_overwrite_status' failed in Diagnostics_area::set_error_status upon operation inside XA
Don't implicitly commit or rollback in mysql_admin_table() unless the statement has the CF_IMPLICIT_COMMIT_END flag.
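The fix gates the implicit commit/rollback in mysql_admin_table() on the statement's command flags. A minimal model of that gate; the bit value and the surrounding code are stand-ins, only the flag name CF_IMPLICIT_COMMIT_END comes from the commit message:
    #include <cstdio>

    enum : unsigned { MODEL_CF_IMPLICIT_COMMIT_END = 1u << 0 };  // illustrative bit value

    // Models the end of mysql_admin_table(): commit/rollback only when the statement
    // is allowed to end the transaction, so admin work inside XA no longer breaks it.
    static void finish_admin_statement(unsigned command_flags, bool error) {
      if (!(command_flags & MODEL_CF_IMPLICIT_COMMIT_END))
        return;                                     // e.g. inside XA: leave the trx alone
      std::printf("%s\n", error ? "rollback" : "commit");
    }

    int main() {
      finish_admin_statement(0, false);                             // inside XA: no output
      finish_admin_statement(MODEL_CF_IMPLICIT_COMMIT_END, false);  // normal case: commit
      return 0;
    }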
-