- 29 May, 2018 5 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Also fixes MDEV-14727, MDEV-14491
InnoDB: Error: Waited for 5 secs for hash index ref_count (1) to drop to 0
by replacing the flawed wait logic in dict_index_remove_from_cache_low().

On DISCARD TABLESPACE, there is no need to drop the adaptive hash index. We must drop it on IMPORT TABLESPACE, and eventually on DROP TABLE or DROP INDEX. As long as the dict_index_t object remains in the cache and the table remains inaccessible, adaptive hash index entries pointing to orphaned pages would not do any harm. They would be dropped when buffer pool pages are reused for something else.

btr_search_drop_page_hash_when_freed(), buf_LRU_drop_page_hash_batch(): Remove the parameter zip_size, and pass 0 to buf_page_get_gen().

buf_page_get_gen(): Ignore zip_size if mode==BUF_PEEK_IF_IN_POOL.

buf_LRU_drop_page_hash_for_tablespace(): Drop the adaptive hash index even if the tablespace is inaccessible.

buf_LRU_drop_page_hash_for_tablespace(): New global function, to drop the adaptive hash index.

buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Remove the parameter drop_ahi.

dict_index_remove_from_cache_low(): Actively drop the adaptive hash index if entries exist. This should prevent InnoDB hangs on DROP TABLE or DROP INDEX.

row_import_for_mysql(): Drop any adaptive hash index entries for the table.

row_drop_table_for_mysql(): Drop any adaptive hash index for the table, except if the table resides in the system tablespace. (DISCARD TABLESPACE does not apply to the system tablespace, and we do not want to drop the adaptive hash index for other tables than the one that is being dropped.)

row_truncate_table_for_mysql(): Drop any adaptive hash index entries for the table, except if the table resides in the system tablespace.
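For context, a hedged sketch of the DDL sequence this affects (the table name is hypothetical): the adaptive hash index is kept across DISCARD and only needs to be dropped on IMPORT, DROP TABLE, or DROP INDEX.

  -- assumes an InnoDB table t1 in a file-per-table tablespace
  ALTER TABLE t1 DISCARD TABLESPACE;  -- stale AHI entries may remain; they are harmless
  -- copy a matching t1.ibd file into the database directory, then:
  ALTER TABLE t1 IMPORT TABLESPACE;   -- stale AHI entries must be dropped here
  DROP TABLE t1;                      -- AHI for the table is dropped without the flawed wait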
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When the transaction isolation level is SERIALIZABLE, or when a locking read is performed in the REPEATABLE READ isolation level, InnoDB must lock delete-marked records in order to prevent another transaction from inserting something. However, at READ UNCOMMITTED or READ COMMITTED isolation level, or when the parameter innodb_locks_unsafe_for_binlog is set, the repeatability of the reads does not matter, and there is no need to lock any records.

row_search_for_mysql(): Skip locks on delete-marked committed records upfront, instead of invoking row_unlock_for_mysql() afterwards. The unlocking never worked for secondary index records.
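A hedged illustration of the distinction (table and column names are hypothetical): a locking read at REPEATABLE READ must lock even delete-marked records, while the same read at READ COMMITTED does not need to.

  SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
  START TRANSACTION;
  SELECT * FROM t1 WHERE k = 42 FOR UPDATE;  -- delete-marked records are locked
  COMMIT;

  SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
  START TRANSACTION;
  SELECT * FROM t1 WHERE k = 42 FOR UPDATE;  -- no need to lock delete-marked records
  COMMIT;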
-
- 28 May, 2018 2 commits
-
-
Marko Mäkelä authored
MDEV-13834 10.2 wrongly recognizes 10.1.10 innodb_encrypt_log=ON data as after-crash and refuses to start

infos[]: Allocate enough entries to accommodate all keys from both checkpoint pages.

infos_used: The size of infos[].

get_crypt_info(): Merge to the only caller, log_crypt_101_read_block().

log_crypt_101_read_block(): Do not validate the log block checksum, because it will not be valid when upgrading from MariaDB 10.1.10. Instead, check that the encryption key exists.

log_crypt_101_read_checkpoint(): Append to infos[] instead of overwriting.
-
Sergei Petrunia authored
Use compatible xargs command-line arguments.
-
- 26 May, 2018 6 commits
-
-
Monty authored
Problem was that max_row_length() used a different bitmap than pack_row().
-
Monty authored
-
Monty authored
Crash happened because during discovery, table->work_part_info was not properly reset before execution. Fixed by resetting it before executing ALTER TABLE or CREATE TABLE and before calling mysql_create_frm_image().
-
Monty authored
-
Monty authored
When merging this with 10.2 and later, one can just use the 10.2 or later code
-
Monty authored
Problem was that blob memory allocated in Table_triggers_list was not properly freed.
-
- 25 May, 2018 1 commit
-
-
Sergei Golubchik authored
Close MYSQL (and destroy THD) in the same thread where it was used, because THD embeds an MDL_context, which owns some LF_PINS, which remember a pointer to my_thread_var->stack_ends_here.
-
- 24 May, 2018 11 commits
-
-
Vladislav Vaintroub authored
-
Monty authored
Don't check global_memory_used if debug_assert_on_not_freed_memory is not set
-
Monty authored
-
Monty authored
ha_archive::max_row_length() and ha_archive::pack_row() didn't use the record parameter properly. In particular, testing for NULL was wrong for record[1].
-
Monty authored
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL, all bits in read_set were set, which caused a different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all cases mark vcol_set with the vcol_field.
- TABLE::update_virtual_fields() has to update all vcol fields on REPLACE if binary logging with FULL is used.
- VCOL_UPDATE_INDEXED should update all vcol fields that are part of an index and were not updated by VCOL_UPDATE_FOR_READ.
- max_row_length() calculated the length of NULL and unused fields. This didn't cause any crash, but used more memory than needed.
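A hedged reproduction sketch of the affected scenario (table and column names are hypothetical): a REPLACE on a table with an indexed virtual column while row-based binary logging uses the full row image.

  -- assumes the server runs with --log-bin and binlog_format=ROW
  SET SESSION binlog_row_image = FULL;
  CREATE TABLE t1 (
    a INT PRIMARY KEY,
    b INT,
    v INT AS (b + 1) VIRTUAL,
    KEY (v)
  ) ENGINE=InnoDB;
  INSERT INTO t1 (a, b) VALUES (1, 10);
  REPLACE INTO t1 (a, b) VALUES (1, 20);  -- all virtual column fields must be updated here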
-
Marko Mäkelä authored
thd_destructor_proxy(): Ensure that purge actually exits, as the logic should have been ever since MDEV-14080.

srv_purge_shutdown(): A new function to wait for the purge coordinator to exit. Before exiting, the purge coordinator will ensure that all purge workers have exited.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
i_s_innodb_buffer_page_fill(), i_s_innodb_buf_page_lru_fill(): Only invoke Field::set_notnull() if the index was found.
-
Marko Mäkelä authored
This is the MariaDB 10.2 version of the patch.

field_store_string(): Simplify the code.

field_store_index_name(): Remove, and use field_store_string() instead. Starting with MariaDB 10.2.2, there is the predicate dict_index_t::is_committed(), and dict_index_t::name never contains the magic byte 0xff. Correct some comments to refer to TEMP_INDEX_PREFIX_STR.

i_s_cmp_per_index_fill_low(): Use the appropriate value NULL to identify that an index was not found. Check that storing each column value succeeded.

i_s_innodb_buffer_page_fill(), i_s_innodb_buf_page_lru_fill(): Only invoke Field::set_notnull() if the index was found. (This fixes the bug.)

i_s_dict_fill_sys_indexes(): Adjust the index->name that was directly loaded from SYS_INDEXES.NAME (which can start with the 0xff byte). This was the only function that depended on the translation in field_store_index_name().
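For reference, a hedged example of the INFORMATION_SCHEMA queries served by these fill functions; with the fix, INDEX_NAME comes back as SQL NULL rather than a spuriously non-NULL value when the index is not found in the dictionary cache.

  SELECT PAGE_TYPE, TABLE_NAME, INDEX_NAME
    FROM information_schema.INNODB_BUFFER_PAGE
   WHERE INDEX_NAME IS NULL
   LIMIT 5;

  SELECT PAGE_TYPE, TABLE_NAME, INDEX_NAME
    FROM information_schema.INNODB_BUFFER_PAGE_LRU
   LIMIT 5;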
-
Monty authored
-
- 23 May, 2018 2 commits
-
-
Sergei Petrunia authored
Fix an obvious typo: replace_column should be applied to SHOW TABLE STATUS, not to SELECT * FROM t1.
-
Monty authored
MDEV-16123 ASAN heap-use-after-free handler::ha_index_or_rnd_end
MDEV-13828 Segmentation fault on RENAME TABLE

Problem was that the destructor called methods for a closed table. Fixed by removing that code from the destructor.
-
- 22 May, 2018 9 commits
-
-
Monty authored
-
Monty authored
Crash happened when deleting all columns that were part of a CHECK constraint. The bug was that the read map of the "from" table was used when checking the CHECK constraint, and it was not properly reset in copy_data_between_tables().
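A hedged sketch of the kind of statement involved (table and column names are hypothetical, and the exact server behaviour may vary by version): dropping every column that a CHECK constraint refers to forces a copying ALTER, during which copy_data_between_tables() re-evaluates the constraint.

  CREATE TABLE t1 (a INT CHECK (a > 0), b INT) ENGINE=InnoDB;
  INSERT INTO t1 VALUES (1, 1);
  ALTER TABLE t1 DROP COLUMN a;  -- removes the only column the CHECK constraint refers to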
-
Monty authored
Problem was that handle_if_exists_options() didn't correct alter_info->flags when things were removed from the list.
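A hedged example of the IF EXISTS options that handle_if_exists_options() processes (names are hypothetical); when a clause refers to something that does not exist, it is removed from the list, and the corresponding alter_info->flags bit should be cleared as well.

  CREATE TABLE t1 (a INT) ENGINE=InnoDB;
  ALTER TABLE t1 DROP COLUMN IF EXISTS no_such_column;  -- only a warning; nothing left to do
  ALTER TABLE t1 DROP INDEX IF EXISTS no_such_index;    -- ditto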
-
Monty authored
Problem was that detection of temporary tables was all wrong for RENAME TABLE. (Temporary tables were opened by the top-level call to open_temporary_tables(), which can't detect if a temporary table was renamed to something and then reused.) Fixed by adding proper parsing of the rename list to check against the current name of a table at each rename stage. Also changed do_rename_temporary() to check against the current state of temporary tables, not against the state at the start of RENAME TABLE.
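A hedged illustration of a chained rename that needs per-stage name resolution (table names are hypothetical): after the first step, the temporary table is already known under its new name.

  CREATE TEMPORARY TABLE t1 (a INT);
  RENAME TABLE t1 TO t2, t2 TO t3;  -- the second step must see t2 as the table's current name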
-
Monty authored
MDEV-10130 Assertion `share->in_trans == 0' failed in storage/maria/ma_close.c
MDEV-10378 Assertion `trn' failed in virtual int ha_maria::start_stmt

The problem was that maria_handler->trn was not properly reset at commit/rollback, and ha_maria::external_lock() could get confused because of that. There was some old code in ha_maria::implicit_commit() that tried to take care of this, but it was not bullet proof.

Fixed by adding a list of all tables that are part of the maria transaction to TRN. A nice side effect of the fix is that the loops in ha_maria::implicit_commit() got to be much simpler.

Other things:
- Fixed a bug in mysql_admin_table() where the argument open_for_modify was wrongly reset for the next table in the chain (see the sketch below).
- Roll back the admin command also in case of fatal error.
- Split _ma_set_trn_for_table() into three versions to simplify code and debugging.
- Several new asserts to detect the original problem (that the file was not properly removed from trn before calling ma_close()).
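A hedged sketch of a multi-table admin command of the kind mysql_admin_table() processes (table names are hypothetical); open_for_modify must be evaluated for each table in the chain, not carried over from the previous one.

  CREATE TABLE t1 (a INT) ENGINE=Aria;
  CREATE TABLE t2 (a INT) ENGINE=Aria;
  REPAIR TABLE t1, t2;    -- each table gets its own open_for_modify decision
  OPTIMIZE TABLE t1, t2;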
-
Sergei Petrunia authored
Step#1: RocksDB files require a special #define when they are compiled with valgrind. Without that, valgrind fails with an 'unimplemented syscall' error for an fcntl call.
-
sachin authored
order with Galera and encrypt-tmp-files=1

Problem: If trans_cache (IO_CACHE) uses an encrypted tmp file, then the next DML will crash the server.

Case: We have a table t1 and we do two inserts into it:
1. A really long insert, so that trans_cache has to use a temp file.
2. Just a small insert.

Analysis: The server actually crashes from inside the galera library:
/lib64/libc.so.6(abort+0x175)[0x7fb5ba779dc5]
/usr/lib64/galera/libgalera_smm.so(_ZN6galera3FSMINS_9TrxHandle5State...
mysys/stacktrace.c:247(my_print_stacktrace)[0x7fb5a714940e]
sql/signal_handler.cc:160(handle_fatal_signal)[0x7fb5a715c1bd]
sql/wsrep_hton.cc:257(wsrep_rollback)[0x7fb5bcce923a]
sql/wsrep_hton.cc:268(wsrep_rollback)[0x7fb5bcce9368]
sql/handler.cc:1658(ha_rollback_trans(THD*, bool))[0x7fb5bcd4f41a]
sql/handler.cc:1483(ha_commit_trans(THD*, bool))[0x7fb5bcd4f804]

But the actual issue is not in galera but in MariaDB, because for the 2nd insert we should never call rollback. We call rollback because log_and_order fails; it fails because write_cache fails; that fails because after reinit_io_cache(trans_cache), my_b_bytes_in_cache says 0, so we look into the tmp file for data, which is obviously wrong since the temp file was used for the previous insert and its data no longer exists.

wsrep_write_cache_inc() reads the IO_CACHE in a loop, filling it with my_b_fill() until it returns "0 bytes read". Later MYSQL_BIN_LOG::write_cache() does the same. wsrep_write_cache_inc() assumes that reading zero bytes past EOF leaves the old data in the cache.

Solution: There are two issues in my_b_encr_read():
1. We should never set read_end equal to info->buffer; read_end should always point to the end of the buffer.
2. For most cases (apart from async IO_CACHE), info->pos_in_file should be equal to the position of info->buffer in the temp file; since in this case we are not changing info->buffer, it should remain unchanged.
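A hedged reproduction sketch of the two-insert case described above, assuming a Galera node started with --encrypt-tmp-files=1 and a transaction cache small enough that the first insert spills to an encrypted temp file (table name and sizes are hypothetical).

  CREATE TABLE t1 (a LONGTEXT) ENGINE=InnoDB;

  -- 1. A really long insert, so that trans_cache spills to the encrypted temp file.
  INSERT INTO t1 VALUES (REPEAT('x', 10 * 1024 * 1024));

  -- 2. Just a small insert; before the fix this could crash the server on commit.
  INSERT INTO t1 VALUES ('y');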
-
Jacob Mathew authored
The failures with valgrind occur as a result of Spider sometimes using the wrong transaction for operations in background threads that send requests to the data nodes. The use of the wrong transaction caused the networking to the data nodes to use the wrong thread in some cases. Valgrind eventually detects this when such a thread is destroyed before it is used to disconnect from the data node by that wrong transaction when it is freed.

I have fixed the problem by correcting the transaction used in each of these cases.

Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Cherry-Picked: Commit afe5a51c on branch 10.2
-
Jacob Mathew authored
The failures with valgrind occur as a result of Spider sometimes using the wrong transaction for operations in background threads that send requests to the data nodes. The use of the wrong transaction caused the networking to the data nodes to use the wrong thread in some cases. Valgrind eventually detects this when such a thread is destroyed before it is used to disconnect from the data node by that wrong transaction when it is freed.

I have fixed the problem by correcting the transaction used in each of these cases.

Author: Jacob Mathew.
Reviewer: Kentoku Shiba.
Merged: Commit 4d576d9d on branch bb-10.3-MDEV-12900
-
- 21 May, 2018 1 commit
-
-
Sergei Petrunia authored
-
- 20 May, 2018 2 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
- 19 May, 2018 1 commit
-
-
Sergei Golubchik authored
-