- 30 May, 2018 1 commit
-
-
Marko Mäkelä authored
Fix type mismatches in the unit test mdev10259().

btr_search_info_get_ref_count(): Do not return early if !table->space. We can simply access table->space_id even after the tablespace has been discarded.

btr_get_search_latch(): Relax a debug assertion to allow !index->table->space.
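
A minimal sketch of the pattern, with simplified stand-in types rather than the real InnoDB definitions:

    #include <cstdint>

    struct fil_space_t {};            // tablespace object (stand-in)

    struct dict_table_t {
      fil_space_t* space;             // null after DISCARD TABLESPACE
      uint32_t     space_id;          // stays readable even when space == nullptr
    };

    // Before the fix, the lookup returned early when table->space was null;
    // the fix reads table->space_id directly instead.
    uint32_t search_latch_space_id(const dict_table_t* table) {
      return table->space_id;
    }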
-
- 29 May, 2018 11 commits
-
-
Otto Kekäläinen authored
Include all the Makefiles that define variables that can be useful within debian/rules. This includes buildflags.mk as well. Use the standard variable names and don't define our own.
-
James Clarke authored
For details see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=852728
-
Adrian Bunk authored
For details see https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865737
-
Otto Kekäläinen authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Also fixes MDEV-14727, MDEV-14491 InnoDB: Error: Waited for 5 secs for hash index ref_count (1) to drop to 0, by replacing the flawed wait logic in dict_index_remove_from_cache_low().

On DISCARD TABLESPACE, there is no need to drop the adaptive hash index. We must drop it on IMPORT TABLESPACE, and eventually on DROP TABLE or DROP INDEX. As long as the dict_index_t object remains in the cache and the table remains inaccessible, the adaptive hash index entries to orphaned pages would not do any harm. They would be dropped when buffer pool pages are reused for something else.

btr_search_drop_page_hash_when_freed(), buf_LRU_drop_page_hash_batch(): Remove the parameter zip_size, and pass 0 to buf_page_get_gen().

buf_page_get_gen(): Ignore zip_size if mode==BUF_PEEK_IF_IN_POOL.

buf_LRU_drop_page_hash_for_tablespace(): Drop the adaptive hash index even if the tablespace is inaccessible.

buf_LRU_drop_page_hash_for_tablespace(): New global function, to drop the adaptive hash index.

buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Remove the parameter drop_ahi.

dict_index_remove_from_cache_low(): Actively drop the adaptive hash index if entries exist. This should prevent InnoDB hangs on DROP TABLE or DROP INDEX.

row_import_for_mysql(): Drop any adaptive hash index entries for the table.

row_drop_table_for_mysql(): Drop any adaptive hash index for the table, except if the table resides in the system tablespace. (DISCARD TABLESPACE does not apply to the system tablespace, and we do not want to drop the adaptive hash index for other tables than the one that is being dropped.)

row_truncate_table_for_mysql(): Drop any adaptive hash index entries for the table, except if the table resides in the system tablespace.
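
A hedged sketch of the replaced wait logic, using hypothetical simplified names rather than the actual dict0dict.cc/btr0sea.cc code:

    #include <atomic>

    struct dict_index_t {
      std::atomic<unsigned> search_ref_count{0};   // AHI references to this index
    };

    // Stub: in InnoDB this walks the adaptive hash index and removes every
    // entry that points into the given index.
    void drop_adaptive_hash_entries(dict_index_t* index) {
      index->search_ref_count.store(0);
    }

    // New behaviour: actively drop the entries instead of sleeping in a loop
    // and printing "Waited for 5 secs for hash index ref_count (1) to drop to 0".
    void index_remove_from_cache(dict_index_t* index) {
      if (index->search_ref_count.load() != 0)
        drop_adaptive_hash_entries(index);
      // ... the dict_index_t object can now be freed safely ...
    }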
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When the transaction isolation level is SERIALIZABLE, or when a locking read is performed in the REPEATABLE READ isolation level, InnoDB must lock delete-marked records in order to prevent another transaction from inserting something. However, at the READ UNCOMMITTED or READ COMMITTED isolation level, or when the parameter innodb_locks_unsafe_for_binlog is set, the repeatability of the reads does not matter, and there is no need to lock any records.

row_search_for_mysql(): Skip locks on delete-marked committed records up front, instead of invoking row_unlock_for_mysql() afterwards. The unlocking never worked for secondary index records.
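
A minimal sketch of the up-front decision, with a hypothetical enum in place of the server's actual isolation-level representation:

    enum class iso_level { read_uncommitted, read_committed,
                           repeatable_read, serializable };

    // Repeatability of reads does not matter below REPEATABLE READ, so no
    // record locks are needed on delete-marked committed records.
    bool skip_locks_on_delete_marked(iso_level level,
                                     bool locks_unsafe_for_binlog) {
      return level <= iso_level::read_committed || locks_unsafe_for_binlog;
    }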
-
- 28 May, 2018 9 commits
-
-
Marko Mäkelä authored
MDEV-13834 10.2 wrongly recognizes 10.1.10 innodb_encrypt_log=ON data as after-crash and refuses to start

infos[]: Allocate enough entries to accommodate all keys from both checkpoint pages.

infos_used: The size of infos[].

get_crypt_info(): Merge to the only caller, log_crypt_101_read_block().

log_crypt_101_read_block(): Do not validate the log block checksum, because it will not be valid when upgrading from MariaDB 10.1.10. Instead, check that the encryption key exists.

log_crypt_101_read_checkpoint(): Append to infos[] instead of overwriting.
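
A simplified sketch of the append fix; the array bound and field names here are assumptions, not the real log0crypt.cc declarations:

    #include <cstddef>
    #include <cstdint>

    struct crypt_info_t { uint32_t checkpoint_no; uint32_t key_version; };

    constexpr size_t MAX_INFOS = 10;      // assumed bound: keys from both pages
    crypt_info_t infos[MAX_INFOS];        // all keys from both checkpoint pages
    size_t infos_used = 0;                // number of filled entries

    // Append a key; the old code effectively restarted from index 0 for the
    // second checkpoint page, overwriting the first page's keys.
    void infos_append(const crypt_info_t& info) {
      if (infos_used < MAX_INFOS)
        infos[infos_used++] = info;
    }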
-
Sergei Petrunia authored
Use compatible xargs command-line arguments.
-
Otto Kekäläinen authored
All packages in the 'essential' group are installed by default on all Debian and Ubuntu systems, so there is no need to mark them as dependencies of any binary package.
-
Otto Kekäläinen authored
-
Otto Kekäläinen authored
(This change matches how debian/control is in downstream Debian.org)
-
Otto Kekäläinen authored
(This change matches how debian/rules is in downstream Debian.org)
-
Otto Kekäläinen authored
-
Kristian Nielsen authored
We have carried this patch along inside our sources since 2012 (commit cfd4fcb0). Its validity has thus been vetted in production for years, and the review done now found nothing to the contrary.

A race in dash causes mysqld_safe to occasionally loop infinitely. Fix by using bash instead. https://bugs.launchpad.net/ubuntu/+source/mysql-dfsg-5.0/+bug/675185

As this is the last patch, we can also remove the use of dpatch.
-
Christian Hammers authored
We have carried this patch along inside our sources since 2012 (commit cfd4fcb0). The same patch has also been used in MySQL packaging at Oracle and in downstream Debian.org packages for both MySQL and MariaDB. Its validity has thus been vetted in production for years, and the review done now found nothing to the contrary.

Code contributed to Oracle with http://forge.mysql.com/wiki/Sun_Contributor_Agreement

Reported as http://bugs.mysql.com/bug.php?id=31361
-
- 27 May, 2018 4 commits
-
-
Monty authored
Fixed by deleting the sequence if we were not able to initialize it.

I also noticed that we didn't always set the error message when check_killed() was triggered, which could lead to aborted queries without the error being properly set. Fixed by setting the error message by default if check_error() noticed that the query had been killed. This allowed me to remove a lot of calls to thd->send_kill_message().
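
A hedged sketch of the default-error-message idea, with a hypothetical THD stand-in rather than the server's actual class:

    #include <string>

    struct thd_stub {
      bool        killed = false;
      std::string error_msg;                 // empty means no error set yet
    };

    // If the session was killed and no message is present, install a default
    // one, so an aborted query never returns without a proper error.
    bool check_killed(thd_stub* thd) {
      if (!thd->killed)
        return false;
      if (thd->error_msg.empty())
        thd->error_msg = "Query execution was interrupted";
      return true;
    }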
-
Monty authored
-
Monty authored
-
Monty authored
This only affected the printing of errors.
-
- 26 May, 2018 11 commits
-
-
Monty authored
Problem was that max_row_length() used a different bitmap than pack_row().
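
A minimal sketch of why the two routines must share one bitmap; the names below are illustrative stand-ins, not the actual server functions:

    #include <bitset>
    #include <cstddef>

    constexpr size_t MAX_COLS = 64;
    using col_bitmap = std::bitset<MAX_COLS>;

    // Stand-in: fixed per-column length, just for the sketch.
    size_t column_length(size_t col) { (void)col; return 8; }

    // The size estimate must iterate exactly the same column bitmap that the
    // packing routine will use, or the buffer it sizes can be too small.
    size_t max_row_length(const col_bitmap& cols) {
      size_t len = 0;
      for (size_t i = 0; i < MAX_COLS; i++)
        if (cols[i])
          len += column_length(i);
      return len;
    }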
-
Monty authored
-
Monty authored
The crash happened because in discovery, table->work_part_info was not properly reset before execution. Fixed by resetting it before executing ALTER TABLE or CREATE TABLE, and before calling mysql_create_frm_image().
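
A tiny sketch of the reset, with hypothetical stand-in types:

    struct partition_info;                   // opaque stand-in

    struct table_stub {
      partition_info* work_part_info = nullptr;
    };

    // Clear the stale pointer left over from discovery before executing
    // ALTER TABLE / CREATE TABLE / frm-image creation again.
    void reset_before_ddl(table_stub* table) {
      table->work_part_info = nullptr;
    }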
-
Monty authored
-
Monty authored
When merging this with 10.2 and later, one can just use the 10.2 or later code.
-
Monty authored
The problem was that blob memory allocated in Table_trigger_list was not properly freed.
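
A hedged sketch of the ownership idea, with a hypothetical stand-in class rather than the actual trigger code:

    #include <cstdlib>

    struct trigger_list_stub {
      void* blob_value_buffer = nullptr;     // memory captured for BLOB columns

      ~trigger_list_stub() {
        free(blob_value_buffer);             // the missing cleanup; was leaked
        blob_value_buffer = nullptr;
      }
    };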
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
-
Sergey Vojtovich authored
Original problem reported by Wlad: re-compilation of 10.3 on top of a 10.2 build would cache an undefined HAVE_ISINF from 10.2, whereas it is expected to be 1 in 10.3. std::isinf() seems to be available on all supported platforms.
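
A small sketch of the portable replacement (standard C++ only, nothing MariaDB-specific):

    #include <cmath>

    // std::isinf() is available on all supported platforms, so no
    // configure-time HAVE_ISINF probe (and no stale cached value) is needed.
    bool is_infinite(double v) {
      return std::isinf(v);
    }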
-
- 25 May, 2018 4 commits
-
-
Sergei Golubchik authored
Close MYSQL (and destroy THD) in the same thread where it was used, because THD embeds MDL_context, which owns some LF_PINS, which remember a pointer to my_thread_var->stack_ends_here.
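
A hedged sketch of the thread-affinity rule, modeling the per-thread state with thread_local; the types are illustrative stand-ins, not the client library's:

    #include <thread>

    thread_local int per_thread_state = 0;

    struct connection_stub {
      int* owner_state = nullptr;
      void open()  { owner_state = &per_thread_state; }  // remembers this thread's state
      void close() { *owner_state = 0; }                 // must run on the same thread
    };

    int main() {
      std::thread worker([] {
        connection_stub c;
        c.open();
        c.close();   // correct: closed where it was used; closing from another
      });            // thread would dereference a dead thread's storage
      worker.join();
    }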
-
Andrei Elkin authored
When executed with slow_log ON, the test revealed a side effect of the MDEV-8305 implementation which inadvertently made trigger and stored function statements reset the top-level query's THD::start_time et al. (Details of the test failure analysis are footnoted.) Unlike the stored procedure case, the internal statements of stored functions and triggers should not do that.

Fixed by revising the MDEV-8305 decision to backup/reset/restore the session timestamp inside sp_instr_stmt::execute(). In the stored procedure case, the timestamp still gets reset per statement by the caller, via pre-existing logic.

Timestamp-related tests are extended to cover the trigger and stored function cases.

Note: commit 3395ab73 is reverted, as its struct QUERY_START_TIME_INFO declaration is no longer in use after this patch.

Footnote:
--------
In the failing test, a query on the master was logged with the timestamp of its top-level statement, but its post-update trigger computed one more (later) timestamp, which got inserted into another table. The master-vs-slave timestamp discrepancy in that table became evident because of the different execution time of the trigger, combined with the fact that the master timestamp, logged with a micro-second fractional part, was truncated on the slave. On the master, when the fractional part was close to 1, the trigger's own latency made it overflow to the next second value. That is how the master timestamp surprisingly turned out to be bigger than the slave's.
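
A hypothetical sketch of the revised rule; the names are illustrative stand-ins for the sp_instr_stmt machinery:

    #include <ctime>

    struct session_stub { time_t start_time; };

    // Stored procedure: the caller resets the timestamp per statement,
    // exactly as pre-existing logic already did.
    void run_sp_statement(session_stub* s, void (*body)(session_stub*)) {
      s->start_time = time(nullptr);
      body(s);
    }

    // Stored function / trigger: no reset, so the body sees the start time
    // of the top-level statement that invoked it.
    void run_sf_or_trigger(session_stub* s, void (*body)(session_stub*)) {
      body(s);
    }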
-
Marko Mäkelä authored
-
Daniel Bartholomew authored
-