- 17 Feb, 2018 1 commit
-
-
Daniel Black authored
Note: Linux only.

Core dumps of large buffer pool pages take time and space, and pose a potential data exposure in scenarios where data-at-rest encryption is deployed. Here we use madvise(MADV_DONTDUMP) on the large memory allocations used by the InnoDB buffer pool, log_sys and recv_sys. The effect of this system call is that these memory areas will not appear in a core dump. Data from these buffers is rarely useful in fault diagnosis.

The log_sys and recv_sys structures now use large memory allocations for their large buffers.

Debug builds don't include the madvise syscall and as such will include full core dumps.

A function, buf_madvise_do_dump, is added but never called. It is there to be called from a debugger to re-enable core dumping of all of these pages if for some reason the entire contents of these buffers are needed.

Idea thanks to Hartmut Holzgraefe
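As a rough illustration of the technique, here is a minimal standalone C++ sketch (not the actual InnoDB code; the allocation size and alignment are made up for the example):

    #include <sys/mman.h>   // madvise, MADV_DONTDUMP, MADV_DODUMP (Linux)
    #include <cstdlib>
    #include <cstdio>

    int main()
    {
        const std::size_t len = 64UL << 20;            // pretend this is a large buffer pool chunk
        void *buf = std::aligned_alloc(4096, len);     // madvise() wants a page-aligned range
        if (!buf)
            return 1;

    #ifdef MADV_DONTDUMP
        // Exclude the range from core dumps; this is the effect described above.
        if (madvise(buf, len, MADV_DONTDUMP) != 0)
            std::perror("madvise(MADV_DONTDUMP)");
        // A debugger could undo this with madvise(buf, len, MADV_DODUMP),
        // which is what buf_madvise_do_dump() is meant to be used for.
    #endif

        std::free(buf);
        return 0;
    }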
-
- 15 Feb, 2018 13 commits
-
-
Vladislav Vaintroub authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
fts_cmp_set_sync_doc_id(), fts_load_stopword(): Start the transaction in read-only mode if innodb_read_only is set.

fts_update_sync_doc_id(), fts_commit_table(), fts_sync(), fts_optimize_table(): Return DB_READ_ONLY if innodb_read_only is set.

fts_doc_fetch_by_doc_id(), fts_table_fetch_doc_ids(): Remove the code to start an internal transaction or to roll back, because this is a read-only operation.
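A minimal sketch of the guard pattern being described, with stand-in types so it compiles on its own (the real dberr_t and srv_read_only_mode live in InnoDB; the function name here is illustrative):

    #include <iostream>

    enum dberr_t { DB_SUCCESS, DB_READ_ONLY };   // stand-ins for the InnoDB types
    static bool srv_read_only_mode = true;       // mirrors --innodb-read-only

    // Shape of the checks added to fts_update_sync_doc_id(), fts_commit_table(),
    // fts_sync() and fts_optimize_table(): refuse to write when read-only.
    dberr_t fts_write_operation()
    {
        if (srv_read_only_mode)
            return DB_READ_ONLY;
        // ... otherwise modify the FTS auxiliary tables ...
        return DB_SUCCESS;
    }

    int main()
    {
        std::cout << (fts_write_operation() == DB_READ_ONLY ? "DB_READ_ONLY" : "DB_SUCCESS")
                  << '\n';
        return 0;
    }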
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Monty authored
Fixes MDEV-14761 "Assertion `!mysql_parse_status || thd->is_error() || thd->get_internal_handler()' failed in parse_sql"
-
Marko Mäkelä authored
The session object is not really needed for anything. We can directly create and free the dummy purge_sys->query->trx.
-
Marko Mäkelä authored
If CREATE TABLE...FULLTEXT INDEX was initiated right before shutdown, then the function fts_load_stopword() could commit modifications after shutdown was initiated, causing an assertion failure in the function trx_purge_add_update_undo_to_history().

Mark as internal all the read/write transactions that modify fulltext indexes, so that they will be ignored by the assertion that guards against transaction commits after shutdown has been initiated.

fts_optimize_free(): Invoke trx_commit_for_mysql() just in case, because in fts_optimize_create() we started the transaction as internal, and fts_free_for_backgruond() would assert that the flag is clear. Transaction commit would clear the flag.
-
Marko Mäkelä authored
MDEV-14648 Restore fix for MySQL BUG#39053 - UNINSTALL PLUGIN does not allow the storage engine to cleanup open connections

Also, allow the MariaDB 10.2 server to link InnoDB dynamically against ha_innodb.so (which is what mysql-test-run.pl expects to exist, instead of the default name ha_innobase.so).

wsrep_load_data_split(): Instead of referring to innodb_hton_ptr, check the handlerton::db_type. This was recently broken by me in MDEV-11415.

innodb_lock_schedule_algorithm: Define as a weak global symbol, so that WITH_WSREP will not depend on InnoDB being linked statically (a weak-symbol sketch follows below). I tested this manually. Notably, running a test that only does SET GLOBAL wsrep_on=1; with a static or dynamic InnoDB and ./mtr --mysqld=--loose-innodb-lock-schedule-algorithm=fcfs will crash with SIGSEGV at shutdown. With the default VATS combination, wsrep_on is properly refused for both the static and dynamic InnoDB.

ha_close_connection(): Do invoke the method also for plugins for which UNINSTALL PLUGIN was deferred due to open connections. Thanks to @svoj for pointing this out.

thd_to_trx(): Return a pointer, not a reference to a pointer.

check_trx_exists(): Invoke thd_set_ha_data() for assigning a transaction.

log_write_checkpoint_info(): Remove an unused DEBUG_SYNC point that would cause an assertion failure on shutdown after deferred UNINSTALL PLUGIN.

This was tested as follows:

    cmake -DWITH_WSREP=1 -DPLUGIN_INNOBASE:STRING=DYNAMIC \
        -DWITH_MARIABACKUP:BOOL=OFF ...
    make
    cd mysql-test
    ./mtr innodb.innodb_uninstall
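The weak-symbol idea can be sketched in isolation as follows (illustrative only; the actual declaration in the MariaDB sources may look different). A weak definition acts as a fallback: if a strong definition with the same name is linked in, for example from a statically linked InnoDB, it overrides the weak one at link time; if not, the weak one is used and the program still links:

    #include <iostream>

    // Weak fallback definition; a statically linked plugin providing a strong
    // definition of the same symbol would override it at link time.
    extern "C" long innodb_lock_schedule_algorithm __attribute__((weak)) = 0;

    int main()
    {
        std::cout << "innodb_lock_schedule_algorithm = "
                  << innodb_lock_schedule_algorithm << '\n';
        return 0;
    }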
-
Sauron authored
-
- 14 Feb, 2018 6 commits
-
-
Alexander Barkov authored
There were two problems related to the bug report:

1. Item_datetime::get_date() was not implemented, so execution went through val_int() followed by int-to-datetime or int-to-time conversion. This was the reason why the optimizer did not work well on data with fractional seconds.

2. Item_datetime::set() did not have TIME specific code to mix months and days into hours after unpack_time(). This is why the optimizer did not work well with negative TIME values, as well as huge time values.

Changes:

1. Overriding Item_datetime::get_date(), to return ltime. This fixes problem N1.

2. Cleanup: Moving pack_time() and unpack_time() from sql-common/my_time.c and include/my_time.h to sql/sql_time.cc and sql/sql_time.h, as they are not needed on the client side.

3. Adding a new "enum_mysql_timestamp_type ts_type" parameter to unpack_time() and moving the TIME specific code to mix months and days with hours inside unpack_time() (see the sketch after this list). Adding a new "ts_type" parameter to Item_datetime::set(), to pass it from the caller down to unpack_time(). So now the TIME specific code is automatically called from Item_datetime::set(). This fixes problem N2. This change also helped to get rid of duplicate TIME specific code in three other places, where mixing months/days into hours was done immediately after unpack_time(). Moving the DATE specific code to zero hhmmssff from Item_func_min_max::get_date_native() to inside unpack_time(), for symmetry.

4. Removing the virtual method in_vector::result_type(), adding in_vector::type_handler() instead. This makes it easier to get result_type(), field_type(), mysql_timestamp_type() of an in_vector. Passing type_handler()->mysql_timestamp_type() as a new parameter to Item_datetime::set() inside in_temporal::value_to_item().

5. Cleanup: Removing separate implementations of in_datetime::get_value() and in_time::get_value(). Adding a single implementation in_temporal::get_value() instead. Passing type_handler()->field_type() to get_value_internal().
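The "mix months and days with hours" step for TIME values can be illustrated with a standalone sketch (the struct and the 32-days-per-month folding rule here are simplified assumptions; consult the real unpack_time() in sql/sql_time.cc):

    #include <cstdio>

    struct time_fields                 // stripped-down stand-in for MYSQL_TIME
    {
        unsigned month, day, hour, minute, second;
    };

    // For MYSQL_TIMESTAMP_TIME, hours can exceed 23, so whatever unpack_time()
    // left in the month/day fields is folded back into hours (a packed "month"
    // notionally holds 32 days in this scheme) and the fields are zeroed.
    void mix_months_days_to_hours(time_fields *lt)
    {
        lt->hour += (lt->month * 32 + lt->day) * 24;
        lt->month = 0;
        lt->day = 0;
    }

    int main()
    {
        time_fields t = {1, 2, 22, 59, 59};   // unpacked form of the TIME value '838:59:59'
        mix_months_days_to_hours(&t);
        std::printf("%u:%02u:%02u\n", t.hour, t.minute, t.second);   // prints 838:59:59
        return 0;
    }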
-
Sergey Vojtovich authored
This is a regression after bc7a1dc1 of MDEV-15104 - Optimise MVCC snapshot. The aforementioned revision removes the mutex lock around ReadView creation, which allows views to be created concurrently. Effectively it invalidates the "oldest view" approach: no single view can be considered oldest anymore. Instead we have to iterate trx_sys.m_views to find min(m_low_limit_no), min(m_low_limit_id) and all transaction ids below min(m_low_limit_id), which would form the oldest view.

A second regression comes from c0d5d7c0 of MDEV-15104 - Optimise MVCC snapshot. It removes the mutex protection around the trx->no assignment, which opens up a gap between the m_max_trx_id increment and the transaction serialisation number becoming visible through rw_trx_hash. While we're in this gap, a concurrent thread may come and take an MVCC snapshot without seeing the allocated but not yet assigned serialisation number. Then at some point the purge thread may clone this view. As a result it won't see the newly allocated serialisation number and may remove "unnecessary" history data of this transaction from the rollback segments.
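A rough sketch of the "iterate all views and take the minima" idea (stand-in types only; the real code works on trx_sys.m_views and also has to collect the transaction ids below min(m_low_limit_id)):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct view_sketch                           // stand-in for the relevant ReadView fields
    {
        std::uint64_t m_low_limit_no;            // trx number below which history may be purged
        std::uint64_t m_low_limit_id;            // trx ids at or above this are not visible
    };

    // No single open view is the oldest anymore, so the purge view is derived
    // by scanning every open view and taking the minima, as described above.
    view_sketch oldest_view(const std::vector<view_sketch> &views)
    {
        view_sketch oldest{UINT64_MAX, UINT64_MAX};
        for (const view_sketch &v : views)
        {
            oldest.m_low_limit_no = std::min(oldest.m_low_limit_no, v.m_low_limit_no);
            oldest.m_low_limit_id = std::min(oldest.m_low_limit_id, v.m_low_limit_id);
        }
        return oldest;
    }

    int main()
    {
        std::vector<view_sketch> views{{105, 110}, {98, 101}, {120, 125}};
        view_sketch o = oldest_view(views);
        return (o.m_low_limit_no == 98 && o.m_low_limit_id == 101) ? 0 : 1;
    }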
-
Monty authored
MDEV-13732 User with SELECT privilege can ALTER sequence
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexey Yurchenko authored
GAL-506 breaks the galera_defaults MTR test by upping repl.proto_max again. Fix this once and for all by overwriting it with a constant string, since it makes little sense to check for it in this test.
-
- 13 Feb, 2018 9 commits
-
-
Alexander Barkov authored
-
Daniel Bartholomew authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Alexey Botchkov authored
MDEV-15215 main.partition_explicit_prune fails in buildbot with assertion failures and server crashes.

ha_partition::get_open_file_sample() implemented, to be used when we need a file sample that is guaranteed to be open.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
lock_trx_release_locks(): Relax a debug assertion to allow recovered TRX_STATE_COMMITTED_IN_MEMORY transactions.

trx_commit_in_memory(): Add DEBUG_SYNC instrumentation.

trx_undo_insert_cleanup(): Skip persistent changes if innodb_read_only is set. This should only happen when a recovered committed transaction would be cleaned up at shutdown.
-
Alexander Barkov authored
The patch for MDEV-15287 made the code in Item_func_min_max::get_date_native() dead, as the TIME data type is now handled in get_time_native(). Removing the TIME related code from get_date_native().
-
- 12 Feb, 2018 11 commits
-
-
Sergei Golubchik authored
MDEV-14990 mysql_upgrade fails with ERROR 1408 (HY000) at line 566: Event Scheduler: An error occurred when initializing system tables

Don't check mysql.db and mysql.user from the event scheduler on startup. The event scheduler should only check its own mysql.event table; it has no business checking other system tables. In particular, it's ridiculous for the event scheduler to fail when privilege tables are not the newest, because sql_acl.cc supports old privilege tables just fine.
-
Sergei Golubchik authored
Truncate_versioning_priv->Delete_history_priv because the command and the privilege were renamed
-
Sergei Golubchik authored
don't expand AS OF in views, and, in particular, don't auto-add AS OF NOW().
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Hide INVISIBLE_SYSTEM columns from writes and from fix_vcol_expr().
-
Sergei Golubchik authored
-
Sergei Golubchik authored
enum_mark_columns -> enum_column_usage
mark_used_columns -> column_usage

Further commits will replace MARK_COLUMN_NONE with COLUMN_READ and COLUMN_WRITE that convey the intention without causing columns to be marked.
-
Monty authored
- Max_index_length is supported by MyISAM and Aria tables.
- Temporary is a placeholder to signal that a table is a temporary table. For the moment this is always "N", except "Y" for generated information_schema tables and NULL for views. Full temporary table support will be done in another task. (No reason to have to update a lot of result files twice in a row.)
-
Monty authored
-
Marko Mäkelä authored
When Mariabackup gets a bad read of the first page of the system tablespace file, it would inappropriately try to apply the doublewrite buffer and write changes back to the data file (to the source file)! This is very wrong and must be prevented. The correct action would be to retry reading the system tablespace as well as any other files whose first page was read incorrectly. Fixing this was not attempted.

xb_load_tablespaces(): Shorten a bogus message to be more relevant. The message can be displayed by --backup or --prepare.

xtrabackup_backup_func(), os_file_write_func(): Add a missing space to a message.

Datafile::restore_from_doublewrite(): Do not even attempt the operation in Mariabackup (see the sketch below).

recv_init_crash_recovery_spaces(): Do not attempt to restore the doublewrite buffer in Mariabackup (--prepare or --export), because all pages should have been copied correctly in --backup already, and because --backup should ignore the doublewrite buffer.

SysTablespace::read_lsn_and_check_flags(): Do not attempt to initialize the doublewrite buffer in Mariabackup.

innodb_make_page_dirty(): Correct the bounds check.

Datafile::read_first_page(): Correct the name of the parameter.
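A minimal sketch of the "do not even attempt it in Mariabackup" guard (stand-in enum and names; the real code bases the decision on InnoDB's operation mode, and the exact condition may differ):

    #include <iostream>

    // Stand-in for the server/backup operation mode; the real enum in InnoDB
    // has more values and different spellings.
    enum operation_mode { OPERATION_NORMAL, OPERATION_BACKUP, OPERATION_RESTORE };
    static operation_mode current_operation = OPERATION_BACKUP;

    // Shape of the guard described for Datafile::restore_from_doublewrite()
    // and the doublewrite initialization paths: skip the work unless we are
    // running as a normal server.
    bool restore_from_doublewrite_sketch()
    {
        if (current_operation != OPERATION_NORMAL)
        {
            std::cout << "running as Mariabackup: not touching the data file\n";
            return false;
        }
        // ... normal-server crash recovery would restore the page here ...
        return true;
    }

    int main()
    {
        restore_from_doublewrite_sketch();
        return 0;
    }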
-
Alexander Barkov authored
-