- 16 Jun, 2020 1 commit
-
-
Otto Kekäläinen authored
Replace all references to /usr/sbin/mysqld (and bin and libexec) with mariadbd, so that the server binary will always be 'mariadbd'. Also update all places that reference the server binary in other ways, such as AppArmor profiles and scripts that previously expected to find a 'mysqld' in process lists.
-
- 15 Jun, 2020 7 commits
-
-
Alexey Botchkov authored
Item_field::is_json_value() implemented.
-
Alexey Botchkov authored
Warning message and function result fixed
-
Aleksey Midenkov authored
MDEV-22881 Unexpected errors, corrupt output, Valgrind / ASAN errors in Item_ident::print or append_identifier

After this code:

    end_inplace:
      if (thd->locked_tables_list.reopen_tables(thd, false))
        goto err_with_mdl_after_alter;

the table is not reopened (need_reopen is false), but some_table_marked_for_reopen is reset to false. The Item_field is allocated on table lock and assigned a new name on the first ALTER, which is then freed at the end of the command. The second ALTER accesses this Item_field and gets a garbage value.
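A hypothetical repro sketch of the pattern described above; the table, the column names and the exact ALTER statements are assumptions, not the original test case:

    CREATE TABLE t1 (a INT, KEY (a));
    LOCK TABLES t1 WRITE;
    ALTER TABLE t1 CHANGE a b INT;  -- first ALTER assigns the Item_field a new name,
                                    -- which is freed at the end of the command
    ALTER TABLE t1 CHANGE b c INT;  -- second ALTER reads the freed Item_field and,
                                    -- under Valgrind/ASAN, reports the error
    UNLOCK TABLES;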
-
Vicențiu Ciorbaru authored
Make sure to replace the datadir absolute path.
-
Marko Mäkelä authored
This is one more follow-up fix to MDEV-22641. Explicitly specify the dependency of the innobase library on mysys. Also, remove stale references to CRC32_LIBRARY, which should have been removed in commit dec3f8ca.
-
Oleksandr Byelkin authored
-
Sergei Petrunia authored
Make mark_join_nest_as_const() print its action into the trace.
-
- 14 Jun, 2020 21 commits
-
-
Monty authored
This was done to be able to track some cases of unallocated memory in replication tests reported by MSAN.
-
Monty authored
MDEV-22691 MSAN use-of-uninitialized-value in test maria.maria-recovery2

This caused all my_vsnprintf() calls that use doubles to fail. Thanks to the workaround, I was able to remove the disabling of MSAN in dtoa().
-
Oleksandr Byelkin authored
MDEV-20302 Server hangs upon concurrent SELECT from partitioned S3
-
Monty authored
-
Monty authored
-
Monty authored
MDEV-22829 SIGSEGV in _ma_reset_history on LOCK
-
Monty authored
-
Monty authored
MDEV-22048 Assertion `binlog_table_maps == 0 || locked_tables_mode == LTM_LOCK_TABLES' failed in THD::reset_for_next_command
-
Monty authored
The problem was that FLUSH TABLES was trying to read the latest sequence state, which conflicted with a running ALTER SEQUENCE. Removed the reading of the state when opening a table for FLUSH, as it is not needed in this case.

Other things:
- Fixed a potential issue with concurrently running ALTER SEQUENCE where the later ALTER could potentially read old data
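A two-session sketch of the conflict; the sequence name and the exact statements are assumptions for illustration:

    CREATE SEQUENCE s1;
    -- session 1:
    ALTER SEQUENCE s1 MAXVALUE 100000;  -- rewrites the sequence state
    -- session 2, concurrently:
    FLUSH TABLES;  -- previously also tried to read the latest sequence state
                   -- and could conflict with the running ALTER SEQUENCE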
-
Monty authored
-
Monty authored
-
Monty authored
MDEV-22649 SIGSEGV in ha_partition::create_partitioning_metadata on ALTER
MDEV-22804 SIGSEGV in ha_partition::create_partitioning_metadata
-
Monty authored
MCOL-3875 Columnstore write cache

The main change is to make the thr_lock function get_status return a value that indicates we have to abort the lock.

Other things:
- Made start_bulk_insert() and end_bulk_insert() protected so that the insert cache can use them
-
Monty authored
The reason for the change was to make it easier to find true errors when searching in trace logs. "error:" should mainly be used when we have a real error.
-
Monty authored
MDEV-22689 MSAN use-of-uninitialized-value in decode_bytes()

This was not a user-visible issue, as the Huffman code lookup tables would automatically ignore any of the uninitialized bits. Fixed by adding an end-zero byte to the bit-stream buffer.

Other things:
- Fixed a (for this case) wrong assert in strmov() for myisamchk and aria_chk by removing the strmov()
-
Monty authored
- IF EXISTS ends with a list of all non-existing objects, instead of a separate note for every non-existing object
- Produce a "Note" for all wrongly dropped objects (like trying to do DROP SEQUENCE on a normal table)
- Do not write existing tables that could not be dropped to the binlog

Other things:
MDEV-22820 Bogus "Unknown table" warnings produced upon attempt to drop parent table referenced by FK
This was caused by an older version of this patch and was later fixed.
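A hedged illustration of the new note behaviour; the table names and the exact note text are assumptions:

    CREATE TABLE t1 (a INT);
    DROP TABLE IF EXISTS t1, t2, t3;
    -- one note listing all non-existing objects, e.g. Unknown table 'test.t2,test.t3',
    -- instead of a separate note per object
    CREATE TABLE t4 (a INT);
    DROP SEQUENCE IF EXISTS t4;  -- produces a Note that t4 is not a SEQUENCE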
-
Monty authored
- Produce a "Note" for all wrongly dropped objects (Like doing DROP VIEW on a table). - IF EXISTS ends with a list of all not existing objects, instead of a separate note for every not existing object. Other things: - Fixed bug where one could do CREATE TEMPORARY SEQUENCE multiple times and create multiple temporary sequences with the same name.
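A minimal sketch of the temporary-sequence fix; the sequence name and the exact error are assumptions:

    CREATE TEMPORARY SEQUENCE s1;
    CREATE TEMPORARY SEQUENCE s1;
    -- previously created a second temporary sequence with the same name;
    -- now fails with a "table already exists" error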
-
Monty authored
The used code is largely based on code from Tencent.

The problem is that in some rare cases there may be a conflict between .frm files and the files in the storage engine. In this case the DROP TABLE was not able to properly drop the table.

Some MariaDB/MySQL forks have solved this by adding a FORCE option to DROP TABLE. After some discussion among MariaDB developers, we concluded that users expect that DROP TABLE should always work, even if the table would not be consistent. There should not be a need to use a separate keyword to ensure that the table is really deleted.

The used solution is:
- If a table's .frm file doesn't exist, try dropping the table from all storage engines.
- If the .frm file exists but the table does not exist in the engine, try dropping the table from all storage engines.
- Update storage engines that use multiple table files (CSV, MyISAM, Aria) to succeed with the drop even if some of the files are missing.
- Add HTON_AUTOMATIC_DELETE_TABLE to handlertons where delete_table() is not needed and always succeeds. This is used by ha_delete_table_force() to know which handlers to ignore when trying to drop a table without a .frm file.

The disadvantage of this solution is that a DROP TABLE on a non-existing table will be a bit slower, as we have to ask all active storage engines if they know anything about the table.

Other things:
- Added a new flag MY_IGNORE_ENOENT to my_delete() to not give an error if the file doesn't exist. This simplifies some of the code.
- Don't clear thd->error in ha_delete_table() if there was an active error. This is a bug fix.
- handler::delete_table() will not abort if the first file doesn't exist. This is a bug fix to handle the case when a drop table was aborted in the middle.
- Cleaned up mysql_rm_table_no_locks() to ensure that if_exists uses the same code path as when it's not used.
- Use non_existing_table_error() to detect if the table didn't exist. The old code used different error tests in different places.
- Table_triggers_list::drop_all_triggers() now drops the trigger file if it can't be parsed, instead of leaving it hanging around (bug fix).
- InnoDB no longer prints an error about the .frm file being out of sync with the InnoDB data dictionary if the .frm file does not exist. This change was required to be able to try to drop an InnoDB table when the .frm file doesn't exist.
- Fixed a bug in mi_delete_table() where the .MYD file would not be dropped if the .MYI file didn't exist.
- Fixed a memory leak in Mroonga when deleting a non-existing table.
- Fixed a memory leak in Connect when deleting a non-existing table.

Bugs introduced by the original version of this commit, fixed here:
MDEV-22826 Presence of Spider prevents tables from being force-deleted from other engines
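A sketch of the resulting behaviour; the file-level corruption can only happen outside SQL, so it is shown as a comment:

    CREATE TABLE t1 (a INT) ENGINE=InnoDB;
    -- assume t1's .frm file is lost from the datadir
    -- (e.g. a DROP or ALTER that was aborted in the middle)
    DROP TABLE t1;            -- now asks all active storage engines to drop t1 and succeeds
    DROP TABLE IF EXISTS t1;  -- a repeated drop only produces an "Unknown table" note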
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The test case that was added for MDEV-21217 (commit b68f1d84) should have only two possible outcomes for the locking SELECT statement:

(1) The statement is blocked, and the test will eventually fail with a lock wait timeout. This is what I observed when the code fix for MDEV-21217 was missing.
(2) The lock conflict will ensure that the statement will execute after the rollback has completed, and an empty table will be observed. This is the expected outcome with the recovery fix.

What occasionally happens (in some of our CI environments only, so far) is that the locking SELECT will return all 1,000 rows of the table that had been inserted by the transaction that was never supposed to be committed. One possibility is that the transaction was unexpectedly committed when the server was killed.

Let us disable the test until the reason of the failure has been determined and addressed.
-
Marko Mäkelä authored
-
- 13 Jun, 2020 7 commits
-
-
Sergei Golubchik authored
when allowing access via perfschema callbacks, update the cached GRANT_INFO to match
-
Sergei Golubchik authored
With RETURNING it can happen that the user has some privileges on the table (namely, DELETE), but later needs different privileges on individual columns (namely, SELECT). Do the same as in check_grant_column() - ER_COLUMNACCESS_DENIED_ERROR, not an assert.
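A sketch of the check; the user, database and column names are hypothetical:

    CREATE USER u1@localhost;
    GRANT DELETE ON db1.t1 TO u1@localhost;  -- DELETE on the table, but no SELECT on its columns
    -- connected as u1:
    DELETE FROM db1.t1 RETURNING secret_col;
    -- now fails with ER_COLUMNACCESS_DENIED_ERROR instead of tripping a debug assert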
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_roll_must_shutdown(): Correct the condition that detects the start of shutdown.
-
Marko Mäkelä authored
An InnoDB check for the validity of index pages would occasionally fail in the test encryption.innodb_encryption_discard_import.

An analysis of an "rr replay" failure trace revealed that the problem is basically a combination of two old anomalies and a recently implemented optimization in MariaDB 10.5. MDEV-15528 allows InnoDB to discard buffer pool pages that were freed.

PageBulk::init() will disable the InnoDB validity check, because during native ALTER TABLE (rebuilding tables or creating indexes) we could write inconsistent index pages to data files. In the occasional test failure, page 8:6 would have been written from the buffer pool to the data file and subsequently freed. However, fil_crypt_thread may perform dummy writes to pages that have been freed. In case we are causing an inconsistent page to be re-encrypted on page flush, we should disable the check.

In the analyzed "rr replay" trace, a fil_crypt_thread attempted to access page 8:6 twice after it had been freed. On the first call, buf_page_get_gen(..., BUF_PEEK_IF_IN_POOL, ...) returned NULL. The second call succeeded, and shortly thereafter, the server intentionally crashed due to writing the corrupted page.
-
Alexander Barkov authored
Item_func_div::fix_length_and_dec_temporal() set the return data type to integer in case of @div_precision_increment==0 for temporal input with FSP=0. This caused Item_func_div to call int_op(), which is not implemented, so a crash on DBUG_ASSERT(0) happened. Fixing fix_length_and_dec_temporal() to set the result type to DECIMAL.
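A sketch of the crashing pattern, assuming a temporal value with FSP=0; the exact statement in the original bug report may differ:

    SET @@div_precision_increment = 0;
    SELECT TIME'10:20:30' / 3;
    -- previously hit DBUG_ASSERT(0) in the unimplemented int_op();
    -- with the fix the result type is DECIMAL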
-
- 12 Jun, 2020 4 commits
-
-
Sidney Cammeresi authored
When Item::print() is called with the QT_PARSABLE flag, WHERE i NOT IN (SELECT ...) gets printed as WHERE !i IN (SELECT ...) instead of WHERE !(i IN (SELECT ...)), because Item_in_optimizer returns DEFAULT_PRECEDENCE. It should return the precedence of the inner operation.
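One way to observe the printed form; whether EXPLAIN EXTENDED uses QT_PARSABLE for the rewritten statement is an assumption, but it exercises the same precedence logic:

    CREATE TABLE t1 (i INT);
    EXPLAIN EXTENDED SELECT * FROM t1 WHERE i NOT IN (SELECT i FROM t1);
    SHOW WARNINGS;
    -- wrong:   ... where !t1.i in (select ...)    (parses back as (!i) IN (...))
    -- correct: ... where !(t1.i in (select ...))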
-
Varun Gupta authored
The problem here is similar to the case with DISTINCT: the tree used for ORDER BY needs to also hold the null bytes of the record. This was not done for GROUP_CONCAT, as NULLs are rejected by GROUP_CONCAT. Also introduced a comparator function for the ORDER BY tree to handle null values with JSON_ARRAYAGG.
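A minimal sketch of the ORDER BY case; the exact output is an assumption:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (2),(NULL),(1),(NULL);
    SELECT JSON_ARRAYAGG(a ORDER BY a) FROM t1;
    -- NULLs must be kept and ordered, e.g. [null,null,1,2],
    -- whereas GROUP_CONCAT(a ORDER BY a) would simply skip them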
-
Varun Gupta authored
For DISTINCT to be handled with JSON_ARRAYAGG, we need to make sure that the Unique tree also holds the NULL bytes of a table record inside the nodes of the tree. This behaviour for JSON_ARRAYAGG is different from GROUP_CONCAT, because in GROUP_CONCAT we just reject NULL values for columns. Also introduced a comparator function for the Unique tree to handle null values for DISTINCT inside JSON_ARRAYAGG.
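The matching DISTINCT sketch; the exact output is an assumption:

    CREATE TABLE t1 (a INT);
    INSERT INTO t1 VALUES (1),(NULL),(1),(NULL);
    SELECT JSON_ARRAYAGG(DISTINCT a) FROM t1;
    -- expect one null and one 1, e.g. [null,1];
    -- GROUP_CONCAT(DISTINCT a) would reject the NULLs entirely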
-
Varun Gupta authored
Backported from MySQL: Bug #25331425: DISTINCT CLAUSE DOES NOT WORK IN GROUP_CONCAT

Issue:
------
The problem occurs when:
1) GROUP_CONCAT (DISTINCT ....) is used in the query.
2) The data size is greater than the value of the system variable tmp_table_size.
The result would contain values that are non-unique.

Root cause:
-----------
An in-memory structure is used to filter out non-unique values. When the data size exceeds tmp_table_size, the overflow is written to disk as a separate file. The expectation here is that when all such files are merged, the full set of unique values can be obtained.

But the Item_func_group_concat::add function is in a bit of a hurry. Even as it is adding values to the tree, it wants to decide if a value is unique and write it to the result buffer. This works fine if the configured maximum size is greater than the size of the data. But since tmp_table_size is set to a low value, the size of the tree is smaller and hence requires the creation of multiple copies on disk. Item_func_group_concat currently has no mechanism to merge all the copies on disk and then generate the result. This results in duplicate values.

Solution:
---------
In case of the DISTINCT clause, don't write to the result buffer immediately. Do the merge and only then put the unique values in the result buffer. This has to be done in Item_func_group_concat::val_str.

Note regarding result file changes:
-----------------------------------
Earlier, when a unique value was seen in Item_func_group_concat::add, it was dumped to the output, so the result was in the order stored in the SE. But with this fix, we wait until all the data is read and the final set of unique values is written to the output buffer, so the data appears in sorted order. This only fixes the cases where we have DISTINCT without an ORDER BY clause in GROUP_CONCAT.
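A repro sketch with a deliberately tiny tmp_table_size; it assumes the Sequence engine's seq_1_to_1000 table is available, and the row count is arbitrary:

    SET SESSION tmp_table_size = 1024;
    CREATE TABLE t1 (a VARCHAR(100));
    INSERT INTO t1 SELECT REPEAT(CHAR(65 + (seq MOD 26)), 50) FROM seq_1_to_1000;
    SELECT GROUP_CONCAT(DISTINCT a) FROM t1;
    -- before the fix: the on-disk overflow copies of the tree were never merged,
    --                 so the result could contain duplicate values
    -- after the fix:  26 unique values, now in sorted order (no ORDER BY given)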
-