- 04 Jan, 2018 2 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 03 Jan, 2018 3 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 02 Jan, 2018 4 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_undo_rec_get_partial_row(): When the PRIMARY KEY includes a column prefix of an externally stored column, the already parsed part of the undo log record may contain a reference to an off-page column. This is the case in the bug58912 test in innodb.innodb.
-
Marko Mäkelä authored
This is a regression caused by MDEV-14051 'Undo log record is too big.' Purge in the secondary index is wrongly skipped in row_purge_upd_exist_or_extern() because node->row does not contain all indexed columns. trx_undo_rec_get_partial_row(): Add a parameter for node->update so that the updated columns will be copied from the initial part of the undo log record.
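As a rough illustration of that idea (all types and names below are simplified stand-ins, not the actual InnoDB structures), the partial row can be completed by copying any columns present in the update vector before falling back to what the stored tail of the undo record contains:

```cpp
// Hypothetical sketch of the fix's shape: consult the update vector
// first, so indexed columns parsed from the head of the undo record
// are not missing from the partial row that purge inspects.
#include <cstdint>
#include <map>
#include <vector>

struct Column { uint32_t no; std::vector<uint8_t> value; };
struct UpdateVector { std::vector<Column> fields; };  // node->update analogue

// Parsed tail of the undo record: only some columns are stored here.
using PartialRow = std::map<uint32_t, std::vector<uint8_t>>;

PartialRow build_partial_row(const PartialRow& stored_tail,
                             const UpdateVector* update)  // the new parameter
{
    PartialRow row = stored_tail;
    if (update) {
        // Copy the updated columns from the initial part of the record,
        // so purge sees every indexed column, not just the stored tail.
        for (const Column& c : update->fields)
            row.emplace(c.no, c.value);
    }
    return row;
}
```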
-
- 28 Dec, 2017 2 commits
-
-
Ian Gilfillan authored
-
Sergei Golubchik authored
cherry-pick e6ce97a5
-
- 27 Dec, 2017 3 commits
-
-
Sergei Golubchik authored
* don't use Env module in tests, use $ENV{xxx} instead
* collateral changes:
** $file in the error message was unset
** $file in the other error message was unset too :)
** source file arguments are conventionally upper-cased
** abort the test (die) on error, don't just echo/exit
-
Vicențiu Ciorbaru authored
-
Oleksandr Byelkin authored
If a translation table is present when we materialize the derived table, change it to point to the materialized table. Added debug info to show what actually happens with which derived table.
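A hedged sketch of that repointing step; FieldTranslator and Item here are simplified stand-ins for the server's internal classes:

```cpp
// Simplified sketch: once a derived table is materialized, every entry
// in its translation table should resolve against the materialized
// table's fields rather than the original subquery items.
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct Item { std::string source; };             // stand-in for Item*
struct FieldTranslator { std::string name; Item* item; };

void repoint_translation(std::vector<FieldTranslator>& translation,
                         std::vector<Item>& materialized_fields)
{
    for (std::size_t i = 0; i < translation.size(); ++i) {
        translation[i].item = &materialized_fields[i];
        // Debug info in the spirit of the commit: trace what happens
        // with which derived-table column.
        std::fprintf(stderr, "derived col %zu (%s) -> materialized\n",
                     i, translation[i].name.c_str());
    }
}
```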
-
- 25 Dec, 2017 4 commits
-
-
Sergei Golubchik authored
MDEV-14026 ALTER TABLE ... DELAY_KEY_WRITE=1 creates table copy for partitioned MyISAM table with DATA DIRECTORY/INDEX DIRECTORY options

Set data_file_name and index_file_name in HA_CREATE_INFO before calling check_if_incompatible_data().
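The shape of the fix, sketched with a simplified stand-in for HA_CREATE_INFO (the real struct and engine hook differ):

```cpp
// Sketch: fill in the table's existing DATA/INDEX DIRECTORY paths
// before asking the engine whether the ALTER is compatible, so the
// check does not see a spurious difference and force a table copy.
#include <string>

struct HA_CREATE_INFO_sketch {                  // simplified stand-in
    std::string data_file_name;
    std::string index_file_name;
};

enum CompatResult { COMPATIBLE_DATA_YES, COMPATIBLE_DATA_NO };

static CompatResult check_if_incompatible_data(const HA_CREATE_INFO_sketch&)
{
    return COMPATIBLE_DATA_YES;                 // placeholder engine hook
}

CompatResult can_alter_in_place(HA_CREATE_INFO_sketch& info,
                                const std::string& old_data_dir,
                                const std::string& old_index_dir)
{
    // Without these assignments the check compared empty names against
    // the table's existing directories.
    if (info.data_file_name.empty())  info.data_file_name  = old_data_dir;
    if (info.index_file_name.empty()) info.index_file_name = old_index_dir;
    return check_if_incompatible_data(info);
}
```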
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call; do it once in open(), on TABLE::mem_root, because they don't change (so they stay valid until the table is closed).
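A minimal sketch of the lifetime change, with memory roots reduced to an owned member (the real code allocates on TABLE::mem_root rather than using unique_ptr):

```cpp
// Sketch: allocate constant per-table statistics once per open(), with
// table lifetime, instead of reallocating them on every stats call.
#include <cstddef>
#include <memory>
#include <vector>

struct TableStats { std::vector<double> rec_per_key; };

class Handler {
    std::unique_ptr<TableStats> stats_;  // table-lifetime storage
public:
    void open(std::size_t n_keys) {
        // Once per open(): the buffer never changes while the table is
        // open, so it stays valid until close.
        stats_ = std::make_unique<TableStats>();
        stats_->rec_per_key.assign(n_keys, 0.0);
    }
    void info_const() {
        // Previously a fresh copy was allocated per call on an arena
        // that only grows (THD::mem_root); now only the values are
        // refreshed in place.
        for (double& v : stats_->rec_per_key) v = 1.0;
    }
};
```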
-
Sachin Setiya authored
Problem: GTIDs are not transferred in Galera Cluster.

Solution: we need to transfer the GTID when the cluster is a slave/master in async replication. In normal GTID replication, a GTID is generated on the receiving node itself, and it is always in sync with the other nodes; because Galera keeps the nodes in sync, all nodes get the same number of event groups. So the issue arises when, say, Galera is a slave in async replication:

    A
    |   (async replication)
    D <-> E <-> F   {Galera replication}

All nodes should apply the master's GTID, but this does not happen, because nodes E and F do not receive the GTID from D in the write set. So what E (or F) does is apply wsrep_gtid_domain_id, D's server-id, and its own next GTID seq no. This generated GTID does not always work, e.g. when A has a different domain id. So in this commit, when a Galera node sees that an event was received from the master, it simply writes a Gtid_Log_Event into the write set and sends it to the other nodes.
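A loose sketch of the behavioural change, assuming simplified stand-in types for the GTID and write set (the real code works with Gtid_Log_Event and the wsrep internals):

```cpp
// Loose sketch: if the applied event came from an async replication
// master, ship the master's GTID in the write set instead of letting
// each Galera node compose its own.
#include <cstdint>
#include <vector>

struct Gtid { uint32_t domain_id; uint32_t server_id; uint64_t seq_no; };
struct WriteSet { std::vector<Gtid> events; };

void add_gtid_to_write_set(WriteSet& ws, const Gtid& master_gtid,
                           bool from_async_master,
                           uint32_t wsrep_gtid_domain_id,
                           uint32_t master_server_id,
                           uint64_t next_local_seq_no)
{
    if (from_async_master) {
        // New behaviour: E and F apply the same GTID that D received.
        ws.events.push_back(master_gtid);
    } else {
        // Old fallback: a locally composed GTID, which breaks when the
        // async master uses a different domain id.
        ws.events.push_back({wsrep_gtid_domain_id, master_server_id,
                             next_local_seq_no});
    }
}
```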
-
- 22 Dec, 2017 2 commits
-
-
Daniel Bartholomew authored
-
Sergey Vojtovich authored
Coverage for temporary table modifications in read-only transactions. Introduced in 5.7 by 325cdf426.
-
- 21 Dec, 2017 6 commits
-
-
Vicențiu Ciorbaru authored
A suggestion from serg@mariadb.org to make role propagation simpler. Instead of gathering the leaf roles in an array, which for very wide graphs could potentially mean a big part of the whole roles schema, keep the previous logic. When finally merging a role, set its counter to something positive. This effectively means that the role has been merged, so a random pass through the roles hash that touches a previously merged role won't cause the problem described in MDEV-12366 any more, as propagate_role_grants_action will stop attempting to merge from that role.
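A small sketch of the counter trick described above; Role and the merge step are simplified stand-ins:

```cpp
// Sketch: mark a role as merged by giving its counter a positive
// value, so a later pass over the roles hash skips it instead of
// re-merging a potentially huge subgraph.
#include <string>
#include <vector>

struct Role {
    std::string name;
    std::vector<Role*> granted_roles;  // edges in the role graph
    unsigned counter = 0;              // > 0 once merged
};

static void merge_role_privileges(Role&) { /* merging elided */ }

void propagate(Role& r)
{
    if (r.counter > 0)
        return;                        // already merged, stop here
    for (Role* sub : r.granted_roles)
        propagate(*sub);
    merge_role_privileges(r);
    r.counter = 1;                     // "something positive"
}
```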
-
Marko Mäkelä authored
Sometimes, the test would fail with a result difference for the READ UNCOMMITTED read, because the incremental backup would finish before redo log was written for all the rows that were inserted in the second batch. To fix that, cause a redo log write by creating another transaction. The transaction rollback (which internally does commit) will be flushed to the redo log, and before that, all the preceding changes will be flushed to the redo log as well.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
row_log_table_apply_insert_low(), row_log_table_apply_update(): When reporting the error_key_num, only count the clustered index if it corresponds to a key in the SQL layer. The assertion failure was probably introduced by the (incomplete) MySQL 5.6.28 bug fix Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL WITH INCORRECT KEY NAME which we are improving. Side note: the fix was incorrectly merged to MySQL 5.7.10; incorrect key names will continue to be reported in MySQL 5.7.
-
Marko Mäkelä authored
These assertions were disabled in MariaDB 10.1.1 in commit df4dd593 with a bogus comment referring to the function wsrep_fake_trx_id() that was introduced in the very same commit.
-
Elena Stepanova authored
-
- 20 Dec, 2017 9 commits
-
-
Sachin Setiya authored
Problem: the command was

    find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+

which was supposed to work as:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)

Now, -exec ... \+ works like this: every newly found path is appended to the end of the command line. When the accumulated command line length reaches `getconf ARG_MAX` (~2MB), it is executed, and find continues, appending to a new command line. What happens here: find appends some directory to the command line, then dives into it and starts appending files from that directory. At some point the command line overflows, rm -rf gets executed and removes the whole directory. find then tries to continue scanning a directory that was already removed.

Fix: don't dive into directories that will be recursively removed anyway; use -prune for them. Basically, we should be pruning both paths that have matched $cpat and paths that have not matched it. This is achieved by pruning unconditionally, before the regex is tested:

    find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+

Patch credit: Serg
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-
Christian Hesse authored
Using systemd we can automate creating users and directories, so generate and install the configuration files.

Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>

Small change in cmake/install_layout.cmake compared to the original contributor patch: also install the SYSTEMD_SYSUSERS and SYSTEMD_TMPFILES directories. The variables were being set, but the loop which defines the final install files was not updated.
-
Vicențiu Ciorbaru authored
-
Varun Gupta authored
In the function make_sortkey a tmp buffer was defined, and in the absence of param->tmp_buffer, the tmp buffer used the sort_keys buffer. The sort_keys buffer has a length defined in sort_field->length, while the length of param->tmp_buffer is stored in param->rec_length. Make sure to use the appropriate length based on which buffer we are using, otherwise we'll overflow. Also added a cast to size_t during the calculation of the sort keys buffer size, to avoid an overflow if the buffer size exceeds 32 bits.
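A sketch of the two fixes, with the structures reduced to the two lengths involved (names are illustrative, not the actual filesort types):

```cpp
// Sketch of both fixes: pick the length that matches the buffer being
// written, and widen to size_t before multiplying.
#include <cstddef>
#include <cstdint>

struct SortField { uint32_t length; };      // length of a sort_keys field
struct SortParam { uint32_t rec_length; };  // length of param->tmp_buffer

std::size_t copy_length(bool using_tmp_buffer,
                        const SortParam& param, const SortField& field)
{
    // Bug shape: using one buffer's length while writing into the
    // other overflows the destination.
    return using_tmp_buffer ? param.rec_length : field.length;
}

std::size_t sort_keys_buffer_bytes(uint32_t max_keys, uint32_t key_size)
{
    // The added cast: without it the product is computed in 32 bits
    // and can silently wrap for large buffers.
    return static_cast<std::size_t>(max_keys) * key_size;
}
```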
-
Alexander Barkov authored
An after-fix for MDEV-14008 Assertion failing: `!is_set() || (m_status == DA_OK_BULK && is_bulk_op())`. Fixing an additional failure discovered after a merge to 10.2.
-
Alexander Barkov authored
-
Marko Mäkelä authored
The comment became stale in commit 9f57e595 which removed the parameter "flags".
-
- 19 Dec, 2017 5 commits
-
-
sjaakola authored
The galera_events test shows a regression with the original fix for MW-416. The reason was that Events::drop_event() can also be called from inside event execution, where we have special treatment for an event that executes a "DROP EVENT" statement, running TOI replication inside the event processing body. This resulted in executing WSREP_TO_ISOLATION twice for such a DROP EVENT statement. The fix is to call WSREP_TO_ISOLATION_BEGIN only in Events::drop_event().
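A rough sketch of the fix's shape, with stand-in functions (the real code guards WSREP_TO_ISOLATION_BEGIN inside Events::drop_event() rather than taking a flag):

```cpp
// Rough sketch: enter total order isolation (TOI) only once for a
// DROP EVENT, at the outermost call, not again from the nested call
// made during event execution.
#include <cstdio>

static bool wsrep_toi_begin(const char* stmt)
{
    std::printf("TOI begin: %s\n", stmt);
    return true;  // pretend replication setup succeeded
}

static bool drop_event_low(const char* name)
{
    std::printf("dropping event %s\n", name);
    return false;  // no error
}

// Reached both from the SQL "DROP EVENT" statement and from inside
// event execution (for self-dropping events).
static bool drop_event(const char* name, bool nested_in_event_body)
{
    if (!nested_in_event_body && !wsrep_toi_begin("DROP EVENT"))
        return true;  // replication error
    return drop_event_low(name);
}

int main() { return drop_event("ev1", false) ? 1 : 0; }
```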
-
sjaakola authored
Changed the return code for a replication error to TRUE. This is aligned with the native MySQL convention of returning TRUE (defined as 1) or FALSE (defined as 0) from a bool function. This is wrong, but follows the MySQL conventions, at least...
-
Simon J Mudd authored
-
Sergey Vojtovich authored
* find_type_or_exit() client helper did exit(1) on error; the exit(1) moved to the clients.
* mysql_read_default_options() did exit(1) on error; the error is now passed through and handled.
* my_str_malloc_default() did exit(1) on error; replaced the my_str_ allocator functions with normal my_malloc()/my_realloc()/my_free().
* sql_connect.cc did many exit(1) on hash initialisation failure; removed the error check since my_hash_init() never fails.
* my_malloc() did exit(1) on error; replaced with abort().
* my_load_defaults() did exit(1) on error; replaced with return 2. It still does exit(0) when invoked with --print-defaults.
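The recurring pattern, sketched with a stand-in helper (the real functions live in the client library and mysys):

```cpp
// Sketch of the pattern applied throughout: a library helper reports
// failure to its caller instead of terminating the whole process.
#include <cstdio>

// Before: the helper itself called exit(1), which is unacceptable for
// library code. After: it reports failure and lets the caller decide.
static int find_type_checked(const char* name, int found)
{
    if (!found)
        std::fprintf(stderr, "unknown option: %s\n", name);
    return found;  // 0 means "not found"
}

int main()
{
    if (!find_type_checked("--bogus", 0))
        return 1;  // the exit(1) now lives in the client, not the helper
    return 0;
}
```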
-
Jan Lindström authored
Problem: crypt_data->min_key_version is not a reliable way to detect whether a tablespace is encrypted. It could lead to the key_version in the first page of the second system tablespace file (page 192, and similarly for other files if more are configured) being replaced with zero, causing corruption, as on the next startup the page is thought to be corrupted. Note that crypt_data->min_key_version is updated only after all pages of the tablespace have been processed (i.e. key rotation is done) and flushed.

fil_write_flushed_lsn(): use crypt_data->should_encrypt() instead.
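A sketch of the check change, assuming a simplified CryptData with a should_encrypt() predicate (the real fil_space_crypt_t is more involved):

```cpp
// Sketch: ask the crypt data whether page writes must be encrypted,
// instead of inferring it from min_key_version, which lags until key
// rotation has processed and flushed every page.
#include <cstdint>

struct CryptData {
    uint32_t min_key_version;  // updated only after full rotation + flush
    bool encrypting;           // current intent for page writes
    bool should_encrypt() const { return encrypting; }
};

bool write_flushed_lsn_encrypted(const CryptData* crypt_data)
{
    // Old (buggy) shape: a check based on min_key_version read
    // "rotation not finished" as "not encrypted", so key_version 0 was
    // written into the first page of the second system tablespace file,
    // which then looked corrupted on the next startup.
    // New shape:
    return crypt_data && crypt_data->should_encrypt();
}
```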
-