- 28 Dec, 2017 3 commits
Vicențiu Ciorbaru authored
Monty authored
Sergei Golubchik authored
cherry-pick e6ce97a5
- 27 Dec, 2017 3 commits
Igor Babaev authored
for a query that uses a CTE

The first reference to a CTE in the processed query uses the unit built by the parser for the CTE specification. This unit is considered the specification of the derived table created for the first reference to the CTE. This requires some transformation of the original query tree: the unit of the specification must be moved to a new position, as a slave of the select where the first reference to the CTE occurs. The transformation is performed by the function st_select_lex_node::move_as_slave(). There was an obvious bug in this function; as a result, in many cases the moved unit turned out to be lost in the query tree. This could cause different problems. In particular, prepared statements for queries that used CTEs could miss the cleanup of some selects that is performed at the end of the preparation/execution of a PS. If such cleanup is not done for a PS, the next execution of the PS causes an assertion abort or a crash.
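As an illustration, a minimal sketch of the pattern that exercised the bug (table and statement names are hypothetical): a prepared statement whose query references a CTE, executed more than once.

    CREATE TABLE t1 (a INT);
    PREPARE stmt FROM 'WITH cte AS (SELECT a FROM t1) SELECT * FROM cte';
    EXECUTE stmt;   -- the cleanup at the end of this execution could miss the lost unit
    EXECUTE stmt;   -- the re-execution could then hit the assertion abort or crash
    DEALLOCATE PREPARE stmt;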
Vicențiu Ciorbaru authored
Alexander Barkov authored
MDEV-14249 Wrong character set info of Query_log_event: a Query_log_event whose query was constructed with a different charset than its character set info causes an error when the slave applies the event.
- 25 Dec, 2017 9 commits
Sergei Golubchik authored
MDEV-14026 ALTER TABLE ... DELAY_KEY_WRITE=1 creates table copy for partitioned MyISAM table with DATA DIRECTORY/INDEX DIRECTORY options

Set data_file_name and index_file_name in HA_CREATE_INFO before calling check_if_incompatible_data().
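A sketch of the kind of DDL affected (directory paths are hypothetical and must already exist):

    CREATE TABLE t1 (a INT, KEY (a)) ENGINE=MyISAM
      PARTITION BY HASH (a)
      (PARTITION p0 DATA DIRECTORY='/data/p0' INDEX DIRECTORY='/idx/p0',
       PARTITION p1 DATA DIRECTORY='/data/p1' INDEX DIRECTORY='/idx/p1');
    -- before the fix, this was executed as a full table copy rather than a
    -- simple metadata change:
    ALTER TABLE t1 DELAY_KEY_WRITE=1;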
Sergei Golubchik authored
Sergei Golubchik authored
Don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call; do it once in open() (because they don't change), on TABLE::mem_root (so they stay valid until the table is closed).
Daniel Black authored
PKG_CONFIG does not really work on Windows: Strawberry Perl uses MinGW libraries, which the VS compiler cannot use, and Boost is not used. Tests main.query_cache_debug and main.mdev-504 timed out on the debug build at 2 minutes, so increase the timeout to 4 minutes. Overall build time was 30 min 44 seconds, so there is plenty of headroom currently.

Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
Daniel Black authored
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
Sergei Golubchik authored
fix 011497bd in RPM and DEB: storage engine packages must require the server package of exactly the correct version.
Sergei Golubchik authored
Sachin Setiya authored
Problem: GTIDs are not transferred in Galera Cluster.

Solution: We need to transfer the GTID when the cluster is a slave/master in async replication. In normal GTID replication, the GTID is generated on the receiving node itself, and it is always in sync with the other nodes: because Galera keeps the nodes in sync, all nodes get the same number of event groups. The issue arises when, say, Galera is a slave in async replication:

    A
    |                (async replication)
    D <-> E <-> F    {Galera replication}

What should happen is that all nodes apply the master's GTID, but this does not happen, because nodes E and F do not receive the GTID from D in the write set. So what E (or F) does is apply wsrep_gtid_domain_id, D's server-id, and E's next GTID sequence number. This generated GTID does not always work, e.g. when A has a different domain id. So in this commit, when a Galera node sees that an event was received from the master, we simply write a Gtid_Log_Event into the write set and send it to the other nodes.
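For illustration only (not part of the commit): the system variables involved, and a consistency check one might run on the Galera nodes after the fix.

    -- on each Galera node (D, E, F):
    SELECT @@wsrep_gtid_mode, @@wsrep_gtid_domain_id;
    -- after applying events from the async master A, all nodes should now
    -- report the same position:
    SELECT @@gtid_slave_pos;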
Alexey Botchkov authored
Item_func_json_extract::val_int fixed. It hadn't been tested yet, as it is called only in exotic cases.
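For illustration, one way such an integer context can arise (values hypothetical): the extracted value is consumed directly as a boolean/integer condition.

    SELECT IF(JSON_EXTRACT('{"ok": 1}', '$.ok'), 'truthy', 'falsy');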
- 23 Dec, 2017 2 commits
Monty authored
The problem was that MAX_SLAVE_ERROR didn't cover all possible errors.
Sergei Petrunia authored
- 22 Dec, 2017 5 commits
Daniel Bartholomew authored
Vicențiu Ciorbaru authored
Sergei Petrunia authored
Sergey Vojtovich authored
Coverage for temporary table modifications in read-only transactions. Introduced in 5.7 by 325cdf426.
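The behaviour being covered, as a sketch (table name hypothetical): temporary tables remain writable inside a read-only transaction.

    CREATE TEMPORARY TABLE tmp1 (a INT);
    START TRANSACTION READ ONLY;
    INSERT INTO tmp1 VALUES (1);   -- allowed: temporary tables are writable
    UPDATE tmp1 SET a = a + 1;     -- even inside a read-only transaction
    COMMIT;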
Alexander Barkov authored
- 21 Dec, 2017 10 commits
Sergei Petrunia authored
Sergei Petrunia authored
- Make my.cnf include rpl_1slave_base.cnf (needed for tests that actually use replication, i.e. need a functioning slave)
- Adjust and enable singledelete_idempotent_table.test
- More edits in disabled.def
Sergei Petrunia authored
Sergei Petrunia authored
Vicențiu Ciorbaru authored
A suggestion from serg@mariadb.org to make role propagation simpler. Instead of gathering the leaf roles in an array, which for very wide graphs could potentially mean a big part of the whole roles schema, keep the previous logic. When finally merging a role, set its counter to something positive. This effectively marks the role as merged, so a random pass through the roles hash that touches a previously merged role won't cause the problem described in MDEV-12366 any more, as propagate_role_grants_action will stop attempting to merge from that role.
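For context, a hypothetical sketch of the kind of role graph involved; a grant to a role at the bottom has to be merged upward into every role that inherits it.

    CREATE ROLE r_leaf;
    CREATE ROLE r_mid;
    CREATE ROLE r_top;
    GRANT r_leaf TO r_mid;
    GRANT r_mid TO r_top;
    -- this grant must also be merged into r_mid and r_top:
    GRANT SELECT ON db1.* TO r_leaf;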
Marko Mäkelä authored
Sometimes, the test would fail with a result difference for the READ UNCOMMITTED read, because the incremental backup would finish before the redo log was written for all the rows that were inserted in the second batch. To fix that, cause a redo log write by creating another transaction. The transaction rollback (which internally does a commit) will be flushed to the redo log, and before that, all the preceding changes will be flushed to the redo log as well.
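A sketch of the trick described above (table name hypothetical):

    -- any small transaction will do; its rollback is written to the redo log,
    -- and all previously logged changes are flushed along with it
    BEGIN;
    INSERT INTO t1 VALUES (1);
    ROLLBACK;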
Marko Mäkelä authored
Marko Mäkelä authored
row_log_table_apply_insert_low(), row_log_table_apply_update(): When reporting the error_key_num, only count the clustered index if it corresponds to a key in the SQL layer. The assertion failure was probably introduced by the (incomplete) MySQL 5.6.28 bug fix Bug #21364096 THE BOGUS DUPLICATE KEY ERROR IN ONLINE DDL WITH INCORRECT KEY NAME which we are improving. Side note: the fix was incorrectly merged to MySQL 5.7.10; incorrect key names will continue to be reported in MySQL 5.7.
Marko Mäkelä authored
These assertions were disabled in MariaDB 10.1.1 in commit df4dd593 with a bogus comment referring to the function wsrep_fake_trx_id() that was introduced in the very same commit.
Elena Stepanova authored
- 20 Dec, 2017 8 commits
Sergei Petrunia authored
Sergei Petrunia authored
Sergei Petrunia authored
Sachin Setiya authored
Problem: The command was

    find $paths -mindepth 1 -regex $cpat -prune -o -exec rm -rf {} \+

which was supposed to work as follows:
* skip the $paths directories themselves (-mindepth 1)
* see if the dir/file name matches $cpat (-regex)
* if yes, don't dive into the directory, skip it (-prune)
* otherwise (-o)
* remove it and everything inside (-exec)

Now, -exec ... \+ works like this: every newly found path is appended to the end of the command line; when the accumulated command line length reaches `getconf ARG_MAX` (~2Gb), it is executed, and find continues, appending to a new command line. What happens here is that find appends some directory to the command line, then dives into it and starts appending files from that directory. At some point the command line overflows, rm -rf gets executed and removes the whole directory. Now find tries to continue scanning a directory that has already been removed.

Fix: don't dive into directories that will be recursively removed anyway; use -prune for them. Basically, we should be pruning both paths that have matched $cpat and paths that have not matched it. This is achieved by pruning unconditionally, before the regex is tested:

    find $paths -mindepth 1 -prune -regex $cpat -o -exec rm -rf {} \+

Patch credit: Serg
Sergei Petrunia authored
Re-enable the test, as the fix has been pushed.
Sergei Petrunia authored
Oleksandr Byelkin authored
fix_fields calls fixed.
Vicențiu Ciorbaru authored