- 15 Nov, 2017 1 commit
-
Andrei Elkin authored
As reported in MDEV-11969, there is no way to discard knowledge about a gtid domain that is no longer updated on a server. Besides cluttering output in the DBA console, stale domains can prevent the slave from connecting to the master, as MDEV-12012 witnesses. Which domains are obsolete must be decided by the user (DBA), based on whether the domain info is still relevant and whether the domain will ever receive another update.

This patch introduces a method to discard obsolete gtid domains from the server binlog state. The removal requires that no event group from such a domain is present in the existing binlog files. If there are any, the containing logs must first be PURGEd for FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains) to succeed; otherwise the command returns an error. The list of obsolete domains can be computed by intersecting two sets - the earliest (first) binlog's Gtid_list and the current value of @@global.gtid_binlog_state - and extracting the domain id components from the items in the intersection.

The new DELETE_DOMAIN_ID variant of FLUSH still rotates the binlog, omitting the deleted domains from the active binlog file's Gtid_list. Note, though, that when the command is ineffective - none of the domains requested for deletion exists in the binlog state - no rotation occurs.

Deleting obsolete domains is not harmful to connected slaves as long as the master-side binlog file *purge* is synchronized with FLUSH ... DELETE_DOMAIN_ID. The slaves must have processed the last event from the purged files as usual, so that they do not later request a gtid from a file that is already gone. While the command is not replicated (unlike ordinary FLUSH BINARY LOGS), slaves that still carry the extra domains will not suffer from reconnection errors, thanks to the master-slave gtid connection protocol allowing the master to be ignorant of a gtid domain. Should such a slave be promoted to the master role at failover, it may run the ex-master's FLUSH BINARY LOGS DELETE_DOMAIN_ID=(list-of-domains) to clean its own binlog state.

NOTES. suite/perfschema/r/start_server_low_digest.result is re-recorded as a consequence of internal parser code changes.
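A minimal workflow sketch for the new clause (the domain ids and the binlog file name below are illustrative, not taken from the commit):

```sql
-- Inspect the current binlog state; each entry is domain-server-seqno.
SELECT @@global.gtid_binlog_state;

-- Suppose domains 2 and 3 have been judged obsolete. Any binlog files that
-- still contain event groups from them must be purged first, otherwise the
-- FLUSH below returns an error.
PURGE BINARY LOGS TO 'master-bin.000042';

-- Rotate the binlog and drop the obsolete domains from the binlog state.
FLUSH BINARY LOGS DELETE_DOMAIN_ID = (2, 3);

-- Verify that the domains are gone.
SELECT @@global.gtid_binlog_state;
```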
-
- 14 Nov, 2017 2 commits
-
Daniel Bartholomew authored
-
Oleksandr Byelkin authored
Repeat the reworked solution for procedures for all possible SPs (functions & triggers).
-
- 13 Nov, 2017 2 commits
-
-
Vladislav Vaintroub authored
Do not do reopen_fstreams/setbuf twice during server startup on Windows. fprintf(stderr, ...) might crash if setbuf is executed at the same time.
-
Sergei Golubchik authored
1. Remove erroneously committed *.orig files.
2. Fix LZ4 detection on Mac OS X and FreeBSD. Cannot do pkg_check_modules(LIBLZ4 liblz4) followed by find_library(LIBLZ4_LIBS ...), because find_library(X) does not do anything if X is defined (documented), and pkg_check_modules(Y) sets Y_LIBS to "" (undocumented!).
-
- 10 Nov, 2017 4 commits
-
Elena Stepanova authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
- 09 Nov, 2017 8 commits
-
Sergei Golubchik authored
MDEV-12372 mysqlbinlog --version output is the same on 10.x as on 5.5.x, and contains not only the version

Don't print usage() for --version.
-
Sergei Golubchik authored
Some tests are skipped by checks in suite.pm. It is redundant to have an SQL-level run-time check in the .inc file itself. In some cases it is not only redundant, but dangerous: after one bug in 10.2, innodb.create_isl_with_direct failed to start InnoDB, but the server started fine (just without InnoDB) and, instead of failing, the test was skipped by the run-time check in have_innodb.inc.

# Conflicts:
#	mysql-test/include/not_embedded.inc
#	mysql-test/r/change_user_notembedded.result
#	mysql-test/suite.pm
#	mysql-test/t/change_user_notembedded.test
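For context, an SQL-level run-time check of this kind essentially boils down to a query like the following (a sketch, not the literal contents of have_innodb.inc):

```sql
-- If this returns 0, the .inc-style check would silently skip the test,
-- which is how a server that failed to start InnoDB went unnoticed.
SELECT COUNT(*) AS innodb_enabled
FROM information_schema.ENGINES
WHERE ENGINE = 'InnoDB'
  AND SUPPORT IN ('YES', 'DEFAULT');
```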
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Vladislav Vaintroub authored
Remove the main bottleneck - the memset() call mentioned in the bug. Use os_mem_alloc_large() instead of malloc()/memset().
-
Marko Mäkelä authored
MDEV-14333 Mariabackup --apply-log-only crashes if incomplete transactions with update_undo logs are present

trx_undo_free_prepared(): Relax the assertion for mariabackup --apply-log-only.
-
Alexander Barkov authored
-
Oleksandr Byelkin authored
MDEV-14164: Unknown column error when adding aggregate to function in oracle style procedure FOR loop

Differentiate between pullout for merge and pullout of an outer field during the exists2in transformation. In the latter case the field was outer, so we can safely start from the name resolution context of the SELECT where it was pulled out. The old behavior led to an inconsistency between the list of tables and the outer name resolution context (which skips one SELECT for merge purposes), which created problems for name resolution.
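A hedged illustration of the kind of construct the bug title describes - an aggregate over a function call inside an Oracle-mode FOR loop. The table, function and procedure names are invented, not taken from the actual test case:

```sql
SET sql_mode = ORACLE;

CREATE TABLE t1 (grp INT, val INT);

DELIMITER //
CREATE FUNCTION f1(x INT) RETURN INT AS
BEGIN
  RETURN x + 1;
END;
//

-- Before the fix, name resolution of the columns selected by a FOR-loop
-- query of roughly this shape could fail with an "Unknown column" error.
CREATE PROCEDURE p1 AS
  total INT := 0;
BEGIN
  FOR rec IN (SELECT grp, MAX(f1(val)) AS mx FROM t1 GROUP BY grp)
  LOOP
    total := total + rec.mx;
  END LOOP;
  SELECT total;
END;
//
DELIMITER ;

CALL p1();
```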
-
- 08 Nov, 2017 14 commits
-
Jan Lindström authored
The test now uses debug and debug_sync.
-
Vasil Dimov authored
Use symbolic signal names (e.g. SIGSTOP) instead of numeric ones (e.g. 19) because the latter are not portable.
-
Daniele Sciascia authored
-
Teemu Ollakka authored
-
sjaakola authored
Added one more test scenario for two cascading parent tables
-
sjaakola authored
This is needed to clear the THD::wsrep_status_vars reference, which would otherwise keep pointing to a status variable array that is no longer in effect.
-
Jan Lindström authored
MariaDB adjustments.
-
sjaakola authored
* Created tests focusing on multi-master conflicts during cascading foreign key processing (see the schema sketch below).
* In row0upd.cc, call wsrep_row_ups_check_foreign_constraints only when running in a cluster.
* In row0ins.cc, fixed a regression from MW-369, which caused a crash with MW-402.test.
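A minimal, hypothetical schema of the shape these cascading foreign key tests exercise (table and column names are invented):

```sql
-- Two parent tables; the child cascades from both, so a DELETE on either
-- parent triggers cascading foreign key processing on the child rows.
CREATE TABLE parent1 (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE parent2 (id INT PRIMARY KEY) ENGINE=InnoDB;

CREATE TABLE child (
  id INT PRIMARY KEY,
  p1_id INT,
  p2_id INT,
  FOREIGN KEY (p1_id) REFERENCES parent1(id) ON DELETE CASCADE,
  FOREIGN KEY (p2_id) REFERENCES parent2(id) ON DELETE CASCADE
) ENGINE=InnoDB;

-- In a multi-master (Galera) setup, near-simultaneous deletes such as
--   node1> DELETE FROM parent1 WHERE id = 1;
--   node2> DELETE FROM parent2 WHERE id = 1;
-- both cascade into child and must be resolved by certification.
```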
-
Jan Lindström authored
MariaDB adjustments.
-
sjaakola authored
* Changed thd_binlog_format to return the configured binlog format during wsrep execution, regardless of the binlogging setting (i.e. with or without binlogging).
* thd_binlog_format is used in innobase::write_row(), and would return a confusing result there when log_bin==OFF.
-
Daniele Sciascia authored
-
Jan Lindström authored
Adapt to MariaDB case
-
Daniele Sciascia authored
It is possible for a stored procedure that has an error handler catching SQLEXCEPTION to call thd->clear_error() on a thd that failed certification. Because the error is cleared, the wsrep patch proceeds on the normal path and may try to commit statements that should actually abort. This patch catches the situation where wsrep_conflict_state is still set but the thd's error has been cleared, and rolls back the statement in such cases.
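A hedged sketch of the kind of procedure involved (table and procedure names are invented): the handler swallows whatever error the statement raised, including the one reported for a failed certification.

```sql
CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;

DELIMITER //
CREATE PROCEDURE insert_ignoring_errors(IN v INT)
BEGIN
  -- The empty handler clears any SQL error raised by the INSERT; before
  -- this fix, clearing a certification-failure error let the transaction
  -- continue toward commit instead of being rolled back.
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN END;
  INSERT INTO t1 VALUES (v);
END //
DELIMITER ;

CALL insert_ignoring_errors(1);
```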
-
Marko Mäkelä authored
-
- 07 Nov, 2017 3 commits
-
Alexander Barkov authored
-
Vesa Pentti authored
-
Alexander Barkov authored
This problem was earlier fixed by the patch for MDEV-8910. Adding tests only.
-
- 06 Nov, 2017 6 commits
-
Vladislav Vaintroub authored
-
Marko Mäkelä authored
-
Vladislav Vaintroub authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
With a big buffer pool that contains many data pages, DISCARD TABLESPACE took a long time, because it would scan the entire buffer pool to remove any pages that belong to the tablespace. With a large buffer pool, this would take a lot of time, especially when the table-to-discard is empty.

The minimum amount of work that DISCARD TABLESPACE must do is to remove the pages of the to-be-discarded table from the buf_pool->flush_list, because any writes to the data file must be prevented before the file is deleted.

If DISCARD TABLESPACE does not evict the pages from the buffer pool, then IMPORT TABLESPACE must do it, because we must prevent pre-DISCARD, not-yet-evicted pages from being mistaken for pages of the imported tablespace. It would not be a useful fix to simply move the buffer pool scan to the IMPORT TABLESPACE step. What we can do is to actively evict those pages that could be mistaken for imported pages. In this way, when importing a small table into a big buffer pool, the import should still run relatively fast.

Import is bypassing the buffer pool when reading pages for the adjustment phase. In the adjustment phase, if a page exists in the buffer pool, we could replace it with the page from the imported file. Unfortunately I did not get this to work properly, so instead we will simply evict any matching page from the buffer pool.

buf_page_get_gen(): Implement BUF_EVICT_IF_IN_POOL, a new mode where the requested page will be evicted if it is found. There must be no unwritten changes for the page.

buf_remove_t: Remove. Instead, use trx!=NULL to signify that a write to file is desired, and use a separate parameter bool drop_ahi.

buf_LRU_flush_or_remove_pages(), fil_delete_tablespace(): Replace buf_remove_t.

buf_LRU_remove_pages(), buf_LRU_remove_all_pages(): Remove.

PageConverter::m_mtr: A dummy mini-transaction buffer.

PageConverter::PageConverter(): Complete the member initialization list.

PageConverter::operator()(): Evict any 'shadow' pages from the buffer pool so that pre-existing (garbage) pages cannot be mistaken for pages that exist in the being-imported file.

row_discard_tablespace(): Remove a bogus comment that seems to refer to IMPORT TABLESPACE, not DISCARD TABLESPACE.
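For reference, the user-level workflow whose buffer-pool handling this change addresses (a sketch; the table name is illustrative):

```sql
CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;

-- Detach the tablespace. With this change, only pages on
-- buf_pool->flush_list need to be removed here, instead of scanning the
-- whole buffer pool.
ALTER TABLE t1 DISCARD TABLESPACE;

-- (Copy a matching t1.ibd, and t1.cfg if available, into the database
--  directory, e.g. taken from a donor after FLUSH TABLES t1 FOR EXPORT.)

-- Attach the new data file. Any stale pre-DISCARD pages still in the
-- buffer pool are evicted so they cannot be mistaken for imported pages.
ALTER TABLE t1 IMPORT TABLESPACE;
```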
-
Marko Mäkelä authored
buf_flush_or_remove_pages(), buf_flush_dirty_pages(): Remove the redundant parameter flush=(trx!=NULL).
-