- 31 Aug, 2020 1 commit
Andrei Elkin authored
(This commit is for 10.3 and upper branches.)

In case of a pattern where a non-STMT_END-marked Rows-log-event (A) is followed by a STMT_END-marked one (B), mysqlbinlog mixes up the base64-encoded rows events with their pseudo-SQL representation produced by the verbose option:

  BINLOG '
  base64 encoded data for A
  ### verbose section for A
  base64 encoded data for B
  ### verbose section for B
  '/*!*/;

In effect the produced BINLOG '...' query is not valid and is rejected with an error. Examples of BINLOG statements malformed this way could be found in binlog_row_annotate.result, which gets corrected with the patch.

The issue is fixed with the introduction of an auxiliary IO_CACHE that holds the verbose comments until the terminal STMT_END event is found. The new cache is emptied out after the two pre-existing ones are done at that time. The correctly produced output for the above case is now:

  BINLOG '
  base64 encoded data for A
  base64 encoded data for B
  '/*!*/;
  ### verbose section for A
  ### verbose section for B

Thanks to Alexey Midenkov for recognizing the problem and attempting to tackle it, to Venkatesh Duggirala, who produced a patch for the upstream whose idea is exploited here, and to the MDEV-23077 reporter LukeXwang, who also contributed a piece of a patch aiming at this issue.
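Below is a minimal, self-contained C++ sketch of the buffering idea; the names RowsEvent and print_events are invented for illustration, and plain string streams stand in for the IO_CACHE objects used by the real mysqlbinlog code:

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

struct RowsEvent {
  std::string base64;   // base64-encoded event body
  std::string verbose;  // "### ..." pseudo-SQL commentary
  bool stmt_end;        // true for the event that ends the statement
};

void print_events(const std::vector<RowsEvent>& events) {
  std::ostringstream body_cache;     // stands in for the pre-existing cache
  std::ostringstream verbose_cache;  // the auxiliary cache added by the fix

  for (const RowsEvent& ev : events) {
    body_cache << ev.base64 << '\n';
    verbose_cache << ev.verbose << '\n';  // held back, not interleaved
    if (ev.stmt_end) {                    // terminal STMT_END event found
      std::cout << "BINLOG '\n" << body_cache.str() << "'/*!*/;\n"
                << verbose_cache.str();   // emptied only after the statement
      body_cache.str("");
      verbose_cache.str("");
    }
  }
}

int main() {
  print_events({{"base64 encoded data for A", "### verbose section for A", false},
                {"base64 encoded data for B", "### verbose section for B", true}});
}
```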
- 31 Jul, 2020 1 commit
Nikita Malyavin authored
Call mark_columns_per_binlog_row_image() before find_row() to set up table->vcol_set early, so that the virtual column value will be updated after the record read (ha_rnd_pos/ha_index_next/etc.) by the table->update_virtual_fields() call.
- 22 Jul, 2020 2 commits
Ian Gilfillan authored
Sujatha authored
Problem:
========
During point-in-time recovery from the binary log, a syntax error is reported for the BEGIN statement and recovery fails.

Analysis:
=========
In MariaDB 10.3 and later, setting the sql_mode system variable to ORACLE allows the server to understand a subset of Oracle's PL/SQL language. Setting sql_mode=ORACLE switches the parser from the MariaDB parser to the Oracle-compatible parser. With this change, 'BEGIN' is no longer treated as 'START TRANSACTION'; hence the syntax error is reported.

Fix:
===
At present the 'BEGIN' query is generated from 'Gtid_log_event::print'. The session-specific 'sql_mode' is not present as part of 'Gtid_log_event'. If it were available, the mysqlbinlog tool could check for 'sql_mode == ORACLE' and output "START TRANSACTION" in that particular mode, writing "BEGIN" for all other sql_modes. Since it is not available, the 'mysqlbinlog' tool outputs all 'BEGIN' statements as 'START TRANSACTION', irrespective of 'sql_mode'.
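A tiny hedged sketch of the resulting print logic; print_gtid_begin is a hypothetical stand-in for the relevant part of Gtid_log_event::print, which cannot consult the original session's sql_mode:

```cpp
#include <cstdio>

void print_gtid_begin(FILE* out) {
  // "BEGIN" is re-interpreted by the sql_mode=ORACLE parser, while
  // "START TRANSACTION" is valid in every sql_mode, so the portable
  // spelling is emitted unconditionally.
  fprintf(out, "START TRANSACTION\n/*!*/;\n");
}

int main() { print_gtid_begin(stdout); }
```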
- 15 May, 2020 1 commit
Monty authored
Other things:
- Removed innodb_encryption_tables.test from valgrind, as it takes a REALLY long time
- 27 Apr, 2020 1 commit
Marko Mäkelä authored
This is a backport of the applicable part of commit 93475aff and commit 2c39f69d from 10.4.

Before 10.4 and Galera 4, WSREP_ON is a macro that points to a global Boolean variable, so it is not that expensive to evaluate, but we will add an unlikely() hint around it.

WSREP_ON_NEW: Remove. This macro was introduced in commit c863159c when reverting WSREP_ON to its previous definition.

We replace some uses of WSREP_ON with WSREP(thd), as was done in 93475aff. Note: the macro WSREP() in 10.1 is equivalent to WSREP_NNULL() in 10.4.

Item_func_rand::seed_random(): Avoid invoking current_thd when WSREP is not enabled.
- 17 Mar, 2020 1 commit
Marko Mäkelä authored
Rows_log_event::change_to_flashback_event(): Reduce the scope of the variable swap_buff2, and do not duplicate conditions.

GCC 9.3.0 flagged a -Wmaybe-uninitialized warning when compiling the 10.5 branch using cmake -DWITH_ASAN=ON -DCMAKE_CXX_FLAGS=-O2.
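A small illustrative sketch of the scope-reduction pattern; the function body is invented, and only the idea of declaring the buffer inside the sole branch that uses it comes from the commit:

```cpp
#include <cstdlib>

void change_to_flashback_sketch(bool has_after_image) {
  if (has_after_image) {
    // Declared inside the only branch that uses it: the compiler can now
    // prove the buffer is initialized before every use, and the condition
    // is tested exactly once instead of being duplicated.
    char* swap_buff2 = static_cast<char*>(malloc(16));
    if (swap_buff2) swap_buff2[0] = '\0';
    free(swap_buff2);
  }
}

int main() { change_to_flashback_sketch(true); }
```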
- 23 Feb, 2020 1 commit
seppo authored
If an async replication slave thread conflicts with cluster replication, the async slave transaction should be BF-aborted and, depending on the state of the async slave transaction's execution, potentially also replayed. There were problems in this BF abort implementation, and the replaying was not started.

This pull request contains fixes which make sure that if an async slave thread is marked for abort and replay, it carries out the rollback and releases all locks and resources before starting the replaying. After replaying, the async slave transaction is treated as successful, so the slave thread continues as usual, handling the next replication event.

There is also a new mtr test, galera.galera_slave_replay, which stresses both a certification failure for the async slave thread and a successful BF abort followed by replaying.
- 24 Jan, 2020 1 commit
Sujatha authored
MDEV-21490: binlog tests fail with valgrind: Conditional jump or move depends on uninitialised value in sql_ex_info::init

Problem:
=======
P1) Conditional jump or move depends on uninitialised value(s)
    sql_ex_info::init(char const*, char const*, bool) (log_event.cc:3083)

    Code: none of the variables used below are initialized:

      return ((cached_new_format != -1) ? cached_new_format :
              (cached_new_format= (field_term_len > 1 || enclosed_len > 1 ||
                                   line_term_len > 1 || line_start_len > 1 ||
                                   escaped_len > 1)));

P2) Conditional jump or move depends on uninitialised value(s)
    Rows_log_event::Rows_log_event(char const*, unsigned int, Format_description_log_event const*) (log_event.cc:9571)

    Code: an uninitialized value is reported for the 'var_header_len' variable:

      if (var_header_len < 2 ||
          event_len < static_cast<unsigned int>(var_header_len + (post_start - buf)))

P3) Conditional jump or move depends on uninitialised value(s)
    Table_map_log_event::pack_info(Protocol*) (log_event.cc:11553)

    Code: 'm_table_id' is uninitialized:

      void Table_map_log_event::pack_info(Protocol *protocol)
      ...
      size_t bytes= my_snprintf(buf, sizeof(buf), "table_id: %lu (%s.%s)",
                                m_table_id, m_dbnam, m_tblnam);

Fix:
===
P1) Initialize the cached_new_format, field_term_len, enclosed_len, line_term_len, line_start_len and escaped_len members in the default constructor.

P2) "var_header_len" is initialized by reading the event buffer. In the case of an invalid event the buffer will contain invalid data, so a check was added to validate the event data: if event_len is smaller than the valid header length, return immediately.

P3) 'm_table_id' within Table_map_log_event is initialized by reading data from the event buffer. Use the 'VALIDATE_BYTES_READ' macro to validate the current state of the buffer; if it is invalid, return immediately.
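The two repair patterns, sketched as standalone C++ with invented names (sql_ex_sketch, header_is_valid); this illustrates the shape of the fix, not the actual MariaDB code:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// P1 pattern: a default constructor that leaves no member uninitialized.
struct sql_ex_sketch {
  int cached_new_format;
  uint8_t field_term_len, enclosed_len, line_term_len,
          line_start_len, escaped_len;
  sql_ex_sketch()
    : cached_new_format(-1), field_term_len(0), enclosed_len(0),
      line_term_len(0), line_start_len(0), escaped_len(0) {}
};

// P2 pattern: validate var_header_len against the event length before use.
bool header_is_valid(size_t event_len, size_t post_header_offset,
                     uint16_t var_header_len) {
  return var_header_len >= 2 &&
         event_len >= var_header_len + post_header_offset;
}

int main() {
  sql_ex_sketch ex;  // every member now has a defined value
  printf("cached_new_format=%d, header ok=%d\n",
         ex.cached_new_format, header_is_valid(10, 9, 2));
}
```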
- 07 Jan, 2020 9 commits
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:

  AddressSanitizer: heap-buffer-overflow on address
  READ of size 1 at 0x60e00009cf71 thread T28
  #0 0x55e37e034ae2 in net_field_length

Fix:
===
**Part10: Avoid reading out of the buffer**
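A self-contained sketch of a bounds-checked packed-integer reader, which is the shape of the net_field_length fix; the 252/253/254 prefixes follow the MySQL packed-integer convention, but treat the details here as an illustrative approximation:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Returns false if decoding would read past `end`; on success stores the
// value and advances *pos.
bool read_packed_length(const uint8_t** pos, const uint8_t* end,
                        uint64_t* out) {
  if (*pos >= end) return false;
  uint8_t first = **pos;
  if (first < 251) { *out = first; ++*pos; return true; }
  if (first == 251) return false;          // NULL marker, not a length
  size_t extra = (first == 252) ? 2 : (first == 253) ? 3 : 8;
  if ((size_t)(end - *pos) < 1 + extra) return false;  // would read past end
  uint64_t v = 0;
  for (size_t i = 0; i < extra; i++)       // little-endian payload
    v |= (uint64_t)(*pos)[1 + i] << (8 * i);
  *out = v;
  *pos += 1 + extra;
  return true;
}

int main() {
  const uint8_t buf[] = { 252, 0x34 };     // truncated 2-byte length
  const uint8_t* p = buf;
  uint64_t v;
  printf("%s\n", read_packed_length(&p, buf + sizeof buf, &v)
                     ? "ok" : "rejected: would read out of buffer");
}
```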
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following assert when ASAN is enabled:

  Query_log_event::Query_log_event(const char*, uint, const Format_description_log_event*, Log_event_type):
  Assertion `(pos) + (6) <= (end)' failed

Fix:
===
**Part9: Removed an additional DBUG_ASSERT**
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:

  AddressSanitizer: SEGV on unknown address
  The signal is caused by a READ memory access.
  User_var_log_event::User_var_log_event(char const*, unsigned int, Format_description_log_event const*)

Implemented part of the upstream patch mysql/mysql-server@a3a497ccf7ecacc900551fb1e47ea4078b45c351.

Fix:
===
**Part8: Added checks to avoid reading out of buffer limits**
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the ASAN error "heap-buffer-overflow on address" and sometimes asserts:

  Table_map_log_event::Table_map_log_event(const char*, uint, const Format_description_log_event*)
  Assertion `m_field_metadata_size <= (m_colcnt * 2)' failed.

Fix:
===
**Part7: Avoid reading out of the buffer**

Converted the debug assert into error-handler code.
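A sketch of the assert-to-error-path conversion, with invented names (TableMapSketch, check_field_metadata): a corrupted event must be rejected in release builds too, not merely trip a debug assertion:

```cpp
#include <cstdio>

struct TableMapSketch {
  unsigned m_colcnt;
  unsigned m_field_metadata_size;
  bool valid;
};

// The release-build replacement for the former DBUG_ASSERT: flag the
// event as malformed and let the caller bail out.
bool check_field_metadata(TableMapSketch* ev) {
  if (ev->m_field_metadata_size > ev->m_colcnt * 2) {
    ev->valid = false;
    return false;
  }
  return true;
}

int main() {
  TableMapSketch ev{3, 100, true};  // metadata claims more than 2*colcnt
  printf("%s\n", check_field_metadata(&ev) ? "ok" : "malformed event rejected");
}
```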
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:

  AddressSanitizer: heap-buffer-overflow on address 0x60400002acb8
  Load_log_event::copy_log_event(char const*, unsigned long, int, Format_description_log_event const*)

Fix:
===
**Part6: Moved the event_len validation to the beginning of the copy_log_event function**
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error:

  AddressSanitizer: heap-buffer-overflow on address
  String::append(char const*, unsigned int)
  Query_log_event::pack_info(Protocol*)

Fix:
===
**Part5: Added a check to catch the buffer overflow**
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following ASAN error, a heap-buffer-overflow within "my_strndup" in Rotate_log_event:

  my_strndup /mysys/my_malloc.c:254
  Rotate_log_event::Rotate_log_event(char const*, unsigned int, Format_description_log_event const*)

Fix:
===
**Part4: Improved the check for event_len validation**
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following crash when ASAN is enabled:

  SEGV on unknown address
  in inline_mysql_mutex_destroy
  in my_bitmap_free
  in Update_rows_log_event::~Update_rows_log_event()

Fix:
===
**Part3: Initialize m_cols_ai.bitmap to NULL**
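The destructor-safety pattern behind Part3 as a standalone sketch; BitmapSketch and bitmap_free are invented stand-ins for MY_BITMAP and my_bitmap_free:

```cpp
#include <cstdlib>

struct BitmapSketch {
  unsigned* bitmap = nullptr;  // the fix: never left indeterminate
};

void bitmap_free(BitmapSketch* b) {
  free(b->bitmap);             // free(NULL) is a harmless no-op
  b->bitmap = nullptr;
}

int main() {
  BitmapSketch m_cols_ai;      // constructor bailed out before allocating...
  bitmap_free(&m_cols_ai);     // ...yet destruction-time cleanup stays safe
}
```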
Sujatha authored
Problem:
========
SHOW BINLOG EVENTS FROM <pos> reports the following assert when ASAN is enabled:

  Rows_log_event::Rows_log_event(const char*, uint, const Format_description_log_event*):
  Assertion `var_header_len >= 2'

Implemented part of the upstream patch mysql/mysql-server@a3a497ccf7ecacc900551fb1e47ea4078b45c351.

Fix:
===
**Part2: Avoid reading out of buffer limits**
- 09 Oct, 2019 2 commits
Aleksey Midenkov authored
TABLE::mark_columns_needed_for_update(): use_all_columns() assigns the pointer of all_set into read_set and write_set, but this is not good, since all_set is changed later by TABLE::mark_columns_used_by_index_no_reset(). Do column_bitmaps_signal() whenever we change read_set/write_set.
Marko Mäkelä authored
calc_field_event_length(): For type=MYSQL_TYPE_BLOB and meta==0, return 0 instead of *ptr+1. This was flagged by -Wimplicit-fallthrough.
- 08 Oct, 2019 2 commits
Sachin Setiya authored
MDEV-20574: Position of events reported by mysqlbinlog is wrong with encrypted binlogs; SHOW BINLOG EVENTS reports the correct one.

Analysis:
mysqlbinlog output for an encrypted binary log:

  #Q> insert into tab1 values (3,'row 003')
  #190912 17:36:35 server id 10221  end_log_pos 980 CRC32 0x53bcb3d3  Table_map: `test`.`tab1` mapped to number 19
  # at 940
  #190912 17:36:35 server id 10221  end_log_pos 1026 CRC32 0xf2ae5136  Write_rows: table id 19 flags: STMT_END_F

Here we can see that the Table_map_log_event ends at 980, but the next event starts at 940. The reason for this is that we do not send the START_ENCRYPTION_EVENT to the slave.

Solution:
Send the Start_encryption_log_event as an Ignorable_log_event to the slave (mysqlbinlog), so that mysqlbinlog can update its log_pos. Since the slave can request multiple FORMAT_DESCRIPTION_EVENTs while the master does not have them, we only update the slave's master pos when the master actually has the FORMAT_DESCRIPTION_EVENT. Similar logic should be applied for the START_ENCRYPTION_EVENT.

Also added a test case for when a new server reads data from an old server which does not send the START_ENCRYPTION_EVENT to the slave.

Master/slave upgrade scenario: when the slave is updated first, the slave will have the extra logic for handling the START_ENCRYPTION_EVENT, but the master will not be sending the START_ENCRYPTION_EVENT, so there will be no issue. When the master is updated first, it will send the START_ENCRYPTION_EVENT to the slave, but the slave will ignore this event in queue_event.
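A hedged sketch of the ignorable-event idea, with all names invented: a consumer that does not understand an event type still advances its position past it when the event is flagged ignorable, so later "# at" offsets stay consistent:

```cpp
#include <cstdio>

struct EventSketch {
  unsigned end_log_pos;  // position right after this event
  bool ignorable;        // LOG_EVENT_IGNORABLE_F in the real format
};

// An unknown mandatory event is an error; an unknown ignorable event is
// skipped, but the position bookkeeping still advances.
unsigned apply_event(unsigned pos, const EventSketch& ev, bool understood) {
  if (!understood && !ev.ignorable) {
    fprintf(stderr, "unknown mandatory event at %u\n", pos);
    return pos;                 // the real code would raise an error
  }
  return ev.end_log_pos;        // applied or skipped, the position advances
}

int main() {
  unsigned pos = 900;
  pos = apply_event(pos, {940, true}, false);   // Start_encryption: skipped
  pos = apply_event(pos, {980, false}, true);   // Table_map at correct pos
  printf("position: %u\n", pos);                // no 980-vs-940 mismatch
}
```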
Sachin authored
calc_field_event_length() should accurately calculate the size of BLOB-type fields: instead of returning just the bytes taken by the length, it should return the length bytes plus the actual length.
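An illustrative sketch of the corrected computation, covering this commit and the meta==0 fix above; blob_event_length is a hypothetical stand-in for the BLOB branch of calc_field_event_length:

```cpp
#include <cstdint>
#include <cstdio>

// `meta` is the number of bytes in the length prefix (1, 2, 3 or 4).
uint32_t blob_event_length(const uint8_t* ptr, unsigned meta) {
  if (meta == 0) return 0;              // no metadata: nothing to read safely
  uint32_t payload = 0;
  for (unsigned i = 0; i < meta; i++)   // little-endian length prefix
    payload |= (uint32_t)ptr[i] << (8 * i);
  return meta + payload;                // prefix bytes + actual data bytes
}

int main() {
  const uint8_t blob[] = { 0x05, 'h', 'e', 'l', 'l', 'o' };
  printf("%u\n", blob_event_length(blob, 1));  // 6, not just 1
}
```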
- 15 Jul, 2019 1 commit
Sujatha authored
MDEV-11154: The Write_on_release_cache (log_event.cc) function will not write "COMMIT" when using "mysqlbinlog ... | mysql ..."

Problem:
=======
When executing the command "mysqlbinlog --read-from-remote-server --host='xx.xx.xx.xx' --port=3306 --user=xxx --password=xxx --database=mysql --to-last-log mysql-bin.000001 --start-position=1098699 --stop-never | mysql -uxxx -pxxx", we found that the last data read from the remote server could not commit.

Analysis:
========
The purpose of 'Write_on_release_cache' is that the contents of the cache are automatically written to a dedicated result file on destruction. The flush operation on the result file is controlled by the flag 'FLUSH_F'. Events which require a forced flush upon their destruction have to enable 'Write_on_release_cache::FLUSH_F'. At present the 'FLUSH_F' flag is defined as an enum, as shown below:

  enum flag { FLUSH_F };

Since 'FLUSH_F' is the first member and has no initializer, it gets the default value '0'. Because of this, the following flush condition never succeeds:

  if (m_flags & FLUSH_F)
    fflush(m_file);

At present the file gets flushed only during the my_fclose(result_file) operation. When continuous streaming is enabled through the --stop-never option, it never gets flushed, and hence events are not replicated.

Fix:
===
Initialize the enum value to a non-zero value.
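The root cause fits in a few lines; this standalone reproduction contrasts the zero-valued enumerator with the fixed non-zero one (the _BROKEN/_FIXED names are invented for the illustration):

```cpp
#include <cstdio>

enum flag_broken { FLUSH_F_BROKEN };       // first enumerator: value 0
enum flag_fixed  { FLUSH_F_FIXED = 1 };    // the MDEV-11154 fix

int main() {
  unsigned m_flags = FLUSH_F_FIXED;
  if (m_flags & FLUSH_F_BROKEN)            // x & 0 is always 0
    printf("never reached\n");
  if (m_flags & FLUSH_F_FIXED)             // now actually testable
    printf("flushed\n");
}
```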
- 20 May, 2019 1 commit
Sujatha authored
Problem:
========
The test now fails with the following trace:

  CURRENT_TEST: rpl.rpl_parallel_temptable
  --- /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.result
  +++ /mariadb/10.4/mysql-test/suite/rpl/r/rpl_parallel_temptable.reject
  @@ -194,7 +194,6 @@
   30 conservative
   31 conservative
   32 optimistic
  -33 optimistic

Analysis:
=========
The part of the test which fails with a result content mismatch is given below:

  CREATE TEMPORARY TABLE t4 (a INT PRIMARY KEY) ENGINE=InnoDB;
  INSERT INTO t4 VALUES (32);
  INSERT INTO t4 VALUES (33);
  INSERT INTO t1 SELECT a, "optimistic" FROM t4;

with slave_parallel_mode=optimistic. The expectation of the above test script is that INSERT ... SELECT should read both 32 and 33 and populate table 't1', but this expectation fails occasionally.

All three INSERT statements are handed over to three different slave parallel workers. Temporary tables are not safe for parallel replication: they were designed to be visible to one thread only, so they have no table locking, and thus there is no protection against two conflicting transactions committing in parallel, and things like that. So anything that uses temporary tables is serialized with anything before it when using parallel replication, by means of a "wait_for_prior_commit" function call. This ensures that each such transaction is executed sequentially.

But there exists a code path in which the above wait does not happen. Because of this, at times the INSERT ... SELECT does not wait for the INSERT (33) to complete; it finishes its execution and enters the commit stage. Hence only row 32 is found in those cases, resulting in the test failure.

The wait needs to be added within the "open_temporary_table" call. Within "open_temporary_table", each thread tries to open the temporary table in three different ways:

case 1: Find a temporary table which is already in use, via
        find_temporary_table(tl) && wait_for_prior_commit().
case 2: If the above failed, look for a temporary table which is marked free
        for reuse, via find_and_use_tmp_table(tl, &table). This internally
        calls "wait_for_prior_commit()" if a table is found.
case 3: If none of the above succeed, open a new table handle from the table
        share:

  if (!table && (share= find_tmp_table_share(tl)))
  {
    table= open_temporary_table(share, tl->get_table_name(), true);
  }

At present the "wait_for_prior_commit" happens only in cases 1 and 2.

Fix:
====
On the slave, add a call to "wait_for_prior_commit" for case 3 as well; see the sketch after this message. This wait on the slave solves the issue.

A more detailed fix would be to mark temporary tables as not safe for parallel execution on the master side: there, mark the Gtid_log_event-specific flag FL_TRANSACTIONAL as false at all times, so that these transactions are not scheduled in parallel.
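As referenced in the fix above, here is a minimal sketch of the ordering guarantee, with wait_for_prior_commit reduced to a toy condition-variable wait; the real server function is far more involved:

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>

std::mutex mu;
std::condition_variable cv;
long last_committed = 0;

// Stand-in for wait_for_prior_commit(): block until every transaction
// ordered before `my_seq` has committed.
void wait_for_prior_commit(long my_seq) {
  std::unique_lock<std::mutex> lk(mu);
  cv.wait(lk, [&] { return last_committed >= my_seq - 1; });
}

void signal_commit(long seq) {
  { std::lock_guard<std::mutex> lk(mu); last_committed = seq; }
  cv.notify_all();
}

int main() {
  signal_commit(1);           // INSERT (32) commits
  signal_commit(2);           // INSERT (33) commits
  wait_for_prior_commit(3);   // case 3 may now open a fresh tmp-table handle
  printf("case 3 serialized behind prior transactions\n");
}
```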
- 11 May, 2019 1 commit
Vicențiu Ciorbaru authored
* Update wrong zip-code
- 25 Apr, 2019 2 commits
Venkatesh Venugopal authored
MySQL abnormally exits on the KILL command.

Fix
----
The abnormal exit has been fixed.

RB: 20971, 21129, 21237
Oleksandr Byelkin authored
Set table in row ID position mode before using this function.
- 04 Mar, 2019 1 commit
Alexander Barkov authored
- 14 Feb, 2019 1 commit
Oleksandr Byelkin authored
- 11 Feb, 2019 1 commit
Andrei Elkin authored
- 31 Jan, 2019 1 commit
Oleksandr Byelkin authored
- 25 Jan, 2019 3 commits
Sergei Golubchik authored
Andrei Elkin authored
MDEV-17803: ulonglongization of table_mapping entry::table_id, to fix Windows compilation in particular.
Sergei Golubchik authored
- 24 Jan, 2019 2 commits
Andrei Elkin authored
The problem was originally stated in http://bugs.mysql.com/bug.php?id=82212

The size of a base64-encoded Rows_log_event exceeds its vanilla byte representation by a factor of 4/3. When a binlogged event's size is about 1GB, mysqlbinlog generates a BINLOG query that can't be sent out due to its size.

This is fixed by fragmenting the BINLOG argument C-string into (approximate) halves when the base64-encoded event exceeds 1GB in size. In such a case mysqlbinlog puts out

  SET @binlog_fragment_0='base64-encoded-fragment_0';
  SET @binlog_fragment_1='base64-encoded-fragment_1';
  BINLOG @binlog_fragment_0, @binlog_fragment_1;

to represent a big BINLOG statement. For prompt memory release, the BINLOG handler is made to reset the BINLOG argument user variables in the middle of processing, as if @binlog_fragment_{0,1} = NULL were assigned. Notice that two fragments are enough, though the client and server may still need to tweak their @@max_allowed_packet to accommodate the fragment size (which they would have to do anyway with a greater number of fragments, should that be desired).

On the lower level, the following changes are made:

Log_event::print_base64() still calls the encoder and stores the encoded data into a cache, but now *without* doing any formatting. The latter is left for the time when the cache is copied to an output file (e.g. mysqlbinlog output). The no-formatting behavior is also reflected by the change in the meaning of the last argument, which now specifies whether to cache the encoded data.

Rows_log_event::print_helper() is made to invoke a specialized fragmenting cache-to-file copying function, copy_cache_to_file_wrapped(), which takes care of the fragmenting and also optionally wraps the encoded strings (fragments) into SQL stanzas.

my_b_copy_to_file() is refactored into my_b_copy_all_to_file(). The former function is generalized to accept a limit argument to constrain the copying, and no longer reinitializes the cache into reading mode. The limit has no effect on a fully read cache.
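A compact sketch of the fragmenting scheme; print_binlog_stmt and its stand-in limit parameter (1GB in the real code) are invented, but the emitted SET/BINLOG form matches the commit message:

```cpp
#include <cstdio>
#include <string>

void print_binlog_stmt(FILE* out, const std::string& b64, size_t limit) {
  if (b64.size() <= limit) {
    fprintf(out, "BINLOG '%s'/*!*/;\n", b64.c_str());
    return;
  }
  size_t half = b64.size() / 2;  // two fragments are always sufficient
  fprintf(out, "SET @binlog_fragment_0='%s'/*!*/;\n",
          b64.substr(0, half).c_str());
  fprintf(out, "SET @binlog_fragment_1='%s'/*!*/;\n",
          b64.substr(half).c_str());
  fprintf(out, "BINLOG @binlog_fragment_0, @binlog_fragment_1/*!*/;\n");
}

int main() {
  print_binlog_stmt(stdout, std::string(40, 'A'), 16);  // forces two fragments
}
```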
Marko Mäkelä authored
Use the same data type 'ulong' to avoid type mismatch on Windows and on 32-bit systems. FIXME: The correct data type should probably be 64-bit.
- 17 Oct, 2018 1 commit
Sachin authored
consistently) on Replication Slave

lower_case_table_names 0 -> 1 replication works; it's safe as long as the mapping of mixed-case names to the lower-case ones is one-to-one.
- 07 Oct, 2018 1 commit
Kristian Nielsen authored
This would happen especially in optimistic parallel replication, where there is a good chance that a transaction will be rolled back (due to conflicts) after it has executed record_gtid(). If the transaction did any deletions of old rows as part of record_gtid(), those deletions will be undone as well, and the code did not properly ensure that the deletions would be re-tried.

This patch makes record_gtid() remember the list of deletions done as part of a transaction. Then in rpl_slave_state::update(), when the changes have been committed, we discard the list. However, in case of error and rollback, in cleanup_context() we will instead put the list back into rpl_global_gtid_slave_state, so that the deletions will be re-tried later.

Probably fixes part of the cause of MDEV-12147 as well.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
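A hedged sketch of that bookkeeping, with all names invented (the real lists live in rpl_slave_state and rpl_global_gtid_slave_state): commit discards the remembered deletions, rollback queues them for retry:

```cpp
#include <cstdio>
#include <vector>

struct PendingDeletes {
  std::vector<long> rows;                // old gtid_slave_pos rows deleted
};

std::vector<long> global_retry_list;     // survives the failed transaction

void record_gtid(PendingDeletes* tx) { tx->rows.push_back(42); }

void on_commit(PendingDeletes* tx) { tx->rows.clear(); }  // work is durable

void on_rollback(PendingDeletes* tx) {
  // The deletes were undone with the transaction: queue them for retry.
  global_retry_list.insert(global_retry_list.end(),
                           tx->rows.begin(), tx->rows.end());
  tx->rows.clear();
}

int main() {
  PendingDeletes tx;
  record_gtid(&tx);
  on_rollback(&tx);   // e.g. an optimistic parallel-replication conflict
  printf("%zu deletion(s) queued for retry\n", global_retry_list.size());
}
```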
- 24 Jul, 2018 1 commit
Jan Lindström authored
The problem was that binlog_hton was not fully initialized when needed, i.e. when wsrep_on = true.
- 30 Jun, 2018 1 commit
Sergei Golubchik authored
RBR, not versioned -> versioned: do it for all write_row events, not only for WRITE_ROWS_EVENT_V1.