- 05 Aug, 2017 1 commit
-
Elena Stepanova authored
Avoid race conditions between the test flow and events by waiting for all started events to finish execution after switching off the event scheduler.
-
- 03 Aug, 2017 1 commit
-
Jan Lindström authored
Always read the full page 0 to determine whether the tablespace contains encryption metadata. Tablespaces that are page compressed, or page compressed and encrypted, do not get a checksum comparison, as the checksum does not exist. Encrypted tables use the checksum verification written for encrypted tables; normal tables use the normal method.

buf_page_is_checksum_valid_crc32, buf_page_is_checksum_valid_innodb, buf_page_is_checksum_valid_none: Add innochecksum logging to file.
buf_page_is_corrupted: Remove ib_logf and page_warn_strict_checksum calls in innochecksum compilation. Add innochecksum logging to file.
fil0crypt.cc, fil0crypt.h: Modify to be usable in the innochecksum compilation and move fil_space_verify_crypt_checksum to the end of the file. Add innochecksum logging to file.
univ.i: Add the innochecksum strict_verify, log_file and cur_page_num variables as extern.
page_zip_verify_checksum: Add innochecksum logging to file.
innochecksum.cc: Many changes, most notably the ability to read encryption metadata from page 0 of the tablespace.

Added a test case where we intentionally corrupt:
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION (encryption key version)
FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION+4 (post encryption checksum)
FIL_DATA+10 (data)
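A minimal sketch (hypothetical helper names, not the actual fil0crypt.cc/innochecksum.cc code, which does far more validation) of what reading the encryption metadata involves: the key version and the post-encryption checksum live in the FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION field of the page header, at the same offsets the test case above corrupts.

```cpp
#include <cstdint>
#include <cstdio>

// Conventional FIL header offset of FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION.
static const unsigned KEY_VERSION_OFFSET = 26;

// InnoDB stores integers big-endian on disk.
static std::uint32_t read_be32(const unsigned char* p) {
  return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16)
       | (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
}

// Returns true if the page carries an encryption key version, i.e. the
// tablespace contains encryption metadata that checksum verification
// must take into account.
bool page_has_encryption_metadata(const unsigned char* page) {
  std::uint32_t key_version   = read_be32(page + KEY_VERSION_OFFSET);
  std::uint32_t post_enc_csum = read_be32(page + KEY_VERSION_OFFSET + 4);
  std::printf("key_version=%u post_encryption_checksum=%u\n",
              key_version, post_enc_csum);
  return key_version != 0;
}
```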
-
- 01 Aug, 2017 1 commit
-
Alexey Botchkov authored
Don't set the execute bit (+x) on /bin/wsrep_sst_common when installing.
-
- 20 Jul, 2017 1 commit
-
Jan Lindström authored
Crashes with innodb_page_size=64K; does not crash at <= 32K. The problem was that when a blob record that was earlier < 16K is enlarged by an update so that its length is > 16K, it should be stored externally. However, that was not enforced when page-size = 64K (note that 16K+1 < 64K/2, i.e. half of the B-tree leaf page). btr_cur_optimistic_update: limit the maximum record size to 16K, or in the REDUNDANT row format to 16K-1.
-
- 13 Jul, 2017 1 commit
-
Vladislav Vaintroub authored
The addr2line utility, optionally used to output stack traces, relies on a correct my_progname, which is initialized from argv[0] in the main function. Thus, changing argv[0] can confuse the stacktrace output.
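One robust pattern for the dependency described above (a sketch with hypothetical names, not necessarily the fix applied here): copy argv[0] before anything can overwrite it, so the path handed to addr2line stays valid.

```cpp
#include <cstring>
#include <cstdlib>

const char* my_progname_copy = nullptr;

// Called first thing in main(). Duplicating the string means that later
// writes to argv[0] (e.g. to change the process title shown by ps) can
// no longer alter the executable path used for symbol resolution.
void init_progname(const char* argv0) {
  my_progname_copy = strdup(argv0);
}
```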
-
- 12 Jul, 2017 1 commit
-
Jan Lindström authored
In all InnoDB row formats, the pointers or lengths stored in the record header can be at most 14 bits, that is, count up to 16383. In ROW_FORMAT=REDUNDANT, this limits the maximum possible record length to 16383 bytes. In other row formats, it merely limits the maximum length of variable-length fields. When MySQL 5.7 introduced innodb_page_size=32k and 64k, the maximum record length was limited to 16383 bytes (16383 rather than 16384, to be able to distinguish it from a record whose length is 0 bytes). This change is present in MariaDB Server 10.2.

btr_cur_optimistic_update(): Restrict the maximum record size to 16K-1 for REDUNDANT and 64K page size.
dict_index_too_big_for_tree(): The maximum allowed record size is half a B-tree page, or 16K (-1 for REDUNDANT) for 64K page size.
convert_error_code_to_mysql(): Fix the error message to print the correct limits.
my_error_innodb(): Fix the error message to print the correct limits.
page_zip_rec_needs_ext(): The record size was already restricted to 16K. Restrict REDUNDANT to 16K-1.
rem0rec.h: Introduce REDUNDANT_REC_MAX_DATA_SIZE (16K-1) and COMPRESSED_REC_MAX_DATA_SIZE (16K).
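A compact sketch of the limit arithmetic these changes implement (a hypothetical helper; the real checks live in btr_cur_optimistic_update(), dict_index_too_big_for_tree() and page_zip_rec_needs_ext(), and distinguish more cases):

```cpp
#include <algorithm>
#include <cstddef>

// The constants introduced in rem0rec.h by this change.
const std::size_t REDUNDANT_REC_MAX_DATA_SIZE  = 16383; // 16K-1, fits 14 bits
const std::size_t COMPRESSED_REC_MAX_DATA_SIZE = 16384; // 16K

// Maximum record size on a B-tree leaf page: half a page as before, but
// never more than what the 14-bit record-header fields can address.
std::size_t max_rec_size(std::size_t page_size, bool redundant) {
  std::size_t half_page = page_size / 2;
  std::size_t hard_cap  = redundant ? REDUNDANT_REC_MAX_DATA_SIZE
                                    : COMPRESSED_REC_MAX_DATA_SIZE;
  return std::min(half_page, hard_cap);
}
// With 64K pages, max_rec_size(65536, false) == 16384: a 16K+1 byte BLOB
// must be stored externally even though it is far below half the page,
// which is exactly the case the 20 Jul, 2017 commit above enforces.
```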
-
- 07 Jul, 2017 1 commit
-
Sergei Golubchik authored
-
- 06 Jul, 2017 5 commits
-
Sergei Golubchik authored
(10.0+ changes, as specified in the MDEV) and remove an unused variable (compiler warning)
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Marko Mäkelä authored
The option innodb_log_compressed_pages was contributed by Facebook to MySQL 5.6. It was disabled in the 5.6.10 GA release due to problems that were fixed in 5.6.11, which is when the option was enabled. The option was set to innodb_log_compressed_pages=ON by default (disabling the feature), because safety was considered more important than speed.

The option innodb_log_compressed_pages=OFF can *CORRUPT* ROW_FORMAT=COMPRESSED tables on crash recovery if the zlib deflate function behaves differently (producing a different amount of compressed data) from how it behaved when the redo log records were written (prior to the crash recovery).

In MDEV-6935, the default value was changed to innodb_log_compressed_pages=OFF. This is inherently unsafe, because there are very many different environments where MariaDB can be running, using different zlib versions. While zlib can decompress data just fine, there are no guarantees that different versions will always compress the same data to exactly the same size. To avoid problems related to zlib upgrades or version mismatch, we must use a safe default setting. This will reduce the write performance for users of ROW_FORMAT=COMPRESSED tables.

If you configure innodb_log_compressed_pages=OFF, please make sure that you will always cleanly shut down InnoDB before upgrading the server or zlib.
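The risk hinges on a zlib property that is easy to overlook: the decompressed output is fully determined, but the compressed *size* is not guaranteed stable across zlib versions. A small stand-alone probe (assuming only the public zlib API) prints the number that crash recovery with innodb_log_compressed_pages=OFF implicitly depends on:

```cpp
#include <zlib.h>
#include <cstdio>
#include <vector>

int main() {
  std::vector<unsigned char> page(16384, 'x');   // stand-in page image
  std::vector<unsigned char> out(compressBound(page.size()));
  uLongf out_len = out.size();
  if (compress2(out.data(), &out_len, page.data(), page.size(), 6) != Z_OK)
    return 1;
  // Recovery that re-compresses pages is only safe if this number is
  // identical under the zlib version that originally wrote the redo log.
  std::printf("compressed size with this zlib: %lu bytes\n",
              static_cast<unsigned long>(out_len));
  return 0;
}
```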
-
- 05 Jul, 2017 4 commits
-
Elena Stepanova authored
The test did not correctly handle a possible difference in the system timezone. The fix is to remove the non-functional setting of the local time_zone and instead allow the timestamp replacement to work with any date/time.
-
Eugene Kosov authored
Patch submitted by Eugene Kosov <claprix@yandex.ru>, comments added by committer. Signed-off-by: Vicențiu Ciorbaru <vicentiu@mariadb.org>
-
Marko Mäkelä authored
When using innodb_page_size=16k, InnoDB tables that were created in MariaDB 10.1.0 to 10.1.20 with PAGE_COMPRESSED=1 and PAGE_COMPRESSION_LEVEL=2 or PAGE_COMPRESSION_LEVEL=3 would fail to load.

fsp_flags_is_valid(): When using innodb_page_size=16k, use a stricter check for .ibd files, on the assumption that nobody would try to use files with a different page size.
-
Oleksandr Byelkin authored
MDEV-10146: Wrong result (or questionable result and behavior) with aggregate function in uncorrelated SELECT subquery. When an outer reference is resolved in a VIEW, it should still mark the aggregate function resolving border.
-
- 04 Jul, 2017 2 commits
-
Daniel Bartholomew authored
-
Daniel Black authored
Found by Coverity (id 971843). Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
-
- 03 Jul, 2017 8 commits
-
Daniel Black authored
make_lock_and_pin didn't release the lock, so we should. Found by Coverity (id 972095).
-
Daniel Black authored
Release the lock on the error path. Found by Coverity (id 972093).
-
Daniel Black authored
translog_stop_writing doesn't release a lock (though it does do a DBUG_ASSERT). Better to just release the lock. Found by Coverity (id 972092).
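The three Coverity fixes above share one shape: an early-return error path that leaves a mutex held. A generic sketch of the pattern (hypothetical names, not the actual Maria/translog code):

```cpp
#include <pthread.h>

static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

int write_log_record(bool error_path) {
  pthread_mutex_lock(&log_lock);
  if (error_path) {
    // The piece Coverity flagged as missing: without this unlock, the
    // early error return leaves log_lock held forever.
    pthread_mutex_unlock(&log_lock);
    return 1;
  }
  /* ... normal work under the lock ... */
  pthread_mutex_unlock(&log_lock);
  return 0;
}
```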
-
Daniel Black authored
Found by Coverity scan (id 92087).
-
Monty authored
If opening the relay log failed, we got an assertion in MYSQL_BIN_LOG::close. This only affected DEBUG builds.
-
Kristian Nielsen authored
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in parallel with other transactions, so they need to be marked as "ddl" in the binlog. This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary tables can also be created and dropped inside a BEGIN...END transaction, and such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE statement that is emitted implicitly when a client connection is closed. So this patch adds the ddl mark for the missing cases. The difference from Kristian's original patch is mainly a fix in mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags over the temporary commit.
-
Daniel Black authored
max_data_file_size is overwritten in the next statement, so this assignment never got used. Found by Coverity (ID 1409644).
-
Daniel Black authored
CID 971836 (#1 of 1): Same on both sides (CONSTANT_EXPRESSION_RESULT) pointless_expression: The expression val != end && val != end does not accomplish anything because it evaluates to either of its identical operands, val != end.
-
- 02 Jul, 2017 2 commits
-
Andrei Elkin authored
The problem was that in a circular replication setup the master remembers the position of events it has generated itself when reading from a slave. If there are no new events in the queue from the slave, a Gtid_list_log_event is generated to remember the last skipped event. The problem happens if there is a network delay and we generate a Gtid_list_log_event in the middle of a transaction, in which case there will be an implicit commit and a new transaction with server_id=0 will be logged. The fix was to not generate any Gtid_list_log_events in the middle of a transaction.
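The invariant the fix restores fits in one line; a minimal sketch with hypothetical names (the real decision is made in the binlog code that feeds the slave queue):

```cpp
// Emit a Gtid_list_log_event to remember the last skipped event only at a
// transaction boundary; emitting it mid-transaction implicitly commits and
// logs a spurious server_id=0 transaction, as described above.
bool may_emit_gtid_list_event(bool in_transaction, bool slave_queue_empty) {
  return slave_queue_empty && !in_transaction;
}
```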
-
Monty authored
This could happen when the client connection dies while sending a progress report packet. Fixed by not raising any errors when sending progress packets.
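A minimal sketch (hypothetical names, not the real net layer) of the chosen behaviour: progress packets are best-effort, so a failed write is dropped rather than turned into a statement error.

```cpp
#include <unistd.h>
#include <cstddef>

// If the client died mid-statement, the progress packet is simply lost
// and the statement keeps running; no error is raised.
void send_progress_packet(int client_fd, const char* buf, std::size_t len) {
  (void) write(client_fd, buf, len);   // ignore any failure: best-effort
}
```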
-
- 01 Jul, 2017 1 commit
-
Elena Stepanova authored
-
- 30 Jun, 2017 8 commits
-
Jacob Mathew authored
-
Jacob Mathew authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
MDEV-11646 main.myisam, maria.maria, main.mix2_myisam, main.myisampack, main.mrr_icp_extra fail in buildbot with valgrind (Syscall param pwrite64(buf) points to uninitialised byte(s))

If the table has a varchar column and a forced fixed row format (as in varchar.inc), Field_varstring::store() will only store the actual number of bytes, not padded, in record[0]. That is, on inserts a part of record[0] can be uninitialized.

Fix: initialize record[0] when a TABLE is created. It doesn't matter what kind of garbage is in this unused/invisible part of the record, as long as it's not random memory contents (which could contain sensitive data).
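A minimal sketch (hypothetical struct, not the real TABLE) of the fix: zero the row buffer once when the TABLE is created, so the bytes a fixed-format varchar leaves unwritten are defined memory rather than random heap contents.

```cpp
#include <cstring>
#include <cstddef>

struct TableSketch {
  unsigned char* record0;    // row buffer of reclength bytes
  std::size_t    reclength;
};

void init_record_buffer(TableSketch* t) {
  // Any constant would do for correctness; zeroing guarantees that a
  // later pwrite() of record[0] never leaks uninitialized (possibly
  // sensitive) memory, and keeps Valgrind quiet.
  std::memset(t->record0, 0, t->reclength);
}
```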
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Fix the binding of databases_file. It was incorrectly mapped to OPT_XTRA_TABLES_FILE. Remove some unused options and variables.
-
Sergei Golubchik authored
-
- 29 Jun, 2017 3 commits
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Vicențiu Ciorbaru authored
The tmp variable now points to the str->ptr() buffer, not the tmp_value buffer. Comparing the pointers otherwise can lead to false assertion errors, as we don't know where the buffers are allocated with respect to each other.
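A small illustration (not the MariaDB code) of the pitfall: an identity check on a pointer is only meaningful against the buffer the data could actually have been written into, and here that may be either of two allocations.

```cpp
#include <cassert>

// After a copy that may reallocate, `result` can point into either the
// caller-supplied tmp_value buffer or the string's own buffer; asserting
// identity with just one of them fires spuriously depending on where the
// allocator happened to place things.
void check_result(const char* result, const char* str_buf,
                  const char* tmp_value_buf) {
  assert(result == str_buf || result == tmp_value_buf);
}
```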
-