- 04 Feb, 2024 2 commits
-
-
Sergei Golubchik authored
Implement --ssl-fp and --ssl-fplist for all clients.

--ssl-fp takes one certificate fingerprint, for example
00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33
--ssl-fplist takes a path to a file with one fingerprint per line.

If the server's certificate fingerprint matches --ssl-fp or is found in the file, the certificate is considered verified. If a fingerprint is specified but doesn't match, the connection is aborted regardless of --ssl-verify-server-cert.
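A rough illustration of such a fingerprint comparison (a minimal sketch, not the actual client code; the helper names and the hex-with-colons normalization are assumptions):

```cpp
#include <cctype>
#include <string>

// Hypothetical helper: normalize "AA:BB:cc" and "aabbcc" to the same form
// so that a fingerprint given via --ssl-fp (or a line from --ssl-fplist)
// can be compared to the one computed from the server certificate.
static std::string normalize_fp(const std::string &fp)
{
  std::string out;
  for (char c : fp)
    if (c != ':')
      out.push_back((char) std::tolower((unsigned char) c));
  return out;
}

// Returns true if the server certificate fingerprint matches the one
// the user passed on the command line.
static bool fingerprint_matches(const std::string &cert_fp,
                                const std::string &option_fp)
{
  return normalize_fp(cert_fp) == normalize_fp(option_fp);
}
```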
-
Sergei Golubchik authored
If the client enabled --ssl-verify-server-cert, the server certificate is verified as follows:

* If --ssl-ca or --ssl-capath were specified, the cert must have a proper signature by the specified CA (or a CA in the path) and the cert's hostname must match the server's hostname. If the cert isn't signed or the hostname is wrong, the connection is aborted.
* If MARIADB_OPT_TLS_PEER_FP was used and the fingerprint matches, the connection is allowed; if it doesn't match, it is aborted.
* If the connection uses a unix socket or named pipes, it's allowed (consistent with the server's --require-secure-transport behavior).

Otherwise the cert is still in doubt; we don't know whether we can trust it or whether there's an active MitM in progress.

* If the user has provided no password, or the server requested an authentication plugin that sends the password in cleartext, the connection is aborted.
* Perform the authentication. If the server accepts the password, it'll send SHA2(scramble || password hash || cert fingerprint) with the OK packet.
* Verify the SHA2 digest. If it matches, the connection is allowed; otherwise it's aborted. (A sketch of this digest check follows below.)
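A minimal sketch of the digest comparison from the last two steps, assuming OpenSSL's SHA256(); the concatenation shown here is an illustration of the idea, not the exact wire format:

```cpp
#include <openssl/sha.h>
#include <cstring>
#include <string>

// Compute SHA2(scramble || password_hash || cert_fingerprint) and compare
// it with the digest the server sent in the OK packet.
static bool verify_server_digest(const std::string &scramble,
                                 const std::string &password_hash,
                                 const std::string &cert_fingerprint,
                                 const unsigned char *digest_from_server)
{
  std::string buf = scramble + password_hash + cert_fingerprint;
  unsigned char expected[SHA256_DIGEST_LENGTH];
  SHA256(reinterpret_cast<const unsigned char *>(buf.data()), buf.size(),
         expected);
  return std::memcmp(expected, digest_from_server,
                     SHA256_DIGEST_LENGTH) == 0;
}
```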
-
- 03 Feb, 2024 6 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
char is a character, uchar is an octet. Casts removed (or added) as needed.
-
Sergei Golubchik authored
Not default_mysqld.cnf. The latter has only server settings; it misses the mtr-specific client configuration. The exception is spider, which doesn't use the mysqld.1 server that default_my.cnf starts automatically. Spider tests have to include both default_mysqld.cnf and default_client.cnf.
-
Sergei Golubchik authored
It's for client auth plugins only; server auth plugins should never return it, because they cannot send a correct OK packet. (The OK packet is quite complex and carries a lot of information that only the server knows.)
-
Sergei Golubchik authored
as suggested by Monty
-
Sergei Golubchik authored
-
- 02 Feb, 2024 1 commit
-
-
Vladislav Vaintroub authored
This commit addresses multiple server shutdown problems observed on macOS, Solaris, and FreeBSD:

1. Corrected a non-portable assumption that a socket shutdown would wake up poll() with listening sockets in the main thread. A more robust self-pipe is now used to wake up poll(), by writing to the pipe's write end (see the sketch below).
2. Fixed a random crash on macOS from pthread_kill(signal_handler) when the signal_handler was detached and the thread had already exited. The more robust `kill(getpid(), SIGTERM)` is now used to wake up the signal handler thread.
3. Made sure that the signal handler thread always exits once `abort_loop` is set, and also calls `my_thread_end()` and clears `signal_thread_in_use` when exiting. This fixes the "1 thread did not exit" warning from `my_global_thread_end()` seen on FreeBSD/macOS when the process is terminated via a signal.

Additionally, the shutdown code underwent light refactoring for better readability and maintainability:

- Modified `break_connect_loop()` to no longer wait for the main thread, aligning behavior with Windows (since 10.4).
- Removed dead code related to the unused `USE_ONE_SIGNAL_HAND` preprocessor constant.
- Eliminated support for `#ifndef HAVE_POLL` in `handle_connection_sockets`; this code is also dead since 10.4.
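The self-pipe technique from point 1 is the classic pattern sketched here (a simplified illustration of the general idea, not the server's actual code; all names are invented):

```cpp
#include <poll.h>
#include <unistd.h>

static int wakeup_pipe[2];            // [0] = read end, [1] = write end

// Called once at startup; the read end is polled together with the
// listening sockets.
static bool init_wakeup_pipe() { return pipe(wakeup_pipe) == 0; }

// Called from the shutdown path: writing one byte makes poll() return
// even on platforms where shutting down a listening socket does not.
static void wake_up_poll() { char c= 0; (void) write(wakeup_pipe[1], &c, 1); }

static void connection_loop(pollfd *listeners, int n_listeners,
                            const bool &abort_loop)
{
  // Poll the listeners plus the pipe's read end.
  // (Allocation and error handling omitted in this sketch.)
  pollfd fds[16];
  for (int i= 0; i < n_listeners; i++)
    fds[i]= listeners[i];
  fds[n_listeners]= { wakeup_pipe[0], POLLIN, 0 };

  while (!abort_loop)
  {
    if (poll(fds, n_listeners + 1, -1) <= 0)
      continue;
    if (fds[n_listeners].revents & POLLIN)
      break;                           // woken up by wake_up_poll()
    // ... accept() on the listener fds that have POLLIN set ...
  }
}
```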
-
- 29 Jan, 2024 6 commits
-
-
Vladislav Vaintroub authored
This is done for symmetry with mariadb-dump, which does not use threads but allows parallelism via --parallel. The traditional --use-threads can still be used; it is synonymous with --parallel.
-
Vladislav Vaintroub authored
- --parallel=N with or without --single-transaction
- Error cases (too many connections, emulate error on one connection)
- Windows specific test for named pipe connections
-
Vladislav Vaintroub authored
At the moment, it only works with --tab, to execute "SELECT INTO OUTFILE" queries concurrently. Uses connection_pool for concurrent execution.
-
Vladislav Vaintroub authored
Parallelism is achieved by using mysql_send_query on multiple connections without waiting for results, and using IO multiplexing (poll/IOCP) to wait for completions. Refresh libmariadb to pick up CONC-676 (fixes for IOCP use with named pipes).
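The send-then-multiplex pattern described here can be sketched roughly like this with the C API on the poll path (a simplified illustration; the real connection_pool code is more involved and also covers IOCP on Windows):

```cpp
#include <mysql.h>
#include <poll.h>
#include <string>
#include <vector>

// Send one query per connection without waiting, then poll the sockets
// and read each result as it becomes ready. Error handling omitted.
static void run_in_parallel(std::vector<MYSQL *> &conns,
                            const std::vector<std::string> &queries)
{
  for (size_t i= 0; i < conns.size(); i++)
    mysql_send_query(conns[i], queries[i].c_str(),
                     (unsigned long) queries[i].length());

  std::vector<pollfd> fds(conns.size());
  for (size_t i= 0; i < conns.size(); i++)
    fds[i]= { mysql_get_socket(conns[i]), POLLIN, 0 };

  size_t pending= conns.size();
  while (pending > 0)
  {
    poll(fds.data(), fds.size(), -1);
    for (size_t i= 0; i < fds.size(); i++)
    {
      if (!(fds[i].revents & POLLIN))
        continue;
      mysql_read_query_result(conns[i]);       // completes the query
      if (MYSQL_RES *res= mysql_store_result(conns[i]))
        mysql_free_result(res);
      fds[i].fd= -1;                           // ignore this fd from now on
      pending--;
    }
  }
}
```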
-
Vladislav Vaintroub authored
- Make connect_to_db() return MYSQL*; we'll reuse the function for the connection pool.
- Remove the variable 'mysql_connection', duplicated by the variable 'mysql'.
- Do not attempt to start the slave if the connection did not succeed, and fix mysqldump.result.
-
Vladislav Vaintroub authored
-
- 27 Jan, 2024 1 commit
-
-
Kristian Nielsen authored
Improve the performance of slave connect using B+-Tree indexes on each binlog file. The index allows fast lookup of a GTID position to the corresponding offset in the binlog file, as well as lookup of a position to find the corresponding GTID position.

This eliminates a costly sequential scan of the starting binlog file to find the GTID starting position when a slave connects. This is especially costly if the binlog file is not cached in memory (IO cost), or if it is encrypted, or if a lot of slaves connect simultaneously (CPU cost).

The size of the index files is generally less than 1% of the binlog data, so it is not expected to be an issue.

Most of the work of writing the index is done as a background task, in the binlog background thread. This minimises the performance impact on transaction commit. A simple global mutex is used to protect index reads and (background) index writes; this is fine as slave connect is a relatively infrequent operation.

Here are the user-visible options and status variables. The feature is on by default and is expected to need no tuning or configuration for most users.

binlog_gtid_index
  On by default. Can be used to disable the indexes for testing purposes.

binlog_gtid_index_page_size (default 4096)
  Page size to use for the binlog GTID index. This is the size of the nodes in the B+-tree used internally in the index. A very small page size (64 is the minimum) will be less efficient, but can be used to stress the B+-tree code during testing.

binlog_gtid_index_span_min (default 65536)
  Controls the sparseness of the binlog GTID index. If set to N, at most one index record will be added for every N bytes of binlog file written. This can be used to reduce the number of records in the index, at the cost only of having to scan a few more events in the binlog file before finding the target position (see the sketch below).

Two status variables are available to monitor the use of the GTID indexes:

  Binlog_gtid_index_hit
  Binlog_gtid_index_miss

The "hit" status increments for each successful lookup in a GTID index. The "miss" status increments when a lookup is not possible. This indicates that the index file is missing (eg. binlog written by an old server version without GTID index support) or corrupt.

Signed-off-by: Kristian Nielsen <knielsen@knielsen-hq.org>
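The sparseness rule controlled by binlog_gtid_index_span_min amounts to something like the sketch below (names and structure are illustrative, not the server's actual index code):

```cpp
#include <cstdint>

// At most one index record per binlog_gtid_index_span_min bytes of
// binlog written; lookups then scan at most ~span_min bytes of events
// past the nearest index record.
struct GtidIndexWriter
{
  uint64_t span_min;                  // binlog_gtid_index_span_min
  uint64_t last_indexed_offset= 0;

  bool should_add_record(uint64_t binlog_offset) const
  {
    return binlog_offset - last_indexed_offset >= span_min;
  }

  void add_record(uint64_t binlog_offset /*, GTID state ... */)
  {
    // ... append (GTID position -> binlog_offset) to the current
    //     B+-tree leaf (done in the binlog background thread) ...
    last_indexed_offset= binlog_offset;
  }
};
```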
-
- 24 Jan, 2024 1 commit
-
-
Yuchen Pei authored
Also deprecating table params not implemented in MDEV-28856.
-
- 23 Jan, 2024 1 commit
-
-
Ian Gilfillan authored
-
- 22 Jan, 2024 1 commit
-
-
Brandon Nesterenko authored
This patch augments Gtid_log_event with the user thread-id. In particular, that compensates for the loss of this info in Rows_log_events.

Gtid_log_event::thread_id is stored as a 64-bit unsigned integer and becomes visible in mysqlbinlog output, e.g.

  #231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10

While the size of the Gtid event has grown by 8-9 bytes, replication OLD <-> NEW is not affected by it.

This work was started by the late Sujatha Sivakumar. Brandon Nesterenko took it over, reviewed the initial patches and extended the work.

Reviewed-by: <andrei.elkin@mariadb.com>
-
- 18 Jan, 2024 1 commit
-
-
Libing Song authored
Summary
=======
With FULL_NODUP mode, the before image includes all columns and the after image includes only the changed columns. Flashback will swap the values of the changed columns from the after image to the before image.

For example:
  BI: c1, c2, c3_old, c4_old
  AI: c3_new, c4_new

flashback will reconstruct the before and after images to
  BI: c1, c2, c3_new, c4_new
  AI: c3_old, c4_old

Implementation
==============
When parsing the before and after image, the positions and lengths of the fields are collected into ai_fields and bi_fields, if it is an Update_rows_event and the after image doesn't include all columns. The changed fields are swapped between bi_fields and ai_fields (see the sketch below). Then the before image and after image are recreated from bi_fields and ai_fields. The nullbit will be set to 1 if the field is NULL, otherwise the nullbit will be 0.

It also optimizes flashback a little bit:
- calc_row_event_length is used instead of print_verbose_one_row
- swap_buff1 and swap_buff2 are removed
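The swap step can be pictured as in this sketch (illustrative only; the field-layout handling in the real code is considerably more involved, and the struct here is invented for the example):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// For every column present in the (partial) after image, exchange its
// value with the corresponding column of the before image, so that the
// reconstructed BI carries the new values and the AI the old ones.
struct FieldRef
{
  size_t column_no;
  std::vector<unsigned char> value;    // packed field value
  bool is_null;
};

static void swap_images(std::vector<FieldRef> &bi_fields,
                        std::vector<FieldRef> &ai_fields)
{
  for (FieldRef &ai : ai_fields)
    for (FieldRef &bi : bi_fields)
      if (bi.column_no == ai.column_no)
      {
        std::swap(bi.value, ai.value);
        std::swap(bi.is_null, ai.is_null);
        break;
      }
}
```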
-
- 17 Jan, 2024 1 commit
-
-
Andrew Hutchings authored
BASE 62 uses 0-9, A-Z and then a-z to give the numbers 0-61. This patch increases the range of the string functions to cover this.

Based on ideas and tests in PR #2589, but re-written into the charset functions.

Includes a fix by Sergei; UBSAN complained:
  ctype-simple.c:683:38: runtime error: negation of -9223372036854775808 cannot be represented in type 'long long int'; cast to an unsigned type to negate this value to itself

Co-authored-by: Weijun Huang <huangweijun1001@gmail.com>
Co-authored-by: Sergei Golubchik <serg@mariadb.org>
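The 0-61 digit set and the unsigned-negation trick mentioned in the UBSAN fix can be illustrated with this standalone sketch (not the charset-function code itself):

```cpp
#include <string>

static const char base62_digits[]=
  "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Convert a signed 64-bit value to base 62. Negating through an unsigned
// type avoids the undefined behaviour for LLONG_MIN that UBSAN reported.
static std::string to_base62(long long v)
{
  bool neg= v < 0;
  unsigned long long u= neg ? 0ULL - (unsigned long long) v
                            : (unsigned long long) v;
  std::string out;
  do
  {
    out.insert(out.begin(), base62_digits[u % 62]);
    u/= 62;
  } while (u != 0);
  return neg ? "-" + out : out;
}
```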
-
- 12 Jan, 2024 1 commit
-
-
Libing Song authored
Field_new_decimal::store_value(const my_decimal*, int*)

Analysis
========
When the rpl applier is unpacking a before row image, Field::reset() will be called before setting a field to null if the null bit of the field is set in the row image. For Field_new_decimal::reset(), it calls Field_new_decimal::store_value() to reset the value. store_value() asserts that the field is in the write_set bitmap, since it assumes the field is being updated. But that is not true for a row image generated in FULL_NODUP mode. In that mode, the before image includes all fields and the after image includes only the updated fields.

Fix
===
In the case of unpacking binlog row images, the assertion is meaningless. So the field being unpacked is temporarily marked in write_set to avoid the assertion failure.
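The fix follows a pattern along these lines, temporarily marking the field in write_set around the reset (an illustrative sketch using the server's bitmap helpers; the actual patch may differ in detail, and the usual sql/field.h and my_bitmap.h declarations are assumed):

```cpp
// Temporarily set the field's bit in table->write_set so that
// Field_new_decimal::store_value() does not assert while unpacking a
// row image, then restore the previous state.
static void reset_field_for_row_image(Field *field)
{
  MY_BITMAP *write_set= field->table->write_set;
  uint field_no= field->field_index;
  bool was_set= bitmap_is_set(write_set, field_no);

  if (!was_set)
    bitmap_set_bit(write_set, field_no);

  field->reset();                      // calls store_value() internally

  if (!was_set)
    bitmap_clear_bit(write_set, field_no);
}
```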
-
- 10 Jan, 2024 10 commits
-
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
When innodb_undo_log_truncate=ON causes an InnoDB undo tablespace to be truncated, we must guarantee that the undo tablespace will be rebuilt atomically: after mtr_t::commit_shrink() has durably written the mini-transaction that rebuilds the undo tablespace, we must not write any old pages to the tablespace.

To guarantee this, in trx_purge_truncate_history() we used to traverse the entire buf_pool.flush_list in order to acquire exclusive latches on all pages for the undo tablespace that reside in the buffer pool, so that those pages cannot be written and will be evicted during mtr_t::commit_shrink(). But this traversal may interfere with the page writing activity of buf_flush_page_cleaner(). It would be better to lazily discard the old pages of the truncated undo tablespace.

fil_space_t::is_being_truncated, fil_space_t::clear_stopping(): Remove.

fil_space_t::create_lsn: A new field, identifying the LSN of the latest rebuild of a tablespace.

buf_page_t::flush(), buf_flush_try_neighbors(): Evict pages whose FIL_PAGE_LSN is below fil_space_t::create_lsn.

mtr_t::commit_shrink(): Update fil_space_t::create_lsn and fil_space_t::size right before the log is durably written and the tablespace file is being truncated.

fsp_page_create(), trx_purge_truncate_history(): Simplify the logic.

Reviewed by: Thirunarayanan Balathandayuthapani, Vladislav Lesin
Performance tested by: Axel Schwenke
Correctness tested by: Matthias Leich
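The lazy-discard rule amounts to a comparison like the one in this sketch (simplified stand-ins for buf_page_t / fil_space_t, purely illustrative):

```cpp
#include <cstdint>

// A page whose FIL_PAGE_LSN predates the latest rebuild of its tablespace
// belongs to the pre-truncation incarnation and can be evicted instead of
// being written back.
struct space_info { uint64_t create_lsn; };
struct page_info  { uint64_t fil_page_lsn; const space_info *space; };

static bool can_discard_instead_of_write(const page_info &page)
{
  return page.fil_page_lsn < page.space->create_lsn;
}
```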
-
Marko Mäkelä authored
-
Marko Mäkelä authored
trx_purge_free_segment(), trx_purge_truncate_rseg_history(): Do not claim that the blocks will be modified in the mini-transaction, because that will not always be the case. Whenever there is a modification, mtr_t::set_modified() will flag it.

The debug assertion that failed in recovery is checking that all changes to data pages are covered by log records. Due to these incorrect calls, we would unnecessarily write unmodified data pages, which is something that commit 05fa4558 aims to avoid. The incorrect calls had originally been added in commit de31ca6a (MDEV-32820) and commit 86767bcc (MDEV-29593).

Reviewed by: Vladislav Lesin
Tested by: Elena Stepanova
-
- 09 Jan, 2024 7 commits
-
-
Sergei Golubchik authored
The perfschema thread walker needs to take the thread's LOCK_thd_kill to prevent the thread from disappearing while it's being looked at. But there's no need to lock it for the current thread.

In fact, it was harmful, as some code down the stack might take LOCK_thd_kill (e.g. set_killed() does it, and my_malloc_size_cb_func() calls set_killed()). And it caused a bunch of mutexes to be locked under LOCK_thd_kill, which created problems later when my_malloc_size_cb_func() called set_killed() at some unspecified point under some random mutexes.
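The "skip the lock for the current thread" rule boils down to something like this sketch (illustrative, not the perfschema source; THD and mutex declarations from the server headers are assumed):

```cpp
// Only take LOCK_thd_kill when inspecting another thread: the current
// thread cannot disappear from under itself, and taking the lock here
// could recurse via my_malloc_size_cb_func() -> set_killed().
static void inspect_thread(THD *thd, THD *current_thd_ptr)
{
  bool need_lock= (thd != current_thd_ptr);
  if (need_lock)
    mysql_mutex_lock(&thd->LOCK_thd_kill);

  // ... read the fields of interest from thd ...

  if (need_lock)
    mysql_mutex_unlock(&thd->LOCK_thd_kill);
}
```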
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Same assertion with spider. Spider status variables didn't expect to be queried from a different thread without LOCK_thd_data. And they didn't expect to be queried under LOCK_thd_data either (because spider_get_trx() calls thd_set_ha_data()).
-
Sergei Golubchik authored
reduce code duplication
-
Sergei Golubchik authored
Need to protect access to the thread-local cache_mngr with LOCK_thd_data. Technically only access from different threads has to be protected, but this is the SHOW STATUS code path, so the difference is negligible.
-
Sergei Golubchik authored
use utf8mb4 with PCRE2, not utf8mb3
-
Sergei Petrunia authored
- Move it from delete.test to delete_innodb.test
- Use --source include/innodb_stable_estimates.inc to make it predictable.
-
- 08 Jan, 2024 1 commit
-
-
Marko Mäkelä authored
-