- 05 Jun, 2024 1 commit
-
-
Tuukka Pasanen authored
When building on a 64-bit kernel machine in a 32-bit Docker container, CMake falsely (though the build works as expected) detects that the container runtime is also 64-bit. Use the linux32 command to switch the runtime environment to 32-bit; CMake will then correctly disable, for example, ColumnStore and not try to build it. This commit only affects debian/autobake-debs.sh
-
- 03 Jun, 2024 1 commit
-
-
Thirunarayanan Balathandayuthapani authored
- Added a counter innodb_num_bulk_insert_operation to INFORMATION_SCHEMA.GLOBAL_STATUS. This counter is incremented whenever InnoDB performs a bulk insert operation.
- Changed innodb_instant_alter_column to an atomic variable.
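As illustration only, a minimal self-contained sketch of the kind of relaxed atomic counter such a status variable relies on; the names and types here are assumptions, not the server's actual code:

    #include <atomic>
    #include <cstdint>

    // Hypothetical counter standing in for innodb_num_bulk_insert_operation;
    // a relaxed increment is enough for a monotonically growing status value.
    static std::atomic<std::uint64_t> bulk_insert_operations{0};

    static void note_bulk_insert()
    {
      bulk_insert_operations.fetch_add(1, std::memory_order_relaxed);
    }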
-
- 30 May, 2024 7 commits
-
-
Yuchen Pei authored
-
Yuchen Pei authored
Spider calls ha_spider::close() at least twice on ALTER TABLE ... ADD PARTITION. The first call frees wide_handler and the second call accesses wide_handler->trx->thd (heap-use-after-free). In general, there seems to be no problem with using the THD obtained via the macro current_thd(), except in background threads. Thus, we simply replace wide_handler->trx->thd with current_thd(). Original author: Nayuta Yanagisawa
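A tiny, non-compilable fragment of the substitution described above (Spider internals omitted; shown only to make the change concrete):

    // Hedged fragment: take the THD of the current (non-background) thread
    // directly, instead of reaching through the shared wide_handler that an
    // earlier ha_spider::close() call may already have freed.
    THD *thd= current_thd;          // was: wide_handler->trx->thd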
-
Nayuta Yanagisawa authored
The HandlerSocket support of Spider has been deleted by MDEV-26858. Thus, the constants, SPIDER_SQL_TYPE_*_HS, are no longer necessary.
-
Yuchen Pei authored
Remove the dead code in Spider that is related to Spider's HandlerSocket support. The code has been disabled for a long time and it is unlikely that it will ever be enabled.
- rm all files under storage/spider/hs_client/ except hs_compat.h
- rm storage/spider/spd_db_handlersocket.*
- unifdef -UHS_HAS_SQLCOM -UHAVE_HANDLERSOCKET \
  -m storage/spider/spd_* storage/spider/ha_spider.* storage/spider/hs_client/*
- remove relevant files from storage/spider/CMakeLists.txt
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
- 29 May, 2024 2 commits
-
-
Dave Gosselin authored
Invoke cleanup routine at the end of pfs_noop.
-
Andrew Hutchings authored
This commit updates the README to indicate that the "Get the code, build it, test it" link will help decide the correct branch to work in. It also fixes a grammar issue and cleans up the Markdown a little bit.
-
- 28 May, 2024 1 commit
-
-
Souradeep Saha authored
Fix various typos in comments and DEBUG statements; all code changes are non-functional. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
-
- 27 May, 2024 3 commits
-
-
Sergei Petrunia authored
- Change the comments in class ha_handler_stats to say the members are in ticks, not milliseconds.
- In sql_explain.cc, adjust the scale to print milliseconds.
-
Alexander Barkov authored
MDEV-34226 On startup: UBSAN: applying zero offset to null pointer in my_copy_fix_mb from strings/ctype-mb.c and other locations
nullptr+0 is UB (undefined behavior).
- Fixing my_string_metadata_get_mb() to handle {nullptr,0} without UB.
- Fixing THD::copy_with_error() to disallow {nullptr,0} by DBUG_ASSERT().
- Fixing parse_client_handshake_packet() to call THD::copy_with_error() with an empty string {"",0} instead of a NULL string {nullptr,0}.
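A self-contained illustration of the pattern behind these fixes; the function is invented for this sketch and only shows why the {nullptr,0} case needs an explicit early return:

    #include <cstddef>

    // Counting non-space bytes in a possibly empty {str,len} pair.
    static size_t count_nonspace(const char *str, size_t len)
    {
      if (len == 0)                  // also covers the {nullptr, 0} case;
        return 0;                    // without it, str + len below is UB for str == nullptr
      size_t n= 0;
      for (const char *p= str, *end= str + len; p != end; p++)
        if (*p != ' ')
          n++;
      return n;
    }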
-
Alexander Barkov authored
MDEV-30931 UBSAN: negation of -X cannot be represented in type 'long long int'; cast to an unsigned type to negate this value to itself in get_interval_value on SELECT
- Fixing the code in get_interval_value() to use Longlong_hybrid_null. This correctly handles:
  - signed and unsigned arguments (the old code assumed the argument to be signed);
  - the corner case of LONGLONG_MIN, avoiding undefined negation behavior. This fixes the UBSAN warning: negation of -9223372036854775808 cannot be represented in type 'long long int'
- Fixing the code in get_interval_value() to avoid overflow in the INTERVAL_QUARTER and INTERVAL_WEEK branches. This fixes the UBSAN warning: signed integer overflow: -9223372036854775808 * 7 cannot be represented in type 'long long int'
- Fixing the INTERVAL_WEEK branch in date_add_interval() to handle huge numbers correctly. Before the change, huge positive numbers were treated as their negative complements. Note that some other branches can still be affected by this problem and should also be fixed eventually.
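A self-contained sketch of the well-defined way to take the magnitude of a signed 64-bit value, which is the corner case the Longlong_hybrid_null change addresses; the helper name is invented for this illustration:

    #include <cstdint>

    // |v| computed without ever negating a signed value: converting to
    // unsigned and negating in unsigned arithmetic is well defined even for
    // INT64_MIN (-9223372036854775808).
    static uint64_t magnitude(int64_t v)
    {
      return v < 0 ? 0ULL - static_cast<uint64_t>(v)
                   : static_cast<uint64_t>(v);
    }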
-
- 24 May, 2024 2 commits
-
-
Thirunarayanan Balathandayuthapani authored
- InnoDB page compression works only on COMPACT or DYNAMIC row format tables, so InnoDB should throw an error when ALTER TABLE tries to enable PAGE_COMPRESSED for a REDUNDANT row format table.
-
Thirunarayanan Balathandayuthapani authored
- InnoDB should avoid printing the error message before restoring the first page from the doublewrite buffer.
-
- 23 May, 2024 2 commits
-
-
Vladislav Vaintroub authored
Correct the second parameter for strxnmov to prevent potential buffer overflows. The second parameter must be one less than the size of the input buffer to avoid writing past the end of the buffer. While the second parameter is usually correct, there are exceptions that need fixing. This commit addresses the issue within frm_file_exists() and other affected places.
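A hedged sketch of the corrected call pattern; the buffer, `dir` and `name` are placeholders, while strxnmov() and NullS are the existing helpers from m_string.h:

    // strxnmov() may append a terminating '\0' after writing `len`
    // characters, so the length argument must leave room for it.
    char path[512];
    strxnmov(path, sizeof(path) - 1, dir, "/", name, NullS);
    //             ^ one less than the destination buffer size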
-
Alexander Barkov authored
MDEV-28387 UBSAN: runtime error: negation of -9223372036854775808 cannot be represented in type 'long long int'; cast to an unsigned type to negate this value to itself in my_strtoll10 on SELECT
Fixing the condition to raise an overflow when the ulonglong representation of the number is greater than or equal to 0x8000000000000000ULL. Before this change the condition did not catch -9223372036854775808 (the smallest possible signed negative longlong number).
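A simplified, self-contained illustration of the corrected boundary (not the actual my_strtoll10() code):

    #include <cstdint>

    // The overflow condition must use ">=" so that a magnitude of exactly
    // 0x8000000000000000 (the magnitude of -9223372036854775808) is caught
    // as well, instead of slipping through to an undefined signed negation.
    static bool overflows(uint64_t magnitude)
    {
      return magnitude >= 0x8000000000000000ULL;
    }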
-
- 21 May, 2024 6 commits
-
-
Yuchen Pei authored
-
Marko Mäkelä authored
According to https://discussions.apple.com/thread/8256853 an attempt to use AVX512 registers on macOS will result in #UD (crash at runtime). Also, starting with clang-18 and GCC 14, we must add "evex512" to the target flags so that AVX and SSE instructions can use AVX512 specific encodings. This flag was introduced together with the avx10.1-512 target. Older compiler versions do not recognize "evex512". We do not want to write "avx10.1-512" because it could enable some AVX512 subfeatures that we do not have any CPUID check for. Reviewed by: Vladislav Vaintroub Tested on macOS by: Valerii Kravchuk
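A hedged sketch of how such a compiler-version-dependent target string can be assembled; the feature list, the version cut-offs stated above, and the function name are illustrative, not the server's actual code:

    // "evex512" is only understood by clang >= 18 and GCC >= 14, so it is
    // appended conditionally; older compilers would reject the name.
    #if (defined(__clang_major__) && __clang_major__ >= 18) || \
        (!defined(__clang__) && defined(__GNUC__) && __GNUC__ >= 14)
    # define TARGET_AVX512 "avx512f,avx512bw,avx512dq,evex512"
    #else
    # define TARGET_AVX512 "avx512f,avx512bw,avx512dq"
    #endif

    __attribute__((target(TARGET_AVX512)))
    static void process_block_avx512(const unsigned char *buf, unsigned long len);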
-
Alexander Barkov authored
The patch for MDEV-31340 fixed the following bugs:
MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
MDEV-33088 Cannot create triggers in the database `MYSQL`
MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
MDEV-33108 TABLE_STATISTICS and INDEX_STATISTICS are case insensitive with lower-case-table-names=0
MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0
Backporting the fixes from 11.5 to 10.5
-
mariadb-DebarunBanerjee authored
BUF_LRU_MIN_LEN (256) is too high a value for small buffer pool (BP) sizes. For example, for a BP size below 80M and a 16K page size, the limit is more than 5% of the total BP, and for the smallest BP of 5M it is 80% of the BP. Non-data objects like explicit locks could occupy part of the BP, reducing the pages available for the LRU. If the LRU reaches the minimum limit and no free pages are available, the server would hang with the page cleaner unable to free any more pages.
Fix: To avoid such a hang, we adjust the LRU limit to be lower than the limit for data objects as checked in buf_LRU_check_size_of_non_data_objects(), i.e. one page less than 5% of the BP.
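A self-contained sketch of the adjustment, with names and arithmetic chosen for illustration only:

    #include <algorithm>
    #include <cstddef>

    // Keep the effective LRU minimum below the ~5% share that
    // buf_LRU_check_size_of_non_data_objects() allows for non-data objects,
    // leaving one page of slack so the page cleaner can always make progress.
    static size_t effective_lru_min(size_t pool_size_in_pages)
    {
      const size_t non_data_limit= pool_size_in_pages / 20;   // ~5% of the pool
      return std::min<size_t>(256 /* BUF_LRU_MIN_LEN */,
                              non_data_limit ? non_data_limit - 1 : 0);
    }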
-
Marko Mäkelä authored
trx_free_at_shutdown(): Similar to trx_t::commit_in_memory(), clear the detailed_error (FOREIGN KEY constraint error) before invoking trx_t::free(). We only do this on debug instrumented builds in order to avoid a debug assertion failure on shutdown.
-
mariadb-DebarunBanerjee authored
MDEV-34167 We fail to terminate transaction early with ER_LOCK_TABLE_FULL when lock memory is growing
This regression was introduced in 10.6 by commit b6a24724 (MDEV-27891: SIGSEGV in InnoDB buffer pool resize). During DML, we check in buf_pool_t::running_out() whether the buffer pool is running out of data pages. If 75% of the buffer pool is occupied by non-data pages, we roll back the current transaction and exit with ER_LOCK_TABLE_FULL. The integer division (n_chunks_new / 4) becomes zero whenever the total number of chunks is < 4, making the check completely ineffective for such cases. The check is also inaccurate for larger chunks.
Fix-1: Correct the check in buf_pool_t::running_out().
Fix-2: While waiting for a free page, check buf_LRU_check_size_of_non_data_objects().
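A simplified, self-contained illustration of the truncation problem and a division-free way around it; the parameter names are placeholders and the real buf_pool_t::running_out() predicate is more involved:

    #include <cstddef>

    // Old form: (n_chunks / 4) truncates to 0 when n_chunks < 4, so the
    // threshold is 0 and the condition can never become true.
    static bool running_out_old(size_t data_pages, size_t n_chunks,
                                size_t pages_per_chunk)
    {
      return data_pages < (n_chunks / 4) * pages_per_chunk;
    }

    // Fixed form: cross-multiplying avoids the integer division and stays
    // exact for any pool size.
    static bool running_out_fixed(size_t data_pages, size_t n_chunks,
                                  size_t pages_per_chunk)
    {
      return data_pages * 4 < n_chunks * pages_per_chunk;
    }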
-
- 20 May, 2024 5 commits
-
-
Yuchen Pei authored
A wide_handler is shared among the ha_spider instances of partitions of the same Spider table, where the last partition is designated the owner of the wide_handler and is responsible for its deallocation. Therefore, in case of failure, we only reset wide_handler in error handling if the current ha_spider is the owner of the wide_handler; otherwise it would result in a segv in the destructor of ha_spider, or during ha_spider::close().
-
Yuchen Pei authored
The init, init_error, and init_error_time fields of a SPIDER_SHARE should only be assigned when actually doing the initialisation of a SPIDER_SHARE, otherwise they could result in spurious failures from spider_get_share() in a subsequent statement.
-
mariadb-DebarunBanerjee authored
This regression was introduced in 10.6 by commit 898dcf93 (Cleanup the lock creation). It removed an important optimization for lock bitmap pre-allocation. We pre-allocate about 8 bytes of extra space along with every lock object to accommodate similar locks on newly created records on the same page by the same transaction. When that space is exhausted, a new lock object is created with a similar 8-byte pre-allocation. With this optimization removed we are left with only 1 byte of pre-allocation. When a large number of records is inserted and locked in a single page, we end up creating too many new locks, almost in O(n^2) order.
Fix-1: Bring back LOCK_PAGE_BITMAP_MARGIN for pre-allocation.
Fix-2: Use the extra space (40 bytes) for the bitmap in trx->lock.rec_pool.
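A self-contained sketch of the pre-allocation being restored; LOCK_PAGE_BITMAP_MARGIN is the constant named above, while the surrounding arithmetic is illustrative:

    #include <cstddef>

    static const size_t LOCK_PAGE_BITMAP_MARGIN= 64;   // bits, i.e. 8 extra bytes

    // Bitmap size for a record lock: one bit per record currently on the
    // page plus a margin, so locks on records inserted later by the same
    // transaction fit into the same lock object.
    static size_t lock_bitmap_bytes(size_t n_recs_on_page)
    {
      return 1 + (n_recs_on_page + LOCK_PAGE_BITMAP_MARGIN) / 8;
    }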
-
Thirunarayanan Balathandayuthapani authored
- InnoDB should print the warning message saying "Shutdown is in progress" only when shutdown state is greater than SRV_SHUTDOWN_INITIATED.
-
Alexander Barkov authored
MDEV-34187 On startup: UBSAN: runtime error: applying zero offset to null pointer in skip_trailing_space and my_hash_sort_utf8mb3_general1400_nopad_as_ci
The last element of func_array_oracle_overrides[], equal to {0,0}, was erroneously passed to Native_functions_hash::replace(). Removing this element.
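A self-contained sketch of the resulting registration pattern; the struct, the entries, and replace_in_hash() are stand-ins for the real parser types, shown only to make the sentinel removal concrete:

    #include <cstddef>

    struct func_entry { const char *name; void *builder; };

    static void replace_in_hash(const func_entry &);   // stand-in for Native_functions_hash::replace()

    static const func_entry oracle_overrides[]=
    {
      { "DECODE", nullptr },     // builders elided in this sketch
      { "LENGTH", nullptr },
      // no trailing {0, 0} sentinel any more
    };

    static void register_overrides()
    {
      for (size_t i= 0; i < sizeof(oracle_overrides) / sizeof(oracle_overrides[0]); i++)
        replace_in_hash(oracle_overrides[i]);
    }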
-
- 17 May, 2024 1 commit
-
-
Robin Newhouse authored
Similar to #2480. 567b6812 introduced safe_strcpy() to minimize the use of C string functions with potentially unsafe memory overflow such as strcpy(), whose use is discouraged. Replace instances of strcpy() with safe_strcpy() where possible, limited here to files in the `sql/` directory. All new code of the whole pull request, including one or several files that are either new files or modified ones, are contributed under the BSD-new license. I am contributing on behalf of my employer Amazon Web Services, Inc.
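A hedged fragment of the call pattern; the buffer and source names are placeholders and the exact signature/return value of the real safe_strcpy() is not reproduced here:

    // Bounded copy with an explicit destination size instead of an
    // unbounded strcpy().
    char db_name[64 + 1];
    safe_strcpy(db_name, sizeof(db_name), source);    // was: strcpy(db_name, source)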
-
- 16 May, 2024 1 commit
-
-
Dave Gosselin authored
-
- 15 May, 2024 2 commits
-
-
Daniel Bartholomew authored
-
Daniel Bartholomew authored
-
- 14 May, 2024 2 commits
-
-
Sergei Golubchik authored
-
Yuchen Pei authored
Spider calls info() with HA_STATUS_AUTO to update auto increment info, which may attempt to connect to the data node. If the connection fails, it may emit an error and return the same error. This error should not be of lower priority than any possible error from the later call to handler::update_auto_increment(). Without this change, certain errors from update_auto_increment() such as HA_ERR_AUTOINC_ERANGE may get ignored, causing my_insert() to call my_ok(), which fails the assertion because the error was emitted in the info() call (Diagnostics_area::is_set() returns true).
-
- 13 May, 2024 1 commit
-
-
Dmitry Shulga authored
MDEV-33769: Memory leak found in the test main.rownum run with --ps-protocol against a server built with the option -DWITH_PROTECT_STATEMENT_MEMROOT
A memory leak happens on the second execution of a query that runs in PS mode and uses the function ROWNUM(). The leak occurs on allocation of an instance of the class Item_int for storing a limit value, performed in the function set_limit_for_unit, which is indirectly called from JOIN::optimize_inner. A typical trace to the place where the memory leak occurred is:
  JOIN::optimize_inner
    optimize_rownum
      process_direct_rownum_comparison
        set_limit_for_unit
          new (thd->mem_root) Item_int(thd, lim, MAX_BIGINT_WIDTH);
To fix this memory leak, the function optimize_rownum() has to be called only on the first execution and never after that. To control this, the new data member first_rownum_optimization was added to the structure st_select_lex.
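A hedged fragment of the resulting guard; the member and function names come from the commit message, while the surrounding code is illustrative:

    // Run the ROWNUM rewrite only once, on the first execution of the
    // prepared statement, so no Item is allocated on the protected
    // statement memroot during re-execution.
    if (select_lex->first_rownum_optimization)
    {
      optimize_rownum(thd, unit, cond);
      select_lex->first_rownum_optimization= false;
    }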
-
- 12 May, 2024 3 commits
-
-
Yuchen Pei authored
This handles the situation when one thread is still initializing a storage engine plugin while another is creating a table using it.
-
Sergei Golubchik authored
-
Marko Mäkelä authored
-