- 30 Jan, 2024 4 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Brandon Nesterenko authored
rpl.rpl_seconds_behind_master_spike uses the DEBUG_SYNC mechanism to count how many format descriptor events (FDEs) have been executed, in order to pause on a specific relay log FDE after executing transactions. However, depending on when the IO thread is stopped, it can send an extra FDE before sending the transactions, forcing the test to pause before executing any transactions; the table that the test then tries to read for COUNT does not yet exist. This patch fixes the test by no longer counting FDEs: instead, it programmatically waits until the SQL thread has executed the transaction and then activates the DEBUG_SYNC point to trigger at the next relay log FDE.
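The shape of the new wait, as a minimal SQL sketch — the table, signal name, and exact statements here are illustrative, not the ones used by the test:

```sql
-- Replica side. 1) Wait until the SQL thread has applied the transaction
-- (in mysqltest this is typically done via include/wait_condition.inc):
SELECT COUNT(*) FROM test.t1;
-- 2) Only then release the waiting sync point, so the pause is armed for
--    the *next* relay-log FDE rather than a spurious earlier one:
SET DEBUG_SYNC = 'now SIGNAL continue_to_next_fde';
```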
-
- 27 Jan, 2024 3 commits
- 26 Jan, 2024 4 commits
-
-
Brandon Nesterenko authored
A debug_sync signal could remain for the SQL thread, which should have begun a wait_for upon seeing a GTID event but would instead see the old signal and continue without waiting. This broke an "idle" condition in SHOW SLAVE STATUS that should have automatically zeroed Seconds_Behind_Master. Instead, because the SQL thread had already processed the GTID event, it set sql_thread_caught_up to false and thereby calculated a value for Seconds_Behind_Master when the test expected 0. This patch fixes this by resetting the debug_sync state before creating the new transaction that sends a GTID event to the replica.
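A minimal sketch of the pattern, assuming a replicated table test.t1 (the test's actual statements differ):

```sql
-- Clear any stale signal so the SQL thread cannot consume a leftover
-- signal instead of waiting for the new one:
SET DEBUG_SYNC = 'RESET';
-- ...then create the transaction that sends a GTID event to the replica:
INSERT INTO test.t1 VALUES (1);
```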
-
Rucha Deodhar authored
-
Vladislav Vaintroub authored
-
Marko Mäkelä authored
Let us suppress this timing-sensitive warning globally. We added it in commit d34479dc (MDEV-33053) so that if InnoDB hangs due to running out of buffer pool, there would be a warning about it. On a heavily loaded system that is running with a small buffer pool, these warnings may occasionally be issued while page writes are in progress.
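For reference, the per-test form of such a suppression in MTR looks like the sketch below; this commit adds the pattern to the global suppression list instead, and the message text here is abbreviated:

```sql
-- Suppress a timing-sensitive InnoDB warning in a single test
-- (abbreviated, illustrative message text):
CALL mtr.add_suppression("InnoDB: Could not free any blocks in the buffer pool");
```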
-
- 25 Jan, 2024 4 commits
-
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
With this patch, a 4-component MSI version can be used, e.g. by setting the TINY_VERSION variable in CMake, or by adding a string, e.g. MYSQL_VERSION_EXTRA=-2, which sets TINY_VERSION to 2 and also changes the package name.

The 4-component MSI versions do not support MSI major upgrades, only minor ones, i.e. they do not reinstall components, just update existing ones based on versioning rules.

To support these rules, add DefaultVersion for the files that won't otherwise be versioned - headers, static and import libraries, pdbs, text - xml, python and perl scripts.

Also silence the WiX warning that MSI won't store hashes for those files anymore.
-
Vladislav Vaintroub authored
Testing the exit code from popen(), comparing it with 1, and deciding that perl.exe is not there is a) the wrong conclusion, and b) uninteresting, because MTR always runs with perl and with MTR_PERL set.

Background: a recent change in 7af50e4d introduced exit code 1 from a perl snippet, which broke Windows CI. We do not want to debug this ever again.
-
Yuchen Pei authored
There are two array fields in spider_share with similar names:

- share->use_sql_dbton_ids maps from i to the i-th distinct dbton used by the share. Thus it should be used only when i iterates over all distinct dbtons used by the share.
- share->sql_dbton_ids maps from i to the dbton used by the i-th link of the share. Thus it should be used only when i iterates over all links of a share.

We correct instances where share->sql_dbton_ids should be used instead of share->use_sql_dbton_ids.
-
- 24 Jan, 2024 3 commits
-
-
Alexander Barkov authored
write_record(), when performing REPLACE, has an optimization:
- if the unique violation happened in the last unique key, then do UPDATE
- otherwise, do DELETE+INSERT

This patch changes the way of detecting whether this optimization can be applied when the table has long (hash based) unique constraints (i.e. UNIQUE..USING HASH).

Problem: the old condition did not take into account that TABLE_SHARE and TABLE see long uniques differently:
- TABLE_SHARE sees them as HA_KEY_ALG_LONG_HASH and HA_NOSAME
- TABLE sees them as usual non-unique indexes
So the old condition could erroneously decide that the UPDATE optimization is possible when there are still some unique hash constraints in the table.

Fix:
- If the current key is a long unique, it now works as follows: UPDATE can be done if the current long unique is the last long unique and there are no in-engine (normal) uniques.
- For in-engine uniques nothing changes, it still works as before: if the current key is an in-engine (normal) unique, UPDATE can be done if it is the last normal unique.
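A hedged illustration of the affected scenario (table, column, and key names invented):

```sql
CREATE TABLE t1 (
  a INT,
  b TEXT,
  UNIQUE KEY uk_a (a),  -- in-engine (normal) unique
  UNIQUE KEY uk_b (b)   -- long unique: stored as UNIQUE..USING HASH
);
INSERT INTO t1 VALUES (1, 'x');
-- The duplicate below violates uk_a. TABLE sees uk_b as a non-unique
-- index, so the old condition could wrongly treat uk_a as the last
-- unique key and take the UPDATE shortcut even though the hash unique
-- uk_b still has to be checked.
REPLACE INTO t1 VALUES (1, 'y');
```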
-
Alexander Barkov authored
Turning REGEXP_REPLACE into two schema-qualified functions:
- mariadb_schema.regexp_replace()
- oracle_schema.regexp_replace()

Fixing oracle_schema.regexp_replace(subj,pattern,replacement) to treat NULL in "replacement" as an empty string.

Adding new classes implementing oracle_schema.regexp_replace():
- Item_func_regexp_replace_oracle
- Create_func_regexp_replace_oracle

Adding helper methods:
- String *Item::val_str_null_to_empty(String *to)
- String *Item::val_str_null_to_empty(String *to, bool null_to_empty)
and reusing these methods in both Item_func_replace and Item_func_regexp_replace.
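The behavioural difference, sketched:

```sql
-- mariadb_schema: a NULL replacement makes the result NULL
SELECT mariadb_schema.regexp_replace('ab1cd', '[0-9]', NULL);  -- NULL
-- oracle_schema: a NULL replacement is treated as an empty string
SELECT oracle_schema.regexp_replace('ab1cd', '[0-9]', NULL);   -- 'abcd'
```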
-
Alexey Botchkov authored
The IDENT_sys rule doesn't include keywords, so a function with a keyword name can be created but cannot be called. Moving keywords to the new rules keyword_func_sp_var_and_label and keyword_func_sp_var_not_label so that functions with these names are allowed.
-
- 23 Jan, 2024 22 commits
-
-
Daniel Black authored
When CMAKE_OSX_ARCHITECTURES isn't set, we end up with "Packaging as: mariadb-10.4.33-osx10.19-x86_64" on arm64 builders. Instead of implying that 64-bit means x86, use CMAKE_SYSTEM_PROCESSOR to form the filename.
-
Vicențiu Ciorbaru authored
A more in-depth check to cover all used readline functions.
-
Rucha Deodhar authored
-
Sergei Golubchik authored
use the original, not the truncated, field in the long unique prefix, that is, in the hash(left(field, length)) expression, because MyISAM CHECK/REPAIR in compute_vcols() moves table->field but not the prefix fields from keyparts. Also, implement Field_string::cmp_prefix() so that prefix comparison of CHAR columns works.
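A minimal sketch of a table that exercises this code path (names invented; assumes a prefix long unique can be declared as below):

```sql
-- The long unique over a prefix effectively indexes hash(left(c, 10));
-- it must hash the original field, not the truncated keypart copy,
-- or MyISAM CHECK/REPAIR (via compute_vcols()) disagrees with the index.
CREATE TABLE t1 (
  c CHAR(100),
  UNIQUE KEY (c(10)) USING HASH
) ENGINE=MyISAM;
INSERT INTO t1 VALUES (repeat('a', 50));
CHECK TABLE t1;
```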
-
Sergei Golubchik authored
no need to use it when both arguments have the same length
-
Sergei Golubchik authored
-
Sergei Golubchik authored
use thd->start_time for the "start_time" column of the slow_log table. "current_time" here refers to the current_time() function's return value, not to the actual *current* time.

Also fixes MDEV-33267: User with minimal permissions can intentionally corrupt mysql.slow_log table.
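A quick way to observe the column in question, assuming TABLE logging is enabled:

```sql
SET GLOBAL log_output = 'TABLE';
SET GLOBAL slow_query_log = 1;
SET SESSION long_query_time = 1;
SELECT SLEEP(2);
-- start_time should be the statement's start (thd->start_time),
-- not the time the log row happened to be written:
SELECT start_time, query_time FROM mysql.slow_log
ORDER BY start_time DESC LIMIT 1;
```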
-
Sergei Golubchik authored
-
Sergei Golubchik authored
should fix numerous failures of the main.bootstrap test
-
Vicențiu Ciorbaru authored
Undo any Windows behaviour changes.
-
Oleksandr Byelkin authored
We can check only fields which take part in inserts.
-
Monty authored
Most things were wrong in the test suite. The one thing that was a bug was that table_map_id was in some places defined as ulong and in other places as ulonglong. On 64-bit Linux this is not a problem, as ulong == ulonglong, but on Windows this caused failures. Fixed by ensuring that all instances of table_map_id are ulonglong.
-
Monty authored
-
Monty authored
-
Monty authored
Fails with: query 'select ST_AsWKT(GeometryCollection(Point(44, 6), @g))' failed: ER_ILLEGAL_VALUE_FOR_TYPE (1367): Illegal non geometric '@`g`' value found during parsing
-
Monty authored
MDEV-33279 Disable transparent huge pages after page buffers have been allocated

The reason for disabling transparent huge pages (THP) is that they do not work well with MariaDB (or other databases; see the links in MDEV-33279). The effect of using THP is that MariaDB will use much (10x) more memory and will not be able to release memory back to the system. Disabling THP is done after all storage engines are started, to allow buffer pools and key buffers (big allocations) to be allocated as huge pages.
-
Monty authored
- Removed unused variable 'file' from MYSQL_BIN_LOG::open()
- Assigned an uninitialized variable in connect/tabext.cpp
-
Monty authored
Problem was that TMP_TABLE_PARAM was not properly initialized
-
Monty authored
optimizer_adjust_secondary_key_costs is added to provide 2 small adjustments to the 10.x optimizer cost model. This can be used in the case where the optimizer wrongly uses a secondary key instead of a clustered primary key.

The reason behind this change is that MariaDB 10.x does not take into account that, for engines like InnoDB, scanning a primary key can be up to 7x faster than scanning a secondary key + reading the row data through the primary key.

The different values for optimizer_adjust_secondary_key_costs are:
- optimizer_adjust_secondary_key_costs=0 - No changes to the current model.
- optimizer_adjust_secondary_key_costs=1 - Ensure that the cost of secondary indexes is at least 5x the cost of a clustered primary key (if one exists). This disables part of the worst_seek optimization described below.
- optimizer_adjust_secondary_key_costs=2 - Disable the "worst_seek optimization" and adjust filter cost slightly (add a cost of 1 if a filter is used).

The idea behind the 'worst_seek optimization' is that we limit the cost for all non-clustered ref accesses to the least of:
- best-rows-by-range (or all rows if no range found) / 10
- scan-time-table (roughly the number of file blocks to scan the table) * 3

In addition we also do not try to use rowid_filter if the number of rows estimated for 'ref' access is less than the worst_seek limitation.

The idea is that worst_seek tries to take into account that if we do a lot of accesses through a key, this is likely to be cached. However, it only does this for secondary keys, and not for clustered keys or index-only reads. The effects of worst_seek are:
- In some cases 'ref' will have a much lower cost than range or using a clustered key.
- Some possible rowid filters for secondary keys will be ignored.

When implementing optimizer_adjust_secondary_key_costs=2, I noticed that there are slightly different costs for how ref+filter and range+filter are calculated. This caused a lot of range and range+filter plans to change to ref+filter, which is not good, as range+filter provides the optimizer with a better estimate of how many accepted rows there will be in the result set. Adding an extra small cost (1 seek) when using a filter mitigated the above problems in almost all cases.

This patch should not be applied to MariaDB 11.0, as worst_seeks is removed in 11.0 and the cost calculation for clustered keys, secondary keys, index scan and filter is more exact.

Test case changes for --optimizer-adjust_secondary_key_costs=1 (fix secondary key costs to be 5x of primary key):
- stat_tables_innodb:
  - Complex change (probably ok as the number of rows is really small):
    - ref over 1 row changed to range over 10 rows with join buffer
    - ref over 5 rows changed to eq_ref
    - secondary ref over 1 row changed to ref of primary key over 4 rows
    - Change of key to use a longer key with index pushdown (a little bit worse but not significant)
    - Change to use secondary (1 row) -> primary (4 rows)
- rowid_filter_innodb:
  - index_merge (2 rows) & ref (1) -> all (23 rows) -> primary eq_ref

Test case changes for --optimizer-adjust_secondary_key_costs=2 (removal of worst_seeks & adjusted filter cost):
- stat_tables_innodb:
  - Join order change (probably ok as the number of rows is really small)
  - ref (5 rows) & ref (1 row) changed to range (10 rows & join buffer) & eq_ref
- selectivity_innodb:
  - ref -> ref|filter (ok)
- rowid_filter_innodb:
  - ref -> ref|filter (ok)
  - range|filter (64 rows) changed to ref|filter (128 rows). Ok, as ref|filter outputs the wrong number of rows in explain.
- range, range_mrr_icp:
  - ref (500 rows) -> ALL (1000 rows) (ok)
- select_pkeycache, select, select_jcl6:
  - ref|filter (2 rows) -> ref (2 rows) (ok)
- selectivity:
  - ref -> ref|filter (ok)
- range:
  - Change of 'filtered' but no stat or plan change (ok)
- selectivity:
  - ref -> ref+filter (ok)
  - Change of 'filtered' but no plan change (ok)
- join_nested_jcl6:
  - range -> ref|filter (ok, as only 2 rows)
- subselect3, subselect3_jcl6:
  - ref_or_null (4 rows) -> ALL (10 rows) (ok)
  - index_subquery (4 rows) -> ALL (10 rows) (ok)
- partition_mrr_myisam, partition_mrr_aria and partition_mrr_innodb:
  - Uses ALL instead of REF for a key value that is the same for > 50% of rows (good)
- order_by_innodb:
  - range (200 rows) -> ref (20 rows) + filesort (ok)
- subselect_sj2_mat:
  - One test changed: one ALL removed and replaced with eq_ref. Likely to be better.
- join_cache:
  - Changed ref over 60% of the rows to use hash join (ok)
- opt_tvc:
  - Changed to use eq_ref instead of ref, with a plan change (probably ok)
- opt_trace:
  - No worst/max seeks clipping (good)
  - Almost doubled range_scan_time and index_scan_time (ok)
- rowid_filter:
  - ref -> ref|filter (ok)
  - range|filter (77 rows) changed to ref|filter (151 rows). Probably ok, as ref|filter outputs the wrong number of rows in explain.

Reviewer: Sergei Petrunia <sergey@mariadb.com>
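Assuming the option is also exposed as a system variable, as the --optimizer-adjust_secondary_key_costs test runs suggest, usage would look like this (the table in the EXPLAIN is hypothetical):

```sql
-- 0: unchanged 10.x cost model (the default)
-- 1: secondary-key cost at least 5x the clustered primary-key cost
-- 2: disable the worst_seek optimization, add a small (1 seek) filter cost
SET SESSION optimizer_adjust_secondary_key_costs = 2;
EXPLAIN SELECT * FROM orders WHERE customer_id = 10;  -- 'orders' is invented
```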
-
Monty authored
In the case of calculating the cost for a ref access for which there exists a usable range, the variable keyread_tmp would always be 0. If there was also another index that could be used as a filter, the cost of that filter would be wrong. In many cases the 'worst_seeks optimization' would disable the filter from getting used, which is why this bug has not been noticed before. The next commit, which allows one to disable worst_seeks, will have a test case for this bug.
-
Monty authored
The old code collected a list of THDs, protected the THDs from getting deleted by locking two mutexes, and then later, in a separate loop, sent a kill signal to each THD. The problem with this approach is that, as THDs can be reused, the second time a THD is killed the mutexes can be taken in a different order, which signals failures in safe_mutex. Fixed by sending the kill signal directly instead of collecting the THDs in a list to be signaled later. This is the same approach we are using in kill_zombie_dump_threads().

Other things:
- Reset safe_mutex_t->locked_mutex when freed (safety fix)
-
Monty authored
This was needed to get semisync to work on Windows.
-