- 19 Jan, 2023 1 commit
-
Daniel Black authored
Move the scripts/CMakeLists.txt install links into INSTALL_SCRIPT. As a result, the linking of mariadb-install-db is no longer needed. INSTALL_SCRIPT components outside scripts/ (like rocksdb) now get the same treatment.
-
- 11 Jan, 2023 6 commits
-
Vladislav Vaintroub authored
mariadb-upgrade needs to accept the credential-manager parameter. At the moment it has no effect: all the credential-manager logic is encapsulated inside cli_connect, and mariadb-upgrade does not connect to the server itself (it invokes the cli instead).
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Add a checkbox in the MSI, and a parameter in mysql_install_db.exe. The effect is adding credential_manager=1 to the [client] section.
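As a hedged sketch, the resulting client configuration would look like the following (only the credential_manager=1 line under [client] comes from the commit; the file name and placement are illustrative):

```ini
# my.ini (illustrative location) -- sketch of the effect described above
[client]
credential_manager=1
```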
-
Vladislav Vaintroub authored
with default = off. Theoretically, there is a security risk in using it (any process that runs with the current user's credentials can read the password), therefore we do not use it by default.
-
Vladislav Vaintroub authored
-
Vladislav Vaintroub authored
Move the common code for client authentication, either interactive (reading from the command line) or using a provided password, into a new library.
-
- 25 Dec, 2022 33 commits
-
Sergei Golubchik authored
Wait until all three concurrent statements are completely finished before querying P_S. In particular, the "Logging slow query" stage happens after sending the OK packet but before the statement appears in events_statements_history.
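A minimal sketch of the kind of P_S check involved (the table and columns are standard performance_schema; the LIKE pattern is illustrative):

```sql
-- Each statement must be fully finished (including the
-- "Logging slow query" stage) before it shows up here.
SELECT sql_text, event_name
FROM performance_schema.events_statements_history
WHERE sql_text LIKE 'SELECT /* slow */%';
```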
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Monty authored
The old code counted selectivity twice for queries like: WHERE key_part1=1 AND key_part2 < 100 if the optimizer decided to use REF access on key_part1. The new code in best_access_path() that changes REF access to RANGE when the RANGE key is longer makes this issue less likely to happen. I was not able to create a test case for 11.0; however, if one ports this patch to a MariaDB version without the REF-to-RANGE change, the selectivity will be counted twice.
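A hedged sketch of the query shape in question (the table and index names are illustrative):

```sql
CREATE TABLE t1 (
  key_part1 INT,
  key_part2 INT,
  KEY k1 (key_part1, key_part2)
);
-- With REF access on key_part1 only, the old code also applied the
-- key_part2 < 100 selectivity a second time when computing rows_out.
EXPLAIN SELECT * FROM t1 WHERE key_part1 = 1 AND key_part2 < 100;
```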
-
Monty authored
The reason for this is that we call file->index_flags(index, 0, 1) multiple times in best_access_path() when optimizing a table. For example, in InnoDB the call is not trivial (4 ifs and 2 assignments). Now the function is inlined and is just a memory reference.

Other things:
- handler::is_clustering_key() and pk_is_clustering_key() are now inline.
- Added TABLE::can_use_rowid_filter() to simplify some code.
- Test if we should use a rowid_filter only if can_use_rowid_filter() is true.
- Added TABLE::is_clustering_key() to avoid a memory reference.
- Simplified some code using the fact that HA_KEYREAD_ONLY being true implies that HA_CLUSTERED_INDEX is false.
- Added a DBUG_ASSERT to TABLE::best_range_rowid_filter() to ensure we do not call it with a clustering key.
- Reorganized elements in struct st_key to get better memory alignment.
- Updated ha_innobase::index_flags() to not include HA_DO_RANGE_FILTER_PUSHDOWN for the clustered index.
-
Monty authored
- Increased the timeout for binlog_mysqlbinlog_raw_flush.test. The old timeout was not enough when running with --valgrind.
- Disabled ssl_timeout for --valgrind as it times out.
- Disabled binlog_truncate_multi_engine for --valgrind as it does restarts.
-
Monty authored
This could happen if mtr_grab_file() returned an empty result (happened to me).
-
Sergei Petrunia authored
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Fixes over previous patches: do tracing of attached conditions close to where we generate them. Fix the tracing code to print the right conditions.
-
Rex authored
MDEV-21092, MDEV-21095, MDEV-29997: Optimizer Trace for index condition pushdown, partition pruning, exists-to-in

Add Optimizer Tracing for:
- Index Condition Pushdown
- Partition Pruning
- Exists-to-IN optimization
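The new trace output is viewed through the standard optimizer-trace interface; a hedged sketch (the traced query and tables are illustrative):

```sql
SET optimizer_trace='enabled=on';
-- Any query exercising ICP, partition pruning, or EXISTS-to-IN, e.g.:
SELECT * FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.a = t1.a);
SELECT trace FROM information_schema.OPTIMIZER_TRACE;
SET optimizer_trace='enabled=off';
```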
-
Sergei Petrunia authored
Make sure the queries use the intended query plan
-
Sergei Petrunia authored
This seems to confuse windows.
-
Sergei Petrunia authored
-
Monty authored
This was done after discussions with Igor, Sanja and Bar. The main reason for removing the deprecation was to ensure that MariaDB stays backward compatible whenever possible.

Other things:
- Added statistics counters, mainly for the feedback plugin, covering:
  - INTO OUTFILE
  - INTO variable
  - INTO using the old syntax (at the end of the query)
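For reference, a hedged sketch of the end-of-query INTO placement whose deprecation was removed (table and column names are illustrative):

```sql
-- INTO OUTFILE at the end of the query (old syntax, no longer deprecated):
SELECT id, name FROM t1 INTO OUTFILE '/tmp/t1.txt';
-- INTO variable at the end of the query:
SELECT COUNT(*) FROM t1 INTO @cnt;
```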
-
Monty authored
-
Monty authored
In essence this means that we expect the user query to have at least one matching row in the end. This change will not affect the estimated rows for the plan, but will ensure that the cost of adding a table is not neglected because of the record count being too low.

The reason for this is that if we have a table combination that together has a very high selectivity, then the join record_count could become very low (close to 0). This would cause the costs for all future tables to be so small that they are irrelevant for the rest of the plan. This has been shown to be the case in some performance benchmarks and in a few mtr tests.

There is also still a problem in the selectivity calculations, as joining two tables in different orders causes a different estimation of total rows. This can be seen in selectivity_innodb.test, test 'Q20', where joining nation,supplier expects 1.111 rows_out while joining supplier,nation expects 0.04 rows_out. The reason for 0.04 is that the optimizer estimates 'supplier' to have 10 matching rows, and joining with nation (eq_ref) gives 1 row. However, the selectivity of n_name = 'UNITED STATES' makes the optimizer think that there will be only 0.04 matching rows. This patch avoids letting this "too low row count" affect the cost calculations.
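A hedged sketch of the Q20 asymmetry described above, using the TPC-H-style names from the test (STRAIGHT_JOIN fixes the join order for illustration):

```sql
-- supplier,nation order: 'supplier' is estimated at 10 matching rows,
-- nation is eq_ref (1 row), but the n_name selectivity pushes the
-- estimate down to ~0.04 rows_out; the other order expects ~1.111.
EXPLAIN SELECT *
FROM supplier STRAIGHT_JOIN nation
  ON n_nationkey = s_nationkey
WHERE n_name = 'UNITED STATES';
```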
-
Monty authored
- Changed 'WARNING' of type "You need to use --log-bin to make ... work" to 'Note'.
- Only print startup Notes if log_warnings >= 4.
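A quick sketch of how to get those startup Notes back (log_warnings is the standard server variable; the threshold of 4 comes from the commit):

```sql
-- Startup Notes are printed only when log_warnings >= 4; set it at
-- startup (--log-warnings=4) or for the running server:
SET GLOBAL log_warnings = 4;
```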
-
Monty authored
-
Monty authored
"select * from information_schema.tables limit 1" was giving the following warning in the log: [ERROR] Invalid (old?) table or database name '#rocksdb'
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Basic printout for join and table execution costs.
-
Monty authored
-
Monty authored
- Simplified the test by setting read_time=DBL_MAX at the start of the loop if FORCE INDEX is used.
- No need to test for 'group by' as the cost compare should handle it.
- Only one test change, where an index scan was replaced with a table scan (correct).
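A hedged sketch of the FORCE INDEX case the simplification targets (table and index names are illustrative):

```sql
-- With FORCE INDEX, read_time starts at DBL_MAX, so a full table scan
-- can never win the cost comparison against the forced index.
SELECT * FROM t1 FORCE INDEX (k1)
WHERE key_part1 BETWEEN 1 AND 100;
```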
-
Monty authored
- The comment in test_if_skip_sort_order was removed, together with an unneeded test of 'select'.
-
Monty authored
Added override to a few functions in ha_partition.h
-
Monty authored
-
Monty authored
In the case where one has an old Aria log file that ends with an Aria checkpoint, and the server restarts after the next recovery, just after creating a new Aria log file (of 8K), the Aria recovery code would abort. If one then tried to delete all Aria log files (but not the aria_control_file), the server would crash during recovery.

The problem was that translog_get_last_page_addr() would regard a log file of exactly 8K as illegal, and the rest of the code could not handle this case. Another issue was that if there was a crash directly after the log file head was written to the next page, the code in translog_get_next_chunk() would crash.

This patch fixes most of the issues, but not all. For Sanja to look at!

Things fixed:
- Added code to ignore 8K log files.
- Removed the ASSERT in translog_get_next_chunk() that checks if the page only contains the log page header.
-
Monty authored
I spent 4 hours of work and 12 hours of testing to try to find the reason for Aria crashing in recovery when starting a new test, in which case the 'data directory' should be a copy of "install.db", but the aria_log.00000001 content was not correct.

The following changes are mostly done to make it a bit easier to find out more in case of future similar crashes:
- Mark last_checkpoint_lsn volatile (safety).
- Write checkpoint messages to aria_recovery.trace.
- When compiling with DBUG and with HAVE_DBUG_TRANSLOG_SRC, use checksums for Aria log pages. We cannot have this on by default for DBUG servers yet, as there are bugs when changing the CRC between restarts.
- Added a message to mtr --verbose when copying the data directory.
- Removed an extra linefeed in the Aria recovery message (cleanup).
-
Monty authored
This includes:
- Cleanup and optimization of the filtering and pushdown engine code.
- Adjusted costs for rowid filters (based on extensive testing and profiling).

This made two small changes to the handler_rowid_filter_is_active() API:
- One should not call it with a zero pointer!
- One does not need to call handler_rowid_filter_is_active() for every row anymore. It is enough to check if the filter is active by calling it during index_init() or when handler::rowid_filter_changed() is called.

The changes were made to avoid unnecessary function calls and checks when pushdown conditions and rowid_filter are not used.

Updated the cost of rowid_filter_lookup() to be closer to reality. The old cost was based only on rowid_compare_cost. This is now changed to take into account the overhead of checking the rowid.

Changed the Range_rowid_filter class to use DYNAMIC_ARRAY directly instead of Dynamic_array<>. This was done to be able to use the new append_dynamic() functions, which give a notable speed improvement compared to the old code. Removing the abstraction also makes the code easier to understand.

The cost of filtering is now slightly lower than before, which is reflected in some test cases that now use rowid filters.
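At the SQL level, a rowid filter can apply to query shapes like the following hedged sketch (table, indexes, and data are illustrative):

```sql
CREATE TABLE orders (
  id INT PRIMARY KEY,
  customer_id INT,
  order_date DATE,
  KEY k_cust (customer_id),
  KEY k_date (order_date)
);
-- One index retrieves the rows while the other, if selective enough,
-- is used as a rowid filter; EXPLAIN shows it next to the chosen key.
EXPLAIN SELECT * FROM orders
WHERE customer_id = 10 AND order_date < '2022-01-01';
```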
-
Monty authored
This helps with debugging, as 'Query: ' in DBUG traces will show something useful for internal transactions instead of just "".
-