- 11 Mar, 2020 1 commit
-
-
Oleksandr Byelkin authored
-
- 09 Mar, 2020 4 commits
-
-
Monty authored
-
Monty authored
The default keyread_time() was optimized for blocks and not suitable for HEAP. The effect was that HEAP preferred table scans over ranges for btree indexes. Also fixed get_sweep_read_cost() for HEAP tables.
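A minimal sketch of the cost-model idea, with illustrative names only (not the actual ha_heap code): a block-oriented keyread_time() estimate over-charges an engine that keeps all rows in memory, while a per-row estimate makes index ranges competitive with table scans.
```cpp
// Hypothetical illustration, not MariaDB code: compare a block-oriented
// key-read estimate with a per-row, in-memory one.
#include <cstdint>

struct cost_model_example
{
  // Default, block-oriented estimate: rows are assumed to be fetched in
  // blocks, which over-charges an engine that has no blocks at all.
  static double keyread_time_block(uint64_t rows, double block_read_cost)
  {
    const uint64_t rows_per_block= 100;              // assumed value
    uint64_t blocks= (rows + rows_per_block - 1) / rows_per_block;
    return (double) blocks * block_read_cost;
  }

  // HEAP-style estimate: every row is already in memory, so a key read
  // costs roughly one cheap in-memory lookup per row.
  static double keyread_time_heap(uint64_t rows, double row_lookup_cost)
  {
    return (double) rows * row_lookup_cost;
  }
};
```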
-
Monty authored
- Move testing of my_writer to inline functions to avoid calls
- Made more functions inline. Especially thd->thread_started() is now very optimized!
- Moved the Opt_trace_stmt class to opt_trace_context.h to get critical functions inline
-
Monty authored
- Added unlikely() to optimize for optimizer trace not being enabled
- Made THD::trace_started() inline
- Added 'if (trace_enabled())' around some potentially expensive code (not many found)
- Added ASSERTs to ensure we don't make expensive optimizer trace calls if optimizer trace is not enabled
- Added length to Json_writer functions to speed up buffer writes when optimizer trace is enabled
- Changed LEX_CSTRING argument handling to not send the full struct to the writer function; on_add_str() functions now trust length arguments
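A minimal sketch of the pattern described above, with illustrative names rather than the real THD/Json_writer API: the disabled-trace path is a single inlined check marked unlikely(), the expensive call takes an explicit length, and an assert documents that it must only run with tracing enabled.
```cpp
// Hypothetical sketch of the guard pattern; names are not the real API.
#include <cassert>
#include <cstddef>
#include <string>

#if defined(__GNUC__) || defined(__clang__)
#  define UNLIKELY(x) __builtin_expect(!!(x), 0)
#else
#  define UNLIKELY(x) (x)
#endif

struct Trace_example
{
  bool enabled= false;
  std::string buffer;

  bool trace_started() const { return enabled; }   // cheap inline check

  void add_str(const char *str, size_t length)     // explicit length: no
  {                                                // strlen() on the hot path
    assert(trace_started());                       // only called when tracing
    buffer.append(str, length);
  }
};

void maybe_trace(Trace_example &trace)
{
  if (UNLIKELY(trace.trace_started()))             // almost always false
    trace.add_str("expensive diagnostic", 20);
}
```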
-
- 06 Mar, 2020 4 commits
-
-
Oleksandr Byelkin authored
-
Oleksandr Byelkin authored
-
Stepan Patryshev authored
-
Oleksandr Byelkin authored
-
- 05 Mar, 2020 3 commits
-
-
Vicențiu Ciorbaru authored
The .pc file installed by mariadb mentions archful directories and therefore must be archful itself. This fixes MDEV-14340.
-
Vicențiu Ciorbaru authored
-
Anel Husakovic authored
-
- 02 Mar, 2020 1 commit
-
-
Vladislav Vaintroub authored
The status variable Threads_connected can temporarily be bigger than max_connections+1. If SHOW STATUS LIKE "Threads_connected" runs after ER_CON_COUNT_ERROR is sent to the client, but before the counter is decremented, Threads_connected can differ from the expected value.
-
- 29 Feb, 2020 1 commit
-
-
Roman Nozdrin authored
-
- 28 Feb, 2020 3 commits
-
-
Vicențiu Ciorbaru authored
-
Marko Mäkelä authored
ha_partition: Remove redundant 'virtual' keywords and add missing 'override'. FIXME: handler::table_type() is not declared virtual, yet ha_partition and ha_sequence are seemingly trying to override it.
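A generic C++ illustration (not the real handler/ha_partition classes) of why 'override' catches mistakes that a redundant 'virtual' does not: 'override' makes the compiler verify that a matching base virtual function exists, and a non-virtual base function can only be hidden, never overridden.
```cpp
// Illustration only; base/derived stand in for handler/ha_partition.
struct base
{
  virtual int row_type() const { return 0; }
  const char *table_type() const { return "base"; }   // note: not virtual
};

struct derived : base
{
  // OK: overrides base::row_type; repeating 'virtual' is redundant.
  int row_type() const override { return 1; }

  // Would not compile: there is no virtual base::table_type to override.
  // const char *table_type() const override { return "derived"; }

  // Compiles, but merely hides base::table_type -- calls through a
  // base pointer still reach the base version.
  const char *table_type() const { return "derived"; }
};
```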
-
Thirunarayanan Balathandayuthapani authored
- The flag ALTER_STORED_COLUMN_TYPE was set while doing a varchar extension for a partitioned table. If all partitions support can_be_converted_by_engine(), then it should be set to ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE instead.
-
- 26 Feb, 2020 3 commits
-
-
Oleksandr Byelkin authored
Main select should be pushed first in case of SET STATEMENT.
-
Ben Boeckel authored
When installing, no headers are installed into the parent directory of `${includedir}`.
-
Alexey Bychko authored
cmake cannot detect OpenSSL headers on Mac during checks. The solution is to add the path to the OpenSSL includes to CMAKE_REQUIRED_INCLUDES before the checks.
-
- 25 Feb, 2020 6 commits
-
-
Alexey Bychko authored
Added cmake checks for the pam_ext.h and pam_appl.h headers. Added a check for pam_syslog() and a pam_syslog() implementation if it doesn't exist. All cmake checks are performed from inside the plugin.
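A sketch of the kind of fallback a plugin can provide when pam_syslog() is missing, forwarding to vsyslog(); the HAVE_PAM_SYSLOG macro name here is an assumption, not necessarily the symbol the cmake check defines.
```cpp
// Sketch of a pam_syslog() fallback; guard macro name is assumed.
#include <stdarg.h>
#include <syslog.h>
#include <security/pam_appl.h>

#ifndef HAVE_PAM_SYSLOG
static void pam_syslog(const pam_handle_t *pamh, int priority,
                       const char *fmt, ...)
{
  (void) pamh;                       // handle not needed for plain syslog
  va_list args;
  va_start(args, fmt);
  vsyslog(priority, fmt, args);      // forward the formatted message
  va_end(args);
}
#endif
```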
-
Sergei Petrunia authored
-
Igor Babaev authored
join_cache_level=6+
The patch fixes two similar bugs in commit 8eeb689e, which added multi_range_read support to partitions. The commit opened a possibility to join a partitioned table using BKA+MRR. However, in some cases it could lead to wrong results or even crashes. This could happen when
- index condition pushdown was used to join the table, or
- the joined table was an inner table of an outer join and the 'not exist' optimization was applied, or
- the joined table was the inner table of a semi-join and the first match optimization was applied.
The bugs were in the code of the call-back functions
- partition_multi_range_key_skip_record() and
- partition_multi_range_key_skip_index_tuple().
Each of these functions consists only of an invocation of another function, yet a wrong parameter was passed in this invocation. The fix was suggested by Sergey Petrunia and is apparently in line with the original design. The corresponding comprehensive test cases demonstrating the problems caused by the bugs were constructed by me.
-
Daniel Black authored
No adverse effects since this was made a null function in 6b53f9d7. This function had the last remaining cmake CMP0026 violation.
-
Daniel Black authored
Enabling -DENABLE_DTRACE=ON in cmake is particularly noisy with CMP0026 errors. Fixed in the same way as 6b53f9d7.
-
Daniel Black authored
-
- 23 Feb, 2020 1 commit
-
-
seppo authored
If an async replication slave thread conflicts with cluster replication, then the async slave transaction should be BF aborted and, depending on the state of the async slave transaction's execution, potentially also replayed. There were problems in this BF abort implementation and the replaying was not started. This pull request contains fixes which make sure that if the async slave thread is marked to abort and replay, it will carry out the rollback and release all locks and resources before starting the replaying. After replaying, the async slave transaction is treated as successful, so the slave thread will continue as usual, handling the next replication event. There is also a new mtr test, galera.galera_slave_replay, which stresses both a certification failure for the async slave thread and a successful BF abort followed by replaying.
-
- 22 Feb, 2020 1 commit
-
-
Anel Husakovic authored
MDEV-21374: When "--help --verbose" prints out configuration file paths, the --defaults-file option is not considered
* The `--defaults-file` option is shown in `--help --verbose` only if applied
* `--defaults-extra-file` is now shown correctly in `--help --verbose`; previously it was treated as a directory with `my.cnf` appended
-
- 20 Feb, 2020 4 commits
-
-
Sergei Petrunia authored
Part#2: cleanup: In part 1 of the fix, the DS-MRR implementation would peek into the JOIN_TAB to get the rowid filter from table->reginfo.join_tab->rowid_filter. This doesn't look good from a code isolation standpoint (why should a storage engine assume it is used through a JOIN_TAB?). Fixed this by storing the 'un-pushed' rowid_filter in the DsMrr_impl structure. The filter survives across multi_range_read_init() calls. It is discarded when somebody calls index_end() or rnd_end() and cleans up the DsMrr_impl.
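A highly simplified illustration of the design choice, with made-up names (not the actual DsMrr_impl or rowid filter classes): the MRR implementation keeps the un-pushed filter itself, so it never has to reach back into optimizer structures such as JOIN_TAB, and it discards the filter together with the rest of its state.
```cpp
// Hypothetical stand-ins; real classes and members differ.
struct Rowid_filter_stub
{
  // A real filter would look the rowid up in its container.
  bool check(const char *rowid) const { return rowid != nullptr; }
};

struct Mrr_impl_stub
{
  Rowid_filter_stub *rowid_filter= nullptr;   // survives multi-range init calls

  void init(Rowid_filter_stub *filter)
  {
    if (filter)                               // remember the filter handed over
      rowid_filter= filter;
  }

  bool row_passes(const char *rowid) const
  {
    return !rowid_filter || rowid_filter->check(rowid);
  }

  void close()                                // the index_end()/rnd_end() path
  {
    rowid_filter= nullptr;                    // discarded with the other state
  }
};
```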
-
Thirunarayanan Balathandayuthapani authored
- Add warning suppression in misc_debug2 test.
-
Anel Husakovic authored
- Delete variable HAVE_PTHREAD_CONDATTR_SETCLOCK and check
- Delete second HAVE_PTHREAD_KEY_DELETE
-
Daniel Black authored
Detecting the CPUs based on sysconf of the online CPUs can significantly overestimate the number of CPUs available. Whether via numactl, cgroups, taskset, systemd constraints, docker containers, and probably other mechanisms, the number of threads mysqld can run on can be quite a bit lower. As such we use the pthread_getaffinity_np function on Linux and FreeBSD (identical API) to get the number of CPUs. The number of CPUs is the default for thread_pool_size, and too high a default will result in large memory usage and high context switching overhead. Closes PR #922
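A Linux-flavoured sketch of the approach (not the exact server code; FreeBSD's equivalent differs slightly in headers and types): count the CPUs in the thread's affinity mask, which reflects taskset, cgroup cpusets, numactl and similar constraints, and fall back to the online-CPU count only if the mask cannot be queried.
```cpp
#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1        /* for pthread_getaffinity_np / CPU_COUNT */
#endif
#include <pthread.h>
#include <sched.h>
#include <unistd.h>

static unsigned int usable_cpu_count()
{
  cpu_set_t mask;
  CPU_ZERO(&mask);
  // The affinity mask reflects the CPUs this thread may actually run on.
  if (pthread_getaffinity_np(pthread_self(), sizeof(mask), &mask) == 0)
  {
    int cpus= CPU_COUNT(&mask);
    if (cpus > 0)
      return (unsigned int) cpus;
  }
  // Fallback: all online CPUs (the old, possibly over-counting behaviour).
  long online= sysconf(_SC_NPROCESSORS_ONLN);
  return online > 0 ? (unsigned int) online : 1;
}
```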
-
- 19 Feb, 2020 3 commits
-
-
Sergei Petrunia authored
(Backport to 10.3) The partitioning storage engine now supports MRR but doesn't support Index Condition Pushdown (aka ICP). This causes counter-intuitive query plans for queries that use BKA and conditions that depend on index fields:
- If the condition refers to other tables, BKA's variant of ICP is used to handle it.
- If the condition depends on this table only, the optimizer will try to use regular ICP for it, which will fail because the storage engine doesn't support ICP.
Make the optimizer smarter in the second case: if we were not able to use regular ICP, use BKA's variant of ICP.
-
Sergei Petrunia authored
The two functions have different signatures. Use "using ..." to prevent shadowing.
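A generic C++ illustration of the issue (base/derived are stand-ins, not the actual MariaDB classes): declaring a function with a different signature in the derived class hides the base-class overload, and a using-declaration brings it back into scope.
```cpp
// Illustration only: name hiding across class scopes.
struct base
{
  int read(int key) { return key; }
};

struct derived : base
{
  using base::read;                  // keep base::read(int) visible

  // Different signature: without the using-declaration this would hide
  // base::read(int), and d.read(42) below would fail to compile.
  int read(int key, int part) { return key + part; }
};

int main()
{
  derived d;
  return d.read(42) + d.read(1, 2);  // both overloads are callable
}
```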
-
Igor Babaev authored
This patch fixes the following defects/bugs.
1. If the BKA[H] algorithm was used to join a table for which the optimizer had decided to employ a rowid filter, the filter was actually not built.
2. The patch for bug MDEV-21356, which added code canceling the pushing of a rowid filter into an engine for a table joined using BKA[H] and MRR, was not quite correct for the InnoDB engine, because the cancellation was done after InnoDB code had already bound the pushed filter to internal InnoDB structures.
-
- 18 Feb, 2020 2 commits
-
-
Eugene Kosov authored
Do not rebuild the index when its key part is converted from utf8mb3 to utf8mb4 but the key part stays the same. dict_index_add_to_cache(): assert that prefix_len is divisible by mbmaxlen. ha_innobase::compare_key_parts(): compare key part length in characters instead of bytes.
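A small arithmetic sketch of why comparing in characters matters (illustrative helper, not the actual InnoDB code): a 10-character index prefix is 30 bytes under utf8mb3 (mbmaxlen=3) and 40 bytes under utf8mb4 (mbmaxlen=4), so the byte lengths differ while the character lengths do not, and no index rebuild is needed.
```cpp
#include <cstdint>

// Hypothetical helper: prefix lengths are equal when measured in characters.
static bool same_prefix_in_chars(uint32_t old_prefix_bytes, uint32_t old_mbmaxlen,
                                 uint32_t new_prefix_bytes, uint32_t new_mbmaxlen)
{
  // Assumes the stored prefix length is a multiple of mbmaxlen, which is
  // what the added assertion in dict_index_add_to_cache() checks.
  return old_prefix_bytes / old_mbmaxlen == new_prefix_bytes / new_mbmaxlen;
}

// same_prefix_in_chars(30, 3, 40, 4) == true: no index rebuild needed.
```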
-
Eugene Kosov authored
Engine-specific code moved to the engine.
-
- 17 Feb, 2020 2 commits
-
-
Jan Lindström authored
Add missing wait condition before we check the end database state.
-
Jan Lindström authored
-
- 16 Feb, 2020 1 commit
-
-
Jan Lindström authored
* Remove tests that will not be supported in that release.
* Make sure that the correct tests are disabled and have MDEVs.
* Sort test names.
This should not be merged upwards.
-