- 19 Feb, 2019 19 commits
-
-
Alexander Barkov authored
For example, with this cmake command line: cmake . -DCMAKE_C_FLAGS="-DDBUG_ASSERT_AS_PRINTF" \ -DCMAKE_CXX_FLAGS="-DDBUG_ASSERT_AS_PRINTF"
-
Sergey Vojtovich authored
Apparently DBUG_ASSERT() can co-exist with DBUG_OFF when -DCMAKE_CXX_FLAGS="-DDBUG_ASSERT_AS_PRINTF". Removed the assertion, as it is useless now that the type is unsigned.
-
Teemu Ollakka authored
The replayer did not signal replaying waiters. Added mysql_cond_broadcast() after replaying is over. The assertion on the client error failed after a replay attempt failed due to a certification failure. At this point the transaction does not go through the client state, so the client error cannot be overridden. Assign ER_LOCK_DEADLOCK to the thd directly instead. Use a timed cond wait when waiting for replayers to finish, and check whether the transaction has been BF aborted during the wait.
-
Jan Lindström authored
Transaction XID is not initialized before transaction is started.
-
Igor Babaev authored
-
mkaruza authored
Temporarily disable WSREP while executing RESET MASTER. In a situation where 2 nodes are both master/slave, first stop the slave on both and then reset master. Enforce a stricter causality check with wsrep_sync_wait.
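A minimal sketch of the procedure described above, assuming two nodes that replicate from each other as master/slave (the wsrep_sync_wait value is illustrative):
    -- on both nodes
    STOP SLAVE;
    SET GLOBAL wsrep_sync_wait = 15;  -- enforce the stricter causality check mentioned above
    -- then, on each node
    RESET MASTER;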
-
mkaruza authored
Check for the garbd executable on different paths. If it is not found, terminate the test.
-
Igor Babaev authored
-
Igor Babaev authored
Optimized the code that removed multiple equalities pushed from HAVING into WHERE. Now this removal is postponed until all multiple equalities are eliminated in substitute_for_best_equal_field().
-
Vicențiu Ciorbaru authored
When sampling data through ANALYZE TABLE, use the estimator to get a better estimate of avg_frequency instead of just using the raw sampled data.
-
Vicențiu Ciorbaru authored
The variable controls the amount of sampling ANALYZE TABLE performs. If ANALYZE TABLE with histogram collection is too slow, one can reduce the time taken by setting analyze_sample_percentage to a lower percentage of the total number of rows. Setting it to 0 will use a formula to compute how many rows to sample: the number of rows collected is capped to a minimum of 50000 and increases logarithmically with a coefficient of 4096. The coefficient is chosen so that we expect an error of less than 3% in our estimations, according to the paper "Random Sampling for Histogram Construction: How much is enough?" by Surajit Chaudhuri, Rajeev Motwani, Vivek Narasayya, ACM SIGMOD, 1998. The drawback of sampling is that the avg_frequency number is computed imprecisely and will yield a smaller value than the real one.
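A minimal usage sketch, assuming a hypothetical table t1 (PERSISTENT FOR ALL triggers engine-independent statistics and histogram collection):
    SET SESSION analyze_sample_percentage = 0;   -- let the server compute the sample size itself
    ANALYZE TABLE t1 PERSISTENT FOR ALL;         -- histograms are built from the sampled rows
    SET SESSION analyze_sample_percentage = 100; -- back to full scans (the default)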
-
Vicențiu Ciorbaru authored
The add method does not need to provide the row order number. It was only used to detect whether the minimum/maximum value had been populated yet, so as to force an update on the first encounter of a value.
-
Vladislav Vaintroub authored
Remove CMake INSTALL command for COMPONENT DataFiles. mysql_install_db.exe will calculate default datadir, so that it can be called without any parameters.
-
Varun Gupta authored
The value for eq_range_index_dive_limit is increased to 200.
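A hedged illustration with a hypothetical table t1 indexed on key_col; with the new default, index dives are still used as long as the number of equality ranges stays below the limit:
    SET SESSION eq_range_index_dive_limit = 200;  -- the new default
    SELECT * FROM t1 WHERE key_col IN (1, 2, 3);  -- 3 equality ranges < 200, so index dives are used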
-
Varun Gupta authored
optimize_join_buffer_size is switched ON.
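Assuming this refers to the optimizer_switch flag of the same name, the new default corresponds to:
    SET SESSION optimizer_switch = 'optimize_join_buffer_size=on';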
-
Galina Shalygina authored
-
Vladislav Vaintroub authored
-
Igor Babaev authored
- Removed dead code
- Renamed a function
- Removed a parameter that is not needed
- Corrected comments
-
Teemu Ollakka authored
The check for the streaming replication logging format in THD::decide_logging_format() was performed also for DDLs running in TOI mode. This caused DROP DATABASE to fail if streaming replication was enabled. Added a check for the THD wsrep execution mode so that the logging-format check is performed only if the THD is in local processing mode (i.e. not TOI). Added the galera_sr_create_drop test to verify that CREATE/DROP statements pass even if streaming replication is on.
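A sketch of the scenario the new test covers, assuming streaming replication is enabled through wsrep_trx_fragment_size (the database name is illustrative):
    SET SESSION wsrep_trx_fragment_size = 1;  -- enable streaming replication
    CREATE DATABASE db1;                      -- DDL runs in TOI and must not hit the SR logging-format check
    DROP DATABASE db1;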
-
- 18 Feb, 2019 21 commits
-
-
Galina Shalygina authored
In the tree bb-10.4-mdev7486, the crash was caused by a problem similar to the one in mdev-16765: Item_cond::excl_dep_on_group_fields_for_having_pushdown() was missing.
-
Galina Shalygina authored
In the tree bb-10.4-mdev7486, the crash was caused because, after the merge of the bb-10.4-mdev7486 and 10.4 branches, the changes for mdev-16727 were missing.
-
Galina Shalygina authored
Pushing UDF functions was allowed, and that caused a crash. To fix this, pushing UDF functions from HAVING into WHERE is now forbidden.
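For illustration only, with a hypothetical UDF my_udf(): a HAVING condition that calls a UDF now stays in HAVING instead of being pushed into WHERE.
    SELECT t1.a, MAX(t1.b) FROM t1 GROUP BY t1.a
    HAVING my_udf(t1.a) > 0 AND MAX(t1.b) > 12;   -- my_udf(t1.a) > 0 is not pushed into WHERE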
-
Marko Mäkelä authored
innobase_instant_try(): Assert that the column length of fixed-length columns is not changing.
-
Marko Mäkelä authored
If we instantly change the size of a fixed-length field and treat it as kind-of variable-length, then we will need conversions between old column values and new ones. I tried adding such a conversion to row_build(), but then I noticed that more conversions would be needed, because old values still appeared in a freshly rebuilt secondary index, causing a mismatch when trying to search with the correct longer value that was converted in my provisional fix to row_build(). So, we will revert the essential part of MDEV-15563: Instant ROW_FORMAT=REDUNDANT column extension (commit 22feb179), but not remove any tests.
-
Teemu Ollakka authored
Make sure that the Annotate_rows_log_event is written into the binlog only for the first fragment of the current statement. Also avoid flushing the pending rows event when calculating bytes generated by the transaction. Added and recorded a test which verifies that the binlog contains only one Annotate_rows_log_event per statement with various SR settings. Re-recorded mysql-wsrep-features#136, which produced different output with the excessive log events suppressed.
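A hedged sketch of what the new test checks, assuming a table t1 and streaming replication enabled through wsrep_trx_fragment_size:
    SET SESSION binlog_annotate_row_events = ON;
    SET SESSION wsrep_trx_fragment_size = 1;
    INSERT INTO t1 VALUES (1), (2), (3);
    -- the binlog is expected to contain a single Annotate_rows event for this statement,
    -- written with the first fragment only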
-
Marko Mäkelä authored
innobase_build_col_map_add(): Do not assume that old_field->pack_length() equals field->pack_length(). Fix submitted by Aleksey Midenkov. innobase_instant_try(): Assert that the column length of fixed-length NOT NULL columns is only changing for ROW_FORMAT=REDUNDANT.
-
Varun Gupta authored
More test coverage added for the optimizer trace.
-
Daniele Sciascia authored
-
Sergei Petrunia authored
Followup: update test results
-
Sergei Petrunia authored
Change the defaults:
-histogram_size=0 +histogram_size=254
-histogram_type=SINGLE_PREC_HB +histogram_type=DOUBLE_PREC_HB
Adjust the testcases:
- Some have ignorable changes in EXPLAIN outputs and more counter increments due to EITS table reads.
- Testcases that meaningfully depend on the old defaults are changed to use the old values (see the sketch below).
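Testcases that depend on the old behaviour can pin the old values explicitly, roughly like this:
    SET histogram_size = 0, histogram_type = 'SINGLE_PREC_HB';  -- the old defaults
    -- the new defaults are histogram_size = 254 and histogram_type = 'DOUBLE_PREC_HB'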
-
Marko Mäkelä authored
-
Marko Mäkelä authored
The Create_field::charset can contain garbage for columns that the SQL layer does not consider to be string columns. InnoDB considers BIT a string column for historical reasons (and backward compatibility with old persistent InnoDB metadata), and therefore it checked the charset. Field::charset() is consistently my_charset_bin for BIT, so we can trust that one.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
Fix the clang warning "'this' pointer cannot be null in well-defined C++ code; pointer may be assumed to always convert to true". The only caller of TABLE::best_range_rowid_filter_for_partial_join() already seems to be assuming that s->table != NULL.
-
mkaruza authored
When the node is a JOINER and bin-log is enabled but bin-log-index is not set in the configuration, we use a NULL pointer, which causes a segfault. Fixed by checking for a NULL pointer before using the variable.
-
Galina Shalygina authored
A condition can be pushed from the HAVING clause into the WHERE clause if it depends only on the fields that are used in the GROUP BY list or on fields that are equal to grouping fields. Aggregate functions can't be pushed down.
How the pushdown is performed, on an example:
SELECT t1.a,MAX(t1.b) FROM t1 GROUP BY t1.a HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b) FROM t1 WHERE (t1.a>2) GROUP BY t1.a HAVING (MAX(c)>12);
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of the select that depends only on the fields that are used in the GROUP BY list of the select (directly or indirectly through equalities).
2. Save cond as a condition that can be pushed into the WHERE clause of the select.
3. Remove cond from the HAVING clause if it is possible.
The optimization is implemented in the function st_select_lex::pushdown_from_having_into_where(). A new test file, having_cond_pushdown.test, is created.
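A further sketch of the "fields that are equal to grouping fields" case, with hypothetical columns: t1.c is equal to the grouping field t1.a, so the condition on t1.c can be pushed.
    SELECT t1.a, MAX(t1.b) FROM t1
    WHERE t1.a = t1.c
    GROUP BY t1.a
    HAVING (t1.c > 2) AND (MAX(t1.b) > 12);   -- t1.c > 2 can be moved into WHERE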
-
Vladislav Vaintroub authored
-
Teemu Ollakka authored
Calls to wsrep_after_statement() were missing on the PS protocol codepath. Added calls after mysqld_stmt_execute() and mysqld_stmt_bulk_execute().
-
Teemu Ollakka authored
Galera versions below 4.x do not generate a unique sequence number for view events. Take this into account when writing the SE checkpoint, to avoid a debug assertion in InnoDB.
-
Daniele Sciascia authored
* Created new binlog-header file
* Fixed warning on SELECT INTO DUMPFILE
* Re-recorded the test result
-