- 03 Jan, 2018 1 commit
-
Monty authored
-
- 02 Jan, 2018 6 commits
-
Monty authored
- The fix in mf_iocache2.c was just to fix a compiler warning
-
Monty authored
This is to be able to better track where things go wrong
-
Monty authored
-
Monty authored
Make rpl_ctype_latin1 more portable by printing names in hex. Also only run if lower_case_table_names is 0, as this affects the result.
-
Varun Gupta authored
Follow-up fix for MDEV-12458
-
Monty authored
- openssl_1 errors were system dependent
- Used the non-portable UINT32_MAX instead of UINT_MAX32
-
- 01 Jan, 2018 6 commits
-
Monty authored
Conflicts:
  cmake/make_dist.cmake.in
  mysql-test/r/func_json.result
  mysql-test/r/ps.result
  mysql-test/t/func_json.test
  mysql-test/t/ps.test
  sql/item_cmpfunc.h
-
Monty authored
Disabled warnings for the directory option as this depends on compilation options.
-
Monty authored
-
Monty authored
Code in QUICK_RANGE_SELECT::init_ror_merged_scan() could theoretically have caused crashes if it was ever called from an update or delete. This also uncovered a bug in the vcol/range.result file.
-
Monty authored
-
Monty authored
-
- 31 Dec, 2017 1 commit
-
Varun Gupta authored
Added a system variable rocksdb_git_hash to MyRocks which tells us the version of RocksDB being used
-
- 30 Dec, 2017 1 commit
-
Varun Gupta authored
Currently EXPLAIN FORMAT=JSON does not show the order direction of fields used during filesort. This patch removes this limitation.
-
- 29 Dec, 2017 5 commits
-
Elena Stepanova authored
-
Monty authored
The reason for adding this was that while testing mysqlbinlog on a replication event with 3G of event output, Linux failed to read the whole file into memory with one read (only got 2G on the first read even if the file had just been written).
- Don't reset info->error on write error in IO_CACHE.
- In case of write_error in IO_CACHE, always return -1.
- Fixed wrong result from my_read() when using MY_FULL_IO. Also don't give an error in case of retry.
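For context on the MY_FULL_IO behaviour mentioned above: a single read() may legitimately return fewer bytes than requested (as seen with the 3G file), so a "full" read has to loop until everything arrives. A minimal sketch of that retry pattern in C, using a hypothetical helper full_read (not MariaDB's actual my_read() implementation):

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Read exactly 'count' bytes unless EOF or a real error occurs.
       Returns the number of bytes read, or -1 on error.
       Illustrative only; my_read()/MY_FULL_IO differs in detail. */
    static ssize_t full_read(int fd, void *buf, size_t count)
    {
      size_t total = 0;
      while (total < count)
      {
        ssize_t n = read(fd, (char *) buf + total, count - total);
        if (n < 0)
        {
          if (errno == EINTR)
            continue;                 /* interrupted: retry, not an error */
          return -1;                  /* real I/O error */
        }
        if (n == 0)
          break;                      /* EOF: return the short count */
        total += (size_t) n;          /* partial read: keep looping */
      }
      return (ssize_t) total;
    }

The point of the pattern is the same as in the bullet list above: a short or interrupted read triggers a retry rather than an error, and only a genuine I/O failure is reported to the caller.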
-
Monty authored
Main problem was that no log-event print function checked for disk full errors on the IO_CACHE. All changes in this patch only affect mysqlbinlog, not the server!
- Changed all log-event print functions to return 1 on error
- Fixed memory usage when not using --flashback
- Added printing of the number of rows in row events. Can be disabled with --print-row-count=0
- Print annotated rows when using mysqlbinlog --short-form
- Fixed that mysqlbinlog --debug works
- Fixed create_drop_binlog.test test failure
- Reorganized fields in PRINT_EVENT_INFO according to size to optimize storage
- Don't change print_row_event_position or print_row_counts if set by the user
- Removed some tests of whether the argument to my_free() is 0
- base64-output=never is now supported and works in all contexts
- Updated help information for --base64-output and --short-form
- print_row_count is now on by default. Reset automatically if --short-form is used
- Removed obsolete warning for MySQL 5.6.0
- More DBUG_PRINT for mysqltest.cc
- my_b_write_byte() now checks for flush failures. This fixed a memory overrun on disk full
- my_b_printf() now returns 1 on failure, 0 on ok. This simplifies code; no old code was using the old return value of my_b_printf()
- my_b_write_backtick_quote() now returns 1 on failure and 0 on ok
- Fixed some error conditions in log printing that were not previously handled
- Slave_rows_error_report() can now handle longlong positions
- Write_on_release_cache() rewritten so that we can detect errors on flush. No longer depends on automatic release
- Changed types for Pos and End_log_pos to 64 bit in SHOW BINLOG EVENTS
- Fixed that copy_event_cache_to_string_and_reinit() works with strings longer than 4G (changed to use LEX_STRING instead of String)
- Restricted binlog_rows_event_max_size to UINT32_MAX-1 as Strings are anyway restricted to UINT32_MAX
- Fixed bug in rpl_binlog_state::write_to_iocache() which hid write failures (duplicate variable name)
- Fixed bug in String::append when the original string was not allocated
- Stop mysqlbinlog output at once if there is an error
- Before printing an error message, flush the result file. This ensures that the error message is printed last (easier to find)
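The return-value convention described above (1 on failure, 0 on ok) is what makes the "stop at once on error" behaviour possible: every write to the output cache is checked and the failure is propagated instead of being ignored. A rough sketch of that pattern in C, using plain stdio and hypothetical names (cache_printf, print_event) rather than the actual Log_event / my_b_printf() code:

    #include <stdarg.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for my_b_printf(): returns 1 on failure, 0 on ok,
       mirroring the convention the commit describes. */
    static int cache_printf(FILE *out, const char *fmt, ...)
    {
      va_list args;
      va_start(args, fmt);
      int written = vfprintf(out, fmt, args);
      va_end(args);
      return written < 0;             /* 1 on failure (e.g. disk full), 0 on ok */
    }

    /* Print one event; return true on the first failed write so the caller
       can stop at once instead of silently producing truncated output. */
    static bool print_event(FILE *out, const char *query, long long pos)
    {
      if (cache_printf(out, "# at %lld\n", pos))
        return true;
      if (cache_printf(out, "%s\n", query))
        return true;
      return false;
    }

    int main(void)
    {
      if (print_event(stdout, "INSERT INTO t1 VALUES (1)", 4))
        fprintf(stderr, "error writing output, aborting\n");
      return 0;
    }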
-
Sergey Vojtovich authored
Locked_tables_list::unlock_locked_tables: similarly to regular DROP TABLE, don't leave locked tables mode if CREATE OR REPLACE dropped a temporary table but failed to create a new one. The problem is that there's no record of which temporary table was "locked" by LOCK TABLES.
-
Vicențiu Ciorbaru authored
-
- 28 Dec, 2017 6 commits
-
Vicențiu Ciorbaru authored
Window definitions are resolved during fix_fields(). Updating used tables for window functions must be done after all window functions have had a chance to be resolved. There was an additional problem with the implementation: expressions that contained window functions never updated the expression's used tables. To fix both these issues, make sure to call update_used_tables() on all items that contain window functions after we have passed through all items.
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-
Vicențiu Ciorbaru authored
-
Monty authored
-
Sergei Golubchik authored
cherry-pick e6ce97a5
-
- 27 Dec, 2017 3 commits
-
Igor Babaev authored
for a query that uses CTE. The first reference to a CTE in the processed query uses the unit built by the parser for the CTE specification. This unit is considered as the specification of the derived table created for the first reference of the CTE. This requires some transformation of the original query tree: the unit of the specification must be moved to a new position as a slave of the select where the first reference to the CTE occurs. The transformation is performed by the function st_select_lex_node::move_as_slave(). There was an obvious bug in this function. As a result of this bug, in many cases the moved unit turned out to be lost in the query tree. This could cause different problems. In particular, prepared statements for queries that used CTEs could miss the cleanup of some selects that is performed at the end of the preparation/execution of the PS. If such cleanup is not done for a PS, the next execution of the PS causes an assertion abort or a crash.
-
Vicențiu Ciorbaru authored
-
Alexander Barkov authored
MDEV-14249 Wrong character set info of Query_log_event and the query in Query_log_event constructed by different charsets cause an error when the slave applies the event.
-
- 25 Dec, 2017 9 commits
-
Sergei Golubchik authored
MDEV-14026 ALTER TABLE ... DELAY_KEY_WRITE=1 creates table copy for partitioned MyISAM table with DATA DIRECTORY/INDEX DIRECTORY options
Set data_file_name and index_file_name in HA_CREATE_INFO before calling check_if_incompatible_data()
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Don't allocate them on THD::mem_root on every init(HA_STATUS_CONST) call; do it once in open() (because they don't change), on TABLE::mem_root (so they stay valid until the table is closed)
-
Daniel Black authored
PKG_CONFIG does not really work on Windows; Strawberry Perl uses mingw libraries, which the VS compiler cannot use; BOOST is not used. Tests main.query_cache_debug and main.mdev-504 timed out on the debug build at 2 minutes, so increase the timeout to 4 minutes. Overall build time was 30 min 44 seconds, so there is plenty of time currently. Signed-off-by: Daniel Black <daniel@linux.vnet.ibm.com>
-
Daniel Black authored
Signed-off-by: Daniel Black <daniel.black@au.ibm.com>
-
Sergei Golubchik authored
fix 011497bd in RPM and DEB: storage engine packages must require the server package of exactly the correct version.
-
Sergei Golubchik authored
-
Sachin Setiya authored
Problem: GTIDs are not transferred in Galera Cluster.
Solution: We need to transfer the GTID when the cluster is a slave/master in async replication. In normal GTID replication, GTIDs are generated on the receiving node itself, and it is always in sync with the other nodes. Because Galera keeps nodes in sync, all nodes get the same number of event groups. So the issue arises when, say, Galera is a slave in async replication.
A
| (async replication)
D <-> E <-> F {Galera replication}
What should happen is that all nodes apply the master's GTID, but this does not happen, because nodes E and F do not receive the GTID from D in the write set. So what E (or F) does is apply wsrep_gtid_domain_id, D's server-id and E's next GTID seq no. This generated GTID does not always work when, say, A has a different domain id. So in this commit, on a Galera node, when we see that the event was received from the master we simply write a Gtid_Log_Event in the write set and send it to the other nodes.
-
Alexey Botchkov authored
Item_func_json_extract::val_int fixed. It hadn't been tested yet as it's only called in exotic cases.
-
- 23 Dec, 2017 2 commits
-
Monty authored
Problem was that MAX_SLAVE_ERROR didn't cover all possible errors.
-
Sergei Petrunia authored
-