- 06 Sep, 2013 1 commit
Sergey Vojtovich authored
ORDER BY does not work

Use "dynamic" row format (instead of "block") for MARIA internal temporary
tables created for cursors. With the "block" row format MARIA may shuffle
rows, while with the "dynamic" row format records are inserted sequentially
(there are no gaps in the data file while we fill temporary tables). This
is needed to preserve row order when scanning materialized cursors.
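For context, a hedged sketch of the workload this affects, assuming a
hypothetical table t1 with an INT column a: a cursor is materialized into
an internal temporary table when opened, and its rows must come back in
the order the cursor's ORDER BY produced them.

    -- Illustrative only; t1 and its column a are assumptions.
    DELIMITER //
    CREATE PROCEDURE scan_ordered()
    BEGIN
      DECLARE done INT DEFAULT 0;
      DECLARE v INT;
      DECLARE cur CURSOR FOR SELECT a FROM t1 ORDER BY a;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
      OPEN cur;  -- result is materialized into an internal temporary table
      FETCH cur INTO v;
      WHILE done = 0 DO
        SELECT v;         -- must appear in ORDER BY order
        FETCH cur INTO v;
      END WHILE;
      CLOSE cur;
    END //
    DELIMITER ;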

- 12 Jun, 2013 1 commit

Sergei Golubchik authored
protect THD::db with THD::LOCK_thd_data

- 19 Mar, 2013 1 commit

Murthy Narkedimilli authored

- 28 Feb, 2013 1 commit

Sergei Golubchik authored
revid: georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef
committer: Georgi Kodinov <Georgi.Kodinov@Oracle.com>
timestamp: Fri 2012-03-09 15:04:49 +0200
message:
  Bug #12408412: GROUP_CONCAT + ORDER BY + INPUT/OUTPUT SAME USER VARIABLE = CRASH
  Moved the preparation of the variables that receive the output from
  SELECT INTO from execution time (JOIN::execute) to compile time
  (JOIN::prepare). This ensures that if the same variable is used in the
  SELECT part of SELECT INTO it will be properly marked as non-const for
  this query. Test case added. Used proper fast iterator.

a better fix (much smaller and without regressions) is coming from 5.1
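A hedged sketch of the crashing pattern, assuming a hypothetical table t1:
the same user variable @v appears both in the SELECT part and as the INTO
target, so it must not be treated as a constant.

    -- t1 is an assumption, not the original test case.
    SET @v = '';
    SELECT GROUP_CONCAT(@v ORDER BY @v) INTO @v FROM t1;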

- 25 Feb, 2013 1 commit

Murthy Narkedimilli authored

- 11 Feb, 2013 1 commit

unknown authored
Missed update_used_tables() call for multi-update values.
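An illustrative multi-table UPDATE of the affected shape (tables and
columns are assumptions): the SET values reference columns from several
tables, which is exactly what update_used_tables() must account for.

    UPDATE t1, t2
    SET t1.a = t1.a + t2.b
    WHERE t1.id = t2.id;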

- 24 Jan, 2013 1 commit

Sergei Golubchik authored
allow only three failed change_user per connection. A successful
change_user does NOT reset the counter.

tests/mysql_client_test.c:
  make --error work for --change_user errors

- 21 Jan, 2013 1 commit

Igor Babaev authored
This bug could result in returning 0 for the expressions of the form <aggregate_function>(distinct field) when the system variable max_heap_table_size was set to a small enough number. It happened because the method Unique::walk() did not support the case when more than one pass was needed to merge the trees of distinct values saved in an external file. Backported a fix in grant_lowercase.test from mariadb 5.5.
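A hedged repro sketch, assuming a hypothetical table t1 with a column a:
shrinking max_heap_table_size forces the tree of distinct values to spill
to an external file and be merged in more than one pass.

    SET SESSION max_heap_table_size = 16384;  -- smallest allowed value
    SELECT SUM(DISTINCT a) FROM t1;           -- returned 0 before the fix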

- 18 Jan, 2013 1 commit

Sergei Golubchik authored

- 15 Jan, 2013 1 commit

Sergei Golubchik authored
(because it's conceptually wrong: only the user can decide whether the
kill is allowed to leave tables in an inconsistent state; the storage
engine has no say in that)

- 10 Jan, 2013 1 commit

Michael Widenius authored
KILL now breaks locks inside InnoDB
Fixed possible deadlock when running INNODB STATUS
Added ha_kill_query() and kill_query() to send kill signal to all storage engines
Added reset_killed() to ensure we don't reset killed state while awake() is getting called

include/mysql/plugin.h:
  Added thd_mark_as_hard_kill()
include/mysql/plugin_audit.h.pp:
  Added thd_mark_as_hard_kill()
include/mysql/plugin_auth.h.pp:
  Added thd_mark_as_hard_kill()
include/mysql/plugin_ftparser.h.pp:
  Added thd_mark_as_hard_kill()
sql/handler.cc:
  Added ha_kill_query() to send kill signal to all storage engines
sql/handler.h:
  Added ha_kill_query() and kill_query() to send kill signal to all storage engines
sql/log_event.cc:
  Use reset_killed()
sql/mdl.cc:
  Use thd->killed instead of thd_killed() to abort on soft kill
sql/sp_rcontext.cc:
  Use reset_killed()
sql/sql_class.cc:
  Fixed possible deadlock in INNODB STATUS by not getting thd->LOCK_thd_data if it's locked.
  Use reset_killed()
  Tell storage engines that KILL has been sent
sql/sql_class.h:
  Added reset_killed() to ensure we don't reset killed state while awake() is getting called.
  Added mark_as_hard_kill()
sql/sql_insert.cc:
  Use reset_killed()
sql/sql_parse.cc:
  Simplify detection of killed queries.
  Use reset_killed()
sql/sql_select.cc:
  Use reset_killed()
sql/sql_union.cc:
  Use reset_killed()
storage/innobase/handler/ha_innodb.cc:
  Added innobase_kill_query()
  Fixed error reporting for interrupted queries.
storage/xtradb/handler/ha_innodb.cc:
  Added innobase_kill_query()
  Fixed error reporting for interrupted queries.
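In user-visible terms, a hedged example (the connection id is made up): a
KILL issued from another session now interrupts a statement even while it
waits on InnoDB row locks.

    SHOW PROCESSLIST;  -- locate the blocked statement, e.g. connection id 42
    KILL QUERY 42;     -- now also breaks lock waits inside InnoDB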

- 20 Nov, 2012 1 commit

Nuno Carvalho authored
When a binlog is replayed into a server, e.g.:

    $ mysqlbinlog binlog.000001 | mysql

it sets a pseudo slave mode on the client connection so that the server
can read binlog events; that is, a format description event is needed to
correctly read the following events. This pseudo slave mode also applies
to the current connection the replication rules that are needed to
correctly apply binlog events. If a binlog dump was sourced on a
connection, this pseudo slave mode used to remain set afterwards, applying
rules to subsequent commands that are unexpected from the customer's
perspective. Added a new SET statement to the binlog dump output that
unsets pseudo slave mode at the end of the dump file.
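For illustration, the end of the dump output now carries a statement along
these lines (the exact version-comment guard shown is an assumption):

    /*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;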

- 07 Nov, 2012 1 commit

Praveenkumar Hulakund authored
VARIABLES

Analysis:
---------
After executing the query, the new value of the user defined variables is
set in the function "select_dumpvar::send_data". "select_dumpvar::send_data"
first calls "Item_func_set_user_var::save_item_result()". This function
checks the nullness of the Item_field passed as parameter to it and saves
it. The nullness of the item is stored in arg[0]'s null_value flag. Then
"select_dumpvar::send_data" calls "Item_func_set_user_var::update()", which
notices the null result that was saved and calls
"Item_func_set_user_var::update_hash". But here null_value is not set and
args[0] is different from the one given to
"Item_func_set_user_var::save_item_result()". This causes
"Item_func_set_user_var::update_hash" to believe that it is getting a
non-null value. "user_var_entry::length" is set to 0 and hence
"user_var_entry::value" is made to point to the extra_area allocated in
"user_var_entry". "Item_func_set_user_var::update_hash" then tries to
write at memory beyond the extra_area for result type DECIMAL. Because of
this, an invalid write issue is reported by Valgrind.

Before this bug was introduced, we avoided this problem by creating the
"Item_func_set_user_var" object with the same Item_field as arg[0] and as
the parameter to Item_func_set_user_var::save_item_result(). But now they
refer to different args[0]. Because of this, the null_value flag set in
the parameter Item_field in "Item_func_set_user_var::save_item_result()"
is not reflected in the "Item_func_set_user_var" object.

Fix:
----
This issue is reported on version 5.5.24. The issue does not exist in
5.5.23, 5.1, 5.6 and trunk. It was introduced by
revid:georgi.kodinov@oracle.com-20120309130449-82e3bs5v3et1x0ef (fix for
bug #12408412), which was pushed into 5.5 and later releases. That patch
has later been reversed in 5.6 and trunk by
revid:norvald.ryeng@oracle.com-20121010135242-xj34gg73h04hrmyh (fix for
bug #14664077). Backported this patch to 5.5 as well to fix this issue.

sql/item_func.cc:
  here unsigned value is converted to signed value.
sql/item_func.h:
  last_insert_id() gives an auto_incremented value which can be positive
  only, so defined it as an unsigned longlong and set the unsigned_flag to 1.
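A hedged sketch of the affected statement shape, assuming a hypothetical
table t1 with a DECIMAL column d: a possibly NULL DECIMAL result is
assigned to a user variable via SELECT ... INTO.

    SELECT d INTO @v FROM t1 WHERE id = 1;  -- the NULL/DECIMAL path Valgrind flagged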

- 13 Sep, 2012 1 commit

Michael Widenius authored
Increment long_query_count also if thd->variables.log_slow_rate_limit is used
Added new state "Writing to binlog"

sql/sql_class.h:
  Added THD::utime_after_query to avoid calling current_utime() twice for
  every end-of-query
sql/sql_parse.cc:
  Increment long_query_count also if thd->variables.log_slow_rate_limit is used
  Removed extra calls to thd_proc_info(thd, "logging slow query") and
  thd->current_utime()
sql/sql_table.cc:
  Added new state "Writing to binlog"
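A hedged usage example: with slow-log sampling enabled, slow statements
are now still counted even when they are not written to the log.

    SET GLOBAL log_slow_rate_limit = 10;     -- log roughly 1 in 10 slow queries
    SHOW GLOBAL STATUS LIKE 'Slow_queries';  -- still counts every slow query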

- 08 Sep, 2012 1 commit

Michael Widenius authored
feature_dynamic_columns, feature_fulltext, feature_gis, feature_locale,
feature_subquery, feature_timezone, feature_trigger, feature_xml
Opened_views, Executed_triggers, Executed_events
Added new process status 'updating status' as part of 'freeing items'

mysql-test/r/features.result:
  Test of feature_xxx status variables
mysql-test/r/mysqld--help.result:
  Removed duplicated 'language' variable.
mysql-test/r/view.result:
  Test of opened_views
mysql-test/suite/rpl/t/rpl_start_stop_slave.test:
  Write more information on failure
mysql-test/t/features.test:
  Test of feature_xxx status variables
mysql-test/t/view.test:
  Test of opened_views
sql/event_scheduler.cc:
  Increment executed_events status variable
sql/field.cc:
  Increment status variable
sql/item_func.cc:
  Increment status variable
sql/item_strfunc.cc:
  Increment status variable
sql/item_subselect.cc:
  Increment status variable
sql/item_xmlfunc.cc:
  Increment status variable
sql/mysqld.cc:
  Add new status variables to 'show status'
sql/mysqld.h:
  Added executed_events
sql/sql_base.cc:
  Increment status variable
sql/sql_class.h:
  Add new status variables
sql/sql_parse.cc:
  Added new process status 'updating status' as part of 'freeing items'
sql/sql_trigger.cc:
  Increment status variable
sql/sys_vars.cc:
  Increment status variable
sql/tztime.cc:
  Increment status variable
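The new counters are ordinary status variables; a brief hedged example:

    SHOW GLOBAL STATUS LIKE 'Feature%';   -- feature_gis, feature_xml, ...
    SHOW GLOBAL STATUS LIKE 'Opened_views';
    SHOW GLOBAL STATUS LIKE 'Executed%';  -- Executed_triggers, Executed_events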

- 22 Aug, 2012 1 commit

Michael Widenius authored

- 31 Jul, 2012 1 commit

Sergei Golubchik authored
Don't use ER(xxx) in THD::close_connection(), where current_thd is already
reset to NULL. Prefer ER_THD() or ER_DEFAULT() instead.

- 26 Jun, 2012 1 commit

Kent Boortz authored
a multiple definition of 'THD::clear_error()' in (at least) libmysqld.a(lib_sql.o) and libmysqld.a(libfederated_a-ha_federated.o). Patch provided by Ramil Kalimullin.

- 28 May, 2012 1 commit

Praveenkumar Hulakund authored
Analysis:
---------
If the server is started with limits of MAX_CONNECTIONS and
MAX_USER_CONNECTIONS, then only MAX_USER_CONNECTIONS connections of any
particular user can be connected to the server, and a total of
MAX_CONNECTIONS clients can be connected to the server. The server
maintains a counter for total connections and a counter for connections
per user.

Here, MAX_CONNECTIONS connections are created to the server. Among these,
connections from a particular user (say USER1) are also created, fewer
than MAX_USER_CONNECTIONS of them. After that there is one more connection
request from USER1. Since USER1 can still create connections, as he hasn't
reached MAX_USER_CONNECTIONS, the server increments the per-user
connection counter. As the server already has MAX_CONNECTIONS connections,
the next check against the total connection count fails. In this case
control is returned WITHOUT decrementing the per-user connection counter.
So the per-user connection counter keeps incrementing for each attempt
until the current connections are closed, and eventually reaches
MAX_USER_CONNECTIONS. From then on, connections from USER1 always fail
with the MAX_USER_CONNECTIONS limit error, even when the total connections
to the server are less than MAX_CONNECTIONS.

Fix:
----
This issue occurred because the counters were not handled properly in the
server. Changed the code to handle the per-user connection counters
properly.
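A hedged illustration of the limits involved (the values are made up):

    SET GLOBAL max_connections = 100;      -- total connections to the server
    SET GLOBAL max_user_connections = 10;  -- connections per user account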

- 23 May, 2012 1 commit

unknown authored
This is a backport of the (unchanged) fix for MySQL bug #11764372, 57197.

Analysis:
When the outer query finishes its main execution and computes GROUP BY, it
needs to construct a new temporary table (and a corresponding JOIN) to
execute the last DISTINCT operation. At this point JOIN::exec calls
JOIN::join_free, which calls JOIN::cleanup -> TMP_TABLE_PARAM::cleanup for
both the outer and the inner JOINs. The call to the inner
TMP_TABLE_PARAM::cleanup sets copy_field = NULL, but not copy_field_end.

The final execution phase that computes the DISTINCT invokes:
  evaluate_join_record -> end_write -> copy_funcs
The last function copies the results of all functions into the temp table.
copy_funcs walks over all functions in join->tmp_table_param.items_to_copy.
In this case items_to_copy contains both assignments to user variables.
The process of copying user variables invokes
Item_func_set_user_var::check, which in turn re-evaluates the arguments of
the user variable assignment. This in turn triggers re-evaluation of the
subquery, and ultimately copy_field. However, the previous call to
TMP_TABLE_PARAM::cleanup for the subquery already set copy_field to NULL
but not its copy_field_end. This results in a null pointer access, and a
crash.

Fix:
Set copy_field_end and save_copy_field_end to null when deleting copy
fields in TMP_TABLE_PARAM::cleanup().
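A hedged sketch of the query class involved (tables and columns are
assumptions): GROUP BY in the outer query, a final DISTINCT, and
user-variable assignments whose re-evaluation touches a subquery.

    -- Rough shape only; not the original test case.
    SELECT DISTINCT
           @a := (SELECT MIN(b) FROM t2 WHERE t2.k = t1.k),
           @b := @a
    FROM t1
    GROUP BY t1.k;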

- 17 May, 2012 2 commits

Gopal Shankar authored
PROBLEM:
Threads end up in a deadlock due to locks acquired as described below.

con1: Runs a query on a table. It is important that this SELECT must back
  off while trying to open t1 and enter wait_for_condition(). The SELECT
  is then blocked trying to lock mysys_var->mutex, which is held by con3.
  The very significant fact here is that mysys_var->current_mutex will
  still point to LOCK_open, even if LOCK_open is no longer held by con1 at
  this point.

con2: Tries dropping the table used in con1, or querying some table. It
  will hold LOCK_open and be blocked trying to lock kernel_mutex, held by
  con4.

con3: Tries killing the query run by con1. It will hold
  THD::LOCK_thd_data belonging to con1 while trying to lock
  mysys_var->current_mutex belonging to con1. But current_mutex points to
  LOCK_open, which is held by con2.

con4: Gets the InnoDB engine status. It will hold kernel_mutex, trying to
  lock THD::LOCK_thd_data belonging to con1, which is held by con3.

So while technically only con2, con3 and con4 participate in the deadlock,
con1's mysys_var->current_mutex pointing to LOCK_open is a vital component
of the deadlock.

CYCLE = (THD::LOCK_thd_data -> LOCK_open -> kernel_mutex -> THD::LOCK_thd_data)

FIX:
LOCK_thd_data has the responsibility of protecting:
  1) thd->query, thd->query_length
  2) VIO
  3) thd->mysys_var (used by KILL statement and shutdown)
  4) THD during thread delete
Among the above responsibilities, 1), 2) and (3,4) seem to be three
independent groups of responsibility. If a different lock owns
responsibility (3,4), the above mentioned deadlock cycle can be avoided.
This fix introduces LOCK_thd_kill to handle responsibility (3,4), which
eliminates the deadlock issue.

Note: The problem is not found in 5.5. Introduction of the MDL subsystem
moved metadata locking responsibility from TDC/TC to the MDL subsystem.
Due to this, the responsibility of LOCK_open is reduced. As the use of
LOCK_open is removed in open_table() and mysql_rm_table(), the above
mentioned CYCLE does not form.

Revision IDs for those changes:
  open_table() = dlenev@mysql.com-20100727133458-m3ua9oslnx8fbbvz
  mysql_rm_table() = jon.hauglid@oracle.com-20101116100012-kxep9txz2fxy3nmw
unknown authored
The patch enables back constant subquery execution during query
optimization after it was disabled during the development of MWL#89
(cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).

The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive. The approach is
as follows:

- Constant subqueries are recursively optimized in the beginning of
  JOIN::optimize of the outer query. This is done by the new method
  JOIN::optimize_constant_subqueries(), so that the cost of executing
  these queries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
  the optimizer may request execution of non-expensive constant
  subqueries. Each place where the optimizer may potentially execute an
  expensive expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended to
  use the number of examined rows (estimated by the optimizer) as a way
  to determine whether the subquery is expensive or not.
- The new system variable "expensive_subquery_limit" controls how many
  examined rows are considered to be not expensive. The default is 100.

In addition, multiple changes were needed to make this solution work in
the light of the changes made by MWL#89. These changes were needed to fix
various crashes and wrong results, and legacy bugs discovered during
development.
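A hedged usage example of the new variable:

    SELECT @@expensive_subquery_limit;            -- 100 by default
    SET SESSION expensive_subquery_limit = 1000;  -- permit costlier constant subqueries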

- 20 Apr, 2012 1 commit

Andrei Elkin authored
BUG#11761686: insert_id event is not filtered.

Two issues are covered.

INSERT into an autoincrement field which is not the first part of the
composed primary key is unsafe by autoincrement logging design. The case
is specific to the MyISAM engine because InnoDB does not allow such a
table definition. However no warning was given and no row-format logging
in MIXED mode was done, and that is fixed.

Int-, Rand- and User-var log-events were not filtered along with their
parent query, which made it possible for them to corrupt the execution
context of the following query. Fixed by deferring their execution until
the parent query.

******
Bug#11754117
Post review fixes.

mysql-test/suite/rpl/r/rpl_auto_increment_bug45679.result:
  a new result file is added.
mysql-test/suite/rpl/r/rpl_filter_tables_not_exist.result:
  results updated.
mysql-test/suite/rpl/t/rpl_auto_increment_bug45679.test:
  regression test for BUG#11754117-45670 is added.
mysql-test/suite/rpl/t/rpl_filter_tables_not_exist.test:
  regression test for the filtering issue of BUG#11754117 - 45670 is added.
sql/log_event.cc:
  Logic is added for deferring and executing events associated with the
  Query event.
sql/log_event.h:
  Interface to deferred events batch execution is added.
sql/rpl_rli.cc:
  initialization for new RLI members is added.
sql/rpl_rli.h:
  New members are added to RLI to facilitate deferred events gathering
  and execution control; two general-purpose RLI cleanup methods are
  constructed.
sql/rpl_utility.cc:
  Deferred_log_events methods are defined.
sql/rpl_utility.h:
  A new class Deferred_log_events is defined to implement IRU events
  gathering, execution and cleanup.
sql/slave.cc:
  Necessary changes to initialize `rli->deferred_events' and prevent
  deferred event deletion in the main read-exec branch.
sql/sql_base.cc:
  A new safety-check function for multi-part PKs with auto-increment is
  defined and deployed in lock_tables().
sql/sql_class.cc:
  Initialization for a new member and replication cleanups are added to
  the THD class.
sql/sql_class.h:
  THD class receives a new member to hold a specific execution context
  for the slave applier.
sql/sql_parse.cc:
  Execution of the deferred event is started prior to its parent query.
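For the first issue, the unsafe table shape looks roughly like this (a
hedged sketch; the table is an assumption, and InnoDB would reject the
definition because the auto-increment column is not the first column of an
index):

    CREATE TABLE t (
      a INT NOT NULL,
      b INT NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (a, b)           -- auto-increment is the second key part
    ) ENGINE=MyISAM;
    INSERT INTO t (a) VALUES (1);  -- in MIXED mode now logged as rows, with a warning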

- 11 Mar, 2012 1 commit

unknown authored
https://mariadb.atlassian.net/browse/MDEV-28

This task implements a new clause LIMIT ROWS EXAMINED <num> as an
extension to the ANSI LIMIT clause. This extension allows limiting the
number of rows and/or keys a query would access (read and/or write)
during query execution.
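A hedged usage sketch (table t1 is an assumption): execution stops with a
warning once the examined-rows budget is exhausted.

    SELECT * FROM t1 WHERE a > 10 LIMIT ROWS EXAMINED 10000;
    SHOW WARNINGS;  -- reports whether the limit cut the query short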

- 09 Mar, 2012 1 commit

Georgi Kodinov authored
USER VARIABLE = CRASH

Moved the preparation of the variables that receive the output from SELECT
INTO from execution time (JOIN::execute) to compile time (JOIN::prepare).
This ensures that if the same variable is used in the SELECT part of
SELECT INTO it will be properly marked as non-const for this query. Test
case added. Used proper fast iterator.

- 28 Feb, 2012 1 commit

Vladislav Vaintroub authored

- 25 Feb, 2012 1 commit

Igor Babaev authored
The result of materialization of the right part of an IN subquery
predicate is placed into a temporary table. Each row of the materialized
table is distinct. A unique key over all fields of the temporary table is
defined and created. It allows performing key look-ups into the table.

The table created for a materialized subquery can be accessed by key as
any other table. The function best_access_path searches for the best
access to join a table to a given partial join. With some WHERE conditions
this function considers the possibility of a ref_or_null access. If such
access employs the unique key on the temporary table, then when estimating
the cost of this access the function tries to use the array rec_per_key.
Yet such an array is not built for this unique key. This causes a crash of
the server.

Rows returned by the subquery that contain nulls don't have to be placed
into the temporary table, as they cannot match any row produced by the
left part of the subquery predicate. So all fields of the temporary table
can be defined as non-nullable. In this case any ref_or_null access to the
temporary table does not make any sense, and it does not make sense to
estimate such an access.

The fix makes sure that the temporary table for a materialized IN subquery
is defined with columns that are all non-nullable. It also ensures that
any row with nulls returned by the subquery is not placed into the
temporary table.
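A hedged sketch of the query class that exercises IN-subquery
materialization (tables and columns are assumptions):

    -- The subquery's result set is materialized into a temporary table
    -- with a unique key over all of its columns.
    SELECT * FROM t1 WHERE t1.a IN (SELECT b FROM t2 WHERE t2.c > 0);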

- 22 Feb, 2012 2 commits

Sergey Petrunya authored
Handler_mrr_key_refills, Handler_mrr_rowid_refills.
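The new counters are regular status variables; a hedged example:

    SHOW GLOBAL STATUS LIKE 'Handler_mrr%';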
unknown authored

- 16 Feb, 2012 2 commits

Vladislav Vaintroub authored
unknown authored

- 24 Jan, 2012 1 commit

Vladislav Vaintroub authored

- 23 Jan, 2012 1 commit

Sergey Petrunya authored
Handler_mrr_extra_key_sorts.

- 13 Jan, 2012 1 commit

Michael Widenius authored

- 18 Dec, 2011 1 commit

Vladislav Vaintroub authored

- 12 Dec, 2011 1 commit

Sergei Golubchik authored
* rename all debugging related command-line options and variables to
  start with "debug-", and make them all OFF by default.
* replace "MySQL" with "MariaDB" in error messages
* "Cast ... converted ... integer to it's ... complement" is now a note,
  not a warning
* @@query_cache_strip_comments now has a session scope, not global.

- 08 Dec, 2011 1 commit

Vladislav Vaintroub authored

- 02 Dec, 2011 1 commit

Sergei Golubchik authored

- 28 Nov, 2011 1 commit

Sergei Golubchik authored

- 22 Nov, 2011 1 commit

Sergei Golubchik authored
Make max_user_connections signed, with min allowed value being -1.
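A hedged example of the new lower bound:

    SET GLOBAL max_user_connections = -1;  -- -1 is now the minimum allowed value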