  1. 26 Apr, 2007 1 commit
    • Added sql_mode PAD_CHAR_TO_FULL_LENGTH (WL#921) · eeaa3adb
      monty@mysql.com/nosik.monty.fi authored
      This pads the value of CHAR columns with spaces up to the full column length (according to ANSI).
      It is not yet made part of the oracle or ansi modes, as this would cause a notable behaviour change.
      Added uuid_short(), a generator for increasing 'unique' longlong integers (8 bytes).
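      A minimal SQL sketch of the two additions (table and column names are
      illustrative, and actual UUID_SHORT() values differ per server):

        SET sql_mode = 'PAD_CHAR_TO_FULL_LENGTH';
        CREATE TABLE t1 (c CHAR(10));
        INSERT INTO t1 VALUES ('abc');
        -- With the mode enabled, the CHAR value comes back padded with spaces
        -- to the full column length, i.e. '<abc       >' here.
        SELECT CONCAT('<', c, '>') FROM t1;
        -- Each call returns a larger 'unique' 64-bit integer than the last.
        SELECT UUID_SHORT();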
  2. 25 Apr, 2007 1 commit
  3. 23 Apr, 2007 1 commit
  4. 17 Apr, 2007 1 commit
  5. 12 Apr, 2007 2 commits
    • BUG#25688 (RBR: circular replication may cause STMT_END_F flags to be skipped) · 11fc24ef
      mats@romeo.(none) authored
      
      By moving statement end actions from Rows_log_event::do_apply_event() to
      Rows_log_event::do_update_pos() they will always be executed, even if
      Rows_log_event::do_apply_event() is skipped because the event originated
      at the same server. This is because Rows_log_event::do_update_pos() is always
      executed (unless Rows_log_event::do_apply_event() failed with an error,
      in which case the slave stops with an error anyway). 
      
      Adding test case.
      
      Fixed the logic that detects whether we are inside a group. If a rotate
      event occurred after only an initial prefix of the events for a statement
      (but one for which the table did contain a key) had been processed,
      last_event_start_time was set to zero, causing the rotate to end the group
      without unlocking any tables. This left a lock hanging around, which
      subsequently triggered an assertion when a second attempt was made to lock
      the same sequence of tables.
      
      In order to solve the above problem, a new flag was added to the relay
      log info structure that is used to indicate that the replication thread
      is currently executing a statement. Using this flag, the replication
      thread is in a group if it is either in a statement or inside a
      transaction.
      
      The patch also eliminates some gratuitous header file inclusions that
      were not needed (and caused compile errors), replacing them with
      forward declarations.
    • mysqld.cc: · e0cc5fd0
      bar@mysql.com authored
        Removing wrong MYF(0) argument.
  6. 11 Apr, 2007 2 commits
  7. 09 Apr, 2007 1 commit
    • Bug#22648 LC_TIME_NAMES: Setting GLOBAL has no effect · 4341df8c
      bar@mysql.com authored
      Problem: setting/displaying @@LC_TIME_NAMES didn't distinguish between
      GLOBAL and SESSION variable types - the SESSION variable was always
      set/shown.
      Fix: set either the global or the session value, as requested.
      Also, "mysqld --lc-time-names" was added to set the global default value.
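      A short sketch of the fixed behaviour (locale values are illustrative):

        SET GLOBAL lc_time_names = 'de_DE';    -- default for new connections
        SET SESSION lc_time_names = 'es_ES';   -- current connection only
        SELECT @@global.lc_time_names, @@session.lc_time_names;
        -- The server-wide default can also be set at startup:
        --   mysqld --lc-time-names=fr_FR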
  8. 05 Apr, 2007 1 commit
    • A set of changes aiming to make the Event Scheduler more user-friendly · 98db2300
      kostja@vajra.(none) authored
      when there are no up-to-date system tables to support it:
       - initialize the scheduler before reporting "Ready for connections".
         This ensures that warnings, if any, are printed before "Ready for
         connections", and this message is not mangled.
       - do not abort the scheduler if there are no system tables
       - check the tables once at start up, remember the status and disable
         the scheduler if the tables are not up to date.
         If one attempts to use the scheduler with bad tables,
         issue an error message.
       - clean up the behaviour of the module under LOCK TABLES and pre-locking
         mode
       - make sure implicit commit of Events DDL works as expected.
       - add more tests
      
      
      Collateral clean ups in the events code.
      
      This patch fixes Bug#23631 Events: SHOW VARIABLES doesn't work 
      when mysql.event is damaged
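      A hedged sketch of the resulting behaviour when mysql.event is missing or
      outdated (exact error texts may differ):

        SHOW VARIABLES LIKE 'event_scheduler';  -- still works (Bug#23631)
        SET GLOBAL event_scheduler = ON;        -- rejected with an error message
                                                -- when the system tables are not
                                                -- up to date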
  9. 03 Apr, 2007 2 commits
  10. 02 Apr, 2007 2 commits
  11. 30 Mar, 2007 1 commit
  12. 29 Mar, 2007 2 commits
  13. 28 Mar, 2007 3 commits
  14. 27 Mar, 2007 1 commit
  15. 26 Mar, 2007 1 commit
    • WL3527: 5.1 · 10185729
      gkodinov/kgeorge@magare.gmz authored
       renamed "--old-mode" to "--old" to prevent
       ambiguity.
       "old" now appears in SHOW VARIABLES as a
       read-only option.
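      Sketch of the renamed option (values shown are illustrative):

        -- started as: mysqld --old
        SHOW VARIABLES LIKE 'old';   -- now listed, e.g. old = ON
        SET GLOBAL old = 0;          -- fails: 'old' is a read-only variable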
  16. 24 Mar, 2007 1 commit
  17. 23 Mar, 2007 2 commits
  18. 22 Mar, 2007 2 commits
  19. 19 Mar, 2007 1 commit
  20. 16 Mar, 2007 1 commit
  21. 15 Mar, 2007 2 commits
    • Fix for bug #25966 "2MB per second endless memory consumption after LOCK TABLE ... WRITE" · 01bd08b5
      dlenev@mockturtle.local authored
      
      Memory and CPU hogging occurred when a connection that had to wait for a
      table lock was serviced by a thread which had previously serviced a
      connection that was killed (note that connections can reuse threads if the
      thread cache is enabled).
      One possible scenario which exposed this problem was when the thread which
      provided the binlog dump to a replication slave was implicitly/automatically
      killed when the same slave reconnected and started pulling data through a
      different thread/connection.
      The problem also occurred when one killed a particular query in a connection
      (using KILL QUERY) and this connection later had to wait for some table
      lock.
      
      This problem was caused by the fact that the thread-specific
      mysys_var::abort variable, which indicates that waiting operations on the
      mysys layer should be aborted (this includes waiting for table locks), was
      set by the kill operation but was never reset afterwards. So this value was
      "inherited" by the following statements or even by other connections (which
      reused the same physical thread). Such a discrepancy between this variable
      and the THD::killed flag broke logic on the SQL layer and caused CPU and
      memory hogging.
      
      This patch tries to fix this problem by properly resetting this member.
      
      There is no test-case associated with this patch since it is hard to test
      for memory/CPU hogging conditions in our test-suite.
    • Fix for bug #25966 "2MB per second endless memory consumption after LOCK TABLE ... WRITE" · f2cb6641
      dlenev@mockturtle.local authored
      
      CPU hogging occurred when a connection that had to wait for a table lock
      was serviced by a thread which had previously serviced a connection that
      was killed (note that connections can reuse threads if the thread cache is
      enabled).
      One possible scenario which exposed this problem was when the thread which
      provided the binlog dump to a replication slave was implicitly/automatically
      killed when the same slave reconnected and started pulling data through a
      different thread/connection.
      In 5.* versions memory hogging was added to the CPU hogging. Moreover, in
      those versions the problem also occurred when one killed a particular query
      in a connection (using KILL QUERY) and this connection later had to wait for
      some table lock.
      
      This problem was caused by the fact that the thread-specific
      mysys_var::abort variable, which indicates that waiting operations on the
      mysys layer should be aborted (this includes waiting for table locks), was
      set by the kill operation but was never reset afterwards. So this value was
      "inherited" by the following statements or even by other connections (which
      reused the same physical thread). Such a discrepancy between this variable
      and the THD::killed flag broke logic on the SQL layer and caused CPU and
      memory hogging.
      
      This patch tries to fix this problem by properly resetting this member.
      
      There is no test-case associated with this patch since it is hard to test
      for memory/CPU hogging conditions in our test-suite.
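      A rough SQL sketch of the simplest reproduction scenario described above
      (the connection id and table name are illustrative):

        -- connection B interrupts a statement running in connection A (id 5):
        KILL QUERY 5;
        -- A later statement in connection A that has to wait for a table lock
        -- used to spin (CPU, and in 5.* also memory, hogging) because the
        -- thread's mysys_var::abort flag was never reset after the kill:
        LOCK TABLE t1 WRITE;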
  22. 07 Mar, 2007 1 commit
    • A fix for Bug#26750 "valgrind leak in sp_head" (and post-review fixes) · 86f02cd3
      kostja@bodhi.local authored
      
      The background: on a replication slave, when a trigger creation was
      filtered out by a replicate-do-table/replicate-ignore-table rule, the
      parsed definition of the trigger was not cleaned up properly. The
      LEX::sphead member was left around and leaked memory. Until
      replicate-ignore-table rules were actually applied to triggers by the patch
      for Bug 24478, "case SQLCOM_CREATE_TRIGGER" was always executed once a
      trigger had been parsed, so the deletion of lex->sphead there worked and
      the memory did not leak.
      
      The fix: 
      
      The real cause of the bug is that there is no single place where we can
      clean up the main LEX after parsing. And the reason we cannot have just
      one or two places where we clean up the LEX is the asymmetric behaviour of
      MYSQLparse in case of success versus error.
      
      One of the root causes of this behaviour is the code in Item::Item()
      constructor. There, a newly created item adds itself to THD::free_list
      - a single-linked list of Items used in a statement. Yuck. This code
      is unaware that we may have more than one statement active at a time,
      and always assumes that the free_list of the current statement is
      located in THD::free_list. One day we need to be able to explicitly
      allocate an item in a given Query_arena.
      Thus, when parsing a definition of a stored procedure, like
      CREATE PROCEDURE p1() BEGIN SELECT a FROM t1; SELECT b FROM t1; END;
      we actually need to reset THD::mem_root, THD::free_list and THD::lex
      to parse the nested procedure statement (SELECT *).
      The actual reset and restore is implemented in semantic actions
      attached to sp_proc_stmt grammar rule.
      The problem is that in case of a parsing error inside a nested statement
      the Bison-generated parser would abort immediately, without executing the
      restore part of the semantic action. This would leave THD in an
      in-the-middle-of-parsing state.
      This is why we couldn't have had a single place where we clean up the LEX
      after MYSQLparse - in case of an error we needed to clean up immediately;
      in case of success the clean up could have been delayed.
      This left the door open for a memory leak.
      
      The following possibilities were considered when working on a fix:
      - patch the replication logic to do the clean up. Rejected
      as breaks module borders, replication code should not need to know the
      gory details of clean up procedure after CREATE TRIGGER.
      - wrap MYSQLparse with a function that would do a clean up.
      Rejected as ideally we should fix the problem when it happens, not
      adjust for it outside of the problematic code.
      - make sure MYSQLparse cleans up after itself by invoking the clean up
      functionality in the appropriate places before return. Implemented in 
      this patch.
      - use a %destructor rule for sp_proc_stmt to restore THD - cleaner
      than the previous approach, but rejected
      because it needs a careful analysis of the side effects, and this patch is
      for 5.0, and long term we need to use the next alternative anyway
      - make sure that sp_proc_stmt doesn't juggle with THD - this is a 
      large work that will affect many modules.
      
      Cleanup: move main_lex and main_mem_root from Statement to its
      only two descendants Prepared_statement and THD. This ensures that
      when a Statement instance was created for purposes of statement backup,
      we do not involve LEX constructor/destructor, which is fairly expensive.
      In order to track that the transformation produces equivalent 
      functionality please check the respective constructors and destructors
      of Statement, Prepared_statement and THD - these members were
      used only there.
      This cleanup is unrelated to the patch.
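      A rough sketch of the leaking scenario (the option value, database and
      trigger names are illustrative):

        -- slave started with: --replicate-ignore-table=db1.t1
        -- statement arriving from the master:
        CREATE TRIGGER db1.t1_bi BEFORE INSERT ON db1.t1
          FOR EACH ROW SET @dummy = 1;
        -- The slave parses the trigger, then the do/ignore-table rule filters
        -- it out; before this fix LEX::sphead was not freed on that path and
        -- leaked.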
  23. 06 Mar, 2007 2 commits
    • Bug #26598: Create variable to allow turning off of statistic gathering on metadata commands · e78b4357
      tsmith@siva.hindu.god authored
      Add innodb_stats_on_metadata option, which enables gathering
      index statistics when processing metadata commands such as
      SHOW TABLE STATUS.  Default behavior of the server does not
      change (this option is enabled by default).
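      Usage sketch of the new option (the table name is illustrative; if the
      variable is not settable at runtime in a given build, set it in my.cnf
      instead):

        SHOW VARIABLES LIKE 'innodb_stats_on_metadata';  -- ON by default
        SET GLOBAL innodb_stats_on_metadata = OFF;
        SHOW TABLE STATUS LIKE 't1';  -- no longer re-samples InnoDB index statistics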
    • Bug#8407 (Stored functions/triggers ignore exception handler) · b216d959
      malff/marcsql@weblab.(none) authored
      Bug 18914 (Calling certain SPs from triggers fail)
      Bug 20713 (Functions will not continue for SQLSTATE VALUE '42S02')
      Bug 21825 (Incorrect message error deleting records in a table with a
        trigger for inserting)
      Bug 22580 (DROP TABLE in nested stored procedure causes strange dependency
        error)
      Bug 25345 (Cursors from Functions)
      
      
      This fix resolves a long-standing issue originally reported with bug 8407,
      which affects the behavior of Stored Procedures, Stored Functions and Triggers
      in many different ways, causing the symptoms reported by all the bugs listed.
      In all cases, the root cause of the problem traces back to 8407 and how the
      server locks tables involved with sub statements.
      
      Prior to this fix, the implementation of stored routines would:
      - compute the transitive closure of all the tables referenced by a top level
      statement
      - open and lock all the tables involved
      - execute the top level statement
      "transitive closure of tables" means collecting:
      - all the tables,
      - all the stored functions,
      - all the views,
      - all the table triggers,
      - all the stored procedures
      involved, and recursively inspecting these objects' definitions to find more
      references to more objects, until the list of every object referenced does
      not grow any more.
      This mechanism is known as "pre-locking" tables before execution.
      The motivation for locking all the (possibly) used tables at once is to
      prevent deadlocks.
      
      One problem with this approach is that, if the execution path the code
      really takes during runtime does not use a given table, and if the table is
      missing, the server would not execute the statement.
      This in particular has a major impact on triggers, since a missing table
      referenced by an update/delete trigger would prevent an insert trigger from running.
      
      Another problem is that stored routines might define SQL exception handlers
      to deal with missing tables, but the server implementation would never give
      user code a chance to execute this logic, since the routine is never
      executed when a missing table causes the pre-locking code to fail.
      
      With this fix, some constraints in the internal implementation of the
      pre-locking code have been relaxed, so that failure to open a table does not
      necessarily prevent execution of a stored routine.
      
      In particular, the pre-locking mechanism is now behaving as follows:
      
      1) the first step, to compute the transitive closure of all the tables
      possibly referenced by a statement, is unchanged.
      
      2) the next step, which is to open all the tables involved, only attempts
      to open the tables added by the pre-locking code, but silently fails without
      reporting any error or invoking any exception handler if the table is not
      present. This is achieved by trapping internal errors with
      Prelock_error_handler.
      
      3) the locking step only locks tables that were successfully opened.
      
      4) when executing sub statements, the list of tables used by each statement
      is evaluated as before. The tables needed by the sub statement are expected
      to be already opened and locked. Statements referencing tables that were not
      opened in step 2) will fail to find the table in the open list, and only at
      this point will execution of the user code fail.
      
      5) when a runtime exception is raised at 4), the instruction continuation
      destination (the next instruction to execute in case of SQL continue
      handlers) is evaluated.
      This is achieved with sp_instr::exec_open_and_lock_tables()
      
      6) if a user exception handler is present in the stored routine, that
      handler is invoked as usual, so that ER_NO_SUCH_TABLE exceptions can be
      trapped by stored routines. If no handler exists, then the runtime execution
      will fail as expected.
      
      With all these changes, a side effect is that view security is impacted, in
      two different ways.
      
      First, a view defined as "select stored_function()", where the stored
      function references a table that may not exist, is considered valid.
      The rationale is that, because the stored function might trap exceptions
      during execution and still return a valid result, there is no way to decide
      when the view is created whether a missing table really causes the view to be invalid.
      
      Secondly, testing for existence of tables is now done later during
      execution. View security, which consists of trapping errors and returning a
      generic ER_VIEW_INVALID (to prevent disclosing information), was only
      implemented at very specific phases covering *opening* tables, but not
      covering the runtime execution. Because of this existing limitation,
      errors that were previously trapped and converted into ER_VIEW_INVALID are
      not trapped, causing table names to be reported to the user.
      This change is exposing an existing problem, which is independent and will
      be resolved separately.
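      An illustrative stored procedure showing the behaviour this fix enables
      (all names are made up): a handler now gets a chance to trap
      ER_NO_SUCH_TABLE instead of pre-locking failing the whole call:

        DELIMITER //
        CREATE PROCEDURE p_report()
        BEGIN
          DECLARE CONTINUE HANDLER FOR SQLSTATE '42S02'
            SELECT 'missing table skipped' AS note;
          SELECT COUNT(*) FROM t_missing;   -- table does not exist
          SELECT COUNT(*) FROM t_present;   -- still executed after the handler
        END//
        DELIMITER ;
        CALL p_report();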
  24. 05 Mar, 2007 2 commits
    • WL#3527: Extend IGNORE INDEX so places where index is ignored can be specified · b9c82eaa
      gkodinov/kgeorge@macbook.gmz authored
      Currently MySQL allows one to specify what indexes to ignore during
      join optimization. The scope of the current USE/FORCE/IGNORE INDEX 
      statement is only the FROM clause, while all other clauses are not 
      affected.
      
      However, in certain cases, the optimizer
      may incorrectly choose an index for sorting and/or grouping, and
      produce an inefficient query plan.
      
      This task provides the means to specify what indexes are
      ignored/used for what operation in a more fine-grained manner, thus
      making it possible to manually force a better plan. We do this
      by extending the current IGNORE/USE/FORCE INDEX syntax to:
      
      IGNORE/USE/FORCE INDEX [FOR {JOIN | ORDER BY | GROUP BY}]
      
      so that:
      - if no FOR is specified, the index hint will apply everywhere.
      - if MySQL is started with the compatibility option --old_mode then
        an index hint without a FOR clause works as in 5.0 (i.e., the
        index will only be ignored for JOINs, but can still be used to
        compute ORDER BY).
      
      See WL#3527 for further details.
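      Illustrative uses of the extended syntax (table and index names are made up):

        -- hint restricted to GROUP BY resolution:
        SELECT a, COUNT(*) FROM t1 IGNORE INDEX FOR GROUP BY (i_a) GROUP BY a;
        -- hint restricted to sorting:
        SELECT * FROM t1 USE INDEX FOR ORDER BY (i_b) ORDER BY b;
        -- no FOR clause: the hint applies everywhere
        -- (or, with the --old_mode compatibility option, only to the join as in 5.0):
        SELECT * FROM t1 IGNORE INDEX (i_a) WHERE a = 1 ORDER BY b;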
    • 9a2eea40
      msvensson@pilot.blaudden authored
  25. 01 Mar, 2007 1 commit
  26. 28 Feb, 2007 2 commits
  27. 23 Feb, 2007 1 commit
    • Fixed compiler warnings · f0ae3ce9
      monty@mysql.com/narttu.mysql.fi authored
      Fixed compile-pentium64 scripts
      Fixed wrong estimate of update_with_key_prefix in sql-bench
      Merge bk-internal.mysql.com:/home/bk/mysql-5.1 into mysql.com:/home/my/mysql-5.1
      Fixed unsafe define of uint4korr()
      Fixed that --extern works with mysql-test-run.pl
      Small trivial cleanups
      This also fixes a bug in counting the number of rows that are updated when we have many simultaneous queries
      Move all connection handling and the command execution main loop from sql_parse.cc to sql_connection.cc
      Split handle_one_connection() into reusable sub functions.
      Split create_new_thread() into reusable sub functions.
      Added thread_scheduler; Preliminary interface code for future thread_handling code.
      
      Use 'my_thread_id' for internal thread ids
      Make thr_alarm_kill() depend on thread_id instead of thread
      Make thr_abort_locks_for_thread() depend on thread_id instead of thread
      In store_globals(), set my_thread_var->id to be thd->thread_id.
      Use my_thread_var->id as the basis for my_thread_name()
      The above changes make the coupling between THD and threads looser.
      
      Added a lot of DBUG_PRINT() and DBUG_ASSERT() functions
      Fixed compiler warnings
      Fixed core dumps when running with --debug
      Removed setting of signal masks (was never used)
      Made event code call pthread_exit() (portability fix)
      Fixed that event code doesn't call DBUG_xxx functions before my_thread_init() is called.
      Made handling of thread_id and thd->variables.pseudo_thread_id uniform.
      Removed one common 'not freed memory' warning from mysqltest
      Fixed a couple of use-of-uninitialized-value warnings (unlikely cases)
      Suppress compiler warnings from bdb and (for the moment) warnings from ndb