1. 25 Dec, 2022 37 commits
    • Sergei Petrunia's avatar
      1c91b443
    • Sergei Petrunia's avatar
      Fix compile on Windows · d1e4d481
      Sergei Petrunia authored
      d1e4d481
    • Monty's avatar
      Update cost for hash and cached joins · 37270f09
      Monty authored
      The old code did not correctly add TIME_FOR_COMPARE to rows that are
      part of the scan that will be compared with the attached where clause.
      
      Now the cost calculation for hash join and full join cache join are
      identical except for HASH_FANOUT (10%)
      
      The cost for a join with keys is now also uniform.
      The total cost for using a key for lookup is now calculated in one place as:
      
      (cost_of_finding_rows_through_key(records) + records/TIME_FOR_COMPARE)*
      record_count_of_previous_row_combinations + startup_cost
      
      startup_cost is the cost of creating a temporary table (if needed)
      
      Best_cost now includes the cost of comparing all WHERE clauses and also
      cost of joining with previous row combinations.
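      The formula above can be sketched as a small function. This is a hedged
      illustration only; the parameter names mirror the commit text, not the
      actual MariaDB identifiers:

```cpp
#include <cassert>

// Hedged sketch of the uniform key-lookup cost formula quoted above.
// Parameter names follow the commit text; this is not the actual
// MariaDB implementation.
double key_lookup_cost(double cost_of_finding_rows_through_key,
                       double records,
                       double time_for_compare,
                       double record_count_of_previous_row_combinations,
                       double startup_cost)
{
  return (cost_of_finding_rows_through_key +
          records / time_for_compare) *
             record_count_of_previous_row_combinations +
         startup_cost;
}
```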
      
      Other things:
      - Optimizer trace is now printing the total costs, including testing the
        WHERE clause (TIME_FOR_COMPARE) and comparing with all previous rows.
      - In optimizer trace, include also total cost of query together with the
        final join order. This makes it easier to find out where the cost was
        calculated.
      - Old code used filter even if the cost for it was higher than not using a
        filter. This is not corrected.
      - When rebasing on 10.11, I noticed some changes to the access_cost_factor
        calculation. These changes were not picked up, as the coming changes
        to filtering will make that code obsolete.
      37270f09
    • Monty's avatar
      Adjust costs for doing index scan in cost_group_min_max() · b9632df7
      Monty authored
      The idea is that when doing a tree dive (once per group), we need to
      compare key values, which is fast.  For each new group, we have to
      compare the full where clause for the row.
      Compared to the original code, the cost of group_min_max() has slightly
      increased, which affects some tests with only a few rows.
      main.group_min_max and main.distinct have been modified to show the
      effect of the change.
      
      The patch also adjusts the number of groups in the case of quick selects:
      - For simple WHERE clauses, ensure that we have at least as many groups
        as we have conditions on the used group-by key parts.
        The assumption is that each condition will create at least one group.
      - Ensure that there are no more groups than rows found by quick_select
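
      The two adjustments above amount to a clamp, sketched here with
      hypothetical names (not MariaDB identifiers):

```cpp
#include <cassert>

// Illustrative clamp for the estimated number of groups, per the two
// rules above: at least one group per condition on the used group-by
// key parts, and no more groups than rows found by the quick select.
double adjust_group_count(double estimated_groups,
                          double conditions_on_group_by_parts,
                          double rows_found_by_quick_select)
{
  if (estimated_groups < conditions_on_group_by_parts)
    estimated_groups= conditions_on_group_by_parts;
  if (estimated_groups > rows_found_by_quick_select)
    estimated_groups= rows_found_by_quick_select;
  return estimated_groups;
}
```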
      
      Test changes:
      - For some small tables the plan has changed:
        Using index for group-by -> Using index for group-by (scanning)
        Range -> Index
        Using index for group-by -> Using index
      b9632df7
    • Monty's avatar
      Return >= 1 from matching_candidates_in_table if records > 0.0 · 948284be
      Monty authored
      Having rows >= 1.0 helps ensure that when we calculate the total rows of
      a join, the number of resulting rows will not be smaller after the join.
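
      The rule can be sketched as follows (a hypothetical helper, not the
      actual matching_candidates_in_table()):

```cpp
#include <cassert>

// Hedged sketch of the >= 1 rule described above: if any records
// match (records > 0.0), report at least 1.0 so that join row
// estimates never drop below one row per table.
double clamp_matching_candidates(double records)
{
  if (records > 0.0 && records < 1.0)
    return 1.0;
  return records;
}
```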
      
      Changes in test cases:
      - Join order change for some tables with few records
      - 'Filtered' is much higher for tables with few rows, as 1 row is a high
        percentage of a table with few rows.
      948284be
    • Monty's avatar
      Update matching_candidates_in_table() to treat all conditions similarly · d133d691
      Monty authored
      Also fixed the 'with_found_constraint' parameter to
      matching_candidates_in_table() so that it behaves as documented: it is
      now true only if there is a reference to a previous table in the WHERE
      condition for the currently examined table (as originally documented)
      
      Changes in test results:
      - Filtered was 25% smaller for some queries (expected).
      - Some join order changed (probably because the tables had very few rows).
      - Some more table scans, probably because there would be fewer returned
        rows.
      - Some tests expose a bug: if there are more filtered rows, then the
        cost for a table scan will be higher. This will be fixed in a later commit.
      d133d691
    • Monty's avatar
      Fix calculation of selectivity · 9659fcb4
      Monty authored
      calculate_cond_selectivity_for_table() is largely rewritten:
      - Process keys in the order of rows found, smaller ranges first. If two
        ranges have an equal number of rows, use the one with more key parts.
        This helps us to mark more used fields to not be used for further
        selectivity calculations. See cmp_quick_ranges().
      - Ignore keys with fields that were used by previous keys
      - Don't use rec_per_key[] to calculate selectivity for smaller
        secondary key parts.  This does not work as rec_per_key[] value
        is calculated in the context of the previous key parts, not for the
        key part itself. The one exception is if the previous key parts
        are all constants.
      
      Other things:
      - Ensure that select->cond_selectivity is always between 0 and 1.
      - Ensure that select->opt_range_condition_rows is never updated to
        a higher value. It is initially set to the number of rows in table.
      - We now store in table->opt_range_condition_rows the lowest number of
        rows that any row-read-method has found so far. Before it was only done
        for QUICK_SELECT_I::QS_TYPE_ROR_UNION and
        QUICK_SELECT_I::QS_TYPE_INDEX_MERGE.
        Now it is done for a lot more methods. See
        calculate_cond_selectivity_for_table() for details.
      - Calculate and use selectivity for the first key part of a multi-part
        key if the first key part is a constant, e.g.
        WHERE key1_part1=5 AND key2_part1=5. If key1 is used, then we can still
        use the selectivity for key2.
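
      Two of the invariants listed above can be sketched as simple clamps
      (names are illustrative, not the server's identifiers):

```cpp
#include <algorithm>
#include <cassert>

// Hedged sketch: cond_selectivity stays within [0, 1], and
// opt_range_condition_rows is never updated to a higher value
// (it only ever decreases from the initial table row count).
double clamp_selectivity(double selectivity)
{
  return std::min(1.0, std::max(0.0, selectivity));
}

double update_opt_range_condition_rows(double current_rows,
                                       double found_rows)
{
  return std::min(current_rows, found_rows);  // only ever decreases
}
```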
      
      Changes in test results:
      - 'filtered' is slightly changed, usually to something slightly smaller.
      - A few cases where for group by queries the table order changed. This was
        because the number of resulting rows from a group by query with MIN/MAX
        is now set to be smaller.
      - A few indexes were changed, as we now prefer an index with more key
        parts if the number of resulting rows is the same.
      9659fcb4
    • Monty's avatar
      Fixed bug in SQL_SELECT_LIMIT · 21b121cb
      Monty authored
      We were comparing costs when we should have been comparing the number
      of rows that will be examined
      21b121cb
    • Monty's avatar
      Simple optimization to speed up some handler functions when checking killed · f631d5b7
      Monty authored
      - Avoid checking for has_transactions if the killed flag is not checked
      - Simplify code (checked with gcc -O3 that there are improvements)
      - Added handler::fast_increment_statistics() to be used when a handler
        function wants to increase two statistics for one row access.
      - Made check_limit_rows_examined() inline (even if it didn't make any
        difference for gcc 7.5.0); still the right thing to do
      f631d5b7
    • Monty's avatar
      Adjusted Range_rowid_filter_cost_info lookup cost slightly. · 272a2d48
      Monty authored
      If the array size is 1, the cost would be 0, which is wrong.
      Fixed by adding a small (0.001) base value to the lookup cost.
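      As an illustration, assuming (purely for this sketch) a log2-shaped
      per-lookup cost; the function name and cost shape are hypothetical,
      only the 0.001 base value comes from the commit:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: if a lookup in the filter array costs roughly
// log2(array_size) comparisons, an array of size 1 would cost 0.
// The small 0.001 base value keeps the cost strictly positive.
double rowid_filter_lookup_cost(unsigned array_size)
{
  return 0.001 + std::log2(static_cast<double>(array_size));
}
```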
      
      This causes no changes in any result files.
      272a2d48
    • Vicențiu Ciorbaru's avatar
      6e5e6f4a
    • Monty's avatar
      Change class variable names in rowid_filter to longer, more clear names · 41348a43
      Monty authored
      No code logic changes were made
      
      a     -> gain
      b     -> cost_of_building_range_filter
      a_adj -> gain_adj
      r     -> row_combinations
      
      Other things:
      - Optimized the layout of class Range_rowid_filter_cost_info.
        One effect was that I moved key_no to the private section to get
        better alignment and had to introduce a get_key_no() function.
      - Indentation changes in rowid_filter.cc to avoid long rows.
      41348a43
    • Monty's avatar
      Updated convert-debug-for-diff · a543b2c0
      Monty authored
      a543b2c0
    • Monty's avatar
      Optimizer code cleanups, no logic changes · 17b55464
      Monty authored
      - Updated comments
      - Added some extra DEBUG
      - Indentation changes and break long lines
      - Trivial code changes like:
        - Combining 2 statements in one
        - Reorder DBUG lines
        - Use a variable to store a pointer that is used multiple times
      - Moved declaration of variables to start of loop/function
      - Removed dead or commented code
      - Removed wrong DBUG_EXECUTE code in best_extension_by_limited_search()
      17b55464
    • Monty's avatar
      Limit calculated rows to the number of rows in the table · 62fb3c3a
      Monty authored
      The result file changes are mainly that the number of rows is one smaller
      for some queries with DISTINCT or GROUP BY
      62fb3c3a
    • Monty's avatar
      Ensure that test_quick_select doesn't return more rows than in the table · a3c100e4
      Monty authored
      Other changes:
      - In test_quick_select(), assume that if table->used_stats_records is 0
        then the table has 0 rows.
      - Fixed prepare_simple_select() to populate table->used_stat_records
      - Ensure that set_statistics_for_tables() doesn't cause used_stats_records
        to be 0 when using stat_tables.
      - To get blackhole to work with replication, set stats.records to 2 so
        that test_quick_select() doesn't assume the table is empty.
      a3c100e4
    • Monty's avatar
      MDEV-14907 FEDERATEDX doesn't respect DISTINCT · e1699320
      Monty authored
      This is a minor cleanup of the original commit
      e1699320
    • Monty's avatar
      Improve comments in the optimizer · 8188982c
      Monty authored
      8188982c
    • Daniel Black's avatar
      MDEV-30203 Move mysql symlinks to different package · 44edf3ed
      Daniel Black authored
      For both Deb and RPM, create mariadb-client-compat and
      mariadb-server-compat containing the mysql links to the mariadb
      named executables/scripts.
      
      The mariadb-client-core mysqlcheck was moved to mariadb-client-compat.
      
      resolve-stack-dump was moved from server to client (RPM), like
      where the man page and Debian package put it.
      
      The symlinks in MYSQL_ADD_EXECUTABLE are tagged as a
      {Client,Server}Symlinks component and placed in
      the symlinks packages.
      
      Man pages are restructured to be installed into the compat package
      if that matches the executable.
      
      Columnstore has a workaround as it doesn't use the cmake/plugin.cmake.
      
      Scripts likewise have compatibility symlinks in
      the {server,client}-compat packages.
      
      Co-author: Andrew Hutchings <andrew@linuxjedi.co.uk>
      
      Closes #2390
      44edf3ed
    • Sergei Golubchik's avatar
      more manpages · 81f36569
      Sergei Golubchik authored
      * move them from ManPagesX component to X (works better for plugins),
        but keep ManPagesDevelopment as C/C is using it
      * move backup manpages to Backup
      * move plugin manpages (s3, rocksdb) to plugins
      81f36569
    • Sergei Golubchik's avatar
      cmake: rename backup component to Backup · 53fc59cb
      Sergei Golubchik authored
      for consistency
      53fc59cb
    • Sergei Golubchik's avatar
      cmake: simplify handling of man pages · 13092688
      Sergei Golubchik authored
      and remove unused function INSTALL_MANPAGE
      13092688
    • Sergei Golubchik's avatar
    • Andrew Hutchings's avatar
      MDEV-30275: mariadb names rather than mysql names should be used · c5b8f471
      Andrew Hutchings authored
      Fix bug in mariadb-service-convert
      
      If mariadb-service-convert is run and the user variable is unset, then
      this sets `User=` in `[Service]`, which then tries to run mariadb as
      root, which in turn fails. This only happens when mysqld_safe is
      missing, which is all the time now. So...
      
      1. Don't set `User=` if there is no user variable.
      2. Use mariadbd-safe instead.
      
      Also
      * galera_recovery to use mariadbd
      * mtr - wsrep use mariadb executables
      * debian/mariadb-server.mariadb.init use mariadbd-safe
      * debian/tests/smoke uses mariadb instead of mysql as client.
      
      Co-Author: Daniel Black <daniel@mariadb.org>
      c5b8f471
    • Denis Protivensky's avatar
    • Jan Lindström's avatar
      MDEV-30133 : MariaDB startup does not validate plugin-wsrep-provider when... · a280dded
      Jan Lindström authored
      MDEV-30133 : MariaDB startup does not validate plugin-wsrep-provider when wsrep_mode=off or wsrep_provider is not set
      
      Refuse to start if plugin-wsrep-provider is used while WSREP_ON=OFF,
      or wsrep_provider is not set or is set to 'none'.
      a280dded
    • Jan Lindström's avatar
      MDEV-30120 : Update the wsrep_provider_options read_only value in the system_variables table. · 19a33e83
      Jan Lindström authored
      When the wsrep-provider-options plugin is initialized, we need to
      update the wsrep-provider-options variable to be READ_ONLY.
      19a33e83
    • Jan Lindström's avatar
      Update wsrep-lib submodule · 499a60d8
      Jan Lindström authored
      499a60d8
    • Daniele Sciascia's avatar
      MDEV-22570 Implement wsrep_provider_options as plugin · 47b6c644
      Daniele Sciascia authored
      - Provider options are read from the provider during
        startup, before plugins are initialized.
      - New wsrep_provider plugin for which sysvars are generated
        dynamically from options read from the provider.
      - The plugin is enabled by option plugin-wsrep-provider=ON.
        If enabled, wsrep_provider_options can no longer be used,
        (an error is raised on attempts to do so).
      - Each option is either string, integer, double or bool
      - Options can be dynamic / readonly
      - Options can be deprecated
      
      Limitations:
      
      - We do not check that the value of a provider option falls
        within a certain range. This type of validation is still
        done on the Galera side.
      Reviewed-by: Jan Lindström <jan.lindstrom@mariadb.com>
      47b6c644
    • Marko Mäkelä's avatar
      MDEV-29986 Set innodb_undo_tablespaces=3 by default · 4864992c
      Marko Mäkelä authored
      Starting with commit baf276e6 (MDEV-19229)
      the parameter innodb_undo_tablespaces can be increased from its
      previous default value 0 while allowing an upgrade from old databases.
      
      We will change the default setting to innodb_undo_tablespaces=3
      so that the space occupied by possible bursts of undo log records
      can be reclaimed after SET GLOBAL innodb_undo_log_truncate=ON.
      
      We will not enable innodb_undo_log_truncate by default, because it
      causes some observable performance degradation.
      
      Special thanks to Thirunarayanan Balathandayuthapani for diagnosing
      and fixing a number of bugs related to this new default setting.
      
      Tested by: Matthias Leich, Axel Schwenke, Vladislav Vaintroub
      (with both values of innodb_undo_log_truncate)
      4864992c
    • Julius Goryavsky's avatar
      MDEV-30157: Galera SST doesn't properly handle undo* files from innodb · e8366b37
      Julius Goryavsky authored
      This fix adds separate handling for "undo*" files that contain undo
      logs as part of innodb files and adds a filter for undo* to the main
      filter used when initially transferring files with rsync.
      e8366b37
    • Julius Goryavsky's avatar
      pre-MDEV-30157 & pre-MDEV-28669: fixes before the main corrections · 63c46001
      Julius Goryavsky authored
      This commit adds more correct handling of parameters with paths
      that contain leading or trailing spaces, and also fixes problems
      where the user specified an explicit path to additional directories
      but these directories match the non-standard datadir path. In this
      case, the additional subdirectories must be treated (in relation to
      datadir) as if the path was not specified, or was implicitly
      specified via "." or "./"; the script code before this fix worked
      as if the user had specified separate paths for each of the
      additional subdirectories (although in fact they all point to the
      same place datadir points to).
      
      This fix does not contain separate tests, as tests will
      be part of the main commit(s). This fix has been made as
      a separate commit to facilitate review for major substantive
      fixes related to MDEV-30157 and MDEV-28669.
      63c46001
    • Marko Mäkelä's avatar
      MDEV-19506 Remove the global sequence DICT_HDR_ROW_ID for DB_ROW_ID · 637d22af
      Marko Mäkelä authored
      InnoDB tables that lack a primary key (and any UNIQUE INDEX whose
      all columns are NOT NULL) will use an internally generated index,
      called GEN_CLUST_INDEX(DB_ROW_ID) in the InnoDB data dictionary,
      and hidden from the SQL layer.
      
      The 48-bit (6-byte) DB_ROW_ID is being assigned from a
      global sequence that is persisted in the DICT_HDR page.
      
      There is absolutely no reason for the DB_ROW_ID to be globally
      unique across all InnoDB tables.
      
      A downgrade to earlier versions will be prevented by the file format
      change related to removing the InnoDB change buffer (MDEV-29694).
      
      DICT_HDR_ROW_ID, dict_sys_t::row_id: Remove.
      
      dict_table_t::row_id: The per-table sequence of DB_ROW_ID.
      
      commit_try_rebuild(): Copy dict_table_t::row_id from the old table.
      
      btr_cur_instant_init(), row_import_cleanup(): If needed, perform
      the equivalent of SELECT MAX(DB_ROW_ID) to initialize
      dict_table_t::row_id.
      
      row_ins(): If needed, obtain DB_ROW_ID from dict_table_t::row_id.
      Should it exceed the maximum 48-bit value, return DB_OUT_OF_FILE_SPACE
      to prevent further inserts into the table.
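
      The 48-bit limit can be sketched as follows (illustrative names; per
      the message, the real check lives in row_ins()):

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the 48-bit (6-byte) DB_ROW_ID limit described above:
// once the per-table sequence would exceed the maximum value,
// further inserts must fail (DB_OUT_OF_FILE_SPACE).
constexpr uint64_t MAX_DB_ROW_ID= (uint64_t{1} << 48) - 1;

bool row_id_would_overflow(uint64_t next_row_id)
{
  return next_row_id > MAX_DB_ROW_ID;
}
```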
      
      dict_load_table_one(): Move a condition to btr_cur_instant_init_low()
      so that dict_table_t::row_id will be restored also for
      ROW_FORMAT=COMPRESSED tables.
      
      Tested by: Matthias Leich
      637d22af
    • Marko Mäkelä's avatar
      MDEV-29694 Remove the InnoDB change buffer · 9c889cfa
      Marko Mäkelä authored
      The purpose of the change buffer was to reduce random disk access,
      which could be useful on rotational storage, but maybe less so on
      solid-state storage.
      When we wished to
      (1) insert a record into a non-unique secondary index,
      (2) delete-mark a secondary index record,
      (3) delete a secondary index record as part of purge (but not ROLLBACK),
      and the B-tree leaf page where the record belongs to is not in the buffer
      pool, we inserted a record into the change buffer B-tree, indexed by
      the page identifier. When the page was eventually read into the buffer
      pool, we looked up the change buffer B-tree for any modifications to the
      page and applied them upon completion of the read operation. This
      was called the insert buffer merge.
      
      We remove the change buffer, because it has been the source of
      various hard-to-reproduce corruption bugs, including those fixed in
      commit 5b9ee8d8 and
      commit 165564d3 but not limited to them.
      
      A downgrade will fail with a clear message starting with
      commit db14eb16 (MDEV-30106).
      
      buf_page_t::state: Merge IBUF_EXIST to UNFIXED and
      WRITE_FIX_IBUF to WRITE_FIX.
      
      buf_pool_t::watch[]: Remove.
      
      trx_t: Move isolation_level, check_foreigns, check_unique_secondary,
      bulk_insert into the same bit-field. The only purpose of
      trx_t::check_unique_secondary is to enable bulk insert into an
      empty table. It no longer enables insert buffering for UNIQUE INDEX.
      
      btr_cur_t::thr: Remove. This field was originally needed for change
      buffering. Later, its use was extended to cover SPATIAL INDEX.
      Much of the time, rtr_info::thr holds this field. When it does not,
      we will add parameters to SPATIAL INDEX specific functions.
      
      ibuf_upgrade_needed(): Check if the change buffer needs to be updated.
      
      ibuf_upgrade(): Merge and upgrade the change buffer after all redo log
      has been applied. Free any pages consumed by the change buffer, and
      zero out the change buffer root page to mark the upgrade completed,
      and to prevent a downgrade to an earlier version.
      
      dict_load_tablespaces(): Renamed from
      dict_check_tablespaces_and_store_max_id(). This needs to be invoked
      before ibuf_upgrade().
      
      btr_cur_open_at_rnd_pos(): Specialize for use in persistent statistics.
      The change buffer merge does not need this function anymore.
      
      btr_page_alloc(): Renamed from btr_page_alloc_low(). We no longer
      allocate any change buffer pages.
      
      row_search_index_entry(), btr_lift_page_up(): Add a parameter thr
      for the SPATIAL INDEX case.
      
      rtr_page_split_and_insert(): Specialized from btr_page_split_and_insert().
      
      rtr_root_raise_and_insert(): Specialized from btr_root_raise_and_insert().
      
      Note: The support for upgrading from the MySQL 3.23 or MySQL 4.0
      change buffer format that predates the MySQL 4.1 introduction of
      the option innodb_file_per_table was removed in MySQL 5.6.5
      as part of mysql/mysql-server@69b6241a79876ae98bb0c9dce7c8d8799d6ad273
      and MariaDB 10.0.11 as part of 1d0f70c2.
      
      In the tests innodb.log_upgrade and innodb.log_corruption, we create
      valid (upgraded) change buffer pages.
      
      Tested by: Matthias Leich
      9c889cfa
    • Marko Mäkelä's avatar
      MDEV-30136: Deprecate innodb_flush_method · 30431170
      Marko Mäkelä authored
      We introduce the following settable Boolean global variables:
      
      innodb_log_file_write_through: Whether writes to ib_logfile0 are
      write-through (disabling any caching, as in O_SYNC or O_DSYNC).
      
      innodb_data_file_write_through: Whether writes to any InnoDB data files
      (including the temporary tablespace) are write-through.
      
      innodb_data_file_buffering: Whether the file system cache is enabled
      for InnoDB data files.
      
      All these parameters are OFF by default; that is, the file system cache
      will be disabled, but any hardware caching is enabled, so
      explicit calls to fsync(), fdatasync() or similar functions are needed.
      
      On systems that support FUA it may make sense to enable write-through,
      to avoid extra system calls.
      
      If the deprecated read-only start-up parameter is set to one of the
      following values, then the values of the 4 Boolean flags (the above 3
      plus innodb_log_file_buffering) will be set as follows:
      
      O_DSYNC:
      innodb_log_file_write_through=ON, innodb_data_file_write_through=ON,
      innodb_data_file_buffering=OFF, and
      (if supported) innodb_log_file_buffering=OFF.
      
      fsync, littlesync, nosync, or (Microsoft Windows specific) normal:
      innodb_log_file_write_through=OFF, innodb_data_file_write_through=OFF,
      and innodb_data_file_buffering=ON.
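
      The mapping above can be sketched as follows (a hedged illustration;
      the struct and function names are hypothetical, while the flag values
      follow the commit text):

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of how the deprecated innodb_flush_method
// values map onto the new Boolean flags, per the commit text.
struct write_flags
{
  bool log_file_write_through;
  bool data_file_write_through;
  bool data_file_buffering;
};

write_flags map_flush_method(const std::string &method)
{
  if (method == "O_DSYNC")
    // plus innodb_log_file_buffering=OFF, if supported
    return {true, true, false};
  // fsync, littlesync, nosync, or (Windows-specific) normal:
  return {false, false, true};
}
```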
      
      Note: fsync() or fdatasync() will only be disabled if the separate
      parameter debug_no_sync (in the code, my_disable_sync) is set.
      
      In mariadb-backup, the parameter innodb_flush_method will be ignored.
      
      The Boolean parameters can be modified by SET GLOBAL while the
      server is running. This will require reopening the ib_logfile0
      or all currently open InnoDB data files.
      
      We will open files straight in O_DSYNC or O_SYNC mode when applicable.
      We will try to open data files straight in O_DIRECT mode when the
      page size is at least 4096 bytes. For atomically creating data files,
      we will invoke os_file_set_nocache() to enable O_DIRECT afterwards,
      because O_DIRECT is not supported on some file systems. We will also
      continue to invoke os_file_set_nocache() on ib_logfile0 when
      innodb_log_file_buffering=OFF can be fulfilled.
      
      For reopening the ib_logfile0, we use the same logic that was developed
      for online log resizing and reused for updates of
      innodb_log_file_buffering.
      
      Reopening all data files is implemented in the new function
      fil_space_t::reopen_all().
      
      Reviewed by: Vladislav Vaintroub
      Tested by: Matthias Leich
      30431170
    • Marko Mäkelä's avatar
      MDEV-29983 Deprecate innodb_file_per_table · e328bb2f
      Marko Mäkelä authored
      Before commit 6112853c in MySQL 4.1.1
      introduced the parameter innodb_file_per_table, all InnoDB data was
      written to the InnoDB system tablespace (often named ibdata1).
      A serious design problem is that once the system tablespace has grown to
      some size, it cannot shrink even if the data inside it has been deleted.
      
      There are also other design problems, such as the server hang MDEV-29930
      that should only be possible when using innodb_file_per_table=0 and
      innodb_undo_tablespaces=0 (storing both tables and undo logs in the
      InnoDB system tablespace).
      
      The parameter innodb_change_buffering was deprecated
      in commit b5852ffb.
      Starting with commit baf276e6
      (MDEV-19229) the number of innodb_undo_tablespaces can be increased,
      so that the undo logs can be moved out of the system tablespace
      of an existing installation.
      
      If all these things (tables, undo logs, and the change buffer) are
      removed from the InnoDB system tablespace, the only variable-size
      data structure inside it is the InnoDB data dictionary.
      
      DDL operations on .ibd files were optimized in
      commit 86dc7b4d (MDEV-24626).
      That should have removed any thinkable performance advantage of
      using innodb_file_per_table=0.
      
      Since there should be no benefit of setting innodb_file_per_table=0,
      the parameter should be deprecated. Starting with MySQL 5.6 and
      MariaDB Server 10.0, the default value is innodb_file_per_table=1.
      e328bb2f
    • Sergei Golubchik's avatar
      MDEV-29582 post-review fixes · 647f10ce
      Sergei Golubchik authored
      don't include my_progname in the error message; my_error starts the
      message with it automatically, resulting in output like
      
      /usr/bin/mysqladmin: Notice: /usr/bin/mysqladmin is deprecated and will be removed in a future release, use command 'mariadb-admin'
      
      and remove "Notice" so that the problem description would directly
      follow the executable name.
      
      make the check work when the executable is in the PATH
      (so, invoked simply as 'mysql', and thus readlink cannot find it)
      
      fix the check in mysql_install_db and mysql_secure_installation to not
      print the warning if the intermediate path contains "mysql" substring
      
      add this message also to
      * mysql_waitpid
      * mysql_convert_table_format
      * mysql_find_rows
      * mysql_setpermissions
      * mysqlaccess
      * mysqld_multi
      * mysqld_safe
      * mysqldumpslow
      * mysqlhotcopy
      * mysql_ldb
      
      Closes #2273
      647f10ce
  2. 24 Dec, 2022 3 commits
    • Daniel Black's avatar
      MDEV-29582 deprecate mysql* names · bdf1b8ee
      Daniel Black authored
      Eventually mysql symlinks will go away, as MariaDB and MySQL keep
      diverging and we do not want to make it impossible to install
      MariaDB and MySQL side-by-side when users want it.
      
      It is also useful if people start using MariaDB tools with MariaDB.
      
      If the exe doesn't begin with "mariadb" or is a symlink,
      print a warning to use the resolved name.
      
      In my_readlink, add a check on my_thread_var, as it's used by comp_err
      and other build utils that also use my_init.
      bdf1b8ee
    • Sergei Golubchik's avatar
      MDEV-30128 remove support for 5.1- replication events · 27a0c239
      Sergei Golubchik authored
      including patches from Andrei Elkin
      27a0c239
    • Sergei Golubchik's avatar
      MDEV-30153 ad hoc client versions are confusing · cdf26c55
      Sergei Golubchik authored
      try to make them less confusing for users.
      Hopefully, if the version string is changed like
      
      - mariadb Ver 15.1 Distrib 10.11.2-MariaDB for Linux (x86_64)
      + mariadb from 10.11.2-MariaDB, client 15.1 for Linux (x86_64)
      
      users will be less inclined to reply "15.1" to the question
      "what mariadb version are you using?"
      cdf26c55