1. 15 Sep, 2023 8 commits
    • MDEV-31460: main.order_by_pack_big fails with view-protocol · 7bef86ea
      Lena Startseva authored
      Fixed tests:
      main.order_by_pack_big - disabled view-protocol for some queries
      because the view is created with a wrong column name if the column
      name is longer than 64 characters
    • Merge branch '10.4' into 10.5 · cf816263
      Yuchen Pei authored
    • MDEV-32157 MDEV-28856 Spider: Tests, documentation, small fixes and cleanups · 18990f00
      Yuchen Pei authored
      Removed some redundant hint related string literals from
      spd_db_conn.cc
      
      Clean up SPIDER_PARAM_*_[CHAR]LEN[S]
      
      Added tests covering monitoring_kind=2. What it does is read from
      mysql.spider_link_mon_servers with matching db_name, table_name and
      link_id, and it does not do anything about that...
      
      How monitoring_* can be useful: in the deprecated spider high
      availability feature, when one remote fails, spider will try another
      remote, which apparently makes use of these table parameters.
      
      A test covering the query_cache_sync table param. Some further tests
      on some spider table params.
      
      Wrapper should be case insensitive.
      
      Code documentation on spider priority binary tree.
      
      Add an assertion that static_key_cardinality is always -1. All tests
      still pass.
    • MDEV-32157 MDEV-28856 Spider: drop server in tests · 3b3200e2
      Yuchen Pei authored
      This helps eliminate "server exists" failures.
      
      Also, spider/bugfix.mdev_29676, when enabled after MDEV-29525 is
      pushed, will fail because we have not --recorded the result. But the
      failure will only emerge when working on MDEV-31138, where we manually
      re-enable this test, so let's worry about it then.
    • MDEV-29502 Fix some issues with spider direct aggregate · 68a00207
      Yuchen Pei authored
      The direct aggregate mechanism seems to be intended only for the case
      where a full table scan query would otherwise be executed from the
      spider node, with the aggregation also done at the spider node.
      Typically this happens in sub_select(). In the test
      spider.direct_aggregate_part, direct aggregate allows sending COUNT
      statements directly to the data nodes and adding up the results at
      the spider node, instead of iterating over the rows one by one there.
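      The COUNT pushdown described above can be sketched as follows (an
      illustrative sketch only; the function name is made up and this is
      not Spider's actual code):

      ```c
      #include <stdint.h>
      #include <stddef.h>

      /* Illustrative sketch (not Spider's actual code): with direct
       * aggregate, each data node answers a pushed-down COUNT(*) locally,
       * and the spider node only sums the partial counts instead of
       * iterating over the rows one by one. */
      static uint64_t spider_sum_partial_counts(const uint64_t *node_counts,
                                                size_t n_nodes)
      {
          uint64_t total = 0;
          for (size_t i = 0; i < n_nodes; i++)
              total += node_counts[i];   /* add up per-node partial counts */
          return total;
      }
      ```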
      
      By contrast, the group by handler (GBH) typically sends aggregated
      queries directly to data nodes, in which case DA does not improve the
      situation here.
      
      That is why we should fix it by disabling DA when GBH is used.
      
      There are other reasons supporting this change. First, the creation of
      GBH results in a call to change_to_use_tmp_fields() (as opposed to
      setup_copy_fields()) which causes the spider DA function
      spider_db_fetch_for_item_sum_funcs() to work on the wrong items. Second,
      the spider DA function only calls direct_add() on the items, and the
      follow-up add() needs to be called by the sql layer code. In
      do_select(), after executing the query with the GBH, it seems that the
      required add() would not necessarily be called.
      
      Disabling DA when GBH is used does fix the bug. There are a few
      other things included in this commit to improve the situation with
      spider DA:
      
      1. Add a session variable that allows the user to disable DA
      completely; this will help as a temporary measure if/when further
      bugs with DA emerge.
      
      2. Move the increment of direct_aggregate_count to the spider DA
      function. Currently this is done in rather bizarre and random
      locations.
      
      3. Fix the spider_db_mbase_row creation so that the last of its row
      fields (the sentinel) is NULL. The code already does a null check,
      but the sentinel field was at an invalid address, causing segfaults.
      With a correct implementation of the row creation we can avoid such
      segfaults.
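      The sentinel arrangement described in point 3 can be sketched like
      this (a hypothetical illustration with a made-up function name, not
      the actual spider_db_mbase_row code):

      ```c
      #include <stdlib.h>

      /* Hypothetical sketch of the sentinel fix: allocate n_fields pointers
       * plus one extra slot that is always NULL, so a NULL check while
       * walking the row terminates inside the allocation instead of reading
       * past its end. */
      static char **alloc_row_with_sentinel(size_t n_fields)
      {
          /* calloc zeroes the memory, so every slot starts out NULL */
          char **row = calloc(n_fields + 1, sizeof *row);
          return row;   /* row[n_fields] is the NULL sentinel */
      }
      ```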
    • Merge branch '10.4' into 10.5 · e95e9a22
      Yuchen Pei authored
    • MDEV-31787 MDEV-26151 Add a test exercising non-0 spider_casual_read · 96760d3a
      Yuchen Pei authored
      Also:
      - clean up spider_check_and_get_casual_read_conn() and
        spider_check_and_set_autocommit()
      - remove a couple of commented out code blocks
  2. 14 Sep, 2023 7 commits
  3. 13 Sep, 2023 5 commits
    • MDEV-31177: SHOW SLAVE STATUS Last_SQL_Errno Race Condition on Errored Slave Restart · 1407f999
      Brandon Nesterenko authored
      The SQL thread and a user connection executing SHOW SLAVE STATUS
      have a race condition on Last_SQL_Errno, such that on the next start
      of a slave which previously errored and stopped, SHOW SLAVE STATUS
      can show that the SQL thread is running while the previous error is
      still showing.
      
      The fix is to clear the last error when the SQL thread starts,
      before setting the status of Slave_SQL_Running.
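      The ordering of the fix can be sketched as follows (a simplified
      illustration with made-up struct and field names, not the actual
      replication code, which also involves the proper locking):

      ```c
      /* Simplified sketch: publish "no error" before publishing "running",
       * so a concurrent SHOW SLAVE STATUS cannot observe a running SQL
       * thread alongside the stale error from the previous stop. */
      struct slave_status {
          int  last_sql_errno;
          char last_sql_error[256];
          int  sql_running;
      };

      static void sql_thread_start(struct slave_status *s)
      {
          s->last_sql_errno = 0;      /* clear the previous error first... */
          s->last_sql_error[0] = '\0';
          s->sql_running = 1;         /* ...then mark the thread as running */
      }
      ```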
      
      Thanks to Kristian Nielsen for his work diagnosing the problem!
      
      Reviewed By:
      ============
      Andrei Elkin <andrei.elkin@mariadb.com>
      Kristian Nielsen <knielsen@knielsen-hq.org>
    • MDEV-31038: rpl.rpl_xa_prepare_gtid_fail clean up · 7de0c7b5
      Brandon Nesterenko authored
      - Removed commented-out and unused lines.
      - Updated the test to reference the true failure, a timeout
        rather than a deadlock
      - Switched save variables from MTR to user variables
      - Forced relay-log purge so as not to potentially re-execute
        an already prepared transaction
    • MDEV-31369 Disable TLS v1.0 and 1.1 for MariaDB · 1831f8e4
      Daniel Black authored
      Remove TLSv1.1 from the default tls_version system variable.
      
      Output a warning if TLSv1.0 or TLSv1.1 are selected.
      
      Thanks Tingyao Nian for the feature request.
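      As a sketch, restricting a server to the newer protocol versions
      might look like the following my.cnf fragment (tls_version is the
      system variable whose default this commit changes; the exact value
      shown is an example, not part of the commit):

      ```ini
      [mariadbd]
      # Allow only modern TLS; selecting TLSv1.0 or TLSv1.1 now logs a warning
      tls_version = TLSv1.2,TLSv1.3
      ```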
    • post-merge fix · 9e9cefde
      Sergei Golubchik authored
    • MDEV-31315 Add client_ed25519.dll to the list of plugins shipped with HeidiSQL · 5fe8d0d5
      Oleg Smirnov authored
      There is a list of plugins in the WiX configuration file for HeidiSQL,
      and the installer only installs DLLs from that list although the HeidiSQL
      portable archive may include other plugins.
      
      This commit adds client_ed25519.dll to this list and also rearranges
      the list alphabetically, so it is easier to verify its contents.
  4. 12 Sep, 2023 2 commits
    • MDEV-32150 InnoDB reports corruption on 32-bit platforms with ibd files sizes > 4GB · d20a4da2
      Marko Mäkelä authored
      buf_read_page_low(): Use 64-bit arithmetics when computing the
      file byte offset. In other calls to fil_space_t::io() the offset
      was being computed correctly, for example by
      buf_page_t::physical_offset().
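      The overflow being fixed can be illustrated like this (an
      illustrative sketch, not InnoDB's actual code; 16384 is the default
      InnoDB page size, and the page number is chosen to land beyond the
      4GiB boundary):

      ```c
      #include <stdint.h>

      /* 32-bit arithmetic: for files larger than 4GiB the product
       * page_no * page_size wraps modulo 2^32, yielding a bogus offset. */
      static uint32_t offset_32bit(uint32_t page_no, uint32_t page_size)
      {
          return page_no * page_size;            /* wraps modulo 2^32 */
      }

      /* 64-bit arithmetic: widen one operand before multiplying, so the
       * full file byte offset is preserved. */
      static uint64_t offset_64bit(uint32_t page_no, uint32_t page_size)
      {
          return (uint64_t) page_no * page_size; /* widened first */
      }
      ```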
    • MDEV-31833 replication breaks when using optimistic replication and replica is a galera node · a3cbc44b
      sjaakola authored
      The MariaDB async replication SQL thread was stopped on any failure
      in applying replication events, and the error message logged for the
      failure was: "Node has dropped from cluster". The assumption was that
      an event applying failure is always due to the node dropping out.
      With optimistic parallel replication, event applying can fail for
      natural reasons and should be retried to handle the failure. This
      retry logic was never exercised because the slave SQL thread was
      stopped on the first applying failure.
      
      To support the optimistic parallel replication retry logic, this
      commit skips the replication slave abort if the node remains in the
      cluster (wsrep_ready==ON) and replication is configured for
      optimistic or aggressive retry logic.
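      The new skip condition can be restated roughly as follows (a hedged
      sketch with made-up names, not the actual wsrep code):

      ```c
      #include <stdbool.h>

      /* Hedged restatement of the condition described above: abort the
       * slave SQL thread only if the node has actually left the cluster
       * (wsrep_ready == OFF) or no optimistic/aggressive retry logic is
       * configured; otherwise let the retry logic handle the failure. */
      static bool should_abort_slave(bool wsrep_ready, bool retry_configured)
      {
          return !(wsrep_ready && retry_configured);
      }
      ```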
      
      During the development of this fix, the galera.galera_as_slave_nonprim
      test showed some problems. The test was analyzed and appears to need
      some attention. One excessive sleep command was removed in this
      commit, but more fixes are needed for it to be fully deterministic.
      After this commit, galera_as_slave_nonprim is successful, though.
      Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
  5. 11 Sep, 2023 12 commits
  6. 09 Sep, 2023 1 commit
  7. 08 Sep, 2023 5 commits