- 27 Oct, 2021 26 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
because plugin code is not only about encryption anymore (it also loads provider plugins), and the xb_ prefix prevents name clashes with the server code (which mariabackup links with).
-
Sergei Golubchik authored
prefer backup-my.cnf from the incremental-dir over the one in target-dir
-
Sergei Golubchik authored
* Change InnoDB message text to mention compression.
* Issue the "not loaded compression provider" warning also when current_thd == 0.
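A hedged sketch of the second point: when there is no THD (current_thd == 0) there is no client to send the warning to, so it should go to the server error log instead of being silently dropped. THD, push_warning and log_warning below are simplified stand-ins, not the server's real interfaces.

  #include <cstdio>

  struct THD {};                        // stand-in for the connection context

  static void push_warning(THD *, const char *msg)   // per-connection warning
  { std::printf("client warning: %s\n", msg); }

  static void log_warning(const char *msg)           // server error log
  { std::printf("[Warning] %s\n", msg); }

  static void warn_provider_missing(THD *current_thd, const char *algo) {
    char msg[128];
    std::snprintf(msg, sizeof(msg), "compression provider %s is not loaded", algo);
    if (current_thd)
      push_warning(current_thd, msg);   // normal case: a statement is running
    else
      log_warning(msg);                 // e.g. a background thread: previously the warning was lost
  }

  int main() {
    THD thd;
    warn_provider_missing(&thd, "lz4");     // user statement
    warn_provider_missing(nullptr, "lz4");  // no THD: now still reported
  }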
-
Sergei Golubchik authored
-
Sergei Golubchik authored
make mariabackup load not only encryption plugins but also provider plugins.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
like:

  ==31311==ERROR: AddressSanitizer: odr-violation (0x7f3cda2e1480):
    [1] size=8 'provider_service_lz4' libservices/provider_service_lz4.c:14:17
    [2] size=8 'provider_service_lz4' sql/sql_plugin_services.ic:301:33
-
Sergei Golubchik authored
-
Kartik Soneji authored
bzip2/lz4/lzma/lzo/snappy compression is now provided via *services*

They're almost like normal services, but live in include/providers/ and are supposed to provide exactly the same interface as the original compression libraries (not everything, only enough of it for the code to compile). The services are implemented via dummy functions that return the corresponding error values (LZMA_PROG_ERROR, LZO_E_INTERNAL_ERROR, etc).

The actual compression libraries are linked into the corresponding provider plugins. Providers are daemon plugins that, when loaded, replace the service pointers to point to the actual compression functions. That is, the run-time dependency on compression libraries now lies with the plugins, and the server doesn't need any compression libraries to run, but it will automatically support the compression when a plugin is loaded.

InnoDB and Mroonga use compression plugins now. RocksDB doesn't, because it comes with standalone utility binaries that cannot load plugins.
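A minimal, self-contained sketch of the service-pointer pattern described above; the names (compress_service_st, dummy_compress, provider_plugin_init) are illustrative, not the actual MariaDB service or plugin symbols:

  #include <cstddef>
  #include <cstdio>

  // The "service": a table of function pointers mirroring the library API.
  struct compress_service_st {
    int (*compress)(const char *src, size_t len, char *dst, size_t *dst_len);
  };

  // Built-in stub: always fails, so the server links and runs without the library.
  static int dummy_compress(const char *, size_t, char *, size_t *) {
    return -1;  // stands in for LZMA_PROG_ERROR, LZO_E_INTERNAL_ERROR, etc.
  }

  static compress_service_st compress_service = { dummy_compress };

  // What a provider plugin's init hook would do: point the service at the real
  // library function (a trivial stand-in here instead of e.g. LZ4_compress_default).
  static int real_compress(const char *src, size_t len, char *dst, size_t *dst_len) {
    for (size_t i = 0; i < len; i++) dst[i] = src[i];
    *dst_len = len;
    return 0;
  }

  static void provider_plugin_init() { compress_service.compress = real_compress; }

  int main() {
    char out[16]; size_t out_len = sizeof(out);
    // Before the provider is loaded, callers get an error instead of a crash.
    std::printf("without provider: %d\n", compress_service.compress("hi", 2, out, &out_len));
    provider_plugin_init();  // loading the provider plugin swaps in the real function
    std::printf("with provider:    %d\n", compress_service.compress("hi", 2, out, &out_len));
  }

Callers always go through the service table, so behaviour switches from "return an error" to "really compress" the moment a provider plugin is loaded.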
-
Kartik Soneji authored
-
Sergei Golubchik authored
if plugin->deinit() returns a failure, it is no longer ignored; it means that the plugin isn't ready to be unloaded from memory yet. So it's marked "dying", deinitialized as much as possible, but stays in memory until shutdown.

also:
* increment MARIA_PLUGIN_INTERFACE_VERSION
* rewrite ha_rocksdb to use the new approach, update the test
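A rough sketch of that unload policy, with hypothetical names standing in for the server's plugin framework:

  #include <cstdio>

  struct plugin {
    const char *name;
    bool dying;
    int (*deinit)();   // returns 0 when the plugin can safely be unloaded
  };

  static void uninstall_plugin(plugin &p) {
    if (p.deinit && p.deinit() != 0) {
      // deinit failed: the plugin still has live resources, so keep its code
      // in memory (no dlclose) but stop handing it out to new users.
      p.dying = true;
      std::printf("%s marked dying, unload deferred to shutdown\n", p.name);
      return;
    }
    std::printf("%s unloaded immediately\n", p.name);
  }

  int main() {
    plugin busy = { "example_engine", false, [] { return 1; } };  // refuses to deinit
    plugin idle = { "example_daemon", false, [] { return 0; } };  // deinits cleanly
    uninstall_plugin(busy);
    uninstall_plugin(idle);
  }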
-
Sergei Golubchik authored
-
Sergei Golubchik authored
* reduce code duplication
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
It turns out that WFRM_WRITE_EXTRACTED was added only in a debug assertion and in a comment, as part of commit b7bba721.
-
Marko Mäkelä authored
-
Marko Mäkelä authored
-
Marko Mäkelä authored
mysql_write_frm(): Correctly enclose code inside #ifdef WITH_PARTITION_STORAGE_ENGINE so that cmake -DPLUGIN_PARTITION=NO builds can succeed. This was broken in commit b7bba721.
-
Alexander Barkov authored
Adding 10.4 specific tests.
-
Alexander Barkov authored
-
Alexander Barkov authored
Also fixes: MDEV-25399 Assertion `name.length == strlen(name.str)' failed in Item_func_sp::make_send_field

Also fixes a problem that in this scenario:
  SET NAMES binary;
  SELECT 'some not well-formed utf8 string';
the auto-generated column name copied the binary string value directly to the Item name, without checking utf8 well-formedness.

After this change auto-generated column names work as follows:
- Zero bytes 0x00 are copied to the name using HEX notation
- In case of "SET NAMES binary", all byte sequences that do not make well-formed utf8 characters are copied to the name using HEX notation
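A simplified sketch of that naming rule; the helper names and the \xNN notation are illustrative only (the server's exact hex format may differ), and the validator skips details such as overlong sequences that a real implementation would handle:

  #include <cstdio>
  #include <string>

  // returns the length of a well-formed UTF-8 sequence starting at s[i], or 0
  static size_t utf8_seq_len(const std::string &s, size_t i) {
    unsigned char c = s[i];
    size_t len = c < 0x80 ? 1
               : (c >> 5) == 0x6  ? 2
               : (c >> 4) == 0xE  ? 3
               : (c >> 3) == 0x1E ? 4 : 0;
    if (!len || i + len > s.size()) return 0;
    for (size_t k = 1; k < len; k++)
      if ((static_cast<unsigned char>(s[i + k]) & 0xC0) != 0x80) return 0;
    return len;
  }

  static std::string make_auto_name(const std::string &value) {
    std::string name;
    for (size_t i = 0; i < value.size(); ) {
      size_t len = utf8_seq_len(value, i);
      if (len == 0 || value[i] == '\0') {        // not well-formed, or a 0x00 byte
        char hex[8];
        std::snprintf(hex, sizeof(hex), "\\x%02X", (unsigned char) value[i]);
        name += hex;
        i++;                                     // re-sync one byte at a time
      } else {
        name.append(value, i, len);
        i += len;
      }
    }
    return name;
  }

  int main() {
    // "SET NAMES binary" with a not-well-formed 0xFF byte in the selected literal:
    std::printf("%s\n", make_auto_name(std::string("ab\xFF" "cd", 5)).c_str());
  }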
-
- 26 Oct, 2021 14 commits
-
-
Eugene Kosov authored
ALTER TABLE IMPORT doesn't properly handle instant alter metadata. This patch makes IMPORT read, parse and apply instant alter metadata at the very beginning of the operation. So, cases when the source table has some metadata and the destination table doesn't have it now work fine. DISCARD already removes instant metadata, so importing a normal table into an instant table worked fine before this patch.

decrypt_decompress(): decrypts and decompresses a page if needed

handle_instant_metadata(): this should be the first thing to read the source table. Basically, it applies instant metadata to a destination dict_table_t object. This is the first thing to read FSP flags, so all possible checks of them were moved to this function.

PageConverter::update_index_page(): it no longer reads instant metadata. This logic was moved into handle_instant_metadata()

row_import::match_flags(): this is the first part of row_import::match_schema(). As a separate function it's used by handle_instant_metadata().

fil_space_t::is_full_crc32_compressed(): added convenient function

ha_innobase::discard_or_import_tablespace(): do not reload the table definition to read instant metadata because handle_instant_metadata() does it better. The reverted code was originally added in 4e7ee166

ANONYMOUS_VAR: this is a handy thing to use along with make_scope_exit()

full_crc32_import.test shows different results, because no dict_table_close() and dict_table_open_on_id() happen. Thus, SHOW CREATE TABLE shows a slightly older table definition.
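A rough, runnable outline of the ordering this message describes; the functions are stubs named after the routines above, not actual InnoDB code:

  #include <cstdio>

  static int decrypt_decompress() { std::puts("decrypt/decompress source pages as needed"); return 0; }
  static int match_flags()        { std::puts("check FSP/table flags"); return 0; }

  // The first thing that reads the source table: apply its instant-ALTER
  // metadata to the destination dict_table_t before anything else looks at pages.
  static int handle_instant_metadata() {
    if (int err = decrypt_decompress()) return err;
    if (int err = match_flags())        return err;   // flag checks now live here
    std::puts("apply instant metadata to destination table");
    return 0;
  }

  // Page conversion no longer has to read instant metadata itself.
  static int convert_pages() { std::puts("PageConverter: update index pages"); return 0; }

  int main() {
    if (int err = handle_instant_metadata()) return err;
    return convert_pages();
  }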
-
Rucha Deodhar authored
ER_TRUNCATED_WRONG_VALUE
Part 1: Fix for DELETE without ORDER BY

Analysis: m_current_row_for_warning doesn't increment and assumes its default value, which is then used by ROW_NUMBER.
Fix: Increment m_current_row_for_warning for each processed row.
-
Rucha Deodhar authored
CHECK violation

Analysis: When there is a constraint failure, view_check_option() returns a non-zero value, so we continue the loop, which doesn't increment the counter because the increment is at the end of the loop.
Fix: Increment m_current_row_for_warning() at the beginning of the loop. This will also fix any similar bugs about the counter not incrementing correctly because of continue.
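A tiny self-contained illustration (generic names, not the server's code) of why an end-of-loop increment goes wrong once continue is taken:

  #include <cstdio>

  int main() {
    int rows[] = { 1, -2, -3 };   // pretend the negative values violate a constraint
    int buggy = 1;                // incremented at the end of the loop body
    int fixed = 0;                // incremented at the beginning of the loop body
    for (int v : rows) {
      fixed++;
      if (v < 0) {
        std::printf("violation on row %d (end-of-loop counter says %d)\n", fixed, buggy);
        continue;                 // skips the end-of-loop increment below
      }
      buggy++;
    }
  }

The second violation prints row 3 with the begin-of-loop counter but still row 2 with the end-of-loop one, which is exactly the kind of wrong ROW_NUMBER these fixes address.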
-
Rucha Deodhar authored
ER_WRONG_VALUE_COUNT_ON_ROW for the 1st row

Analysis: The current row for warning does not increment during the prepare phase.
Fix: Increment the current row for warning if the number of fields in the table and the row values don't match, and the number of values in the row is greater than the number of fields.
-
Rucha Deodhar authored
MDEV-26842: ROW_NUMBER is not set and differs from the message upon WARN_DATA_TRUNCATED produced by inplace ALTER

Analysis: When the row number is passed as a parameter to set_warning(), it is only used for the error/warning text, but m_current_row_for_warning is not updated. Hence the default value of m_current_row_for_warning is assumed.
Fix: Update m_current_row_for_warning when an error/warning occurs.
-
Sergei Golubchik authored
just a test case
-
Sergei Golubchik authored
in case of a bulk insert the server sends all rows to the engine, and then the engine replies that there was ER_DUP_ENTRY somewhere. the exact number of the row that caused the error is unknown.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
to remove

  Sql_condition* raise_condition(const Sql_condition *cond)
  {
    Sql_condition *raised= raise_condition(cond->get_sql_errno(),
                                           cond->get_sqlstate(),
                                           cond->get_level(),
                                           *cond,
                                           cond->get_message_text());
    return raised;
  }
-
Sergei Golubchik authored
-
Sergei Golubchik authored
otherwise how can we know that the row counter is incremented?
-
Rucha Deodhar authored
Analysis: The parser was missing ROW_NUMBER in the syntax for SIGNAL and RESIGNAL.
Fix: Fix the parser, and copy m_row_number like the other attributes so that ROW_NUMBER doesn't assume its default value.
-
Aleksey Midenkov authored
Removed a wrong leftover assertion. m_sql_cmd can be allocated by any ALTER subcommand, and before allocation it is checked for NULL.
-
Aleksey Midenkov authored
Syntax for CONVERT TABLE:

  ALTER TABLE tbl_name CONVERT TABLE tbl_name TO PARTITION partition_name partition_spec

Examples:

  ALTER TABLE t1 CONVERT TABLE tp2 TO PARTITION p2 VALUES LESS THAN MAX_VALUE();

The new ALTER_PARTITION_CONVERT_IN command for fast_alter_partition_table() is done in the alter_partition_convert_in() function, which basically does ha_rename_table(). The table structure and data checks are basically the same as in the EXCHANGE PARTITION command, and are done by compare_table_with_partition() and check_table_data().

Atomic DDL is done by the scheme from MDEV-22166 (see the corresponding commit message). The only difference is that it also has to drop the source table frm, and that is done by WFRM_DROP_CONVERTED_FROM.

Initial patch was done by Dmitry Shulga <dmitry.shulga@mariadb.com>
-