- 02 Apr, 2024 1 commit
-
-
Alexander Barkov authored
A user variable and a literal with different collations produce an "illegal mix of collations" error. Rewriting the script to avoid such arguments.
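A hypothetical sketch of the kind of conflict being avoided, assuming utf8mb4's default collation is utf8mb4_general_ci (names and values are illustrative):

  SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci;
  SET @v = 'abc';      -- @v keeps utf8mb4_unicode_ci, derivation COERCIBLE
  SET NAMES utf8mb4;   -- literals now get utf8mb4_general_ci
  SELECT @v = 'abc';   -- two COERCIBLE operands with different collations:
                       -- ER_CANT_AGGREGATE_2COLLATIONS (illegal mix)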
-
- 17 Mar, 2024 39 commits
-
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
Error: Run-Time Check Failure #3 - The variable 'r_filtered' is being used without being initialized. At :0
-
Sergei Golubchik authored
-
Sergei Golubchik authored
tests for privileges need not_embedded
-
Sergei Golubchik authored
-
Sergei Golubchik authored
-
Sergei Golubchik authored
for embedded
-
Sergei Golubchik authored
compilation failure on 32-bit, where unsigned = unsigned int, but the values can be longlong.
-
Sergei Golubchik authored
It's a pointer into the net buffer, so it might be overwritten by the next read or write. And the next plugin switch (in multi-auth) will try to compare it (in send_plugin_request_packet), which is normally harmless but fails the assertion in Lex_ident::is_valid_ident().
-
Sergei Golubchik authored
-
Sergei Golubchik authored
allow RPM upgrades from a different minor version, if the major version is the same.
-
Sergei Golubchik authored
sql_sequence.h:233:19: runtime error: signed integer overflow: -9223372036854775808 + -1 cannot be represented in type 'long long int'
Follow-up for 374783c3.
-
Sergei Golubchik authored
-
Monty authored
This commit can be ignored when columnstore changes to use rows_stats.updated + rows_stats.rows_inserted + rows_stats.rows_deleted instead of 'rows_changed' in storage/columnstore/columnstore/dbcon/mysql/ha_mcs_impl.cpp.
-
Sergei Golubchik authored
* show it as a datetime, not number of seconds
* show all users
* show manually expired users as 0000-00-00 00:00:00
* show default expiration interval correctly
* numerous test fixes, add more tests
* fix compilation of embedded
-
Nikita Malyavin authored
* A new table INFORMATION_SCHEMA.USERS is introduced.
* It stores auxiliary user data.
* An unprivileged user can access their own data, and that is the main difference with what mysql.global_priv provides.
* The fields are currently: USER, PASSWORD_ERRORS, PASSWORD_EXPIRATION_TIME.
* If password_errors is ignored for the user, PASSWORD_ERRORS is NULL.
* PASSWORD_EXPIRATION_TIME is a timestamp with the exact point in time, calculated from the password_last_changed and password_lifetime (i.e. days) stored for the user.
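A minimal usage sketch based on the fields listed above (output shape illustrative):

  -- An unprivileged user can read their own row:
  SELECT USER, PASSWORD_ERRORS, PASSWORD_EXPIRATION_TIME
  FROM INFORMATION_SCHEMA.USERS;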
-
Sergei Golubchik authored
C/C 3.4 disables mysql_old_password by default, so add an option for the `connect` command to support specifying the allowed authentication plugins (MARIADB_OPT_RESTRICTED_AUTH). Use it to enable mysql_old_password when needed for testing.
-
Sergei Golubchik authored
-
Sergei Golubchik authored
it no longer supports TLSv1.0
-
Sergei Golubchik authored
COM_STMT_BULK_STMT: new flag to the server to return all unitary results.
-
Alexander Barkov authored
Step#3 The main patch
-
Alexander Barkov authored
Step#2 - Adding a new collation derivation level for CAST and CONVERT.

Now character string cast functions:
- CAST(string_expr AS CHAR)
- CONVERT(expr USING charset_name)
have a new collation derivation level, between:
- string literals
- utf8 metadata functions, e.g. user() and database()

Before the change these cast functions had collation derivation equal to table columns, which caused more "illegal mix of collations" conflicts.

Note, binary string cast functions:
- BINARY(expr)
- CAST(string_expr AS BINARY)
- CONVERT(expr USING binary)
did not change their collation derivation, to preserve the behaviour of queries like these:
  SELECT database()=BINARY'test';
  SELECT user()=CAST('root' AS BINARY);
  SELECT current_role()=CONVERT('role' USING binary);

Derivation levels after the change look as follows:

  DERIVATION_IGNORABLE= 7, // Explicit NULL
  DERIVATION_NUMERIC= 6,   // Numbers in string context,
                           // numeric user variables,
                           // CAST(numeric_expr AS CHAR)
  DERIVATION_COERCIBLE= 5, // Literals, string user variables
  DERIVATION_CAST= 4,      // CAST(string_expr AS CHAR),
                           // CONVERT(string_expr USING cs)
  DERIVATION_SYSCONST= 3,  // utf8 metadata functions, e.g. user(), database()
  DERIVATION_IMPLICIT= 2,  // Table columns, SP variables, BINARY(expr)
  DERIVATION_NONE= 1,      // A mix (e.g. CONCAT) of two different collations
  DERIVATION_EXPLICIT= 0   // An explicit COLLATE clause
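A sketch of what the new level changes in practice, assuming the connection collation is utf8mb4_general_ci and a hypothetical table t1 (all names illustrative):

  CREATE TABLE t1 (a VARCHAR(10) COLLATE utf8mb4_danish_ci);

  -- CAST(... AS CHAR) now has DERIVATION_CAST (4), weaker than the
  -- column's DERIVATION_IMPLICIT (2), so the comparison resolves to the
  -- column's collation instead of raising an illegal mix of collations
  -- as it could when both sides were IMPLICIT:
  SELECT * FROM t1 WHERE a = CAST(@v AS CHAR);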
-
Alexander Barkov authored
Step#1 - Changing collation derivation for string user variables from IMPLICIT to COERCIBLE.

Rationale: without this preparatory change, switching the default collation for Unicode character sets from xxx_general_ci to uca1400_ai_ci would cause "Illegal mix of collations" errors in scenarios comparing a column with a non-default collation to a string user variable. This is especially important for queries to INFORMATION_SCHEMA tables, whose columns use utf8mb3_general_ci.

See the description of MDEV-25829 for more details and SQL script examples.
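A sketch of the effect on a hypothetical column with a non-default collation (names illustrative; @v gets the connection collation, e.g. utf8mb3_general_ci):

  CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8mb3 COLLATE utf8mb3_danish_ci);
  SET @v = 'abc';

  -- Before: column and @v were both IMPLICIT with different collations,
  -- giving "Illegal mix of collations".
  -- After: the column (IMPLICIT) wins over @v (COERCIBLE), so the
  -- comparison uses utf8mb3_danish_ci.
  SELECT * FROM t1 WHERE a = @v;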
-
Yuchen Pei authored
The corresponding table param was deprecated as part of MDEV-28861
-
Yuchen Pei authored
Values of all session tracking system variables will be sent in the first ok packet upon connection after successful authentication. Also updated mtr to print session track info on connection (h/t Sergei Golubchik) so that we can write mtr tests for this change.
-
Yuchen Pei authored
We do this by checking the server status. This avoids printing session tracking info from a previous statement rather than the last one. The change is from Sergei Golubchik.
-
Brandon Nesterenko authored
This patch augments Gtid_log_event with the user thread-id. In particular, that compensates for the loss of this info in Rows_log_events. Gtid_log_event::thread_id becomes visible in mysqlbinlog output, like
  #231025 16:21:45 server id 1 end_log_pos 537 CRC32 0x1cf1d963 GTID 0-1-2 ddl thread_id=10
as a 64-bit unsigned integer.

While the size of the Gtid event has grown by 8-9 bytes, replication between OLD <-> NEW servers is not affected by it.

This patch also slightly changes the logic that converts Gtid events to Query events for older replicas which don't support Gtid. Instead of hard-coding the padding of the sys-var section of the generated Query event, the length to pad is dynamically calculated based on the length of the Gtid event.

This work was started by the late Sujatha Sivakumar. Brandon Nesterenko took it over, reviewed initial patches and extended the work.

Reviewed-by:
  Andrei Elkin <andrei.elkin@mariadb.com>
  Kristian Nielsen <knielsen@knielsen-hq.org>
-
Alexander Barkov authored
This patch also fixes:
  MDEV-33050 Build-in schemas like oracle_schema are accent insensitive
  MDEV-33084 LASTVAL(t1) and LASTVAL(T1) do not work well with lower-case-table-names=0
  MDEV-33085 Tables T1 and t1 do not work well with ENGINE=CSV and lower-case-table-names=0
  MDEV-33086 SHOW OPEN TABLES IN DB1 -- is case insensitive with lower-case-table-names=0
  MDEV-33088 Cannot create triggers in the database `MYSQL`
  MDEV-33103 LOCK TABLE t1 AS t2 -- alias is not case sensitive with lower-case-table-names=0
  MDEV-33109 DROP DATABASE MYSQL -- does not drop SP with lower-case-table-names=0
  MDEV-33110 HANDLER commands are case insensitive with lower-case-table-names=0
  MDEV-33119 User is case insensitive in INFORMATION_SCHEMA.VIEWS
  MDEV-33120 System log table names are case insensitive with lower-cast-table-names=0

- Removing the virtual function strnncoll() from MY_COLLATION_HANDLER

- Adding a wrapper function CHARSET_INFO::streq(), to compare two strings for equality. For now it calls strnncoll() internally. In the future it will turn into a virtual function.

- Adding new accent sensitive case insensitive collations:
    - utf8mb4_general1400_as_ci
    - utf8mb3_general1400_as_ci
  They implement accent sensitive case insensitive comparison. The weight of a character is equal to the code point of its upper case variant. These collations use Unicode-14.0.0 casefolding data.

  The result of my_charset_utf8mb3_general1400_as_ci.strcoll() is very close to the former my_charset_utf8mb3_general_ci.strcasecmp(). There is only a difference in a couple dozen rare characters, because of:
    - the switch from "tolower" to "toupper" comparison, to make utf8mb3_general1400_as_ci closer to utf8mb3_general_ci
    - the switch from Unicode-3.0.0 to Unicode-14.0.0
  This difference should be tolerable. See the list of affected characters in the MDEV description.

  Note, utf8mb4_general1400_as_ci correctly handles non-BMP characters! Unlike utf8mb4_general_ci, it does not treat all non-BMP characters as equal.

- Adding classes representing names of the file-based database objects:
    Lex_ident_db
    Lex_ident_table
    Lex_ident_trigger
  Their comparison collation depends on the underlying file system case sensitivity and on --lower-case-table-names, and can be either my_charset_bin or my_charset_utf8mb3_general1400_as_ci.

- Adding classes representing names of other database objects, whose names have a case insensitive comparison style, using my_charset_utf8mb3_general1400_as_ci:
    Lex_ident_column
    Lex_ident_sys_var
    Lex_ident_user_var
    Lex_ident_sp_var
    Lex_ident_ps
    Lex_ident_i_s_table
    Lex_ident_window
    Lex_ident_func
    Lex_ident_partition
    Lex_ident_with_element
    Lex_ident_rpl_filter
    Lex_ident_master_info
    Lex_ident_host
    Lex_ident_locale
    Lex_ident_plugin
    Lex_ident_engine
    Lex_ident_server
    Lex_ident_savepoint
    Lex_ident_charset
    engine_option_value::Name

- All the mentioned Lex_ident_xxx classes implement a method streq():
    if (ident1.streq(ident2))
      do_equal();
  This method works as a wrapper for CHARSET_INFO::streq().

- Changing a lot of "LEX_CSTRING name" to "Lex_ident_xxx name" in class members and in function/method parameters.

- Replacing all calls like
    system_charset_info->coll->strcasecmp(ident1, ident2)
  with
    ident1.streq(ident2)

- Taking advantage of the C++11 user-defined literal operator for LEX_CSTRING (see m_strings.h) and Lex_ident_xxx (see lex_ident.h) data types. Use example:
    const Lex_ident_column primary_key_name= "PRIMARY"_Lex_ident_column;
  is now a shorter version of:
    const Lex_ident_column primary_key_name= Lex_ident_column({STRING_WITH_LEN("PRIMARY")});
-
Vladislav Vaintroub authored
The new --dir option works just like --tab wrt output: an .sql file for the table definition and a tab-separated file for the data. Compared to --tab, it also allows --databases and --all-databases. When --dir is used, it creates a directory structure in the output directory pointed to by --dir: for every database to be dumped, there will be a directory with the database name. All options that --tab supports are also supported by --dir, in particular --parallel.
-
Sergei Petrunia authored
Add assertions about limitations one has when using Index Condition Pushdown:
- add handler::assert_icp_limitations()
- call this function from functions that may attempt violations
Verified that assert_icp_limitations(), as well as calls to it, are compiled away in release builds.
-
Dave Gosselin authored
Support index condition pushdown within partitioned tables.
- ha_partition will pass the pushed index condition into all of the used partitions.
- We require that all of the partitions handle the pushed index condition in the same way.
- When using ICP, one may read rows (e.g. call h->index_read_map(buf, ...)) only to buf=table->record[0], for two reasons:
  * the pushed index condition's Item_field objects point into record[0]
  * InnoDB requires this: it calls offset(), which assumes record[0]
So, when using ICP, ha_partition will read partition records into table->record[0] and then copy the record away if it needs it to be elsewhere.
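A sketch of a query shape this enables, on a hypothetical partitioned table (names illustrative):

  CREATE TABLE t1 (a INT, b INT, KEY (a, b))
  PARTITION BY HASH (a) PARTITIONS 4;

  -- MOD(b,2)=0 can be checked from the index alone, so it can be pushed
  -- down ("Using index condition"); ha_partition forwards the pushed
  -- condition to every partition used by the range on `a`:
  EXPLAIN SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 AND MOD(b, 2) = 0;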
-
Sergei Petrunia authored
-
Sergei Petrunia authored
Part#2, variant 2: Make the printed r_ values in JSON output consistent. After this patch, ANALYZE output has:
- r_index_rows (NEW): observed number of rows before ICP or Rowid Filtering checks. This is a per-scan average, like r_rows and "rows" are.
- r_rows (AS BEFORE): observed number of rows after ICP and Rowid Filtering.
- r_icp_filtered (NEW): observed selectivity of the ICP condition.
- (AS BEFORE) the observed selectivity of the Rowid Filter is in $.rowid_filter.r_selectivity_pct.
- r_total_filtered: observed combined selectivity, i.e. the fraction of rows left after applying the ICP condition, the Rowid Filter, and attached_condition. This is now comparable with "filtered" and is printed right after it.
- r_filtered (AS BEFORE): observed selectivity of attached_condition.

Tabular ANALYZE output is not changed. Note that JSON's r_filtered and r_rows have the same meanings as before and the same meaning as in tabular output.
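A sketch of where the new fields appear, on some table with an ICP-eligible index (names illustrative): the per-table node of the JSON plan now carries r_index_rows, r_icp_filtered and r_total_filtered next to the existing r_rows and r_filtered.

  ANALYZE FORMAT=JSON
  SELECT * FROM t1 WHERE a BETWEEN 1 AND 10 AND MOD(b, 2) = 0;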
-
Sergei Petrunia authored
(Based on the original patch by Jason Cu)
Part #1:
- Add ha_handler_stats::{icp_attempts,icp_match}, make handler_index_cond_check() increment them.
- ANALYZE FORMAT=JSON now prints r_icp_filtered based on these counters.
-
Sergei Golubchik authored
-
Monty authored
MDEV-32188 make TIMESTAMP use whole 32-bit unsigned range
- Added an --update-history option to mariadb-dump to change the 2038 row_end timestamp to 2106.
- Updated ALTER TABLE ... to convert old row_end timestamps to the 2106 timestamp for tables created before MariaDB 11.4.0.
- Fixed a bug in CHECK TABLE where we wrongly suggested to use REPAIR TABLE when ALTER TABLE ... FORCE is needed.
- mariadb-check printed table names that were used with REPAIR TABLE but did not print table names used with ALTER TABLE or with name repair. Fixed by always printing a table that is fixed, if --silent is not used.
- Added TABLE::vers_fix_old_timestamp() that will change the max timestamp for versioned tables when replicating from a pre-11.4.0 server.

A few test cases changed. This is caused by:
- CHECK TABLE now prints 'Please do ALTER TABLE...' instead of 'Please do REPAIR TABLE' when there is a problem with the information in the .frm file (for example a very old frm file).
- mariadb-check now prints repaired table names.
- mariadb-check also now prints a nicer error message in case ALTER TABLE is needed to repair a table.
-
Monty authored
This task is to ensure we have a clear definition and rules of how to repair or optimize a table. The rules are:
- REPAIR should be used with tables that are crashed and unreadable (hardware issues with non-readable blocks, blocks with 'unexpected data', etc).
- OPTIMIZE TABLE should be used to optimize the storage layout for the table (recover space from deleted rows and optimize the index structure).
- ALTER TABLE table_name FORCE should be used to rebuild the .frm file (the table definition) and the table (with the original table row format). If the table is from an older MariaDB/MySQL release with a different storage format, it will convert the data to the new format. ALTER TABLE ... FORCE is used as part of mariadb-upgrade.

Here follows some more background:

The 3 ways to repair a table are:
1) "ALTER TABLE table_name FORCE" (no other options). As an alias we allow "ALTER TABLE table_name ENGINE=original_engine".
2) "REPAIR TABLE" (without FORCE).
3) "OPTIMIZE TABLE".

All of the above commands will optimize row space usage (which means that space will be needed to hold a temporary copy of the table) and re-generate all indexes. They will also try to replicate the original table definition as exactly as possible. For ALTER TABLE and "REPAIR TABLE without FORCE", the following holds: if the table is from an older MariaDB version and data conversion is needed (for example for old-type HASH columns, the MySQL JSON type or the new TIMESTAMP format), then "ALTER TABLE table_name FORCE, ALGORITHM=COPY" will be used.

The differences between the algorithms are:
1) Will use the fastest algorithm the engine supports to do a full repair of the table (except if data conversions are needed).
2) Will use the storage engine's internal REPAIR facility (MyISAM, Aria). If the engine does not support REPAIR, then "ALTER TABLE FORCE, ALGORITHM=COPY" will be used. If there were data incompatibilities (which means that FORCE was used), then there will be a warning after REPAIR that ALTER TABLE FORCE is still needed. The reason for this is that REPAIR may be able to work around data errors (wrong incompatible data, crashed or unreadable sectors) that ALTER TABLE cannot handle.
3) Will use the storage engine's internal OPTIMIZE. If the engine does not support OPTIMIZE, then "ALTER TABLE FORCE" is used.

The above ensures that ALTER TABLE FORCE is able to correct almost any errors in the row or index data. In case of corrupted blocks, REPAIR, possibly followed by ALTER TABLE, is needed. This is important as mariadb-upgrade executes ALTER TABLE table_name FORCE for any table that must be re-created.

Bugs fixed with InnoDB tables when using ALTER TABLE FORCE:
- No error for INNODB_DEFAULT_ROW_FORMAT=COMPACT even if the row length would be too wide (independent of innodb_strict_mode).
- Tables using symlinks will be symlinked after any of the above commands (independent of the setting of --symbolic-links).

If one specifies an algorithm together with ALTER TABLE FORCE, things will work as before (except if data conversion is required, as then the COPY algorithm is enforced). ALTER TABLE .. OPTIMIZE ALL PARTITIONS will work as before.

Other things:
- A FORCE argument was added to REPAIR, to allow one to first run internal repair to fix damaged blocks and then follow it with ALTER TABLE.
- REPAIR will not update frm_version if ha_check_for_upgrade() finds that the table is still incompatible with the current version. In this case the REPAIR will end with an error.
- REPAIR for storage engines that do not have native repair, like InnoDB, now uses ALTER TABLE FORCE.
- REPAIR csv-table USE_FRM now works. It did not work before, as CSV tables had the extension list in the wrong order.
- Default error message length for %M increased from 128 to 256, to not cut information from REPAIR.
- Documented the HA_ADMIN_XX variables related to repair.
- Added HA_ADMIN_NEEDS_DATA_CONVERSION to signal that we have to do data conversions when converting the table (and thus the ALTER TABLE copy algorithm is needed).
- Fixed a typo in an error message (caused test changes).
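A minimal sketch of the three repair paths on a hypothetical table t1 (name illustrative):

  -- Rebuild the .frm and the table in the original row format; this is
  -- what mariadb-upgrade issues for tables that must be re-created:
  ALTER TABLE t1 FORCE;
  -- Allowed alias, using the table's original engine:
  ALTER TABLE t1 ENGINE=InnoDB;

  -- Engine-native repair (MyISAM, Aria); engines without native repair,
  -- like InnoDB, now fall back to ALTER TABLE ... FORCE, ALGORITHM=COPY:
  REPAIR TABLE t1;

  -- Reclaim space from deleted rows and rebuild the index structure:
  OPTIMIZE TABLE t1;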
-
Monty authored
Remove alter_algorithm, but keep the variable as a no-op (with a warning).

The reasons for removing alter_algorithm are:
- alter_algorithm was introduced as a replacement for old_alter_table, which was used to force the usage of the original ALTER TABLE algorithm (copy) in the cases where the new alter algorithm did not work. The new option was added as a way to force the usage of a specific algorithm, when it should instead have made it possible to disable algorithms that would not work for some reason.
- alter_algorithm introduced some cases where ALTER TABLE would not work without specifying the ALGORITHM=XXX option together with ALTER TABLE.
- Having different values of alter_algorithm on the master and slave could cause a slave to stop unexpectedly.
- ALTER TABLE FORCE, as used by mariadb-upgrade, would not always work if alter_algorithm was set for the server.
- As part of MDEV-33449 "improving repair of tables" it became clear that alter_algorithm made it harder to provide a better and more consistent ALTER TABLE FORCE and REPAIR TABLE, and it would be better to remove it.
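With the variable gone, the algorithm can still be requested per statement with the standard ALGORITHM clause; a brief sketch (table and column names illustrative):

  -- The server raises an error if the engine cannot satisfy the request:
  ALTER TABLE t1 ADD COLUMN c INT, ALGORITHM=INPLACE;

  -- Force a full table copy for this one statement:
  ALTER TABLE t1 ADD COLUMN d INT, ALGORITHM=COPY;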
-