- 06 Oct, 2010 1 commit
-
-
Mats Kindahl authored
Replicating SET and ENUM fields from a big-endian to a little-endian machine (or the reverse) fails under row-based replication when the fields are represented using more than 1 byte (SET fields with more than 8 members or ENUM fields with more than 255 constants). The reason is that there are no pack() or unpack() functions for Field_set or Field_enum, which makes them rely on Field::pack and Field::unpack. These functions pack data as strings, but since Field_set and Field_enum use integral types for their representation, the fields are stored incorrectly on big-endian machines. This patch adds Field_enum::pack and Field_enum::unpack functions that store the integral value correctly in the binary log even on big-endian machines. Since Field_set inherits from Field_enum, it will use the same functions for packing and unpacking the field.

sql/field.cc: Removing some obsolete debug printouts and adding Field_enum::pack and Field_enum::unpack functions.
sql/field.h: Adding helper functions for packing and unpacking 16- and 24-bit integral types. Field_short::pack and Field_short::unpack now use these functions.
sql/rpl_record.cc: Removing some obsolete debug printouts and adding some more useful ones.
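The core of such a fix is to write multi-byte integral values in a fixed byte order rather than copying host memory verbatim. A minimal sketch of the idea, with hypothetical helper names (the real helpers live in sql/field.h and may differ):

```cpp
#include <cstdint>

// Sketch: store a 16-bit value in little-endian order regardless of
// host endianness, and read it back the same way, so both replication
// endpoints agree on the on-the-wire representation.
static void pack16_le(unsigned char *to, uint16_t value) {
  to[0] = static_cast<unsigned char>(value & 0xff);        // low byte first
  to[1] = static_cast<unsigned char>((value >> 8) & 0xff); // high byte second
}

static uint16_t unpack16_le(const unsigned char *from) {
  return static_cast<uint16_t>(from[0]) |
         (static_cast<uint16_t>(from[1]) << 8);
}
```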
-
- 02 Oct, 2010 1 commit
-
-
Alexander Nozdrin authored
-
- 28 Sep, 2010 3 commits
-
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
Alexander Nozdrin authored
-
- 27 Sep, 2010 1 commit
-
-
Alexey Botchkov authored
per-file comments:
mysql-test/t/log_tables_debug.test: This test shouldn't be run with the embedded server.
-
- 28 Sep, 2010 1 commit
-
-
Marc Alff authored
-
- 24 Sep, 2010 9 commits
-
-
Davi Arnaut authored
-
Davi Arnaut authored
Use the UNINIT_VAR workaround instead of LINT_INIT. The former can also be used to silence false positives in non-debug builds, as it does not actually cause new code to be generated.
-
Davi Arnaut authored
Use the UNINIT_VAR workaround instead of LINT_INIT. The former can also be used in non-debug builds, as it doesn't cause any new code to be generated.
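For context, the trick behind such a workaround is self-initialization, which silences "may be used uninitialized" warnings without emitting extra instructions. A hedged sketch of how these macros are commonly defined; the exact MySQL definitions may differ:

```cpp
// Self-initialization: the compiler treats the variable as initialized,
// yet no store is generated, so optimized (non-debug) builds are unaffected.
#define UNINIT_VAR(x) x = x

// The older style: a real dummy store, typically compiled in only for
// some builds, which changes the generated code when enabled.
#define LINT_INIT(x) x = 0

int parity_sign(int n) {
  int UNINIT_VAR(sign); // expands to: int sign = sign;
  if (n % 2 == 0)
    sign = 1;
  else
    sign = -1;
  return sign; // assigned on every path; the macro only calms the compiler
}
```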
-
Dmitry Lenev authored
argument of inline_mysql_mutex_init in sql_base.cc. When initializing the LOCK_dd_owns_lock_open mutex, pass the correct PSI key instead of a NULL value.

mysql-test/suite/perfschema/r/dml_setup_instruments.result: Updated test results after adding P_S instrumentation for LOCK_dd_owns_lock_open.
sql/sql_base.cc: When initializing the LOCK_dd_owns_lock_open mutex, pass the correct PSI key instead of a NULL value.
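A hedged sketch of the pattern the fix applies: mysql_mutex_init takes a Performance Schema instrumentation key as its first argument, and passing NULL leaves the mutex invisible to P_S. The include and registration details below are assumptions, not the exact MySQL boilerplate:

```cpp
#include "mysql/psi/mysql_thread.h" // mysql_mutex_init and the PSI types

// One PSI key per instrumented mutex; in real code the key is
// registered with the P_S engine once at startup (omitted here).
static PSI_mutex_key key_LOCK_dd_owns_lock_open;
static mysql_mutex_t LOCK_dd_owns_lock_open;

static void init_dd_owns_lock_open_mutex() {
  // Before the fix the first argument was NULL, so the mutex never
  // showed up in performance_schema.setup_instruments.
  mysql_mutex_init(key_LOCK_dd_owns_lock_open,
                   &LOCK_dd_owns_lock_open, MY_MUTEX_INIT_FAST);
}
```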
-
Konstantin Osipov authored
-
Davi Arnaut authored
-
Davi Arnaut authored
Temporarily disable strict aliasing warnings in order to get wider coverage for optimized builds. Once the violations are fixed and false positives are silenced, this flag should be removed.
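For context, a minimal example of the kind of strict-aliasing violation these warnings flag, together with the well-defined alternative; the flags involved are assumed to be GCC's -Wstrict-aliasing / -fno-strict-aliasing:

```cpp
#include <cstdint>
#include <cstring>

// Violation: reading a float's bytes through a pointer of an unrelated
// type breaks the strict aliasing rule, so an optimizing compiler may
// miscompile it, and -Wstrict-aliasing warns about it.
uint32_t bits_bad(float f) {
  return *reinterpret_cast<uint32_t *>(&f); // undefined behavior
}

// Well-defined alternative: memcpy moves the bytes between distinct
// objects, and compilers lower it to the same single move instruction.
uint32_t bits_good(float f) {
  uint32_t u;
  std::memcpy(&u, &f, sizeof u);
  return u;
}
```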
-
Dmitry Shulga authored
-
Dmitry Shulga authored
sql/sql_cache.cc: Include the definition of send_data_in_chunks() when the EMBEDDED_LIBRARY macro is defined.
-
- 23 Sep, 2010 1 commit
-
-
Mats Kindahl authored
-
- 22 Sep, 2010 2 commits
-
-
Dmitry Shulga authored
-
Dmitry Shulga authored
sql/log.cc: Modified reopen_fstreams: fixed an error in the handling of stdout/stderr when running mysqld as a Windows service.
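The usual mechanism behind such a fix is reopening the standard streams onto the error log file, since a Windows service has no console. A minimal sketch of the idea, assuming a freopen-based approach rather than the actual reopen_fstreams code:

```cpp
#include <cstdio>

// Sketch: redirect stdout and stderr to the error log so that writes
// from a console-less Windows service do not go to an invalid handle.
bool reopen_std_streams(const char *errlog_path) {
  if (!freopen(errlog_path, "a", stdout))
    return false;
  if (!freopen(errlog_path, "a", stderr))
    return false;
  setbuf(stderr, nullptr); // keep error output unbuffered
  return true;
}
```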
-
- 21 Sep, 2010 4 commits
-
-
Mats Kindahl authored
-
Ingo Struewing authored
Null-merge from 5.1
-
Ingo Struewing authored
Merge from saved bundle.
-
Evgeny Potemkin authored
-
- 19 Sep, 2010 1 commit
-
-
Joerg Bruehe authored
-
- 17 Sep, 2010 7 commits
-
-
Davi Arnaut authored
The problem was that the x86 assembly based atomic CAS (compare-and-swap) implementation could copy the wrong value to the ebx register, where cmpxchg8b expects to see part of the "comparand" value. Since the original value of the ebx register is saved on the stack (that is, the push instruction changes the stack pointer), a wrong offset could be used if the compiler decides to place the source of the comparand value on the stack. The solution is to copy the comparand value directly from memory. Since the comparand value is 64 bits wide, it is copied in two steps over to the ebx and ecx registers.

include/atomic/x86-gcc.h: For reference, an excerpt from a faulty binary follows. It is a disassembly of my_atomic-t, compiled at -O3 with ICC 11.0. Most of the code deals with preparations for an atomic cmpxchg8b operation. This instruction compares the value in edx:eax with the destination operand. If the values are equal, the value in ecx:ebx is stored in the destination; otherwise, the value in the destination operand is copied into edx:eax. In this case, my_atomic_add64 is implemented as a compare and exchange. The addition is done over temporary storage and loaded into the destination if the original term value is still valid.

    volatile int64 a64;
    int64 b=0x1000200030004000LL;
    a64=0;
        mov    0xfffffda8(%ebx),%eax
        xor    %ebp,%ebp
        mov    %ebp,(%eax)
        mov    %ebp,0x4(%eax)
    my_atomic_add64(&a64, b);
        mov    0xfffffda8(%ebx),%ebp   # Load address of a64
        mov    0x0(%ebp),%edx          # Copy value
        mov    0x4(%ebp),%ecx
        mov    %edx,0xc(%esp)          # Assign to tmp var in the stack
        mov    %ecx,0x10(%esp)
        add    $0x30004000,%edx        # Sum values
        adc    $0x10002000,%ecx
        mov    %edx,0x8(%esp)          # Save part of result for later
        mov    0x0(%ebp),%esi          # Copy value of a64 again
        mov    0x4(%ebp),%edi
        mov    0xc(%esp),%eax          # Load the value of a64 used
        mov    0x10(%esp),%edx         # for comparison
        mov    %esi,(%esp)
        mov    %edi,0x4(%esp)
        push   %ebx                    # Push %ebx onto stack. Changes esp.
        mov    0x8(%esp),%ebx          # Wrong restore of the result.
        lock cmpxchg8b 0x0(%ebp)
        sete   %cl
        pop    %ebx
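The corrected inline assembly is not reproduced here; as a hedged illustration of the semantics a correct 64-bit CAS loop must provide, here is the same my_atomic_add64 operation expressed with a GCC __sync builtin instead of hand-written asm (the builtin emits a correct lock cmpxchg8b on x86):

```cpp
#include <cstdint>

// Addition built on compare-and-swap: retry until the snapshot (the
// "comparand") still matches the value in memory, then install the sum.
int64_t atomic_add64(volatile int64_t *a, int64_t addend) {
  int64_t old;
  do {
    old = *a; // snapshot the current value
  } while (!__sync_bool_compare_and_swap(a, old, old + addend));
  return old; // value before the addition
}
```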
-
Alfranio Correia authored
-
Marc Alff authored
CHECKSUM TABLE for performance schema tables could cause uninitialized memory reads. The root cause is a design flaw in the implementation of mysql_checksum_table(), which does not honor null fields. However, fixing this bug in CHECKSUM TABLE is risky, as it can cause the checksum value to change. This fix implements a workaround: systematically reset field values even for null fields, so that the field memory representation is always initialized with a known value.
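A minimal sketch of the workaround's idea, using hypothetical row-filling code rather than the actual performance schema implementation: even when a column is null, its backing buffer is overwritten with a known value, so a later whole-record checksum never reads uninitialized bytes:

```cpp
#include <cstring>

struct RowField {
  unsigned char *ptr; // field's slot in the record buffer
  size_t length;      // size of that slot
  bool is_null;
};

// Fill a field slot; null fields still get their memory initialized,
// so checksumming the raw record buffer reads only defined bytes.
void fill_field(RowField *f, const void *value, size_t value_len) {
  memset(f->ptr, 0, f->length); // always reset, even for null fields
  if (value != nullptr) {
    size_t n = value_len < f->length ? value_len : f->length;
    memcpy(f->ptr, value, n);
    f->is_null = false;
  } else {
    f->is_null = true;
  }
}
```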
-
Marc Alff authored
-
Alfranio Correia authored
-
Alfranio Correia authored
-
Marc Alff authored
Before this fix, the test output for perfschema.server_init would vary between executions, because some of the objects tested were not guaranteed to exist in all configurations / code paths. This fix removes these weak tests. Also, comments referring to abandoned code have been cleaned up.
-
- 16 Sep, 2010 8 commits
-
-
Sergey Glukhov authored
-
Sergey Glukhov authored
The subselect executes twice, at the JOIN::optimize stage and at the JOIN::execute stage. At the optimize stage, the InnoDB prebuilt struct, which is used for the retrieval of column values, is initialized in ha_innobase::index_read() while prebuilt->sql_stat_start is true. After QUICK_ROR_INTERSECT_SELECT has finished its job, it restores the read_set/write_set bitmaps to their initial values and deactivates one of the handlers used by QUICK_ROR_INTERSECT_SELECT in JOIN::cleanup (this is the case when we reuse the original handler as one of the handlers required by the QUICK_ROR_INTERSECT_SELECT object). On the second subselect execution, the inactive handler is activated in QUICK_RANGE_SELECT::reset via file->ha_index_init(). In ha_index_init, the InnoDB prebuilt struct is reinitialized with inappropriate read_set/write_set bitmaps. Further reinitialization in ha_innobase::index_read() does not happen, as prebuilt->sql_stat_start is false. This leads to partial retrieval of the required field values, and we get a mix of field values from different records in the record buffer. The fix is to reset the read_set/write_set bitmaps, as these values are required for proper initialization of the internal InnoDB struct used for the retrieval of column values (see build_template() in ha_innodb.cc).

mysql-test/include/index_merge_ror_cpk.inc: test case
mysql-test/r/index_merge_innodb.result: test case
mysql-test/r/index_merge_myisam.result: test case
sql/opt_range.cc: If a ROR merge scan is used, we need to reset the read_set/write_set bitmaps, as these values are required for proper initialization of the internal InnoDB struct used for the retrieval of column values (see build_template() in ha_innodb.cc).
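As a rough illustration of the save/reset/restore pattern involved, here is a conceptual sketch with plain C++ stand-ins, not the actual MY_BITMAP/handler API:

```cpp
#include <bitset>

constexpr size_t kMaxColumns = 64;
using ColumnBitmap = std::bitset<kMaxColumns>;

struct Handler {
  ColumnBitmap read_set, write_set;
  void index_init() {
    // An engine like InnoDB builds its column-retrieval template from
    // the bitmaps as they are *now*; stale bitmaps mean missing columns.
  }
};

// Before (re)activating a handler for a ROR merged scan, install the
// bitmaps the scan needs, then restore the originals afterwards.
void ror_scan(Handler &h, const ColumnBitmap &scan_columns) {
  ColumnBitmap saved_read = h.read_set, saved_write = h.write_set;
  h.read_set = scan_columns;
  h.write_set = scan_columns;
  h.index_init(); // engine snapshots the correct bitmaps here
  // ... perform the scan ...
  h.read_set = saved_read;
  h.write_set = saved_write;
}
```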
-
Magne Mahre authored
-
Magne Mahre authored
adding new indexes

A fast alter table requires that the existing (old) table and indices are unchanged (i.e. only new indices can be added). To verify this, the layout and flags of the old table/indices are compared for equality with the new ones. The PACK_KEYS option is a no-op in InnoDB, but the flag exists and is used in the table compare. We need to check this (table) option flag before deciding whether an index should be packed or not. If the table has explicitly set PACK_KEYS to 0, the created indices should not be marked as packed/packable.
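A hedged sketch of the check described; the option names below are illustrative stand-ins (MySQL keeps such flags in the table share's create options):

```cpp
// Illustrative option bits for explicitly given table options.
enum TableOption : unsigned {
  OPT_PACK_KEYS = 1u << 0,    // PACK_KEYS=1 given explicitly
  OPT_NO_PACK_KEYS = 1u << 1, // PACK_KEYS=0 given explicitly
};

// Decide whether a new index may be marked packed/packable: honor an
// explicit PACK_KEYS=0 on the table before looking at the key itself,
// so the new index flags stay comparable with the old table's flags.
bool index_may_be_packed(unsigned table_options, bool key_is_packable) {
  if (table_options & OPT_NO_PACK_KEYS)
    return false;
  return key_is_packable;
}
```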
-
Dmitry Shulga authored
-
Dmitry Shulga authored
compression protocol.

The loss of connection was caused by a malformed packet sent by the server when the query cache was in use. When storing data in the query cache, the query cache memory allocation algorithm had a tendency to reduce the number of memory blocks necessary to store a result set, up to finally storing the entire result set in a single block. With a significant result set, this memory block could turn out to be quite large: 30, 40 MB and more. When such a result set was sent to the client, the entire memory block was compressed and written to the network as a single network packet. However, the length of a network packet is limited to 0xFFFFFF (16MB), since the packet format only allows 3 bytes for the packet length. As a result, a malformed, overly large packet with a truncated length would be sent to the client and break the client/server protocol.

The solution is, when sending result sets from the query cache, to ensure that the data is chopped into network packets of size <= 16MB, so that there is no corruption of the packet length. This solution, however, has a shortcoming: since the result set is still stored in the query cache as a single block, at the time of sending we've lost the boundaries of the individual logical packets (one logical packet = one row of the result set) and thus can end up sending a truncated logical packet in a compressed network packet. As a result, on the client we may require more memory than max_allowed_packet to keep both the truncated last logical packet and the compressed next packet. This never (or in practice never) happens without compression, since without compression it's very unlikely that a) a truncated logical packet would remain on the client when it's time to read the next packet, and b) a subsequent logical packet being read would be so large that size-of-new-packet + size-of-old-packet-tail > max_allowed_packet.

To remedy this issue, we send data in 1MB sized packets: below the current client default of 16MB for max_allowed_packet, but large enough to avoid unnecessary overhead from too many syscalls per result set.

sql/net_serv.cc: Modified net_realloc(): consider the already used memory when comparing against the packet buffer length.
sql/sql_cache.cc: Modified Query_cache::send_result_to_client: send the result to the client in chunks limited to 1 megabyte.
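A minimal sketch of the chunking idea; the transport call and signature below are hypothetical (the actual helper in sql_cache.cc is send_data_in_chunks, whose exact interface may differ):

```cpp
#include <cstddef>

// Hypothetical transport call: writes one network packet of at most
// MAX_CHUNK bytes; returns false on failure.
bool net_write_packet(const unsigned char *data, size_t len);

static const size_t MAX_CHUNK = 1024 * 1024; // 1MB per packet

// Split one large query-cache block into <= 1MB network packets so the
// 3-byte packet-length field can never overflow.
bool send_in_chunks(const unsigned char *data, size_t len) {
  while (len > MAX_CHUNK) {
    if (!net_write_packet(data, MAX_CHUNK))
      return false;
    data += MAX_CHUNK;
    len -= MAX_CHUNK;
  }
  return net_write_packet(data, len);
}
```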
-
Mikael Ronstrom authored
-
Mikael Ronstrom authored
-
- 15 Sep, 2010 1 commit
-
-
Marc Alff authored
Before this fix, the server could crash inside a memcpy when reading data from the EVENTS_WAITS_CURRENT / HISTORY / HISTORY_LONG tables. The root cause is that the length used in a memcpy could be corrupted when another thread writes data into the wait record being read. Reading unsafe data is OK, per design choice, and the code does sanitize the data in general, but it did not sanitize the length given to memcpy. The fix is to also sanitize the schema name / object name / file name length when extracting the data to produce a row.
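A hedged sketch of the sanitizing step, with illustrative names (the real code reads lock-free wait records inside the performance schema):

```cpp
#include <cstring>

struct WaitRecord {
  char object_name[64];  // written concurrently by the instrumented thread
  size_t name_length;    // may be momentarily inconsistent with the bytes
};

// Copy a possibly-racing record into a row buffer: clamp the length
// before memcpy so a torn read of name_length can never overrun either
// buffer, even if the copied bytes themselves are stale.
void extract_name(const WaitRecord *rec, char *out, size_t out_size) {
  size_t len = rec->name_length;
  if (len > sizeof(rec->object_name)) len = sizeof(rec->object_name);
  if (len > out_size) len = out_size;
  memcpy(out, rec->object_name, len);
}
```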
-