- 27 Sep, 2007 13 commits
-
-
marko authored
with "col" in two debug assertions. This mistake was made in r1910.
-
vasil authored
Fix a warning produced when UNIV_DEBUG is defined.
-
vasil authored
Move part of the code from lock_rec_print() into a separate function, buf_page_try_get(), because the same functionality is needed in the INFORMATION SCHEMA code. Approved by: Heikki
-
vasil authored
Add an auxiliary function lock_rec_get_index() to retrieve the index on which the lock is held. Approved by: Heikki
-
vasil authored
Fix a bug where the condition (prtype & DATA_ROW_ID) is unexpectedly always false because DATA_ROW_ID is 0. Use a switch instead of if-else in order to avoid repeating (prtype & DATA_SYS_PRTYPE_MASK). Approved by: Heikki
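A minimal stand-alone sketch of the pattern behind this fix. Only DATA_ROW_ID being 0 is stated in the commit; the other values and the mask below are illustrative assumptions. The point is that masking with a zero flag can never be true, so the check has to switch on the masked system-column id instead.

    #include <stdio.h>

    /* Only DATA_ROW_ID being 0 is stated above; the other values and
    the mask are illustrative assumptions. */
    #define DATA_ROW_ID             0
    #define DATA_TRX_ID             1
    #define DATA_ROLL_PTR           2
    #define DATA_SYS_PRTYPE_MASK    0xF

    static const char*
    sys_col_name(unsigned prtype)
    {
        /* Buggy pattern: (prtype & DATA_ROW_ID) is always 0, so an
        if-else chain starting with that test can never take the branch.
        Fixed pattern: mask once and switch on the result. */
        switch (prtype & DATA_SYS_PRTYPE_MASK) {
        case DATA_ROW_ID:
            return("DATA_ROW_ID");
        case DATA_TRX_ID:
            return("DATA_TRX_ID");
        case DATA_ROLL_PTR:
            return("DATA_ROLL_PTR");
        default:
            return("other");
        }
    }

    int
    main(void)
    {
        printf("%s\n", sys_col_name(DATA_ROW_ID));
        return(0);
    }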
-
marko authored
-
marko authored
innobase_col_to_mysql(): New function, adapted from row_sel_field_store_in_mysql_format(). innobase_rec_to_mysql(): Correct the function comment, which was still saying "clustered index record", although we can convert any record. Make use of innobase_col_to_mysql(). Always call field->reset(), so that innobase_col_to_mysql() will not have to pad anything.
-
marko authored
"data" from byte* to const void*.
-
marko authored
if-else with switch.
-
marko authored
-
marko authored
Since r1905, innobase_rec_to_mysql() does not require a clustered index record. row_merge_dup_t: Remove old_table. row_merge_dup_report(): Do not fetch the clustered index record. Simply convert the tuple by innobase_rec_to_mysql(). row_merge_blocks(), row_merge(), row_merge_sort(): Add a TABLE* parameter for reporting duplicate key values during file sort. row_merge_read_clustered_index(): Replace UNIV_PAGE_SIZE with the more appropriate sizeof(mrec_buf_t).
-
marko authored
clustered or secondary. Remove the rec_offs_validate() assertion, because the function may be passed a mrec_t* that would fail the check.
-
marko authored
This should have been done in r1903, where the minimum size of row_merge_block_t was noted to be UNIV_PAGE_SIZE.
-
- 26 Sep, 2007 8 commits
-
-
marko authored
row_merge_buf_add(): Add ut_ad(data_size < sizeof(row_merge_block_t)) and document why it may fail if sizeof row_merge_block_t < UNIV_PAGE_SIZE.
-
marko authored
row_merge_dup_report(): Do not call innobase_rec_reset().
-
marko authored
dtuple_create_for_mysql(), dtuple_free_for_mysql(): Remove. ha_innobase::records_in_range(): Use mem_heap_create(), mem_heap_free(), and dtuple_create() instead of the removed functions above. Since r1587, InnoDB C++ functions can invoke inlined C functions.
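A generic arena sketch of the pattern this commit switches to, with stand-in names rather than the real mem_heap_create()/dtuple_create()/mem_heap_free(): build the tuple on a heap and free the heap as a whole when done.

    #include <stdio.h>
    #include <stdlib.h>

    /* Minimal arena ("heap") stand-in: one malloc'd region, bump-pointer
    allocation, freed in a single call. Alignment and overflow checks are
    omitted for brevity. */
    typedef struct heap_struct {
        unsigned char*  buf;
        size_t          used;
        size_t          size;
    } heap_t;

    static heap_t*
    heap_create(size_t size)
    {
        heap_t* h = (heap_t*) malloc(sizeof(heap_t));

        h->buf = (unsigned char*) malloc(size);
        h->used = 0;
        h->size = size;
        return(h);
    }

    static void*
    heap_alloc(heap_t* h, size_t n)
    {
        void* p = h->buf + h->used;

        h->used += n;
        return(p);
    }

    static void
    heap_free(heap_t* h)
    {
        free(h->buf);
        free(h);
    }

    /* A toy "tuple" carved out of the heap, the way dtuple_create() uses
    a mem_heap_t in the real code. */
    typedef struct tuple_struct {
        unsigned long   n_fields;
    } tuple_t;

    int
    main(void)
    {
        heap_t*     heap = heap_create(1024);
        tuple_t*    tuple = (tuple_t*) heap_alloc(heap, sizeof(tuple_t));

        tuple->n_fields = 2;
        printf("tuple with %lu fields\n", tuple->n_fields);

        heap_free(heap);    /* frees the tuple as well */
        return(0);
    }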
-
marko authored
innobase_rec_to_mysql(): New function, for converting an InnoDB clustered index record to MySQL table->record[0]. TODO: convert integer fields. Currently, integer fields are in big-endian byte order instead of host byte order, and signed integer fields are offset by 0x80000000.
innobase_rec_reset(): New function, for resetting table->record[0].
row_merge_build_indexes(): Add the parameter TABLE* table (the MySQL table handle) for reporting duplicate key values.
dtuple_from_fields(): New function, to convert an array of dfield_t* to dtuple_t.
dtuple_get_n_ext(): New function, to compute the number of externally stored fields.
row_merge_dup_t: Structure for counting and reporting duplicate records.
row_merge_dup_report(): Function for counting and reporting duplicate records.
row_merge_tuple_cmp(), row_merge_tuple_sort(): Replace the ulint* n_dup parameter with row_merge_dup_t* dup.
row_merge_buf_sort(): Add the parameter row_merge_dup_t* dup, which is NULL when sorting a non-unique index.
row_merge_buf_write(), row_merge_heap_create(), row_merge_read_rec(), row_merge_cmp(), row_merge_read_clustered_index(), row_merge_blocks(), row_merge(), row_merge_sort(): Add const qualifiers.
row_merge_read_clustered_index(): Use a common error handling branch err_exit. Invoke row_merge_buf_sort() differently on unique indexes.
row_merge_blocks(): note TODO: We could invoke innobase_rec_to_mysql() to report duplicate key values when creating a clustered index.
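A stand-alone sketch (not the actual InnoDB code) of the duplicate-reporting idea described above: the comparator receives a small reporting struct instead of a bare counter, so it can count duplicates and hand the offending key to a callback. The types and names below are simplified stand-ins for row_merge_dup_t and row_merge_tuple_cmp().

    #include <stdio.h>
    #include <string.h>

    /* Simplified stand-in for row_merge_dup_t: counts duplicates and
    knows how to report them. The real struct also carries the MySQL
    TABLE handle used for reporting. */
    typedef struct dup_struct {
        unsigned long   n_dup;                  /* duplicates seen */
        void            (*report)(const char*); /* reporting callback */
    } dup_t;

    static void
    dup_report(const char* key)
    {
        printf("duplicate key value: %s\n", key);
    }

    /* Compare two keys; on equality, count and (once) report the
    duplicate, mirroring how the tuple comparator takes a dup_t*
    instead of a plain counter. dup may be NULL for a non-unique index. */
    static int
    tuple_cmp(const char* a, const char* b, dup_t* dup)
    {
        int cmp = strcmp(a, b);

        if (cmp == 0 && dup != NULL) {
            if (dup->n_dup++ == 0) {
                dup->report(a);     /* report only the first duplicate */
            }
        }

        return(cmp);
    }

    int
    main(void)
    {
        dup_t   dup = {0, dup_report};

        tuple_cmp("alpha", "alpha", &dup);  /* reported */
        tuple_cmp("alpha", "alpha", &dup);  /* counted only */
        tuple_cmp("alpha", "beta", &dup);   /* not a duplicate */

        printf("%lu duplicate(s)\n", dup.n_dup);
        return(0);
    }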
-
marko authored
dict_find_index_by_max_id(): Rename this static function to its only caller, dict_table_get_index_by_max_id(). dict_table_get_index_by_max_id(): Copy the function comment from dict_find_index_by_max_id().
-
marko authored
rec_get_converted_size_comp(), rec_convert_dtuple_to_rec_comp(), rec_convert_dtuple_to_rec_new(), rec_convert_dtuple_to_rec(): Add a const qualifier to dict_index_t*. row_search_on_row_ref(): Add const qualifiers to the dict_table_t* and dtuple_t* parameters. Note that pcur is an "out" parameter and mtr is "in/out".
-
marko authored
row_build_row_ref_fast(): Note that "ref" is an in/out parameter. row_build_row_ref_from_row(): Add const qualifiers to all "in" parameters.
-
marko authored
dtuple_create(): Simplify a pointer expression. In the debug version, flag the fields as uninitialized after initializing them. dtuple_t: Only declare magic_n if UNIV_DEBUG is defined. The field is neither assigned to nor tested unless UNIV_DEBUG is defined.
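A minimal sketch of the debug-only field pattern mentioned above, using a made-up struct and magic value; the real dtuple_t has more members and uses InnoDB's own assertion macros.

    #include <assert.h>
    #include <stdio.h>

    #define UNIV_DEBUG  /* normally a build flag; defined here for the example */

    #define TUPLE_MAGIC_N   65478679UL  /* illustrative value only */

    /* Hypothetical tuple type showing the pattern: the magic number
    exists, is assigned and is checked only in debug builds. */
    typedef struct tuple_struct {
        unsigned long   n_fields;
    #ifdef UNIV_DEBUG
        unsigned long   magic_n;
    #endif
    } tuple_t;

    static void
    tuple_init(tuple_t* tuple, unsigned long n_fields)
    {
        tuple->n_fields = n_fields;
    #ifdef UNIV_DEBUG
        tuple->magic_n = TUPLE_MAGIC_N;
    #endif
    }

    static void
    tuple_validate(const tuple_t* tuple)
    {
    #ifdef UNIV_DEBUG
        assert(tuple->magic_n == TUPLE_MAGIC_N);
    #endif
        (void) tuple;
    }

    int
    main(void)
    {
        tuple_t t;

        tuple_init(&t, 3);
        tuple_validate(&t);
        printf("ok: %lu fields\n", t.n_fields);
        return(0);
    }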
-
- 25 Sep, 2007 1 commit
-
-
marko authored
to avoid a rec_get_offsets() call. Add some const qualifiers. row_sel_get_clust_rec_for_mysql(): Note that "offsets" will also be an input parameter.
-
- 24 Sep, 2007 6 commits
-
-
marko authored
dict_index_t* and dict_table_t* parameters of some functions.
-
vasil authored
Copy any data (currently the table name and the table index) that may be destroyed after releasing the kernel mutex into the internal cache's storage. This is done in an efficient manner using the ha_storage type: a given string is copied only once into the cache's storage, and later additions of the same string reuse the already stored copy, thus allocating memory only once per unique string. Approved by: Marko
-
vasil authored
Add a type that stores chunks of data in its own storage and avoids duplicates. Supported methods:
ha_storage_create() Allocates a new storage object.
ha_storage_put() Copies a given data chunk into the storage and returns a pointer to the copy. If the data chunk is already present, a pointer to the existing copy is returned and the given data chunk is not copied again.
ha_storage_empty() Clears (empties) the storage of all data chunks stored in it.
ha_storage_free() Destroys a storage object. Opposite of ha_storage_create().
Approved by: Marko
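A hedged, toy re-implementation of the behaviour described above, since the commit lists only the method names: a store that copies each distinct chunk once and returns the existing copy on repeated puts. It uses a linear list of strings for brevity; the real ha_storage handles arbitrary data chunks and its actual signatures may differ.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Toy stand-in for the interface above (create/put/empty/free).
    It deduplicates by linear search over a linked list of strings. */
    typedef struct node_struct {
        char*               data;
        struct node_struct* next;
    } node_t;

    typedef struct store_struct {
        node_t* head;
    } store_t;

    static store_t*
    store_create(void)                      /* cf. ha_storage_create() */
    {
        return((store_t*) calloc(1, sizeof(store_t)));
    }

    static const char*
    store_put(store_t* s, const char* str)  /* cf. ha_storage_put() */
    {
        node_t* n;
        size_t  len;

        for (n = s->head; n != NULL; n = n->next) {
            if (strcmp(n->data, str) == 0) {
                return(n->data);            /* already stored: reuse it */
            }
        }

        len = strlen(str) + 1;
        n = (node_t*) malloc(sizeof(node_t));
        n->data = (char*) malloc(len);
        memcpy(n->data, str, len);          /* copied exactly once */
        n->next = s->head;
        s->head = n;
        return(n->data);
    }

    static void
    store_empty(store_t* s)                 /* cf. ha_storage_empty() */
    {
        node_t* n = s->head;

        while (n != NULL) {
            node_t* next = n->next;
            free(n->data);
            free(n);
            n = next;
        }
        s->head = NULL;
    }

    static void
    store_free(store_t* s)                  /* cf. ha_storage_free() */
    {
        store_empty(s);
        free(s);
    }

    int
    main(void)
    {
        store_t*    s = store_create();
        const char* a = store_put(s, "test/t1");
        const char* b = store_put(s, "test/t1");

        printf("same pointer: %s\n", a == b ? "yes" : "no");
        store_free(s);
        return(0);
    }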
-
marko authored
rec_get_n_fields(), rec_offs_validate(), and rec_offs_make_valid().
-
marko authored
This was inadvertently reduced to 16384 bytes in r1861. For testing, this can be set as low as UNIV_PAGE_SIZE.
-
marko authored
row_merge(): Add the assertion ut_ad(half > 0). row_merge_sort(): Compute the half of the merge file correctly. The previous implementation used truncating division, which may result in loss of records when the file size in blocks is not a power of 2.
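A tiny arithmetic illustration of how a truncating halfway point differs from a rounded-up one; the actual merge loop is not shown above, so this only demonstrates the division, not the original defect.

    #include <stdio.h>

    int
    main(void)
    {
        unsigned long n;

        /* For a file of n blocks, a truncating halfway point (n / 2) and
        a rounded-up one ((n + 1) / 2) differ whenever n is odd. The
        commit above states that the truncating form could lose records
        when the block count is not a power of 2; which form is correct
        depends on how the caller walks the two halves. */
        for (n = 1; n <= 8; n++) {
            printf("n=%lu  n/2=%lu  (n+1)/2=%lu\n", n, n / 2, (n + 1) / 2);
        }

        return(0);
    }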
-
- 22 Sep, 2007 4 commits
-
-
vasil authored
Non-functional: put the code that clears the IS cache into a separate function.
-
vasil authored
Cosmetic: initialize the members of the cache in the same order as they are defined in the structure.
-
vasil authored
Make comment more clear (hopefully).
-
vasil authored
Use the newly introduced mem_alloc2() to make use of the memory that has been allocated in addition to the requested amount, in order to avoid wasting memory. Do not calculate the sizes and offsets of the chunks in advance in table_cache_init(), because it is unknown how many bytes will actually be allocated by mem_alloc2(). Rather, calculate these on the fly: after each chunk is allocated, set its size and the offset of the next chunk. Similar patch approved by: Marko
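A sketch of the "compute the layout as you allocate" idea, using a hypothetical alloc_at_least() in place of mem_alloc2() (whose exact signature is not shown above). Each chunk records the size actually obtained, and its capacity is derived from that size rather than from the requested one.

    #include <stdio.h>
    #include <stdlib.h>

    #define N_CHUNKS 4

    /* Hypothetical stand-in for mem_alloc2(): allocate at least "req"
    bytes and report how many are actually usable. Rounding up to a
    multiple of 64 simulates an allocator that may hand back more than
    was asked for. */
    static void*
    alloc_at_least(size_t req, size_t* got)
    {
        *got = (req + 63) & ~(size_t) 63;
        return(malloc(*got));
    }

    typedef struct chunk_struct {
        void*   base;   /* start of the chunk */
        size_t  size;   /* bytes actually allocated */
        size_t  rows;   /* row capacity, derived from size */
    } chunk_t;

    int
    main(void)
    {
        const size_t    row_size = 48;  /* illustrative row size */
        chunk_t         chunks[N_CHUNKS];
        size_t          i;

        /* Nothing is precomputed; each chunk's capacity is set only
        after the allocator reports how much memory was really obtained. */
        for (i = 0; i < N_CHUNKS; i++) {
            size_t req = (size_t) 1000 << i;    /* growing requests */

            chunks[i].base = alloc_at_least(req, &chunks[i].size);
            chunks[i].rows = chunks[i].size / row_size;

            printf("chunk %lu: requested %lu, got %lu, %lu rows\n",
                   (unsigned long) i, (unsigned long) req,
                   (unsigned long) chunks[i].size,
                   (unsigned long) chunks[i].rows);
        }

        for (i = 0; i < N_CHUNKS; i++) {
            free(chunks[i].base);
        }

        return(0);
    }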
-
- 21 Sep, 2007 7 commits
-
-
marko authored
merge buffer, write the next record to the beginning of the emptied buffer. This fixes one of the bugs mentioned in r1872.
-
marko authored
Some bug still remains, because innodb-index.test will lose some records from the clustered index after add primary key (a,b(255),c(255)) when row_merge_block_t is reduced to 8192 bytes. row_merge(): Add the parameter "half". Add some Valgrind instrumentation. Note that either stream can end before the other one. row_merge_sort(): Calculate "half" for row_merge().
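A generic sketch of merging two sorted runs where either input can be exhausted before the other, which is the property noted above for row_merge(); plain integer arrays stand in for the merge blocks.

    #include <stdio.h>

    /* Merge two sorted runs a[0..na) and b[0..nb) into out[]. Either run
    may end before the other, so both tails are copied explicitly. */
    static void
    merge_runs(const int* a, int na, const int* b, int nb, int* out)
    {
        int i = 0, j = 0, k = 0;

        while (i < na && j < nb) {
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        }
        while (i < na) {
            out[k++] = a[i++];      /* run b ended first */
        }
        while (j < nb) {
            out[k++] = b[j++];      /* run a ended first */
        }
    }

    int
    main(void)
    {
        const int   a[] = {1, 4, 9};
        const int   b[] = {2, 3, 5, 7, 8};
        int         out[8];
        int         k;

        merge_runs(a, 3, b, 5, out);

        for (k = 0; k < 8; k++) {
            printf("%d ", out[k]);
        }
        printf("\n");
        return(0);
    }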
-
marko authored
mem_alloc2(): New macro. This is a variant of mem_alloc() that returns the allocated size, which is equal to or greater than the requested size. mem_alloc_func(): Add the output parameter *size for the allocated size. When it is set, adjust the parameter passed to mem_heap_alloc(). rec_copy_prefix_to_buf_old(), rec_copy_prefix_to_buf(): Use mem_alloc2() instead of mem_alloc().
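A sketch of the macro/function split described above, with hypothetical names (my_alloc(), my_alloc2()) and an arbitrary 32-byte rounding standing in for whatever the real allocator does: the underlying function takes an optional output parameter for the size actually allocated.

    #include <stdio.h>
    #include <stdlib.h>

    /* The underlying function takes an optional output parameter for
    the size actually allocated; rounding to 32 bytes is arbitrary here. */
    static void*
    my_alloc_func(size_t n, size_t* size)
    {
        size_t real_size = (n + 31) & ~(size_t) 31;

        if (size != NULL) {
            *size = real_size;      /* report what was really allocated */
        }

        return(malloc(real_size));
    }

    /* Two macros select whether the caller cares about the real size,
    mirroring the mem_alloc()/mem_alloc2() pair described above. */
    #define my_alloc(N)     my_alloc_func((N), NULL)
    #define my_alloc2(N, S) my_alloc_func((N), (S))

    int
    main(void)
    {
        size_t  got;
        void*   p = my_alloc(100);          /* size not needed */
        void*   q = my_alloc2(100, &got);   /* size reported back */

        printf("requested 100, got %lu\n", (unsigned long) got);
        free(p);
        free(q);
        return(0);
    }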
-
marko authored
row_merge_print_read and row_merge_print_write.
-
marko authored
was actually obtained from the buddy allocator. This should avoid some internal memory fragmentation in mem_heap_create() and mem_heap_alloc(). mem_area_alloc(): Change the in parameter size to an in/out parameter. Adjust the size based on what was obtained from pool->free_list[]. mem_heap_create_block(): Adjust block->len to what was obtained from mem_area_alloc().
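A short illustration of why a buddy allocator needs an in/out size parameter: requests are rounded up to a power of 2, so the caller should learn the size it really received. The function below is only an illustration, not mem_area_alloc().

    #include <stdio.h>

    /* Round a request up to the next power of 2, as a buddy allocator
    would, and report the granted size back through the same parameter. */
    static void
    round_to_buddy_size(unsigned long* size)    /* in: requested; out: granted */
    {
        unsigned long n = 1;

        while (n < *size) {
            n <<= 1;
        }

        *size = n;
    }

    int
    main(void)
    {
        unsigned long size = 1500;

        round_to_buddy_size(&size);
        printf("granted %lu bytes\n", size);    /* prints 2048 */
        return(0);
    }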
-
marko authored
column d to two SELECT FROM t1.
-
marko authored
rec_print_comp(): New function, sliced from rec_print_new(). rec_print_old(), rec_print_comp(): Print the untruncated length of the column. row_merge_print_read, row_merge_print_write, row_merge_print_cmp: New flags, to enable debug printout in UNIV_DEBUG builds. row_merge_tuple_print(): New function for UNIV_DEBUG builds. row_merge_read_rec(): Obey row_merge_print_read. row_merge_buf_write(), row_merge_write_rec_low(), row_merge_write_eof(): Obey row_merge_print_write. row_merge_cmp(): Obey row_merge_print_cmp.
-
- 20 Sep, 2007 1 commit
-
-
marko authored
in fast index creation. row_merge_write_eof(), row_merge_buf_write(): When UNIV_DEBUG_VALGRIND is defined, fill the rest of the block (after the end-of-block marker) with 0xff.
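A sketch of the fill described above, with a made-up block size and end-of-block marker: after the marker, the unused tail is set to 0xff so the whole block is defined memory and stale bytes are easy to spot under Valgrind.

    #include <stdio.h>
    #include <string.h>

    #define UNIV_DEBUG_VALGRIND     /* comment out to skip the fill */

    #define BLOCK_SIZE  64          /* illustrative; real blocks are larger */
    #define END_MARKER  0x00        /* hypothetical end-of-block marker */

    int
    main(void)
    {
        unsigned char   block[BLOCK_SIZE];
        size_t          used = 20;  /* bytes of real data written */

        memset(block, 0xaa, used);  /* pretend these are records */
        block[used] = END_MARKER;   /* terminate the block */

    #ifdef UNIV_DEBUG_VALGRIND
        /* Fill everything after the marker with 0xff so the whole block
        is defined and stale bytes are easy to spot. */
        memset(block + used + 1, 0xff, BLOCK_SIZE - used - 1);
    #endif

        printf("last byte: 0x%02x\n", block[BLOCK_SIZE - 1]);
        return(0);
    }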
-