Commit e477dba5 authored by Linus Torvalds

Merge tag 'for-6.12/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm

Pull device mapper updates from Mikulas Patocka:

 - Misc VDO fixes

 - Remove unused declarations dm_get_rq_mapinfo() and dm_zone_map_bio()

 - Dm-delay: Improve kernel documentation

 - Dm-crypt: Allow to specify the integrity key size as an option

 - Dm-bufio: Remove pointless NULL check

 - Small code cleanups: Use ERR_CAST; remove unlikely() around IS_ERR;
   use __assign_bit

 - Dm-integrity: Fix gcc 5 warning; convert comma to semicolon; fix
   smatch warning

 - Dm-integrity: Support recalculation in the 'I' mode

 - Revert "dm: requeue IO if mapping table not yet available"

 - Dm-crypt: Small refactoring to make the code more readable

 - Dm-cache: Remove pointless error check

 - Dm: Fix spelling errors

 - Dm-verity: Restart or panic on an I/O error if restart or panic was
   requested

 - Dm-verity: Fallback to platform keyring also if key in trusted
   keyring is rejected

* tag 'for-6.12/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (26 commits)
  dm verity: fallback to platform keyring also if key in trusted keyring is rejected
  dm-verity: restart or panic on an I/O error
  dm: fix spelling errors
  dm-cache: remove pointless error check
  dm vdo: handle unaligned discards correctly
  dm vdo indexer: Convert comma to semicolon
  dm-crypt: Use common error handling code in crypt_set_keyring_key()
  dm-crypt: Use up_read() together with key_put() only once in crypt_set_keyring_key()
  Revert "dm: requeue IO if mapping table not yet available"
  dm-integrity: check mac_size against HASH_MAX_DIGESTSIZE in sb_mac()
  dm-integrity: support recalculation in the 'I' mode
  dm integrity: Convert comma to semicolon
  dm integrity: fix gcc 5 warning
  dm: Make use of __assign_bit() API
  dm integrity: Remove extra unlikely helper
  dm: Convert to use ERR_CAST()
  dm bufio: Remove NULL check of list_entry()
  dm-crypt: Allow to specify the integrity key size as option
  dm: Remove unused declaration and empty definition "dm_zone_map_bio"
  dm delay: enhance kernel documentation
  ...
parents b6c49fca 579b2ba4
@@ -3,29 +3,52 @@ dm-delay
 ========
 Device-Mapper's "delay" target delays reads and/or writes
-and maps them to different devices.
+and/or flushes and optionally maps them to different devices.

-Parameters::
+Arguments::
     <device> <offset> <delay> [<write_device> <write_offset> <write_delay>
                                [<flush_device> <flush_offset> <flush_delay>]]

-With separate write parameters, the first set is only used for reads.
+The table line must have either 3, 6 or 9 arguments:
+
+3: apply offset and delay to read, write and flush operations on device
+6: apply offset and delay to device, also apply write_offset and write_delay
+   to write and flush operations on optionally different write_device with
+   optionally different sector offset
+9: same as 6 arguments plus define flush_offset and flush_delay explicitly
+   on/with optionally different flush_device/flush_offset.
+
 Offsets are specified in sectors.
 Delays are specified in milliseconds.

 Example scripts
 ===============
 ::
 	#!/bin/sh
-	# Create device delaying rw operation for 500ms
-	echo "0 `blockdev --getsz $1` delay $1 0 500" | dmsetup create delayed
+	#
+	# Create mapped device named "delayed" delaying read, write and flush operations for 500ms.
+	#
+	dmsetup create delayed --table "0 `blockdev --getsz $1` delay $1 0 500"

 ::
+	#!/bin/sh
+	#
+	# Create mapped device delaying write and flush operations for 400ms and
+	# splitting reads to device $1 but writes and flushes to different device $2
+	# to different offsets of 2048 and 4096 sectors respectively.
+	#
+	dmsetup create delayed --table "0 `blockdev --getsz $1` delay $1 2048 0 $2 4096 400"
+
+::
 	#!/bin/sh
-	# Create device delaying only write operation for 500ms and
-	# splitting reads and writes to different devices $1 $2
-	echo "0 `blockdev --getsz $1` delay $1 0 0 $2 0 500" | dmsetup create delayed
+	#
+	# Create mapped device delaying reads for 50ms, writes for 100ms and flushes for 333ms
+	# onto the same backing device at offset 0 sectors.
+	#
+	dmsetup create delayed --table "0 `blockdev --getsz $1` delay $1 0 50 $2 0 100 $1 0 333"
@@ -160,6 +160,10 @@ iv_large_sectors
     The <iv_offset> must be multiple of <sector_size> (in 512 bytes units)
     if this flag is specified.

+integrity_key_size:<bytes>
+    Use an integrity key of <bytes> size instead of defaulting to the digest
+    size of the HMAC algorithm in use.
+
 Module parameters::
     max_read_size
...
@@ -251,7 +251,12 @@ The messages are:
 		by the vdostats userspace program to interpret the output
 		buffer.

+	config:
+		Outputs useful vdo configuration information. Mostly used
+		by users who want to recreate a similar VDO volume and
+		want to know the creation configuration used.
+
 	dump:
 		Dumps many internal structures to the system log. This is
 		not always safe to run, so it should only be used to debug
 		a hung vdo. Optional parameters to specify structures to
...
@@ -529,9 +529,6 @@ static struct dm_buffer *list_to_buffer(struct list_head *l)
 {
 	struct lru_entry *le = list_entry(l, struct lru_entry, list);

-	if (!le)
-		return NULL;
-
 	return le_to_buffer(le);
 }
...
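
The removed NULL check was dead code: list_entry() is container_of(), pure pointer arithmetic that cannot yield NULL for the non-NULL list head the caller hands in. A minimal userspace sketch of the idiom, with hypothetical stand-in types rather than the kernel's:

#include <stddef.h>
#include <stdio.h>

/* Userspace stand-ins for the kernel's list_head/container_of machinery. */
struct list_head { struct list_head *next, *prev; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)

struct lru_entry { struct list_head list; };
struct buffer { int id; struct lru_entry lru; };

int main(void)
{
	struct buffer b = { .id = 42 };
	struct list_head *l = &b.lru.list;

	/* Pure pointer subtraction: for a non-NULL 'l' the result is never NULL. */
	struct lru_entry *le = list_entry(l, struct lru_entry, list);
	struct buffer *back = container_of(le, struct buffer, lru);

	printf("%d\n", back->id); /* prints 42 */
	return 0;
}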
@@ -1368,7 +1368,7 @@ static void mg_copy(struct work_struct *ws)
 		 */
 		bool rb = bio_detain_shared(mg->cache, mg->op->oblock, mg->overwrite_bio);

-		BUG_ON(rb); /* An exclussive lock must _not_ be held for this block */
+		BUG_ON(rb); /* An exclusive lock must _not_ be held for this block */
 		mg->overwrite_bio = NULL;
 		inc_io_migrations(mg->cache);
 		mg_full_copy(ws);
@@ -3200,8 +3200,6 @@ static int parse_cblock_range(struct cache *cache, const char *str,
 	 * Try and parse form (ii) first.
 	 */
 	r = sscanf(str, "%llu-%llu%c", &b, &e, &dummy);
-	if (r < 0)
-		return r;

 	if (r == 2) {
 		result->begin = to_cblock(b);
@@ -3213,8 +3211,6 @@ static int parse_cblock_range(struct cache *cache, const char *str,
 	 * That didn't work, try form (i).
 	 */
 	r = sscanf(str, "%llu%c", &b, &dummy);
-	if (r < 0)
-		return r;

 	if (r == 1) {
 		result->begin = to_cblock(b);
...
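
The dropped `r < 0` branches were effectively dead: sscanf() returns the number of successful conversions, going negative (EOF) only when the input ends before the first conversion, and the function's fall-through path already rejects anything that is not an exact match. A quick userspace demonstration of the return values:

#include <stdio.h>

int main(void)
{
	unsigned long long b, e;
	char dummy;

	/* Two conversions succeed, trailing %c finds nothing: returns 2. */
	printf("%d\n", sscanf("100-200", "%llu-%llu%c", &b, &e, &dummy));
	/* First conversion fails on non-numeric input: returns 0, not a negative errno. */
	printf("%d\n", sscanf("junk", "%llu-%llu%c", &b, &e, &dummy));
	/* Only empty input yields EOF (-1); callers still see r != 2 and reject it. */
	printf("%d\n", sscanf("", "%llu-%llu%c", &b, &e, &dummy));
	return 0;
}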
@@ -530,10 +530,7 @@ static int __load_bitset_in_core(struct dm_clone_metadata *cmd)
 		return r;

 	for (i = 0; ; i++) {
-		if (dm_bitset_cursor_get_value(&c))
-			__set_bit(i, cmd->region_map);
-		else
-			__clear_bit(i, cmd->region_map);
+		__assign_bit(i, cmd->region_map, dm_bitset_cursor_get_value(&c));

 		if (i >= (cmd->nr_regions - 1))
 			break;
...
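
__assign_bit(nr, addr, value) is the helper for exactly this set-or-clear pattern; the non-atomic double-underscore variants assume the caller already serializes access, which holds here while the bitmap is being loaded. A standalone sketch of the semantics (simplified, not the kernel implementation):

#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void __set_bit(unsigned int nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

static void __clear_bit(unsigned int nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] &= ~(1UL << (nr % BITS_PER_LONG));
}

/* Equivalent to: if (value) __set_bit(...); else __clear_bit(...); */
static void __assign_bit(unsigned int nr, unsigned long *addr, bool value)
{
	if (value)
		__set_bit(nr, addr);
	else
		__clear_bit(nr, addr);
}

int main(void)
{
	unsigned long map[1] = { 0 };

	__assign_bit(3, map, true);
	__assign_bit(3, map, false);
	printf("%lx\n", map[0]); /* 0: bit set, then cleared again */
	return 0;
}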
@@ -147,6 +147,7 @@ enum cipher_flags {
 	CRYPT_MODE_INTEGRITY_AEAD,	/* Use authenticated mode for cipher */
 	CRYPT_IV_LARGE_SECTORS,		/* Calculate IV from sector_size, not 512B sectors */
 	CRYPT_ENCRYPT_PREPROCESS,	/* Must preprocess data for encryption (elephant) */
+	CRYPT_KEY_MAC_SIZE_SET,		/* The integrity_key_size option was used */
 };

 /*
@@ -2613,35 +2614,31 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
 	key = request_key(type, key_desc + 1, NULL);
 	if (IS_ERR(key)) {
-		kfree_sensitive(new_key_string);
-		return PTR_ERR(key);
+		ret = PTR_ERR(key);
+		goto free_new_key_string;
 	}

 	down_read(&key->sem);
 	ret = set_key(cc, key);
-	if (ret < 0) {
-		up_read(&key->sem);
-		key_put(key);
-		kfree_sensitive(new_key_string);
-		return ret;
-	}
 	up_read(&key->sem);
 	key_put(key);
+	if (ret < 0)
+		goto free_new_key_string;

 	/* clear the flag since following operations may invalidate previously valid key */
 	clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);

 	ret = crypt_setkey(cc);
+	if (ret)
+		goto free_new_key_string;

-	if (!ret) {
-		set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
-		kfree_sensitive(cc->key_string);
-		cc->key_string = new_key_string;
-	} else
-		kfree_sensitive(new_key_string);
+	set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
+	kfree_sensitive(cc->key_string);
+	cc->key_string = new_key_string;
+	return 0;

+free_new_key_string:
+	kfree_sensitive(new_key_string);
 	return ret;
 }
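
The refactor funnels every failure path through one free_new_key_string label instead of repeating kfree_sensitive() at each early return, the kernel's standard centralized-exit idiom. A minimal userspace sketch of the pattern, with hypothetical names:

#include <stdlib.h>
#include <string.h>

/* Hypothetical fallible steps operating on an acquired resource. */
static int step_one(void *buf) { return buf ? 0 : -1; }
static int step_two(void *buf) { return buf ? 0 : -1; }

static int do_operation(size_t len)
{
	int ret;
	char *buf = malloc(len);

	if (!buf)
		return -1;

	ret = step_one(buf);
	if (ret < 0)
		goto free_buf;	/* one cleanup path for every failure */

	ret = step_two(buf);
	if (ret < 0)
		goto free_buf;

	free(buf);		/* success path releases (or hands off) the buffer */
	return 0;

free_buf:
	memset(buf, 0, len);	/* mirror kfree_sensitive(): wipe before freeing */
	free(buf);
	return ret;
}

int main(void) { return do_operation(16) ? 1 : 0; }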
@@ -2937,7 +2934,8 @@ static int crypt_ctr_auth_cipher(struct crypt_config *cc, char *cipher_api)
 	if (IS_ERR(mac))
 		return PTR_ERR(mac);

-	cc->key_mac_size = crypto_ahash_digestsize(mac);
+	if (!test_bit(CRYPT_KEY_MAC_SIZE_SET, &cc->cipher_flags))
+		cc->key_mac_size = crypto_ahash_digestsize(mac);
 	crypto_free_ahash(mac);

 	cc->authenc_key = kmalloc(crypt_authenckey_size(cc), GFP_KERNEL);
@@ -3219,6 +3217,13 @@ static int crypt_ctr_optional(struct dm_target *ti, unsigned int argc, char **ar
 			cc->cipher_auth = kstrdup(sval, GFP_KERNEL);
 			if (!cc->cipher_auth)
 				return -ENOMEM;
+		} else if (sscanf(opt_string, "integrity_key_size:%u%c", &val, &dummy) == 1) {
+			if (!val) {
+				ti->error = "Invalid integrity_key_size argument";
+				return -EINVAL;
+			}
+			cc->key_mac_size = val;
+			set_bit(CRYPT_KEY_MAC_SIZE_SET, &cc->cipher_flags);
 		} else if (sscanf(opt_string, "sector_size:%hu%c", &cc->sector_size, &dummy) == 1) {
 			if (cc->sector_size < (1 << SECTOR_SHIFT) ||
 			    cc->sector_size > 4096 ||
@@ -3607,10 +3612,10 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
 		num_feature_args += test_bit(DM_CRYPT_NO_OFFLOAD, &cc->flags);
 		num_feature_args += test_bit(DM_CRYPT_NO_READ_WORKQUEUE, &cc->flags);
 		num_feature_args += test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags);
+		num_feature_args += !!cc->used_tag_size;
 		num_feature_args += cc->sector_size != (1 << SECTOR_SHIFT);
 		num_feature_args += test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags);
-		if (cc->used_tag_size)
-			num_feature_args++;
+		num_feature_args += test_bit(CRYPT_KEY_MAC_SIZE_SET, &cc->cipher_flags);
 		if (num_feature_args) {
 			DMEMIT(" %d", num_feature_args);
 			if (ti->num_discard_bios)
@@ -3631,6 +3636,8 @@ static void crypt_status(struct dm_target *ti, status_type_t type,
 				DMEMIT(" sector_size:%d", cc->sector_size);
 			if (test_bit(CRYPT_IV_LARGE_SECTORS, &cc->cipher_flags))
 				DMEMIT(" iv_large_sectors");
+			if (test_bit(CRYPT_KEY_MAC_SIZE_SET, &cc->cipher_flags))
+				DMEMIT(" integrity_key_size:%u", cc->key_mac_size);
 		}
 		break;
@@ -3758,7 +3765,7 @@ static void crypt_io_hints(struct dm_target *ti, struct queue_limits *limits)
 static struct target_type crypt_target = {
 	.name = "crypt",
-	.version = {1, 27, 0},
+	.version = {1, 28, 0},
 	.module = THIS_MODULE,
 	.ctr = crypt_ctr,
 	.dtr = crypt_dtr,
...
@@ -2519,7 +2519,7 @@ static int super_validate(struct raid_set *rs, struct md_rdev *rdev)
 		rdev->saved_raid_disk = rdev->raid_disk;
 	}

-	/* Reshape support -> restore repective data offsets */
+	/* Reshape support -> restore respective data offsets */
 	rdev->data_offset = le64_to_cpu(sb->data_offset);
 	rdev->new_data_offset = le64_to_cpu(sb->new_data_offset);
...
@@ -496,8 +496,10 @@ static blk_status_t dm_mq_queue_rq(struct blk_mq_hw_ctx *hctx,
 	map = dm_get_live_table(md, &srcu_idx);
 	if (unlikely(!map)) {
+		DMERR_LIMIT("%s: mapping table unavailable, erroring io",
+			    dm_device_name(md));
 		dm_put_live_table(md, srcu_idx);
-		return BLK_STS_RESOURCE;
+		return BLK_STS_IOERR;
 	}
 	ti = dm_table_find_target(map, 0);
 	dm_put_live_table(md, srcu_idx);
...
@@ -2948,7 +2948,7 @@ static struct pool *pool_create(struct mapped_device *pool_md,
 	pmd = dm_pool_metadata_open(metadata_dev, block_size, format_device);
 	if (IS_ERR(pmd)) {
 		*error = "Error creating metadata object";
-		return (struct pool *)pmd;
+		return ERR_CAST(pmd);
 	}

 	pool = kzalloc(sizeof(*pool), GFP_KERNEL);
...
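
ERR_CAST() re-encodes an error pointer of one type as another without a bare C cast, keeping the intent visible to readers and static checkers. A userspace sketch of the encoded-errno convention it belongs to, using simplified versions of the kernel's err.h helpers:

#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	/* Error pointers live in the top MAX_ERRNO addresses. */
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
/* ERR_CAST: pass an errno-carrying pointer through as a different type. */
static inline void *ERR_CAST(const void *ptr) { return (void *)ptr; }

struct metadata { int dummy; };
struct pool { int dummy; };

static struct metadata *open_metadata(int fail)
{
	static struct metadata md;
	return fail ? (struct metadata *)ERR_PTR(-5 /* -EIO */) : &md;
}

static struct pool *create_pool(int fail)
{
	struct metadata *pmd = open_metadata(fail);

	if (IS_ERR(pmd))
		return ERR_CAST(pmd); /* propagate the encoded errno upward */
	return NULL; /* real code would allocate and return a pool here */
}

int main(void)
{
	struct pool *p = create_pool(1);

	if (IS_ERR(p))
		printf("error %ld\n", PTR_ERR(p)); /* error -5 */
	return 0;
}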
@@ -501,6 +501,7 @@ static void launch_data_vio(struct data_vio *data_vio, logical_block_number_t lb
 	memset(&data_vio->record_name, 0, sizeof(data_vio->record_name));
 	memset(&data_vio->duplicate, 0, sizeof(data_vio->duplicate));
+	vdo_reset_completion(&data_vio->decrement_completion);
 	vdo_reset_completion(completion);
 	completion->error_handler = handle_data_vio_error;
 	set_data_vio_logical_callback(data_vio, attempt_logical_block_lock);
@@ -1273,12 +1274,14 @@ static void clean_hash_lock(struct vdo_completion *completion)
 static void finish_cleanup(struct data_vio *data_vio)
 {
 	struct vdo_completion *completion = &data_vio->vio.completion;
+	u32 discard_size = min_t(u32, data_vio->remaining_discard,
+				 VDO_BLOCK_SIZE - data_vio->offset);

 	VDO_ASSERT_LOG_ONLY(data_vio->allocation.lock == NULL,
 			    "complete data_vio has no allocation lock");
 	VDO_ASSERT_LOG_ONLY(data_vio->hash_lock == NULL,
 			    "complete data_vio has no hash lock");
-	if ((data_vio->remaining_discard <= VDO_BLOCK_SIZE) ||
+	if ((data_vio->remaining_discard <= discard_size) ||
 	    (completion->result != VDO_SUCCESS)) {
 		struct data_vio_pool *pool = completion->vdo->data_vio_pool;
@@ -1287,12 +1290,12 @@ static void finish_cleanup(struct data_vio *data_vio)
 		return;
 	}

-	data_vio->remaining_discard -= min_t(u32, data_vio->remaining_discard,
-					     VDO_BLOCK_SIZE - data_vio->offset);
+	data_vio->remaining_discard -= discard_size;
 	data_vio->is_partial = (data_vio->remaining_discard < VDO_BLOCK_SIZE);
 	data_vio->read = data_vio->is_partial;
 	data_vio->offset = 0;
 	completion->requeue = true;
+	data_vio->first_reference_operation_complete = false;
 	launch_data_vio(data_vio, data_vio->logical.lbn + 1);
 }
@@ -1965,7 +1968,8 @@ static void allocate_block(struct vdo_completion *completion)
 		.state = VDO_MAPPING_STATE_UNCOMPRESSED,
 	};

-	if (data_vio->fua) {
+	if (data_vio->fua ||
+	    data_vio->remaining_discard > (u32) (VDO_BLOCK_SIZE - data_vio->offset)) {
 		prepare_for_dedupe(data_vio);
 		return;
 	}
@@ -2042,7 +2046,6 @@ void continue_data_vio_with_block_map_slot(struct vdo_completion *completion)
 		return;
 	}
-
 	/*
 	 * We don't need to write any data, so skip allocation and just update the block map and
 	 * reference counts (via the journal).
@@ -2051,7 +2054,7 @@ void continue_data_vio_with_block_map_slot(struct vdo_completion *completion)
 	if (data_vio->is_zero)
 		data_vio->new_mapped.state = VDO_MAPPING_STATE_UNCOMPRESSED;

-	if (data_vio->remaining_discard > VDO_BLOCK_SIZE) {
+	if (data_vio->remaining_discard > (u32) (VDO_BLOCK_SIZE - data_vio->offset)) {
 		/* This is not the final block of a discard so we can't acknowledge it yet. */
 		update_metadata_for_data_vio_write(data_vio, NULL);
 		return;
...
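
The fix replaces the fixed VDO_BLOCK_SIZE step with min(remaining, VDO_BLOCK_SIZE - offset), so a discard that starts mid-block only consumes the bytes up to the block boundary. A worked example of the arithmetic, assuming VDO's 4096-byte block and a hypothetical unaligned request:

#include <stdio.h>

#define VDO_BLOCK_SIZE 4096u

static unsigned int min_u32(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Hypothetical unaligned discard: 10000 bytes starting 512 bytes into a block. */
	unsigned int remaining = 10000, offset = 512;

	while (remaining > 0) {
		/* The per-iteration step the patch introduces as 'discard_size'. */
		unsigned int discard_size = min_u32(remaining, VDO_BLOCK_SIZE - offset);

		printf("discard %u bytes at offset %u\n", discard_size, offset);
		remaining -= discard_size;
		offset = 0; /* subsequent blocks start aligned */
	}
	/* Steps: 3584, 4096, 2320 -- a fixed 4096 step would mis-count the first block. */
	return 0;
}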
@@ -729,6 +729,7 @@ static void process_update_result(struct data_vio *agent)
 	    !change_context_state(context, DEDUPE_CONTEXT_COMPLETE, DEDUPE_CONTEXT_IDLE))
 		return;

+	agent->dedupe_context = NULL;
 	release_context(context);
 }
@@ -1648,6 +1649,7 @@ static void process_query_result(struct data_vio *agent)
 	if (change_context_state(context, DEDUPE_CONTEXT_COMPLETE, DEDUPE_CONTEXT_IDLE)) {
 		agent->is_duplicate = decode_uds_advice(context);
+		agent->dedupe_context = NULL;
 		release_context(context);
 	}
 }
@@ -2321,6 +2323,7 @@ static void timeout_index_operations_callback(struct vdo_completion *completion)
 		 * send its requestor on its way.
 		 */
 		list_del_init(&context->list_entry);
+		context->requestor->dedupe_context = NULL;
 		continue_data_vio(context->requestor);
 		timed_out++;
 	}
...
@@ -1105,6 +1105,9 @@ static int vdo_message(struct dm_target *ti, unsigned int argc, char **argv,
 	if ((argc == 1) && (strcasecmp(argv[0], "stats") == 0)) {
 		vdo_write_stats(vdo, result_buffer, maxlen);
 		result = 1;
+	} else if ((argc == 1) && (strcasecmp(argv[0], "config") == 0)) {
+		vdo_write_config(vdo, &result_buffer, &maxlen);
+		result = 1;
 	} else {
 		result = vdo_status_to_errno(process_vdo_message(vdo, argc, argv));
 	}
@@ -2293,6 +2296,14 @@ static void handle_load_error(struct vdo_completion *completion)
 		return;
 	}

+	if ((completion->result == VDO_UNSUPPORTED_VERSION) &&
+	    (vdo->admin.phase == LOAD_PHASE_MAKE_DIRTY)) {
+		vdo_log_error("Aborting load due to unsupported version");
+		vdo->admin.phase = LOAD_PHASE_FINISHED;
+		load_callback(completion);
+		return;
+	}
+
 	vdo_log_error_strerror(completion->result,
 			       "Entering read-only mode due to load error");
 	vdo->admin.phase = LOAD_PHASE_WAIT_FOR_READ_ONLY;
@@ -2737,6 +2748,19 @@ static int vdo_preresume_registered(struct dm_target *ti, struct vdo *vdo)
 	vdo_log_info("starting device '%s'", device_name);
 	result = perform_admin_operation(vdo, LOAD_PHASE_START, load_callback,
 					 handle_load_error, "load");
+	if (result == VDO_UNSUPPORTED_VERSION) {
+		/*
+		 * A component version is not supported. This can happen when the
+		 * recovery journal metadata is in an old version format. Abort the
+		 * load without saving the state.
+		 */
+		vdo->suspend_type = VDO_ADMIN_STATE_SUSPENDING;
+		perform_admin_operation(vdo, SUSPEND_PHASE_START,
+					suspend_callback, suspend_callback,
+					"suspend");
+		return result;
+	}
+
 	if ((result != VDO_SUCCESS) && (result != VDO_READ_ONLY)) {
 		/*
 		 * Something has gone very wrong. Make sure everything has drained and
@@ -2808,7 +2832,8 @@ static int vdo_preresume(struct dm_target *ti)
 	vdo_register_thread_device_id(&instance_thread, &vdo->instance);
 	result = vdo_preresume_registered(ti, vdo);
-	if ((result == VDO_PARAMETER_MISMATCH) || (result == VDO_INVALID_ADMIN_STATE))
+	if ((result == VDO_PARAMETER_MISMATCH) || (result == VDO_INVALID_ADMIN_STATE) ||
+	    (result == VDO_UNSUPPORTED_VERSION))
 		result = -EINVAL;
 	vdo_unregister_thread_device_id();
 	return vdo_status_to_errno(result);
@@ -2832,7 +2857,7 @@ static void vdo_resume(struct dm_target *ti)
 static struct target_type vdo_target_bio = {
 	.features = DM_TARGET_SINGLETON,
 	.name = "vdo",
-	.version = { 9, 0, 0 },
+	.version = { 9, 1, 0 },
 	.module = THIS_MODULE,
 	.ctr = vdo_ctr,
 	.dtr = vdo_dtr,
...
@@ -177,7 +177,7 @@ int uds_pack_open_chapter_index_page(struct open_chapter_index *chapter_index,
 		if (list_number < 0)
 			return UDS_OVERFLOW;

-		next_list = first_list + list_number--,
+		next_list = first_list + list_number--;
 		result = uds_start_delta_index_search(delta_index, next_list, 0,
 						      &entry);
 		if (result != UDS_SUCCESS)
...
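
The stray comma compiled because the comma operator merely sequences the two expressions into a single statement, so behavior was unchanged, but it silently ties the assignment to the following call and misleads both readers and tooling. A small illustration:

#include <stdio.h>

int main(void)
{
	int first_list = 10, list_number = 2, next_list, result;

	/* Comma operator: both expressions run, but this is ONE statement. */
	next_list = first_list + list_number--,
	result = next_list * 2;

	printf("%d %d %d\n", next_list, list_number, result); /* 12 1 24 */

	/* The intended form: two independent statements. */
	next_list = first_list + list_number--;
	result = next_list * 2;

	printf("%d %d %d\n", next_list, list_number, result); /* 11 0 22 */
	return 0;
}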
@@ -346,7 +346,6 @@ void __submit_metadata_vio(struct vio *vio, physical_block_number_t physical,
 	VDO_ASSERT_LOG_ONLY(!code->quiescent, "I/O not allowed in state %s", code->name);
-	VDO_ASSERT_LOG_ONLY(vio->bio->bi_next == NULL, "metadata bio has no next bio");

 	vdo_reset_completion(completion);
 	completion->error_handler = error_handler;
...
@@ -4,6 +4,7 @@
  */

 #include "dedupe.h"
+#include "indexer.h"
 #include "logger.h"
 #include "memory-alloc.h"
 #include "message-stats.h"
@@ -430,3 +431,50 @@ int vdo_write_stats(struct vdo *vdo, char *buf, unsigned int maxlen)
 	vdo_free(stats);
 	return VDO_SUCCESS;
 }
+
+static void write_index_memory(u32 mem, char **buf, unsigned int *maxlen)
+{
+	char *prefix = "memorySize : ";
+
+	/* Convert index memory to fractional value */
+	if (mem == (u32)UDS_MEMORY_CONFIG_256MB)
+		write_string(prefix, "0.25, ", NULL, buf, maxlen);
+	else if (mem == (u32)UDS_MEMORY_CONFIG_512MB)
+		write_string(prefix, "0.50, ", NULL, buf, maxlen);
+	else if (mem == (u32)UDS_MEMORY_CONFIG_768MB)
+		write_string(prefix, "0.75, ", NULL, buf, maxlen);
+	else
+		write_u32(prefix, mem, ", ", buf, maxlen);
+}
+
+static void write_index_config(struct index_config *config, char **buf,
+			       unsigned int *maxlen)
+{
+	write_string("index : ", "{ ", NULL, buf, maxlen);
+	/* index mem size */
+	write_index_memory(config->mem, buf, maxlen);
+	/* whether the index is sparse or not */
+	write_bool("isSparse : ", config->sparse, ", ", buf, maxlen);
+	write_string(NULL, "}", ", ", buf, maxlen);
+}
+
+int vdo_write_config(struct vdo *vdo, char **buf, unsigned int *maxlen)
+{
+	struct vdo_config *config = &vdo->states.vdo.config;
+
+	write_string(NULL, "{ ", NULL, buf, maxlen);
+	/* version */
+	write_u32("version : ", 1, ", ", buf, maxlen);
+	/* physical size */
+	write_block_count_t("physicalSize : ", config->physical_blocks * VDO_BLOCK_SIZE, ", ",
+			    buf, maxlen);
+	/* logical size */
+	write_block_count_t("logicalSize : ", config->logical_blocks * VDO_BLOCK_SIZE, ", ",
+			    buf, maxlen);
+	/* slab size */
+	write_block_count_t("slabSize : ", config->slab_size, ", ", buf, maxlen);
+	/* index config */
+	write_index_config(&vdo->geometry.index_config, buf, maxlen);
+	write_string(NULL, "}", NULL, buf, maxlen);
+	return VDO_SUCCESS;
+}
@@ -8,6 +8,7 @@
 #include "types.h"

+int vdo_write_config(struct vdo *vdo, char **buf, unsigned int *maxlen);
 int vdo_write_stats(struct vdo *vdo, char *buf, unsigned int maxlen);

 #endif /* VDO_MESSAGE_STATS_H */
@@ -1202,17 +1202,14 @@ static bool __must_check is_valid_recovery_journal_block(const struct recovery_j
  * @journal: The journal to use.
  * @header: The unpacked block header to check.
  * @sequence: The expected sequence number.
- * @type: The expected metadata type.
  *
  * Return: True if the block matches.
  */
 static bool __must_check is_exact_recovery_journal_block(const struct recovery_journal *journal,
 							 const struct recovery_block_header *header,
-							 sequence_number_t sequence,
-							 enum vdo_metadata_type type)
+							 sequence_number_t sequence)
 {
-	return ((header->metadata_type == type) &&
-		(header->sequence_number == sequence) &&
+	return ((header->sequence_number == sequence) &&
 		(is_valid_recovery_journal_block(journal, header, true)));
 }
@@ -1371,7 +1368,8 @@ static void extract_entries_from_block(struct repair_completion *repair,
 		get_recovery_journal_block_header(journal, repair->journal_data,
 						  sequence);

-	if (!is_exact_recovery_journal_block(journal, &header, sequence, format)) {
+	if (!is_exact_recovery_journal_block(journal, &header, sequence) ||
+	    (header.metadata_type != format)) {
 		/* This block is invalid, so skip it. */
 		return;
 	}
@@ -1557,10 +1555,13 @@ static int parse_journal_for_recovery(struct repair_completion *repair)
 	sequence_number_t i, head;
 	bool found_entries = false;
 	struct recovery_journal *journal = repair->completion.vdo->recovery_journal;
+	struct recovery_block_header header;
+	enum vdo_metadata_type expected_format;

 	head = min(repair->block_map_head, repair->slab_journal_head);
+	header = get_recovery_journal_block_header(journal, repair->journal_data, head);
+	expected_format = header.metadata_type;
 	for (i = head; i <= repair->highest_tail; i++) {
-		struct recovery_block_header header;
 		journal_entry_count_t block_entries;
 		u8 j;
@@ -1572,19 +1573,15 @@ static int parse_journal_for_recovery(struct repair_completion *repair)
 		};

 		header = get_recovery_journal_block_header(journal, repair->journal_data, i);
-		if (header.metadata_type == VDO_METADATA_RECOVERY_JOURNAL) {
-			/* This is an old format block, so we need to upgrade */
-			vdo_log_error_strerror(VDO_UNSUPPORTED_VERSION,
-					       "Recovery journal is in the old format, a read-only rebuild is required.");
-			vdo_enter_read_only_mode(repair->completion.vdo,
-						 VDO_UNSUPPORTED_VERSION);
-			return VDO_UNSUPPORTED_VERSION;
-		}
-
-		if (!is_exact_recovery_journal_block(journal, &header, i,
-						     VDO_METADATA_RECOVERY_JOURNAL_2)) {
+		if (!is_exact_recovery_journal_block(journal, &header, i)) {
 			/* A bad block header was found so this must be the end of the journal. */
 			break;
+		} else if (header.metadata_type != expected_format) {
+			/* There is a mix of old and new format blocks, so we need to rebuild. */
+			vdo_log_error_strerror(VDO_CORRUPT_JOURNAL,
+					       "Recovery journal is in an invalid format, a read-only rebuild is required.");
+			vdo_enter_read_only_mode(repair->completion.vdo, VDO_CORRUPT_JOURNAL);
+			return VDO_CORRUPT_JOURNAL;
 		}

 		block_entries = header.entry_count;
@@ -1620,8 +1617,14 @@ static int parse_journal_for_recovery(struct repair_completion *repair)
 			break;
 	}

-	if (!found_entries)
+	if (!found_entries) {
 		return validate_heads(repair);
+	} else if (expected_format == VDO_METADATA_RECOVERY_JOURNAL) {
+		/* All journal blocks have the old format, so we need to upgrade. */
+		vdo_log_error_strerror(VDO_UNSUPPORTED_VERSION,
+				       "Recovery journal is in the old format. Downgrade and complete recovery, then upgrade with a clean volume");
+		return VDO_UNSUPPORTED_VERSION;
+	}

 	/* Set the tail to the last valid tail block, if there is one. */
 	if (repair->tail_recovery_point.sector_count == 0)
...
@@ -28,7 +28,7 @@ const struct error_info vdo_status_list[] = {
 	{ "VDO_LOCK_ERROR", "A lock is held incorrectly" },
 	{ "VDO_READ_ONLY", "The device is in read-only mode" },
 	{ "VDO_SHUTTING_DOWN", "The device is shutting down" },
-	{ "VDO_CORRUPT_JOURNAL", "Recovery journal entries corrupted" },
+	{ "VDO_CORRUPT_JOURNAL", "Recovery journal corrupted" },
 	{ "VDO_TOO_MANY_SLABS", "Exceeds maximum number of slabs supported" },
 	{ "VDO_INVALID_FRAGMENT", "Compressed block fragment is invalid" },
 	{ "VDO_RETRY_AFTER_REBUILD", "Retry operation after rebuilding finishes" },
...
@@ -52,7 +52,7 @@ enum vdo_status_codes {
 	VDO_READ_ONLY,
 	/* the VDO is shutting down */
 	VDO_SHUTTING_DOWN,
-	/* the recovery journal has corrupt entries */
+	/* the recovery journal has corrupt entries or corrupt metadata */
 	VDO_CORRUPT_JOURNAL,
 	/* exceeds maximum number of slabs supported */
 	VDO_TOO_MANY_SLABS,
...
@@ -273,8 +273,10 @@ static int verity_handle_err(struct dm_verity *v, enum verity_block_type type,
 	if (v->mode == DM_VERITY_MODE_LOGGING)
 		return 0;

-	if (v->mode == DM_VERITY_MODE_RESTART)
-		kernel_restart("dm-verity device corrupted");
+	if (v->mode == DM_VERITY_MODE_RESTART) {
+		pr_emerg("dm-verity device corrupted\n");
+		emergency_restart();
+	}

 	if (v->mode == DM_VERITY_MODE_PANIC)
 		panic("dm-verity device corrupted");
@@ -597,6 +599,23 @@ static void verity_finish_io(struct dm_verity_io *io, blk_status_t status)
 	if (!static_branch_unlikely(&use_bh_wq_enabled) || !io->in_bh)
 		verity_fec_finish_io(io);

+	if (unlikely(status != BLK_STS_OK) &&
+	    unlikely(!(bio->bi_opf & REQ_RAHEAD)) &&
+	    !verity_is_system_shutting_down()) {
+		if (v->mode == DM_VERITY_MODE_RESTART ||
+		    v->mode == DM_VERITY_MODE_PANIC)
+			DMERR_LIMIT("%s has error: %s", v->data_dev->name,
+				    blk_status_to_str(status));
+
+		if (v->mode == DM_VERITY_MODE_RESTART) {
+			pr_emerg("dm-verity device corrupted\n");
+			emergency_restart();
+		}
+
+		if (v->mode == DM_VERITY_MODE_PANIC)
+			panic("dm-verity device corrupted");
+	}
+
 	bio_endio(bio);
 }
...
@@ -127,7 +127,7 @@ int verity_verify_root_hash(const void *root_hash, size_t root_hash_len,
 #endif
 				     VERIFYING_UNSPECIFIED_SIGNATURE, NULL, NULL);
 #ifdef CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG_PLATFORM_KEYRING
-	if (ret == -ENOKEY)
+	if (ret == -ENOKEY || ret == -EKEYREJECTED)
 		ret = verify_pkcs7_signature(root_hash, root_hash_len, sig_data,
 					     sig_len,
 					     VERIFY_USE_PLATFORM_KEYRING,
...
@@ -2030,10 +2030,15 @@ static void dm_submit_bio(struct bio *bio)
 	struct dm_table *map;

 	map = dm_get_live_table(md, &srcu_idx);
+	if (unlikely(!map)) {
+		DMERR_LIMIT("%s: mapping table unavailable, erroring io",
+			    dm_device_name(md));
+		bio_io_error(bio);
+		goto out;
+	}

-	/* If suspended, or map not yet available, queue this IO for later */
-	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags)) ||
-	    unlikely(!map)) {
+	/* If suspended, queue this IO for later */
+	if (unlikely(test_bit(DMF_BLOCK_IO_FOR_SUSPEND, &md->flags))) {
 		if (bio->bi_opf & REQ_NOWAIT)
 			bio_wouldblock_error(bio);
 		else if (bio->bi_opf & REQ_RAHEAD)
...
@@ -109,7 +109,6 @@ void dm_zone_endio(struct dm_io *io, struct bio *clone);
 int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
 			unsigned int nr_zones, report_zones_cb cb, void *data);
 bool dm_is_zone_write(struct mapped_device *md, struct bio *bio);
-int dm_zone_map_bio(struct dm_target_io *io);
 int dm_zone_get_reset_bitmap(struct mapped_device *md, struct dm_table *t,
 			     sector_t sector, unsigned int nr_zones,
 			     unsigned long *need_reset);
@@ -119,10 +118,6 @@ static inline bool dm_is_zone_write(struct mapped_device *md, struct bio *bio)
 {
 	return false;
 }
-static inline int dm_zone_map_bio(struct dm_target_io *tio)
-{
-	return DM_MAPIO_KILL;
-}
 #endif

 /*
...
@@ -524,7 +524,6 @@ int dm_post_suspending(struct dm_target *ti);
 int dm_noflush_suspending(struct dm_target *ti);
 void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors);
 void dm_submit_bio_remap(struct bio *clone, struct bio *tgt_clone);
-union map_info *dm_get_rq_mapinfo(struct request *rq);

 #ifdef CONFIG_BLK_DEV_ZONED
 struct dm_report_zones_args {
...