Commit b42cd7b3 authored by Yu Kuai, committed by Song Liu

md/raid5: replace suspend with quiesce() callback

raid5 is the only personality that suspends the array in the check_reshape()
and start_reshape() callbacks. The suspend path and the quiesce() callback can
both wait for all normal IO to complete and prevent new IO from being
dispatched; the difference is that suspend is implemented in the common layer,
while the quiesce() callback is implemented in raid5.

In order to clean up all the usage of mddev_suspend(), the new API
__mddev_suspend() needs to be called before 'reconfig_mutex' is held, and it's
not good to affect all the personalities in the common layer just for raid5.
Hence replace suspend with the quiesce() callback, in preparation for removing
all the users of mddev_suspend().
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Signed-off-by: Song Liu <song@kernel.org>
Link: https://lore.kernel.org/r/20231010151958.145896-17-yukuai1@huaweicloud.com
parent 1978c742
@@ -70,6 +70,8 @@ MODULE_PARM_DESC(devices_handle_discard_safely,
 	 "Set to Y if all devices in each array reliably return zeroes on reads from discarded regions");
 static struct workqueue_struct *raid5_wq;
 
+static void raid5_quiesce(struct mddev *mddev, int quiesce);
+
 static inline struct hlist_head *stripe_hash(struct r5conf *conf, sector_t sect)
 {
 	int hash = (sect >> RAID5_STRIPE_SHIFT(conf)) & HASH_MASK;
@@ -2492,15 +2494,12 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
 	unsigned long cpu;
 	int err = 0;
 
-	/*
-	 * Never shrink. And mddev_suspend() could deadlock if this is called
-	 * from raid5d. In that case, scribble_disks and scribble_sectors
-	 * should equal to new_disks and new_sectors
-	 */
+	/* Never shrink. */
 	if (conf->scribble_disks >= new_disks &&
 	    conf->scribble_sectors >= new_sectors)
 		return 0;
 
-	mddev_suspend(conf->mddev);
+	raid5_quiesce(conf->mddev, true);
+
 	cpus_read_lock();
 	for_each_present_cpu(cpu) {
@@ -2514,7 +2513,8 @@ static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors)
 	}
 	cpus_read_unlock();
-	mddev_resume(conf->mddev);
+
+	raid5_quiesce(conf->mddev, false);
 
 	if (!err) {
 		conf->scribble_disks = new_disks;
 		conf->scribble_sectors = new_sectors;
@@ -8551,8 +8551,8 @@ static int raid5_start_reshape(struct mddev *mddev)
 	 * the reshape wasn't running - like Discard or Read - have
 	 * completed.
 	 */
-	mddev_suspend(mddev);
-	mddev_resume(mddev);
+	raid5_quiesce(mddev, true);
+	raid5_quiesce(mddev, false);
 
 	/* Add some new drives, as many as will fit.
 	 * We know there are enough to make the newly sized array work.
...