Commit e43111f5 authored by Stephen Boyd, committed by Bjorn Andersson

soc: qcom: rpmh-rsc: Ensure irqs aren't disabled by rpmh_rsc_send_data() callers

Dan pointed out that Smatch is concerned about this code because it uses
spin_lock_irqsave() and then calls wait_event_lock_irq() which enables
irqs before going to sleep. The comment above the function says it
should be called with interrupts enabled, but we simply hope that's true
without really confirming that. Let's add a might_sleep() here to
confirm that interrupts and preemption aren't disabled. Once we do that,
we can change the lock to be non-saving, spin_lock_irq(), to clarify
that we don't expect irqs to be disabled. If irqs are disabled by
callers they're going to be enabled anyway in the wait_event_lock_irq()
call which would be bad.

This should make Smatch happier and find bad callers faster with the
might_sleep(). We can drop the WARN_ON() in the caller because we have
the might_sleep() now, simplifying the code.
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/r/911181ed-c430-4592-ad26-4dc948834e08@moroto.mountain
Fixes: 2bc20f3c ("soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free")
Cc: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20240509184129.3924422-1-swboyd@chromium.org
Signed-off-by: Bjorn Andersson <andersson@kernel.org>
parent 0780c836
@@ -646,13 +646,14 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
 {
 	struct tcs_group *tcs;
 	int tcs_id;
-	unsigned long flags;
+
+	might_sleep();
 
 	tcs = get_tcs_for_msg(drv, msg);
 	if (IS_ERR(tcs))
 		return PTR_ERR(tcs);
 
-	spin_lock_irqsave(&drv->lock, flags);
+	spin_lock_irq(&drv->lock);
 	/* Wait forever for a free tcs. It better be there eventually! */
 	wait_event_lock_irq(drv->tcs_wait,
@@ -670,7 +671,7 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
 		write_tcs_reg_sync(drv, drv->regs[RSC_DRV_CMD_ENABLE], tcs_id, 0);
 		enable_tcs_irq(drv, tcs_id, true);
 	}
-	spin_unlock_irqrestore(&drv->lock, flags);
+	spin_unlock_irq(&drv->lock);
 
 	/*
	 * These two can be done after the lock is released because:
...@@ -183,7 +183,6 @@ static int __rpmh_write(const struct device *dev, enum rpmh_state state, ...@@ -183,7 +183,6 @@ static int __rpmh_write(const struct device *dev, enum rpmh_state state,
} }
if (state == RPMH_ACTIVE_ONLY_STATE) { if (state == RPMH_ACTIVE_ONLY_STATE) {
WARN_ON(irqs_disabled());
ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg); ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msg->msg);
} else { } else {
/* Clean up our call by spoofing tx_done */ /* Clean up our call by spoofing tx_done */
......