Commit 555701a4 authored by Douglas Anderson, committed by Bjorn Andersson

soc: qcom: rpmh-rsc: Simplify locking by eliminating the per-TCS lock

The rpmh-rsc code had both a driver-level lock (sometimes referred to
in comments as drv->lock) and a per-TCS lock.  The idea was that there
would be times when you could get by with locking just a TCS lock and
therefore other RPMH users wouldn't be blocked.

The above didn't work out so well.

Looking at tcs_write(), the bigger drv->lock was held for most of the
function anyway.  Only the __tcs_buffer_write() and
__tcs_set_trigger() calls were made without holding drv->lock.  It
turns out that in tcs_write() we don't need to hold drv->lock for
those calls anyway, even with the per-TCS lock gone.  From the newly
added comments in the code, this is because:
- We marked "tcs_in_use" under lock.
- Once "tcs_in_use" has been marked nobody else could be writing
  to these registers until the interrupt goes off.
- The interrupt can't go off until we trigger w/ the last line
  of __tcs_set_trigger().
Thus, from a tcs_write() point of view, the per-TCS lock was useless.
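
Condensed from the tcs_write() hunk in the diff below (error handling
around get_tcs_for_msg() and the completion-IRQ setup are trimmed, so
this is a sketch of the flow rather than the full function):

        spin_lock_irqsave(&drv->lock, flags);
        ret = check_for_req_inflight(drv, tcs, msg);
        if (ret)
                goto unlock;
        ret = find_free_tcs(tcs);
        if (ret < 0)
                goto unlock;
        tcs_id = ret;
        tcs->req[tcs_id - tcs->offset] = msg;
        set_bit(tcs_id, drv->tcs_in_use);      /* TCS claimed under drv->lock */
        spin_unlock_irqrestore(&drv->lock, flags);

        /* Safe without the lock: the TCS is marked in-use and its
         * completion interrupt can't fire before the trigger below. */
        __tcs_buffer_write(drv, tcs_id, 0, msg);
        __tcs_set_trigger(drv, tcs_id, true);
        return 0;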

Looking at rpmh_rsc_write_ctrl_data(), only the per-TCS lock was held.
It turns out, though, that this function already needs to be called
with the equivalent of the drv->lock held anyway (we either need to
hold drv->lock, as we will in a future patch, or we need to know no
other CPUs could be running, as happens today).  Specifically,
rpmh_rsc_write_ctrl_data() might be writing to a TCS that has been
borrowed for writing an active transaction, but it never checks this.
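
To make that contract concrete, a hypothetical caller would look
something like the sketch below.  flush_one_sleep_msg() is purely
illustrative (it is not a function in this driver); in today's code
the same guarantee comes from knowing we're on the last awake CPU
rather than from taking drv->lock:

static int flush_one_sleep_msg(struct rsc_drv *drv,
                               const struct tcs_request *msg)
{
        unsigned long flags;
        int ret;

        /*
         * Illustrative only: provide the "equivalent of drv->lock held"
         * that rpmh_rsc_write_ctrl_data() relies on but never checks.
         */
        spin_lock_irqsave(&drv->lock, flags);
        ret = rpmh_rsc_write_ctrl_data(drv, msg);
        spin_unlock_irqrestore(&drv->lock, flags);

        return ret;
}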

Let's eliminate this extra overhead and avoid possible AB-BA locking
headaches.
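
For reference, the AB-BA hazard being avoided is two code paths taking
the same pair of locks in opposite orders.  A minimal, hypothetical
sketch (lock_a/lock_b and both functions are stand-ins, not code from
this driver):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);         /* think: drv->lock */
static DEFINE_SPINLOCK(lock_b);         /* think: a per-TCS tcs->lock */

static void path_one(void)
{
        spin_lock(&lock_a);             /* takes A, then B */
        spin_lock(&lock_b);
        /* ... do work ... */
        spin_unlock(&lock_b);
        spin_unlock(&lock_a);
}

static void path_two(void)
{
        spin_lock(&lock_b);             /* takes B, then A: racing with
                                         * path_one(), each CPU ends up
                                         * waiting for the lock the other
                                         * holds and neither progresses.
                                         */
        spin_lock(&lock_a);
        /* ... do work ... */
        spin_unlock(&lock_a);
        spin_unlock(&lock_b);
}

With only drv->lock left there is no second lock to order against, so
this class of deadlock can't come back.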

Suggested-by: Maulik Shah <mkshah@codeaurora.org>
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Stephen Boyd <swboyd@chromium.org>
Link: https://lore.kernel.org/r/20200504104917.v6.4.Ib8dccfdb10bf6b1fb1d600ca1c21d9c0db1ef746@changeid
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
parent b5945214
@@ -28,7 +28,6 @@ struct rsc_drv;
* @offset: Start of the TCS group relative to the TCSes in the RSC.
* @num_tcs: Number of TCSes in this type.
* @ncpt: Number of commands in each TCS.
- * @lock: Lock for synchronizing this TCS writes.
* @req: Requests that are sent from the TCS; only used for ACTIVE_ONLY
* transfers (could be on a wake/sleep TCS if we are borrowing for
* an ACTIVE_ONLY transfer).
@@ -48,7 +47,6 @@ struct tcs_group {
u32 offset;
int num_tcs;
int ncpt;
- spinlock_t lock;
const struct tcs_request *req[MAX_TCS_PER_TYPE];
DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
};
@@ -103,14 +101,9 @@ struct rpmh_ctrlr {
* @tcs_in_use: S/W state of the TCS; only set for ACTIVE_ONLY
* transfers, but might show a sleep/wake TCS in use if
* it was borrowed for an active_only transfer. You
- * must hold both the lock in this struct and the
- * tcs_lock for the TCS in order to mark a TCS as
- * in-use, but you only need the lock in this structure
- * (aka the drv->lock) to mark one freed.
- * @lock: Synchronize state of the controller. If you will be
- * grabbing this lock and a tcs_lock at the same time,
- * grab the tcs_lock first so we always have a
- * consistent lock ordering.
+ * must hold the lock in this struct (AKA drv->lock) in
+ * order to update this.
+ * @lock: Synchronize state of the controller.
* @pm_lock: Synchronize during PM notifications.
* Used when solver mode is not present.
* @client: Handle to the DRV's client.
@@ -192,11 +192,7 @@ static void write_tcs_reg_sync(const struct rsc_drv *drv, int reg, int tcs_id,
*
* Returns true if nobody has claimed this TCS (by setting tcs_in_use).
*
- * Context: Must be called with the drv->lock held or the tcs_lock for the TCS
- * being tested. If only the tcs_lock is held then it is possible that
- * this function will return that a tcs is still busy when it has been
- * recently been freed but it will never return free when a TCS is
- * actually in use.
+ * Context: Must be called with the drv->lock held.
*
* Return: true if the given TCS is free.
*/
@@ -255,8 +251,6 @@ void rpmh_rsc_invalidate(struct rsc_drv *drv)
* This is normally pretty straightforward except if we are trying to send
* an ACTIVE_ONLY message but don't have any active_only TCSes.
*
- * Called without drv->lock held and with no tcs_lock locks held.
- *
* Return: A pointer to a tcs_group or an ERR_PTR.
*/
static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv,
@@ -594,24 +588,19 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
if (IS_ERR(tcs))
return PTR_ERR(tcs);
- spin_lock_irqsave(&tcs->lock, flags);
- spin_lock(&drv->lock);
+ spin_lock_irqsave(&drv->lock, flags);
/*
* The h/w does not like if we send a request to the same address,
* when one is already in-flight or being processed.
*/
ret = check_for_req_inflight(drv, tcs, msg);
- if (ret) {
- spin_unlock(&drv->lock);
- goto done_write;
- }
+ if (ret)
+ goto unlock;
- tcs_id = find_free_tcs(tcs);
- if (tcs_id < 0) {
- ret = tcs_id;
- spin_unlock(&drv->lock);
- goto done_write;
- }
+ ret = find_free_tcs(tcs);
+ if (ret < 0)
+ goto unlock;
+ tcs_id = ret;
tcs->req[tcs_id - tcs->offset] = msg;
set_bit(tcs_id, drv->tcs_in_use);
@@ -625,13 +614,22 @@ static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg)
write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0);
enable_tcs_irq(drv, tcs_id, true);
}
- spin_unlock(&drv->lock);
+ spin_unlock_irqrestore(&drv->lock, flags);
+ /*
+ * These two can be done after the lock is released because:
+ * - We marked "tcs_in_use" under lock.
+ * - Once "tcs_in_use" has been marked nobody else could be writing
+ * to these registers until the interrupt goes off.
+ * - The interrupt can't go off until we trigger w/ the last line
+ * of __tcs_set_trigger() below.
+ */
__tcs_buffer_write(drv, tcs_id, 0, msg);
__tcs_set_trigger(drv, tcs_id, true);
- done_write:
- spin_unlock_irqrestore(&tcs->lock, flags);
+ return 0;
+ unlock:
+ spin_unlock_irqrestore(&drv->lock, flags);
return ret;
}
@@ -686,8 +684,6 @@ int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
* Only for use on sleep/wake TCSes since those are the only ones we maintain
* tcs->slots for.
*
- * Must be called with the tcs_lock for the group held.
- *
* Return: -ENOMEM if there was no room, else 0.
*/
static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
@@ -722,25 +718,25 @@ static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
* This should only be called for for sleep/wake state, never active-only
* state.
*
+ * The caller must ensure that no other RPMH actions are happening and the
+ * controller is idle when this function is called since it runs lockless.
+ *
* Return: 0 if no error; else -error.
*/
int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
{
struct tcs_group *tcs;
int tcs_id = 0, cmd_id = 0;
- unsigned long flags;
int ret;
tcs = get_tcs_for_msg(drv, msg);
if (IS_ERR(tcs))
return PTR_ERR(tcs);
- spin_lock_irqsave(&tcs->lock, flags);
/* find the TCS id and the command in the TCS to write to */
ret = find_slots(tcs, msg, &tcs_id, &cmd_id);
if (!ret)
__tcs_buffer_write(drv, tcs_id, cmd_id, msg);
- spin_unlock_irqrestore(&tcs->lock, flags);
return ret;
}
@@ -769,8 +765,8 @@ static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
* should be checked for not busy, because we used wake TCSes for
* active requests in this case.
*
- * Since this is called from the last cpu, need not take drv or tcs
- * lock before checking tcs_is_free().
+ * Since this is called from the last cpu, need not take drv->lock
+ * before checking tcs_is_free().
*/
if (!tcs->num_tcs)
tcs = &drv->tcs[WAKE_TCS];
@@ -899,7 +895,6 @@ static int rpmh_probe_tcs_config(struct platform_device *pdev,
tcs->type = tcs_cfg[i].type;
tcs->num_tcs = tcs_cfg[i].n;
tcs->ncpt = ncpt;
- spin_lock_init(&tcs->lock);
if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
continue;