Commit d29a8134 authored by Michal Schmidt, committed by Tony Nguyen

ice: avoid the PTP hardware semaphore in gettimex64 path

The PTP hardware semaphore (PFTSYN_SEM) is used to synchronize
operations that program the PTP timers. The operations involve issuing
commands to the sideband queue. The E810 does not have a hardware
sideband queue, so the admin queue is used. The admin queue is slow.
I have observed delays of hundreds of milliseconds while waiting for
ice_sq_done.

When phc2sys reads the time from the ice PTP clock and PFTSYN_SEM is
held by a task performing one of the slow operations, ice_ptp_lock can
easily time out. phc2sys gets -EBUSY and the kernel prints:
  ice 0000:XX:YY.0: PTP failed to get time
These messages appear once every few seconds, causing log spam.

The E810 datasheet recommends an algorithm for reading the upper 64 bits
of the GLTSYN_TIME register. It matches what's implemented in
ice_ptp_read_src_clk_reg. It is robust against wrap-around, but not
necessarily against the concurrent setting of the register (with
GLTSYN_CMD_{INIT,ADJ}_TIME commands). Perhaps that's why
ice_ptp_gettimex64 also takes PFTSYN_SEM.
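
For context, this is roughly what that read sequence looks like; a
condensed sketch based on ice_ptp_read_src_clk_reg (the function name
below is made up; rd32 and the GLTSYN_TIME_* macros are the driver's):

  /* Wrap-safe read of the upper 64 bits of GLTSYN_TIME: read TIME_L,
   * then TIME_H, then TIME_L again.  If the low word went backwards,
   * it wrapped between the reads and TIME_H may already belong to the
   * new value, so redo the pair.
   */
  static u64 ice_read_gltsyn_time(struct ice_hw *hw, u8 tmr_idx)
  {
          u32 lo, hi, lo2;

          lo  = rd32(hw, GLTSYN_TIME_L(tmr_idx));
          hi  = rd32(hw, GLTSYN_TIME_H(tmr_idx));
          lo2 = rd32(hw, GLTSYN_TIME_L(tmr_idx));

          if (lo2 < lo) {
                  /* TIME_L rolled over; re-read both halves */
                  lo = rd32(hw, GLTSYN_TIME_L(tmr_idx));
                  hi = rd32(hw, GLTSYN_TIME_H(tmr_idx));
          }

          return ((u64)hi << 32) | lo;
  }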

The race with time setters can be prevented without relying on the PTP
hardware semaphore. Using the "ice_adapter" from the previous patch,
we can have a common spinlock for the PFs that share the clock hardware.
It will protect reads of and writes to the GLTSYN_TIME register.
The writing is performed indirectly, by the hardware, as a result of
the driver writing GLTSYN_CMD_SYNC in ice_ptp_exec_tmr_cmd. I wasn't
sure if the ice_flush there is enough to make sure GLTSYN_TIME has been
updated, but it works well in my testing.
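
The diff below relies on guard(spinlock) from <linux/cleanup.h>, which
takes the lock and releases it automatically when the enclosing scope
exits. A minimal, driver-independent sketch of the idiom (example_lock
and example_counter are invented for illustration):

  #include <linux/cleanup.h>
  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(example_lock);
  static u64 example_counter;

  /* The guard acquires example_lock here and drops it on every return
   * path, so no explicit spin_unlock() is needed.
   */
  static u64 example_read(void)
  {
          guard(spinlock)(&example_lock);
          return example_counter;
  }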

My test code can be seen here:
https://gitlab.com/mschmidt2/linux/-/commits/ice-ptp-host-side-lock-10
It consists of:
 - kernel threads reading the time in a busy loop and looking at the
   deltas between consecutive values, reporting new maxima;
 - a shell script that sets the time repeatedly;
 - a bpftrace probe to produce a histogram of the measured deltas.
Without the spinlock ptp_gltsyn_time_lock, it is easy to see tearing.
Deltas in the [2G, 4G) range appear in the histograms.
With the spinlock added, there is no tearing, and the biggest delta I
saw was in the [1M, 2M) range, that is, under 2 ms.
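
For illustration only, a rough sketch of what such a reader thread could
look like; this is not the test code from the URL above, the thread
function name is made up, and it assumes ice_ptp_read_src_clk_reg
(static to ice_ptp.c upstream) has been made callable:

  #include <linux/kthread.h>
  #include <linux/printk.h>
  #include <linux/sched.h>

  #include "ice.h"

  /* Busy-loop on the source clock and report new maxima of the delta
   * between consecutive readings.
   */
  static int ptp_delta_kthread(void *data)
  {
          struct ice_pf *pf = data;
          u64 prev = ice_ptp_read_src_clk_reg(pf, NULL);
          u64 max_delta = 0;

          while (!kthread_should_stop()) {
                  u64 now = ice_ptp_read_src_clk_reg(pf, NULL);

                  if (now - prev > max_delta) {
                          max_delta = now - prev;
                          pr_info("ice ptp test: new max delta %llu ns\n",
                                  max_delta);
                  }
                  prev = now;
                  cond_resched();
          }
          return 0;
  }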
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
parent 0e2bddf9
drivers/net/ethernet/intel/ice/ice_adapter.c
@@ -6,6 +6,7 @@
 #include <linux/mutex.h>
 #include <linux/pci.h>
 #include <linux/slab.h>
+#include <linux/spinlock.h>
 #include <linux/xarray.h>

 #include "ice_adapter.h"
@@ -35,6 +36,7 @@ static struct ice_adapter *ice_adapter_new(void)
 	if (!adapter)
 		return NULL;

+	spin_lock_init(&adapter->ptp_gltsyn_time_lock);
 	refcount_set(&adapter->refcount, 1);
 	return adapter;

drivers/net/ethernet/intel/ice/ice_adapter.h
@@ -4,15 +4,21 @@
 #ifndef _ICE_ADAPTER_H_
 #define _ICE_ADAPTER_H_

+#include <linux/spinlock_types.h>
 #include <linux/refcount_types.h>

 struct pci_dev;

 /**
  * struct ice_adapter - PCI adapter resources shared across PFs
+ * @ptp_gltsyn_time_lock: Spinlock protecting access to the GLTSYN_TIME
+ *                        register of the PTP clock.
  * @refcount: Reference count. struct ice_pf objects hold the references.
  */
 struct ice_adapter {
+	/* For access to the GLTSYN_TIME register */
+	spinlock_t ptp_gltsyn_time_lock;
+
 	refcount_t refcount;
 };

drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -374,6 +374,7 @@ ice_ptp_read_src_clk_reg(struct ice_pf *pf, struct ptp_system_timestamp *sts)
 	u8 tmr_idx;

 	tmr_idx = ice_get_ptp_src_clock_index(hw);
+	guard(spinlock)(&pf->adapter->ptp_gltsyn_time_lock);

 	/* Read the system timestamp pre PHC read */
 	ptp_read_system_prets(sts);
@@ -1925,15 +1926,8 @@ ice_ptp_gettimex64(struct ptp_clock_info *info, struct timespec64 *ts,
 		   struct ptp_system_timestamp *sts)
 {
 	struct ice_pf *pf = ptp_info_to_pf(info);
-	struct ice_hw *hw = &pf->hw;
-
-	if (!ice_ptp_lock(hw)) {
-		dev_err(ice_pf_to_dev(pf), "PTP failed to get time\n");
-		return -EBUSY;
-	}

 	ice_ptp_read_time(pf, ts, sts);
-	ice_ptp_unlock(hw);

 	return 0;
 }

drivers/net/ethernet/intel/ice/ice_ptp_hw.c
@@ -274,6 +274,9 @@ void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
  */
 static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw)
 {
+	struct ice_pf *pf = container_of(hw, struct ice_pf, hw);
+
+	guard(spinlock)(&pf->adapter->ptp_gltsyn_time_lock);
 	wr32(hw, GLTSYN_CMD_SYNC, SYNC_EXEC_CMD);
 	ice_flush(hw);
 }