Commit 47549650 authored by Joe Lawrence, committed by David S. Miller

team: avoid race condition in scheduling delayed work

When team_notify_peers and team_mcast_rejoin are called, they both reset
their respective .count_pending atomic variable. Then when the actual
worker function is executed, the variable is atomically decremented.
This pattern introduces a potential race condition in which .count_pending
is driven below zero, after which the worker function keeps rescheduling
itself until .count_pending wraps back around to zero:

THREAD 1                           THREAD 2

========                           ========
team_notify_peers(teamX)
  atomic_set count_pending = 1
  schedule_delayed_work
                                   team_notify_peers(teamX)
                                   atomic_set count_pending = 1
team_notify_peers_work
  atomic_dec_and_test
    count_pending = 0
  (return)
                                   schedule_delayed_work
                                   team_notify_peers_work
                                   atomic_dec_and_test
                                     count_pending = -1
                                   schedule_delayed_work
                                   (repeat until count_pending = 0)

Instead of overwriting .count_pending with a new value, use atomic_add to
tack on the additional desired worker function invocations.
Signed-off-by: Joe Lawrence <joe.lawrence@stratus.com>
Acked-by: Jiri Pirko <jiri@resnulli.us>
Fixes: fc423ff0 ("team: add peer notification")
Fixes: 492b200e ("team: add support for sending multicast rejoins")
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 34a419d4
@@ -647,7 +647,7 @@ static void team_notify_peers(struct team *team)
 {
 	if (!team->notify_peers.count || !netif_running(team->dev))
 		return;
-	atomic_set(&team->notify_peers.count_pending, team->notify_peers.count);
+	atomic_add(team->notify_peers.count, &team->notify_peers.count_pending);
 	schedule_delayed_work(&team->notify_peers.dw, 0);
 }
@@ -687,7 +687,7 @@ static void team_mcast_rejoin(struct team *team)
 {
 	if (!team->mcast_rejoin.count || !netif_running(team->dev))
 		return;
-	atomic_set(&team->mcast_rejoin.count_pending, team->mcast_rejoin.count);
+	atomic_add(team->mcast_rejoin.count, &team->mcast_rejoin.count_pending);
 	schedule_delayed_work(&team->mcast_rejoin.dw, 0);
 }