author     Jason A. Donenfeld <Jason@zx2c4.com>   2020-05-06 15:33:04 -0600
committer  David S. Miller <davem@davemloft.net>  2020-05-06 20:03:47 -0700
commit     4005f5c3c9d006157ba716594e0d70c88a235c5e
tree       d275974f4516371281eb5d8dc1b508a02b2e242e
parent     b673e24aad36981f327a6570412ffa7754de8911
wireguard: send/receive: cond_resched() when processing worker ringbuffers
Users with pathological hardware reported CPU stalls on
CONFIG_PREEMPT_VOLUNTARY=y, because the ringbuffers would stay full,
meaning these workers would never terminate. That turned out not to be
okay on systems without forced preemption, which Sultan observed. This
commit adds a cond_resched() to the bottom of each loop iteration, so
that these workers don't hog the core. Note that we don't need this on
the napi poll worker, since that terminates after its budget is
expended.

Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com>
Reported-by: Wang Jian <larkwang@gmail.com>
Fixes: e7096c131e51 ("net: WireGuard secure network tunnel")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
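For illustration, here is a minimal sketch of the pattern the patch applies.
The worker name (example_skb_worker) and the helpers (example_ring_consume(),
example_handle_skb()) are hypothetical placeholders, not WireGuard functions;
only need_resched() and cond_resched() are the real kernel APIs, used exactly
as in the hunks below.

    /* Hypothetical worker illustrating the cond_resched() pattern. */
    #include <linux/sched.h>
    #include <linux/skbuff.h>
    #include <linux/workqueue.h>

    /* Placeholder helpers standing in for the real ring/packet code. */
    struct sk_buff *example_ring_consume(void);
    void example_handle_skb(struct sk_buff *skb);

    static void example_skb_worker(struct work_struct *work)
    {
    	struct sk_buff *skb;

    	/* Producers may keep refilling the ring, so this loop can run
    	 * for a long time on slow or pathological hardware.
    	 */
    	while ((skb = example_ring_consume()) != NULL) {
    		example_handle_skb(skb);

    		/* With CONFIG_PREEMPT_VOLUNTARY the kernel only reschedules
    		 * at explicit points, so yield at the bottom of each
    		 * iteration instead of monopolizing the CPU.
    		 */
    		if (need_resched())
    			cond_resched();
    	}
    }

Guarding with need_resched() presumably just skips the cond_resched() call in
the common case where no reschedule is pending; calling cond_resched()
unconditionally would behave the same, at the cost of an extra call per
iteration.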
-rw-r--r--  drivers/net/wireguard/receive.c  2
-rw-r--r--  drivers/net/wireguard/send.c     4
2 files changed, 6 insertions, 0 deletions
diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
index 267f202f1931..2566e13a292d 100644
--- a/drivers/net/wireguard/receive.c
+++ b/drivers/net/wireguard/receive.c
@@ -516,6 +516,8 @@ void wg_packet_decrypt_worker(struct work_struct *work)
 				&PACKET_CB(skb)->keypair->receiving)) ?
 				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
 		wg_queue_enqueue_per_peer_napi(skb, state);
+		if (need_resched())
+			cond_resched();
 	}
 }
diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
index 3e030d614df5..dc3079e17c7f 100644
--- a/drivers/net/wireguard/send.c
+++ b/drivers/net/wireguard/send.c
@@ -281,6 +281,8 @@ void wg_packet_tx_worker(struct work_struct *work)
 		wg_noise_keypair_put(keypair, false);
 		wg_peer_put(peer);
+		if (need_resched())
+			cond_resched();
 	}
 }
@@ -304,6 +306,8 @@ void wg_packet_encrypt_worker(struct work_struct *work)
 		}
 		wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
 					  state);
+		if (need_resched())
+			cond_resched();
 	}
 }