author     Linus Torvalds <torvalds@linux-foundation.org>  2021-08-30 15:00:33 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2021-08-30 15:00:33 -0700
commit     0a096f240aa1992ddac65f8e704f7b0c0795fe1c (patch)
tree       04c64aca17b94b0862214e09784fc23b413df578 /include/linux/sched.h
parent     7d6e3fa87e732ec1e7761bf325c0907685c8571b (diff)
parent     b7fe54f6c2d437082dcbecfbd832f38edd9caaf4 (diff)
Merge tag 'x86-cpu-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cache flush updates from Thomas Gleixner:
 "A reworked version of the opt-in L1D flush mechanism.

  This is a stop gap for potential future speculation related hardware
  vulnerabilities and a mechanism for truly security paranoid
  applications.

  It allows a task to request that the L1D cache is flushed when the
  kernel switches to a different mm. This can be requested via prctl().

  Changes vs the previous versions:

   - Get rid of the software flush fallback

   - Make the handling consistent with other mitigations

   - Kill the task when it ends up on a SMT enabled core which defeats
     the purpose of L1D flushing obviously"

* tag 'x86-cpu-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Documentation: Add L1D flushing Documentation
  x86, prctl: Hook L1D flushing in via prctl
  x86/mm: Prepare for opt-in based L1D flush in switch_mm()
  x86/process: Make room for TIF_SPEC_L1D_FLUSH
  sched: Add task_work callback for paranoid L1D flush
  x86/mm: Refactor cond_ibpb() to support other use cases
  x86/smp: Add a per-cpu view of SMT state
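For reference, the prctl() opt-in described above boils down to a single
speculation-control call from the application. The snippet below is a
hypothetical userspace sketch, not part of this merge: it assumes the
PR_SPEC_L1D_FLUSH control introduced by the "x86, prctl: Hook L1D flushing
in via prctl" commit and a kernel on which the opt-in mechanism was enabled
at boot (the exact command line switch is covered by the documentation
patch). The fallback #defines are only needed if the installed uapi headers
predate this series.

/* Hypothetical userspace example: opt this task in to L1D flushing. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>

#ifndef PR_SET_SPECULATION_CTRL
# define PR_GET_SPECULATION_CTRL	52
# define PR_SET_SPECULATION_CTRL	53
#endif
#ifndef PR_SPEC_ENABLE
# define PR_SPEC_ENABLE			(1UL << 1)
#endif
#ifndef PR_SPEC_L1D_FLUSH
# define PR_SPEC_L1D_FLUSH		2	/* assumed to match this series */
#endif

int main(void)
{
	/* Request an L1D flush whenever the kernel switches away from this mm. */
	if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH, PR_SPEC_ENABLE, 0, 0)) {
		fprintf(stderr, "L1D flush opt-in failed: %s\n", strerror(errno));
		return 1;
	}

	/* Query the control to confirm the flag stuck. */
	int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH, 0, 0, 0);
	printf("PR_SPEC_L1D_FLUSH state: 0x%x\n", state);

	return 0;
}

If the control is unavailable or the mitigation was not enabled at boot, the
set call is expected to fail with an error rather than silently do nothing,
in line with the "consistent with other mitigations" point above.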
Diffstat (limited to 'include/linux/sched.h')
-rw-r--r--  include/linux/sched.h  10
1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index af7179f8572c..1780260f237b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1474,6 +1474,16 @@ struct task_struct {
 	struct llist_head kretprobe_instances;
 #endif
 
+#ifdef CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH
+	/*
+	 * If L1D flush is supported on mm context switch
+	 * then we use this callback head to queue kill work
+	 * to kill tasks that are not running on SMT disabled
+	 * cores
+	 */
+	struct callback_head	l1d_flush_kill;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
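
As a closing illustration of what the new field is for, here is a hedged
sketch, paraphrased from the intent of the "sched: Add task_work callback
for paranoid L1D flush" and "x86/mm: Prepare for opt-in based L1D flush in
switch_mm()" changes rather than copied from them: when an opted-in task is
found running on an SMT-enabled core, the switch_mm() side can use
l1d_flush_kill to queue kill work through the generic task_work API.

/*
 * Sketch only; the function names are illustrative, not upstream API.
 * The queued callback runs via task_work in the context of the task
 * itself, right before it returns to user space, so force_sig() hits
 * the intended task.
 */
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/task_work.h>

#ifdef CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH
static void l1d_flush_force_sigbus(struct callback_head *ch)
{
	/* Running on an SMT sibling defeats the flush: terminate the task. */
	force_sig(SIGBUS);
}

static void l1d_flush_queue_kill(struct task_struct *next)
{
	init_task_work(&next->l1d_flush_kill, l1d_flush_force_sigbus);

	/* TWA_RESUME: run the callback when 'next' heads back to user mode. */
	task_work_add(next, &next->l1d_flush_kill, TWA_RESUME);
}
#endif /* CONFIG_ARCH_HAS_PARANOID_L1D_FLUSH */

Using task_work keeps the drastic action out of the context switch itself:
the SIGBUS is delivered only once the task is about to return to user mode,
in its own context.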