path: root/arch/powerpc/include/asm/svm.h
author    Anshuman Khandual <khandual@linux.vnet.ibm.com>  2019-08-19 23:13:18 -0300
committer Michael Ellerman <mpe@ellerman.id.au>  2019-08-30 09:55:40 +1000
commit    bd104e6db6f0ad124e507a9ecf1a468efe5697db (patch)
tree      a7add55ca321c68c3116de09762d567b9b7786f0 /arch/powerpc/include/asm/svm.h
parent    e311a92da18cbdd4972dab0cda88b1b8484b8fef (diff)
powerpc/pseries/svm: Use shared memory for LPPACA structures
LPPACA structures need to be shared with the host, so they must live in shared memory. Instead of allocating individual chunks of memory for each structure from memblock, a single contiguous chunk of memory is allocated and then converted into shared memory. Subsequent allocation requests are satisfied from that contiguous chunk, which is always shared memory, for all structures.

While the Debug Trace Log can use a kmem_cache constructor, LPPACAs are allocated very early in the boot process (before SLUB is available), so a simpler scheme is needed here.

Introduce the helper is_secure_guest(), which uses the S bit of the MSR to tell whether we're running as a secure guest.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20190820021326.6884-9-bauerman@linux.ibm.com
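A minimal sketch of the allocation scheme described above, not the code from this series: the chunk size, the variable names, and the share_with_hypervisor() helper are placeholders for illustration only. The idea is to take one contiguous, page-aligned chunk from memblock, convert it to shared memory exactly once, and then carve every LPPACA allocation out of it.

#include <linux/kernel.h>
#include <linux/memblock.h>
#include <asm/page.h>

/* Placeholder for whatever ultravisor call actually marks pages shared. */
extern void share_with_hypervisor(void *addr, unsigned long size);

#define SHARED_CHUNK_SIZE	(64 * 1024)	/* example size only */

static char *shared_chunk __initdata;
static unsigned long shared_chunk_used __initdata;

/* Hand out sub-allocations from one chunk that was shared exactly once. */
static void *__init alloc_from_shared_chunk(unsigned long size, unsigned long align)
{
	unsigned long off;

	if (!shared_chunk) {
		shared_chunk = memblock_alloc(SHARED_CHUNK_SIZE, PAGE_SIZE);
		if (!shared_chunk)
			return NULL;
		share_with_hypervisor(shared_chunk, SHARED_CHUNK_SIZE);
	}

	off = ALIGN(shared_chunk_used, align);
	if (off + size > SHARED_CHUNK_SIZE)
		return NULL;	/* chunk exhausted */

	shared_chunk_used = off + size;
	return shared_chunk + off;
}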
Diffstat (limited to 'arch/powerpc/include/asm/svm.h')
-rw-r--r--  arch/powerpc/include/asm/svm.h  26
1 file changed, 26 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/svm.h b/arch/powerpc/include/asm/svm.h
new file mode 100644
index 000000000000..2689d8d841f8
--- /dev/null
+++ b/arch/powerpc/include/asm/svm.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * SVM helper functions
+ *
+ * Copyright 2018 Anshuman Khandual, IBM Corporation.
+ */
+
+#ifndef _ASM_POWERPC_SVM_H
+#define _ASM_POWERPC_SVM_H
+
+#ifdef CONFIG_PPC_SVM
+
+static inline bool is_secure_guest(void)
+{
+ return mfmsr() & MSR_S;
+}
+
+#else /* CONFIG_PPC_SVM */
+
+static inline bool is_secure_guest(void)
+{
+ return false;
+}
+
+#endif /* CONFIG_PPC_SVM */
+#endif /* _ASM_POWERPC_SVM_H */
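A hedged usage sketch of is_secure_guest(): the caller and the share_with_hypervisor() helper below are hypothetical, shown only to illustrate how the predicate gates the conversion of a buffer to shared memory.

#include <asm/svm.h>

/* Hypothetical placeholder for the actual page-sharing mechanism. */
extern void share_with_hypervisor(void *addr, unsigned long size);

static void __init maybe_share_lppaca(void *lppaca, unsigned long size)
{
	/*
	 * Only a secure guest (MSR_S set) needs to convert the memory;
	 * a normal guest's memory is already visible to the hypervisor.
	 */
	if (is_secure_guest())
		share_with_hypervisor(lppaca, size);
}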