author     Christophe Leroy <christophe.leroy@c-s.fr>    2018-01-12 13:45:31 +0100
committer  Michael Ellerman <mpe@ellerman.id.au>         2018-01-16 23:47:14 +1100
commit     de0f93873937e999fadaba011d368bc042af37b2 (patch)
tree       f39a5d9ef994400df0c0bea5c6cab9eafd7567c4 /arch/powerpc/include/asm/pte-common.h
parent     351750331fc1580cdb60d2efef04238f5faa89fe (diff)
powerpc/8xx: Remove _PAGE_USER and handle user access at PMD level
As the Linux kernel separates KERNEL and USER address spaces, there is no need to flag USER access at page level. Today, the 8xx TLB handlers already handle user access in the L1 entry through Access Protection Groups, so it is natural to move user access handling to PMD level once _PAGE_NA allows handling PAGE_NONE protection without _PAGE_USER.

In the meantime, as we free up one bit in the PTE, we can use it to include SPS (the page size flag) in the PTE and avoid handling it at every TLB miss, hence removing the special handling based on the compiled page size.

For _PAGE_EXEC, we rework it to use the PP PTE bits, avoiding the copy of the _PAGE_EXEC bit into the L1 entry. Unfortunately we are not able to put it at the correct location as it conflicts with the NA/RO/RW bits for data entries.

The upper bits of the APG in the L1 entry overlap with the PMD base address. In order to avoid having to filter that out, we set up all groups so that the upper bits can have any value.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
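The pte-common.h hunks below follow the header's existing fallback convention: any PTE or PMD flag a platform does not provide is defined to 0, so generic page-table code can test or OR-in the flag unconditionally. A minimal C sketch of that convention, not taken from the patch (the config switch, the bit value and my_pte_mkhuge() are hypothetical, chosen only to show the pattern):

    /* Platform header: define the flag only where the hardware has it. */
    #ifdef CONFIG_MY_PLATFORM_HAS_HUGE_BIT        /* hypothetical switch */
    #define _PAGE_HUGE 0x0800                     /* hypothetical bit value */
    #endif

    /* Common header fallback (the pattern this patch extends): flags the
     * platform lacks become 0, so generic code never needs an #ifdef. */
    #ifndef _PAGE_HUGE
    #define _PAGE_HUGE 0
    #endif

    /* Generic code can then OR the flag in unconditionally; on platforms
     * without the bit this is a no-op because the flag is 0. */
    static inline unsigned long my_pte_mkhuge(unsigned long pte_val)
    {
            return pte_val | _PAGE_HUGE;
    }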
Diffstat (limited to 'arch/powerpc/include/asm/pte-common.h')
-rw-r--r--  arch/powerpc/include/asm/pte-common.h  6
1 file changed, 6 insertions(+), 0 deletions(-)
diff --git a/arch/powerpc/include/asm/pte-common.h b/arch/powerpc/include/asm/pte-common.h
index 426a902816c5..c4a72c7a8c83 100644
--- a/arch/powerpc/include/asm/pte-common.h
+++ b/arch/powerpc/include/asm/pte-common.h
@@ -53,6 +53,9 @@
 #ifndef _PAGE_NA
 #define _PAGE_NA 0
 #endif
+#ifndef _PAGE_HUGE
+#define _PAGE_HUGE 0
+#endif
 
 #ifndef _PMD_PRESENT_MASK
 #define _PMD_PRESENT_MASK _PMD_PRESENT
@@ -61,6 +64,9 @@
 #define _PMD_SIZE 0
 #define PMD_PAGE_SIZE(pmd) bad_call_to_PMD_PAGE_SIZE()
 #endif
+#ifndef _PMD_USER
+#define _PMD_USER 0
+#endif
 #ifndef _PAGE_KERNEL_RO
 #define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_RO)
 #endif
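The second hunk supplies the same zero fallback for _PMD_USER, the flag that lets user accessibility be recorded once per PMD entry instead of once per PTE. A rough, self-contained sketch of that idea under assumed names and bit values (the MY_PMD_* constants and my_pmd_populate_* helpers are illustrative, not the actual 8xx definitions or call sites):

    /* Illustrative values only; the real ones live in the platform headers. */
    #define MY_PMD_PRESENT 0x0001
    #define MY_PMD_USER    0x0020

    /* Kernel page tables: present, no user access granted via this PMD. */
    static inline void my_pmd_populate_kernel(unsigned long *pmdp,
                                              unsigned long pte_table_pa)
    {
            *pmdp = pte_table_pa | MY_PMD_PRESENT;
    }

    /* User page tables: user access is granted once here, at PMD level,
     * instead of being repeated in every PTE via _PAGE_USER. */
    static inline void my_pmd_populate_user(unsigned long *pmdp,
                                            unsigned long pte_table_pa)
    {
            *pmdp = pte_table_pa | MY_PMD_PRESENT | MY_PMD_USER;
    }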