path: root/arch/arc/mm/tlb.c
Age | Commit message | Author
2023-12-08 | ARC: mm: retire support for aliasing VIPT D$ | Vineet Gupta
Legacy ARC700 processors (the first generation of MMU-enabled ARC cores) had VIPT caches which could be configured such that they could alias. Corresponding support in the kernel (with all the obnoxious cache flush overhead) was added to the ARC port 10 years ago to support one silicon. That is long bygone and we can let it RIP.

Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-09-04 | Merge tag 'arc-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc | Linus Torvalds
Pull ARC updates from Vineet Gupta:

 - fixes for -Wmissing-prototype warnings
 - missing compiler barrier in relaxed atomics
 - some uaccess simplification, declutter
 - removal of massive global struct cpuinfo_arc from bootlog code
 - __switch_to consolidation (removal of inline asm variant)
 - use GP to cache task pointer (vs. r25)
 - misc rework of entry code

* tag 'arc-6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc: (24 commits)
  ARC: boot log: fix warning
  arc: Explicitly include correct DT includes
  ARC: pt_regs: create separate type for ecr
  ARCv2: entry: rearrange pt_regs slightly
  ARC: entry: replace 8 byte ADD.ne with 4 byte ADD2.ne
  ARC: entry: replace 8 byte OR with 4 byte BSET
  ARC: entry: Add more common chores to EXCEPTION_PROLOGUE
  ARC: entry: EV_MachineCheck don't re-read ECR
  ARC: entry: ARCompact EV_ProtV to use r10 directly
  ARC: entry: rework (non-functional)
  ARC: __switch_to: move ksp to thread_info from thread_struct
  ARC: __switch_to: asm with dwarf ops (vs. inline asm)
  ARC: kernel stack: INIT_THREAD need not setup @init_stack in @ksp
  ARC: entry: use gp to cache task pointer (vs. r25)
  ARC: boot log: eliminate struct cpuinfo_arc #4: boot log per ISA
  ARC: boot log: eliminate struct cpuinfo_arc #3: don't export
  ARC: boot log: eliminate struct cpuinfo_arc #2: cache
  ARC: boot log: eliminate struct cpuinfo_arc #1: mm
  ARCv2: memset: don't prefetch for len == 0 which happens a lot
  ARC: uaccess: elide unaligned handling if hardware supports
  ...
2023-08-24 | arc: implement the new page table range API | Matthew Wilcox (Oracle)
Add PFN_PTE_SHIFT, update_mmu_cache_range(), flush_dcache_folio() and flush_icache_pages(). Change the PG_dc_clean flag from being per-page to per-folio (which means it cannot always be set as we don't know that all pages in this folio were cleaned). Enhance the internal flush routines to take the number of pages to flush.

Link: https://lkml.kernel.org/r/20230802151406.3735276-9-willy@infradead.org
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>
Cc: Vineet Gupta <vgupta@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
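For context, a minimal sketch of the two arch-side pieces the range API relies on, following the generic pattern of the folio conversion series (illustrative, not the verbatim ARC hunk):

    /* PFN_PTE_SHIFT lets generic code step a PTE to the next page frame */
    static inline pte_t pte_next_pfn(pte_t pte)
    {
            return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
    }

    /* ...and the legacy per-page hook becomes a 1-page range call */
    #define update_mmu_cache(vma, addr, ptep) \
            update_mmu_cache_range(NULL, vma, addr, ptep, 1)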
2023-08-17 | ARC: boot log: eliminate struct cpuinfo_arc #4: boot log per ISA | Vineet Gupta
- boot log now clearly per ISA
- global struct cpuinfo_arc[] eliminated
- local struct arcinfo kept for passing info between functions

Tested-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202308162101.Ve5jBg80-lkp@intel.com
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-17 | ARC: boot log: eliminate struct cpuinfo_arc #1: mm | Vineet Gupta
This is the first step in eliminating struct cpuinfo_arc[NR_CPUS].

Back when we had just the ARCompact ISA, the idea was to read/bit-fiddle the BCRs once and cache the decoded information in a global struct, ready to use. With ARCv2 it was modified to contain abstract / ISA-agnostic information. However with ARCv3 there's too much disparity to abstract into common structures. So drop the entire decode-once-and-store paradigm. After all, there are only 2 users of this machinery anyway: boot printing and cat /proc/cpuinfo. Neither is performance critical enough to warrant locking away resident memory per cpu.

This patch is the first step in that direction:
- decouples struct cpuinfo_arc_mmu from global struct cpuinfo_arc
- mmu code still has a trimmed down static version of struct cpuinfo_arc_mmu to cache information needed in performance critical code such as tlb flush routines
- folds read_decode_mmu_bcr() into arc_mmu_mumbojumbo()
- setup_processor() directly calls arc_mmu_init() and not via arc_cpu_init()

Tested-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202308151213.qKZPMiyz-lkp@intel.com/
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2023-08-13 | ARC: -Wmissing-prototype warning fixes | Vineet Gupta
Arnd reported [1] new compiler warnings due to -Wmissing-prototype. These are for non-static functions mostly used in asm code, hence never declared anywhere. Fix this by adding the prototypes.

[1] https://lore.kernel.org/lkml/20230810141947.1236730-1-arnd@kernel.org

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
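A representative shape of such a fix (illustrative, not the exact hunk): a function called only from assembly still needs a C prototype in scope to satisfy -Wmissing-prototypes, e.g.

    /* e.g. in arch/arc/include/asm/mmu.h (placement is illustrative) */
    void do_tlb_overlap_fault(unsigned long cause, unsigned long address,
                              struct pt_regs *regs);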
2021-08-26 | ARC: mm: support 3 levels of page tables | Vineet Gupta
ARCv2 MMU is software walked and Linux implements 2 levels of paging: pgd/pte. Forthcoming hw will have multiple levels, so this change preps mm code for the same. It is also fun to try multiple levels even on soft-walked code to ensure generic mm code is robust enough to handle it.

overview
________
2 levels {pgd, pte}: pmd is folded but pmd_* macros are valid and operate on pgd
3 levels {pgd, pmd, pte}:
  - pud is folded and pud_* macros point to pgd
  - pmd_* macros operate on actual pmd

code changes
____________
1. #include <asm-generic/pgtable-nopud.h>
2. Define CONFIG_PGTABLE_LEVELS 3
3a. Define PMD_SHIFT, PMD_SIZE, PMD_MASK, pmd_t
3b. Define pmd_val() which actually deals with pmd (pmd_offset(), pmd_index() are provided by generic code)
3c. pmd_alloc_one()/pmd_free() also provided by generic code (pmd_populate/pmd_free already exist)
4a. Define pud_none(), pud_bad() macros based on generic pud_val() which internally pertains to pgd now
4b. Define pud_populate() to just set up the pgd

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
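A minimal sketch of steps 1 and 4b under the nopud folding (close to the upstream shape, shown for illustration): with the pud folded onto the pgd, "populating" a pud with a pmd table is just a store into what is really the pgd slot.

    #include <asm-generic/pgtable-nopud.h>  /* folds pud_* onto pgd */

    /* the pmd table's address lands in the underlying pgd entry */
    static inline void pud_populate(struct mm_struct *mm, pud_t *pudp,
                                    pmd_t *pmdp)
    {
            set_pud(pudp, __pud((unsigned long)pmdp));
    }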
2021-08-26 | ARC: mm: switch pgtable_t back to struct page * | Vineet Gupta
So far ARC pgtable_t has not been struct page based, to avoid the extra page_address() calls involved. However the differences are down to noise and get in the way of using generic code, hence this patch. This also allows us to reuse the generic THP deposit/withdraw code.

There's some additional consideration for PGDIR_SHIFT in the 4K page config. Now, due to page tables being only PAGE_SIZE deep, the address split can't really be arbitrary.

Tested-by: kernel test robot <lkp@intel.com>
Suggested-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: move MMU specific bits out of ASID allocator | Vineet Gupta
And while at it, rewrite the commentary on the ASID allocator.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: Fixes to allow STRICT_MM_TYPECHECKS | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: remove tlb paranoid code | Vineet Gupta
This was used back in the ARC700 days when the ASID allocator was fragile. It hasn't been needed in the last 5 years.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-08-24 | ARC: mm: use SCRATCH_DATA0 register for caching pgdir in ARCv2 only | Vineet Gupta
The MMU SCRATCH_DATA0 register is intended to cache the task pgd. However in the ARC700 SMP port it has to be repurposed for re-entrant interrupt handling, while the UP port doesn't need that. We currently handle these use-cases using a fabricated #define, which has the usual issues of dependency nesting and obvious ugliness.

So clean this up: for ARC700 don't use it to cache the pgd (even in UP) and do the opposite for ARCv2. And while here, switch to the canonical pgd_offset().

Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@kernel.org>
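After the cleanup, the policy reduces to a single ISA check (a sketch, assuming the upstream register name and a helper along these lines):

    /* sketch: pgd caching keys off the ISA, not a fabricated #define */
    static inline void mmu_setup_pgd(struct mm_struct *mm, void *pgd)
    {
    #ifdef CONFIG_ISA_ARCV2
            /* ARCv2: scratch reg is free, so cache the pgd for TLB refill */
            write_aux_reg(ARC_REG_SCRATCH_DATA0, (unsigned long)pgd);
    #endif
            /* ARC700: reg is reserved for re-entrant interrupt handling;
             * the TLB refill path reads the pgd from the mm struct */
    }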
2021-08-24 | ARC: retire MMUv1 and MMUv2 support | Vineet Gupta
There's no known/active customer using them with the latest kernels anyway. Removal helps clean up the code and removes the hack for MMU_VER to MMU_V[3-4] conversion.

Signed-off-by: Vineet Gupta <vgupta@kernel.org>
2021-05-10 | ARC: mm: PAE: use 40-bit physical page mask | Vladimir Isaev
32-bit PAGE_MASK cannot be used as a mask for physical addresses when PAE is enabled. PAGE_MASK_PHYS must be used for physical addresses instead of PAGE_MASK.

Without this, init gets a SIGSEGV if pte_modify() was called:

| potentially unexpected fatal signal 11.
| Path: /bin/busybox
| CPU: 0 PID: 1 Comm: init Not tainted 5.12.0-rc5-00003-g1e43c377a79f-dirty
| Insn could not be fetched
| @No matching VMA found
| ECR: 0x00040000 EFA: 0x00000000 ERET: 0x00000000
| STAT: 0x80080082 [IE U ] BTA: 0x00000000
| SP: 0x5f9ffe44 FP: 0x00000000 BLK: 0xaf3d4
| LPS: 0x000d093e LPE: 0x000d0950 LPC: 0x00000000
| r00: 0x00000002 r01: 0x5f9fff14 r02: 0x5f9fff20
| ...
| Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b

Signed-off-by: Vladimir Isaev <isaev@synopsys.com>
Reported-by: kernel test robot <lkp@intel.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: stable@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
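The fix essentially amounts to a mask wide enough for the 40-bit physical space; a sketch along the lines of the upstream definitions:

    /* arch/arc/include/asm/page.h (sketch) */
    #ifdef CONFIG_ARC_HAS_PAE40
    #define MAX_POSSIBLE_PHYSMEM_BITS  40
    #define PAGE_MASK_PHYS             (0xff00000000ull | PAGE_MASK)
    #else
    #define MAX_POSSIBLE_PHYSMEM_BITS  32
    #define PAGE_MASK_PHYS             PAGE_MASK
    #endif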
2020-11-17 | ARC: mm: fix spelling mistakes | Flavio Suligoi
Signed-off-by: Flavio Suligoi <f.suligoi@asem.it>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-10-28 | ARC: mm: tlb flush optim: elide redundant uTLB invalidates for MMUv3 | Vineet Gupta
For MMUv3 (and prior) the flush_tlb_{range,mm,page} APIs use the MMU TLBWrite cmd, which already nukes the entire uTLB, so there is NO need for the additional IVUTLB cmd from utlb_invalidate() - hence this patch.

local_flush_tlb_all() is special since it uses the weaker TLBWriteNI cmd (prev commit) to shoot down the JTLB, hence we retain the explicit uTLB flush there.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-10-28 | ARC: mm: tlb flush optim: elide repeated uTLB invalidate in loop | Vineet Gupta
The unconditional full TLB flush (on say ASID rollover) iterates over each entry and uses TLBWrite to zero it out. TLBWrite by design also invalidates the uTLBs, thus we end up invalidating them as many times as the number of entries (512 or 1K).

Optimize this by using the weaker TLBWriteNI cmd in the loop, which doesn't tinker with the uTLBs, and a single explicit IVUTLB outside the loop to invalidate them all once.

And given the optimization, the IVUTLB is now needed on MMUv4 too, where the uTLBs and JTLBs are otherwise kept coherent given the TLBInsertEntry / TLBDeleteEntry commands.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
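In code, the described optimization looks roughly like this (a sketch using the ARC MMU aux register / command names; loop details simplified):

    /* zero the PD regs once; each TLBWriteNI then stomps one entry */
    write_aux_reg(ARC_REG_TLBPD0, 0);
    write_aux_reg(ARC_REG_TLBPD1, 0);

    for (entry = 0; entry < num_tlb; entry++) {
            /* TLBWriteNI: write JTLB entry, No (uTLB) Invalidate */
            write_aux_reg(ARC_REG_TLBINDEX, entry);
            write_aux_reg(ARC_REG_TLBCOMMAND, TLBWriteNI);
    }

    /* one explicit uTLB invalidate replaces 512/1K implicit ones */
    write_aux_reg(ARC_REG_TLBCOMMAND, TLBIVUTLB);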
2019-10-28 | ARCv2: mm: TLB Miss optim: SMP builds can cache pgd pointer in mmu scratch reg | Vineet Gupta
ARC700 exception (and intr) handling didn't have auto stack switching, and thus had to rely on stashing a reg temporarily (to free it up) at a known place in memory, allowing the low level stack switching to be coded up. This however was not re-entrant in SMP, which thus had to repurpose the per-cpu MMU SCRATCH DATA register otherwise used to "cache" the task pgd pointer (vs. reading it from the mm struct).

The newer HS cores do have auto stack switching, and thus even SMP builds can use the MMU SCRATCH reg as originally intended. This patch limits the restriction to ARC700 SMP builds only.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

  this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation

Extracted by the scancode license scanner, the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-20 | ARC: fix build warnings | Vineet Gupta
| arch/arc/mm/tlb.c:914:2: warning: variable length array 'pd0' is used [-Wvla]
| arch/arc/include/asm/cmpxchg.h:95:29: warning: value computed is not used [-Wunused-value]

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
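The usual shape of such a -Wvla fix (illustrative, not the exact hunk) is to bound the on-stack array by the architectural maximum instead of a runtime value:

    /* before: unsigned long pd0[ways];  -- VLA, warns under -Wvla */
    unsigned long pd0[4];  /* illustrative cap: max ways the MMU supports */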
2017-11-06 | ARCv2: Accommodate HS48 MMUv5 by relaxing MMU ver checking | Vineet Gupta
HS48 cpus will have a new MMUv5, although Linux is currently not explicitly supporting the newer features (so it remains at V4). The existing software/hardware version check is very tight and causes a boot abort. Given that the MMUv5 hardware is backwards compatible, relax the boot check to allow the current kernel support level to work with the new hardware.

Also while at it, move the ancient MMU related code under ARCompact builds, as the baseline MMU for HS cpus is v4.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
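The relaxed check might look like this sketch (the is_isa_*() helpers are real ARC idioms; the panic helper is hypothetical):

    /* ARCompact baselines are frozen: demand an exact MMU version match.
     * ARCv2/HS: newer MMUs (v5+) are backwards compatible, so accept >= */
    if (is_isa_arcompact() && mmu->ver != CONFIG_ARC_MMU_VER)
            mmu_version_panic();  /* hypothetical helper */
    else if (is_isa_arcv2() && mmu->ver < CONFIG_ARC_MMU_VER)
            mmu_version_panic();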
2017-09-01 | ARC: Re-enable MMU upon Machine Check exception | Jose Abreu
I recently came upon a scenario where I would get a double fault machine check exception triggered by a kernel module. However the ensuing crash stacktrace (ksym lookup) was not working correctly.

Turns out that machine check auto-disables the MMU, while modules are allocated in kernel vaddr space.

This patch re-enables the MMU before starting to print the stacktrace, making stacktracing of modules work upon a fatal exception.

Cc: stable@kernel.org
Signed-off-by: Jose Abreu <joabreu@synopsys.com>
Reviewed-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
[vgupta: moved code into low level handler to avoid doing it in 2 places]
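Conceptually, the handler needs to do the following before walking the stack (a C rendering of what the low level code achieves; on ARC the MMU enable bit lives in the PID aux register):

    /* machine check auto-disabled the MMU; turn it back on so module
     * addresses (kernel vaddr space) can be translated for the trace */
    write_aux_reg(ARC_REG_PID, MMU_ENABLE | read_aux_reg(ARC_REG_PID));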
2017-08-28 | ARC: set boot print log level to PR_INFO | Noam Camus
Some of the boot printing code had printk() w/o an explicit log level. This patch introduces consistency, allowing platforms to switch to less verbose console logging using the cmdline.

NPS400 with 4K CPUs needs to avoid the cpu info printing for faster bootup.

Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2017-08-04 | ARCv2: PAE40: set MSB even if !CONFIG_ARC_HAS_PAE40 but PAE exists in SoC | Vineet Gupta
A PAE40 configuration in hardware extends some of the address registers for TLB/cache ops to 2 words. So far the kernel was NOT setting the higher word if the feature was not enabled in software, which is wrong: those need to be set to 0 in such a case.

Normally this would be done in the cache flush / tlb ops, however since these registers only exist conditionally, this would have to be conditional on a flag set at boot, which is expensive/ugly - especially for the more common case of PAE existing but not in use. Optimize that by zeroing them once at boot - nobody will write to them afterwards.

Cc: stable@vger.kernel.org #4.4+
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
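A sketch of the one-time boot fixup (register name per the ARC PAE40 programming model; the exact condition is illustrative):

    /* SoC has PAE40 but the kernel was built without CONFIG_ARC_HAS_PAE40:
     * zero the high word once so TLB ops never see stale MSBs */
    if (mmu->pae && !IS_ENABLED(CONFIG_ARC_HAS_PAE40))
            write_aux_reg(ARC_REG_TLBPD1HI, 0);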
2017-03-02 | sched/headers: Prepare to remove the <linux/mm_types.h> dependency from <linux/sched.h> | Ingo Molnar
Update code that relied on sched.h including various MM types for them. This will allow us to remove the <linux/mm_types.h> include from <linux/sched.h>.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-10-28 | ARC: boot log: remove awkward space comma from MMU line | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-09 | ARC: [plat-eznps] Use dedicated user stack top | Noam Camus
NPS uses a special mapping right below TASK_SIZE. Hence we need to lower STACK_TOP so that the user stack won't overlap the NPS special mapping.

Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-09 | ARC: Make vmalloc size configurable | Noam Camus
On ARC, the lower 2G of address space is translated and used for:
  - user vaddr space (regions 0 to 5)
  - unused kernel-user gutter (region 6)
  - kernel vaddr space (region 7)
where each region simply represents 256MB of address space.

The kernel vaddr space of 256MB is used to implement vmalloc and modules. So far this was enough, but not on the EZchip system with 4K CPUs (given that the per-cpu mechanism uses vmalloc for allocating chunks). So allow VMALLOC_SIZE to be configurable, by expanding down into the unused kernel-user gutter region, whose default 256M was excessive anyway.

Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use "1UL" when called from assembly code in mm/tlbex.S.

Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: rewrote changelog, debugged bootup crash due to int vs. hex]
Acked-by: Vineet Gupta <vgupta@synopsys.com>
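The _BITUL() fix sketched (the macro drops the UL suffix when expanded in assembly, so one definition serves both C and mm/tlbex.S):

    #include <linux/const.h>   /* _BITUL() */

    /* "1UL << PGDIR_SHIFT" doesn't assemble when pulled into mm/tlbex.S;
     * _BITUL() works in both C and asm contexts */
    #define PGDIR_SIZE  _BITUL(PGDIR_SHIFT)
    #define PGDIR_MASK  (~(PGDIR_SIZE - 1))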
2016-03-11 | ARC: Fix misspellings in comments. | Adam Buchbinder
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-16 | ARC: comments update | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-29 | ARC: mm: PAE40 support | Vineet Gupta
This is the first working implementation of 40-bit physical address extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 | ARC: mm: PAE40: switch to using phys_addr_t for physical addresses | Vineet Gupta
That way a single flip of phys_addr_t to 64-bit ensures that all places dealing with physical addresses get correct data.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28 | ARC: mm: Improve Duplicate PD Fault handler | Vineet Gupta
- Move the verbosity knob from .data to .bss by using inverted logic
- No need to read out the PD1 descriptor
- Clip the non-pfn bits of PD0 to avoid clipping inside the loop

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARC: boot log: decode more mmu config items | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARC: boot log: move helper macros to header for reuse | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARC: mm: compute TLB size as needed from ways * sets | Vineet Gupta
This frees up some bits to hold more high level info such as PAE being present, w/o increasing the size of the already bloated cpuinfo struct.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARCv2: mm: THP: flush_pmd_tlb_range make SMP safe | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization | Vineet Gupta
Implement the TLB flush routine to evict a specific Super TLB entry, vs. moving to a new ASID on every such flush.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARCv2: mm: THP: boot validation/reporting | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17 | ARCv2: mm: THP support | Vineet Gupta
MMUv4 in HS38x cores supports Super Pages, which are the basis for Linux THP support. Normal and Super pages can co-exist (of course not overlap) in the TLB, with a new bit "SZ" in the TLB page descriptor to distinguish between them. Super Page size is configurable in hardware (4K to 16M), but fixed once the RTL is built.

The exact THP size a Linux configuration will support is a function of:
  - MMU page size (typically 8K, RTL fixed)
  - software page walker address split between PGD:PTE:PFN (typically 11:8:13, but can be changed with 1 line)

So for the above defaults, the supported THP size is 8K * 256 = 2M.

The default page walker is 2 levels, PGD:PTE:PFN, which in the THP regime reduces to 1 level (as the PTE is folded into the PGD and canonically referred to as PMD). Thus THP PMD accessors are implemented in terms of PTE (just like sparc).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
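Since pmd and pte are the same width here, the THP pmd accessors can delegate to their pte counterparts; a sketch of the sparc-like pattern the message refers to:

    static inline pte_t pmd_pte(pmd_t pmd)
    {
            return __pte(pmd_val(pmd));
    }

    static inline pmd_t pte_pmd(pte_t pte)
    {
            return __pmd(pte_val(pte));
    }

    /* THP pmd ops are thin wrappers over the pte ones */
    #define pmd_write(pmd)      pte_write(pmd_pte(pmd))
    #define pmd_mkhuge(pmd)     pte_pmd(pte_mkhuge(pmd_pte(pmd)))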
2015-06-22 | ARCv2: MMUv4: TLB programming Model changes | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-06-19 | ARC: compress cpuinfo_arc_mmu (mainly save page size in KB) | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2014-10-13 | ARC: boot: cpu feature print enhancements | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 | ARC: [SMP] TLB flush | Vineet Gupta
- Add mm_cpumask setting (aggregating only, unlike some other arches) used to restrict the TLB flush cross-calling
- cross-calling versions of TLB flush routines (thanks to Noam)

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
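The cross-calling wrappers follow the standard on_each_cpu_mask() pattern (a sketch; mm_cpumask() restricts the IPIs to CPUs that have run the mm):

    void flush_tlb_mm(struct mm_struct *mm)
    {
            /* run the local flush on every CPU in mm's cpumask and wait */
            on_each_cpu_mask(mm_cpumask(mm),
                             (smp_call_func_t)local_flush_tlb_mm, mm, 1);
    }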
2013-11-06 | ARC: [SMP] ASID allocation | Vineet Gupta
- Track a per-CPU ASID counter
- mm-per-cpu ASID (multiple threads, or mm migrated around)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-11-06 | ARC: Fix bogus gcc warning and micro-optimise TLB iteration loop | Vineet Gupta
------------------>8----------------------
arch/arc/mm/tlb.c: In function 'do_tlb_overlap_fault':
arch/arc/mm/tlb.c:688:13: warning: array subscript is above array bounds [-Warray-bounds]
     (pd0[n] & PAGE_MASK)) {
             ^
------------------>8----------------------

While at it, remove the useless last iteration of the outer loop when reading a TLB SET for duplicate entries.

Suggested-by: Mischa Jonker <mjonker@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: [ASID] Track ASID allocation cycles/generations | Vineet Gupta
This helps remove the asid-to-mm reverse map.

While mm->context.id contains the ASID assigned to a process, our ASID allocator also used the asid_mm_map[] reverse map. In a new allocation cycle (mm->ASID >= @asid_cache), the round-robin ASID allocator used this to check if the new @asid_cache belonged to some mm2 (from a prev cycle). If so, it could locate that mm using the ASID reverse map, and mark that mm as having no allocated ASID, to force a refresh at the time of switch_mm().

However, for SMP, the reverse map has to be maintained per CPU, so it becomes 2-dimensional, hence we got rid of it.

With the reverse map gone, it is NOT possible to reach out to the current assignee. So we track the ASID allocation generation/cycle, and on every switch_mm(), check if the current generation of the CPU's ASID is the same as the mm's ASID; if not, it is refreshed. (Based loosely on the arch/sh implementation.)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
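The generation check reduces to a masked compare (a sketch; the mask names follow the upstream scheme of an 8-bit hardware ASID with the upper bits counting allocation cycles):

    #define MM_CTXT_ASID_MASK   0x000000ffUL         /* hw PID is 8 bits */
    #define MM_CTXT_CYCLE_MASK  (~MM_CTXT_ASID_MASK) /* generation count */

    /* at switch_mm(): the mm's ASID is stale iff it was handed out in an
     * older allocation cycle than the CPU's current one */
    if ((mm->context.asid ^ asid_cache) & MM_CTXT_CYCLE_MASK)
            get_new_mmu_context(mm);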
2013-08-30 | ARC: [ASID] get_new_mmu_context() to conditionally allocate new ASID | Vineet Gupta
ASID allocation changes/1

This patch does 2 things:

(1) get_new_mmu_context() NOW moves mm->ASID to a new value ONLY if it was from a prev allocation cycle/generation OR if the mm had no ASID allocated (vs. before, when it would unconditionally move to a new ASID). Callers desiring an unconditional update of the ASID, e.g. local_flush_tlb_mm() (for the parent's address space invalidation at fork), need to first force the parent to an unallocated ASID.

(2) get_new_mmu_context() always sets the MMU PID reg with the unchanged/new ASID value.

The gains are:
- consolidation of all ASID alloc logic into get_new_mmu_context()
- avoiding code duplication in switch_mm() for PID reg setting
- enables a future change to fold activate_mm() into switch_mm()

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: [ASID] Refactor the TLB paranoid debug code | Vineet Gupta
Asm code already has the SW and HW ASID values, so they can be passed to the printing routine.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2013-08-30 | ARC: No need to flush the TLB in early boot | Vineet Gupta
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>