Age | Commit message | Author |
|
To clean up the mach-* directories, move the external declaration of
malta_dt_shim() to mips-boards/malta.h and remove malta-dtshim.h.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Remove unused header file asm/mach-malta/malta-pm.h.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Remove unused config option ALCHEMY_GPIOINT_AU1300 and related code.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Move HEART specific parts of mach-ip30/irq.h to asm/sgi/heart.h and IP30
specific parts to sgi-ip30/ip30-common.h.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Loongson-3's COP2 is its Multi-Media coprocessor, and it is disabled in
kernel mode by default. However, gslq/gssq (16-byte load/store
instructions) overload the instruction encodings of lwc2/swc2, so if we
want to use gslq/gssq for optimization in the kernel, we have to enable
COP2 usage in kernel mode.
Note that this patch only enables COP2 in the kernel, which means
ST0_CU2 is lost when a process goes to user space (using COP2 in user
space triggers an exception which then grabs COP2, similar to the FPU).
As a result, we need to modify the context switching code, because the
newly scheduled process probably doesn't have ST0_CU2 set in its
THREAD_STATUS.
For zboot, we prevent the toolchain from generating gslq/gssq.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
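A minimal sketch of the kernel-mode enablement described above, not the literal patch; set_c0_status() and ST0_CU2 are existing MIPS kernel interfaces, while the hook name is illustrative:

  /* Illustrative: enable COP2 for kernel code on Loongson-3. */
  static void loongson3_enable_cop2(void)
  {
  	/* ST0_CU2 in CP0 Status makes the lwc2/swc2 encodings (gslq/gssq) usable */
  	set_c0_status(ST0_CU2);
  }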
|
|
Some processors (such as Loongson-3) need CU2 enabled in kernel mode.
The current set/clear method loses Status.CU2 during context switching,
so use a save/restore method instead.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
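A minimal sketch of the save/restore idea; KSTK_STATUS(), ST0_CU2 and read/write_c0_status() are existing MIPS interfaces, the helper itself is illustrative and not the literal switch_to change:

  /* Illustrative helper: preserve Status.CU2 across a context switch. */
  static inline void cu2_save_restore(struct task_struct *prev,
  				    struct task_struct *next)
  {
  	unsigned long status = read_c0_status();

  	/* remember the outgoing task's CU2 state ... */
  	KSTK_STATUS(prev) = (KSTK_STATUS(prev) & ~ST0_CU2) | (status & ST0_CU2);
  	/* ... and restore the incoming task's CU2 state */
  	write_c0_status((status & ~ST0_CU2) | (KSTK_STATUS(next) & ST0_CU2));
  }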
|
|
Now that all the jz4740 platform code has been removed, and we're left
with only a Kconfig and the cpu-feature-overrides.h file, finalize the
cleanup process by renaming the jz4740 and include/mach-jz4740 folders
to ingenic and include/mach-ingenic.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Support for Ingenic SoCs is now provided by the arch/mips/generic/ code,
so all files in the arch/mips/jz4740/ folder can be dropped, except for the
Kconfig, and the cpu-feature-overrides.h header file.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
128 IRQs is not enough to support Ingenic SoCs.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Previously, in cpu_probe_ingenic(), c->writecombine was set to
_CACHE_UNCACHED_ACCELERATED, but this macro was defined differently when
CONFIG_MACH_INGENIC was set. This made it impossible to support multiple
CPUs.
Address this issue by setting c->writecombine to _CACHE_CACHABLE_WA
directly and removing the dependency on CONFIG_MACH_INGENIC.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
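A hedged sketch of the change described above; the function and field names are taken from the text, the surrounding signature is assumed:

  static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
  {
  	/* ... */
  	/* was: c->writecombine = _CACHE_UNCACHED_ACCELERATED;
  	 * whose value depended on CONFIG_MACH_INGENIC */
  	c->writecombine = _CACHE_CACHABLE_WA;	/* config-independent */
  }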
|
|
The modernized Loongson64 platform uses a hierarchical organization for
its interrupt controllers (INTCs), and all INTC nodes (not only leaf
nodes) need some IRQ
numbers. This means 280 (i.e., NR_IRQS_LEGACY + NR_MIPS_CPU_IRQS + 256)
is not enough to represent all interrupts, so let's increase NR_IRQS to
320 (NR_IRQS_LEGACY + NR_MIPS_CPU_IRQS + NR_MAX_CHAINED_IRQS + 256).
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
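The arithmetic spelled out as a sketch; the legacy and CPU IRQ counts are the usual 16 and 8, which makes NR_MAX_CHAINED_IRQS 40, but treat the exact values as assumptions:

  #define NR_IRQS_LEGACY		16
  #define NR_MIPS_CPU_IRQS	8
  #define NR_MAX_CHAINED_IRQS	40	/* chained (non-leaf) INTC nodes */

  /* old: 16 + 8 + 256 = 280; new: 16 + 8 + 40 + 256 = 320 */
  #define NR_IRQS (NR_IRQS_LEGACY + NR_MIPS_CPU_IRQS + NR_MAX_CHAINED_IRQS + 256)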
|
|
Remove some unused code.
Signed-off-by: Youling Tang <tangyouling@loongson.cn>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Rename the header guard of r4k-timer.h from __ASM_R4K_TYPES_H to
__ASM_R4K_TIMER_H so that it corresponds to the file name.
Signed-off-by: Wei Li <liwei391@huawei.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
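For illustration, the guard in r4k-timer.h after the rename:

  #ifndef __ASM_R4K_TIMER_H
  #define __ASM_R4K_TIMER_H
  /* ... declarations ... */
  #endif /* __ASM_R4K_TIMER_H */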
|
|
Lift the compat_s64 and compat_u64 definitions into common code, using the
COMPAT_FOR_U64_ALIGNMENT symbol for the x86 special case.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
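A hedged sketch of what the common definition looks like, assuming the symbol is spelled CONFIG_COMPAT_FOR_U64_ALIGNMENT in code and that the x86 special case is 4-byte alignment of 64-bit compat types:

  #ifdef CONFIG_COMPAT_FOR_U64_ALIGNMENT
  /* x86 compat: 64-bit types are only 4-byte aligned */
  typedef s64 __attribute__((aligned(4))) compat_s64;
  typedef u64 __attribute__((aligned(4))) compat_u64;
  #else
  typedef s64 compat_s64;
  typedef u64 compat_u64;
  #endif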
|
|
The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try
to improve the situation by renaming __phys_to_dma to
phys_to_dma_unencrypted, and not forcing architectures that want to
override phys_to_dma to actually provide __phys_to_dma.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
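A hedged sketch of how the pieces are expected to fit together after the rename; __sme_set() is the existing helper that applies the memory-encryption bit:

  static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
  {
  	/* apply the SME encryption bit on top of the plain translation */
  	return __sme_set(phys_to_dma_unencrypted(dev, paddr));
  }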
|
|
There is no harm in just always clearing the SME encryption bit, while
significantly simplifying the interface.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
After conversion of all WAR defines we can now remove all mach-*/war.h
files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
CAVIUM_OCTEON_DCACHE_PREFETCH_WAR is a check for Octeon model CN6XXXX.
By using the version check we can remove the define.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
BCM1250_M3_WAR depends on CONFIG_SB1_PASS_2_WORKAROUNDS, so use this
option directly and remove the define.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
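The conversion pattern, here and in the related workaround patches, looks roughly like this sketch; the workaround call itself is purely illustrative:

  /* before: compile-time constant from a per-platform war.h */
  if (BCM1250_M3_WAR)
  	apply_bcm1250_m3_workaround();	/* illustrative */

  /* after: keyed directly off the existing Kconfig symbol */
  if (IS_ENABLED(CONFIG_SB1_PASS_2_WORKAROUNDS))
  	apply_bcm1250_m3_workaround();	/* illustrative */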
|
|
The SB1250 UART bug is related to the PASS 2 workarounds. Use config
CONFIG_SB1_PASS_2_WORKAROUNDS directly and get rid of SIBYTE_1956_WAR.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable MIPS 34K ITLB workaround and remove
define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable the R10000_LLSC workaround and remove
the define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable I-cache refill workaround and remove
define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable TX49XX I-cache index invalidate
workaround and remove define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Neither MIPS4K_ICACHE_REFILL_WAR nor MIPS_CACHE_SYNC_WAR is implemented,
so removing the defines for them won't change anything.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable R4600 V2 cacheop hit workaround
and remove define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable R4600 V1 cacheop hit workaround
and remove define from the different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Use a new config option to enable R4600 V1 index I-cacheop workaround
and remove define from different war.h files.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Pull MIPS fixes from Thomas Bogendoerfer:
"A few MIPS fixes:
- fallthrough fallout fix
- BMIPS fixes
- MSA fix to avoid leaking MSA register contents
- Loongson perf and cpu feature fix
- SNI interrupt fix"
* tag 'mips_fixes_5.9_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
MIPS: SNI: Fix SCSI interrupt
MIPS: add missing MSACSR and upper MSA initialization
MIPS: perf: Fix wrong check condition of Loongson event IDs
mips/oprofile: Fix fallthrough placement
MIPS: Loongson64: Remove unnecessary inclusion of boot_param.h
MIPS: BMIPS: Also call bmips_cpu_setup() for secondary cores
MIPS: mm: BMIPS5000 has inclusive physical caches
MIPS: Loongson64: Do not override watch and ejtag feature
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fixes from Thomas Gleixner:
"A set of fixes for lockdep, tracing and RCU:
- Prevent recursion by using raw_cpu_* operations
- Fixup the interrupt state in the cpu idle code to be consistent
- Push rcu_idle_enter/exit() invocations deeper into the idle path so
that the lock operations are inside the RCU watching sections
- Move trace_cpu_idle() into generic code so it's called before RCU
goes idle.
- Handle raw_local_irq* vs. local_irq* operations correctly
- Move the tracepoints out from under the lockdep recursion handling
which turned out to be fragile and inconsistent"
* tag 'locking-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
lockdep,trace: Expose tracepoints
lockdep: Only trace IRQ edges
mips: Implement arch_irqs_disabled()
arm64: Implement arch_irqs_disabled()
nds32: Implement arch_irqs_disabled()
locking/lockdep: Cleanup
x86/entry: Remove unused THUNKs
cpuidle: Move trace_cpu_idle() into generic code
cpuidle: Make CPUIDLE_FLAG_TLB_FLUSHED generic
sched,idle,rcu: Push rcu_idle deeper into the idle path
cpuidle: Fixup IRQ state
lockdep: Use raw_cpu_*() for per-cpu variables
|
|
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Paul Burton <paulburton@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20200826101653.GE1362448@hirez.programming.kicks-ass.net
|
|
No users -> remove it.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Loongson2ef's mc146818rtc.h is the same as the generic one -> remove it.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
SGI-IP27 is always cache coherent so we can use generic kmalloc.h and
remove the ip27 specific one.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Remove another unused MIPS platform.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Commit 35546aeede8e ("MIPS: Retire kvm paravirt") removed
kvm paravirt support, but missed arch/mips/include/mach-paravirt.
Remove it as well.
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Replace the existing /* fall through */ comments and their variants with
the new pseudo-keyword macro fallthrough [1]. Also, remove fall-through
markings where they are unnecessary.
[1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
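An example of the conversion; the case labels and helpers are illustrative, fallthrough is the real pseudo-keyword:

  switch (state) {
  case STATE_A:
  	handle_a();
  	fallthrough;		/* was a "fall through" comment */
  case STATE_B:
  	handle_b();
  	break;
  }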
|
|
Pull kvm fixes from Paolo Bonzini:
- PAE and PKU bugfixes for x86
- selftests fix for new binutils
- MMU notifier fix for arm64
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: arm64: Only reschedule if MMU_NOTIFIER_RANGE_BLOCKABLE is not set
KVM: Pass MMU notifier range flags to kvm_unmap_hva_range()
kvm: x86: Toggling CR4.PKE does not load PDPTEs in PAE mode
kvm: x86: Toggling CR4.SMAP does not load PDPTEs in PAE mode
KVM: x86: fix access code passed to gva_to_gpa
selftests: kvm: Use a shorter encoding to clear RAX
|
|
These #includes are unused by now; remove them to prevent namespace
pollution.
This fixes e.g. build of dm_thin, which has a VIRTUAL symbol that
conflicted with the newly-introduced one in mach-loongson64/boot_param.h.
Fixes: 39c1485c8baa ("MIPS: KVM: Add kvm guest support for Loongson-3")
Signed-off-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
The 'flags' field of 'struct mmu_notifier_range' is used to indicate
whether invalidate_range_{start,end}() are permitted to block. In the
case of kvm_mmu_notifier_invalidate_range_start(), this field is not
forwarded on to the architecture-specific implementation of
kvm_unmap_hva_range() and therefore the backend cannot sensibly decide
whether or not to block.
Add an extra 'flags' parameter to kvm_unmap_hva_range() so that
architectures are aware as to whether or not they are permitted to block.
Cc: <stable@vger.kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Message-Id: <20200811102725.7121-2-will@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
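A hedged sketch of the interface change; exact prototypes vary per architecture and release:

  /* each architecture now receives the notifier flags ... */
  int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start,
  			unsigned long end, unsigned flags);

  /* ... forwarded from kvm_mmu_notifier_invalidate_range_start(): */
  kvm_unmap_hva_range(kvm, range->start, range->end, range->flags);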
|
|
__csum_partial_copy_..._user()
and turn the exception handlers into simply returning 0, which
simplifies the hell out of things in csum_partial.S
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
they are only called for iovec-backed iov_iter and under KERNEL_DS an
attempt to create such a beast will yield a kvec-backed one.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
All callers of these primitives will
* discard anything we might've copied in case of error
* ignore the csum value in case of error
* always pass 0xffffffff as the initial sum, so the
resulting csum value (in case of success, that is) will never be 0.
That suggests the following calling conventions:
* don't pass err_ptr - just return 0 on error.
* don't bother with zeroing destination, etc. in case of error
* don't pass the initial sum - just use 0xffffffff.
This commit does the minimal conversion in the instances of csum_and_copy_...();
the changes of actual asm code behind them are done later in the series.
Note that this asm code is often shared with csum_partial_copy_nocheck();
the difference is that csum_partial_copy_nocheck() passes 0 for initial
sum while csum_and_copy_..._user() pass 0xffffffff. Fortunately, we are
free to pass 0xffffffff in all cases and subsequent patches will use that
freedom without any special comments.
A part that could be split off: parisc and uml/i386 claimed to have
csum_and_copy_to_user() instances of their own, but those were identical
to the generic one, so we simply drop them. Not sure if it's worth
a separate commit...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
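A hedged sketch of what the generic form looks like under the new convention; the per-arch asm implementations differ:

  static __always_inline __wsum
  csum_and_copy_from_user(const void __user *src, void *dst, int len)
  {
  	if (copy_from_user(dst, src, len))
  		return 0;			/* error: no err_ptr, just return 0 */
  	return csum_partial(dst, len, ~0U);	/* initial sum is always 0xffffffff */
  }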
|
|
It's always 0. Note that we theoretically could use ~0U as well -
result will be the same modulo 0xffff, _if_ the damn thing did the
right thing for any value of initial sum; later we'll make use of
that when convenient.
However, unlike csum_and_copy_..._user(), there are instances that
did not work for arbitrary initial sums; c6x is one such.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
quite a few architectures have the same csum_partial_copy_nocheck() -
simply memcpy() the data and then return the csum of the copy.
hexagon, parisc, ia64, s390, um: explicitly spelled out that way.
arc, arm64, csky, h8300, m68k/nommu, microblaze, mips/GENERIC_CSUM, nds32,
nios2, openrisc, riscv, unicore32: end up picking the same thing spelled
out in lib/checksum.h (with varying amounts of perversions along the way).
everybody else (alpha, arm, c6x, m68k/mmu, mips/!GENERIC_CSUM, powerpc,
sh, sparc, x86, xtensa) have non-generic variants. For all except c6x
the declaration is in their asm/checksum.h. c6x uses the wrapper
from asm-generic/checksum.h that would normally lead to the lib/checksum.h
instance, but in case of c6x we end up using an asm function from arch/c6x
instead.
Screw that mess - have architectures with private instances define
_HAVE_ARCH_CSUM_AND_COPY in their asm/checksum.h and have the default
one right in net/checksum.h conditional on _HAVE_ARCH_CSUM_AND_COPY
*not* defined.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
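A hedged sketch of the resulting default in net/checksum.h, conditional on the architecture not defining _HAVE_ARCH_CSUM_AND_COPY:

  #ifndef _HAVE_ARCH_CSUM_AND_COPY
  static __always_inline
  __wsum csum_partial_copy_nocheck(const void *src, void *dst, int len)
  {
  	memcpy(dst, src, len);
  	return csum_partial(dst, len, 0);
  }
  #endif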
|
|
Now that bcm47xx_sprom.h contains a prototype for bcm47xx_fill_sprom,
include that header file directly from bcm47xx.h.
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
Do not override the ejtag feature to 0, as Loongson 3A1000+ do have
ejtag. For watch, as the KVM-emulated CPU doesn't have the watch
feature, we should not enable it unconditionally.
Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Huacai Chen <chenhc@lemote.com>
Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timekeeping updates from Thomas Gleixner:
"A set of timekeeping/VDSO updates:
- Preparatory work to allow S390 to switch over to the generic VDSO
implementation.
S390 requires that the VDSO data pointer is handed in to the
counter read function when time namespace support is enabled.
Adding the pointer is a NOOP for all other architectures because
the compiler is supposed to optimize that out when it is unused in
the architecture specific inline. The change also solved a similar
problem for MIPS, which fortunately does not have time namespaces
enabled yet.
S390 needs to update clock related VDSO data independent of the
timekeeping updates. This was solved so far with yet another
sequence counter in the S390 implementation. A better solution is
to utilize the already existing VDSO sequence count for this. The
core code now exposes helper functions which allow to serialize
against the timekeeper code and against concurrent readers.
S390 needs extra data for its clock readout function. The initial
common VDSO data structure did not provide a way to add that. It
now has an embedded architecture-specific struct which defaults
to an empty struct.
Doing this now avoids tree dependencies and conflicts post rc1 and
allows all other architectures which work on generic VDSO support
to work from a common upstream base.
- A trivial comment fix"
* tag 'timers-urgent-2020-08-14' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
time: Delete repeated words in comments
lib/vdso: Allow to add architecture-specific vdso data
timekeeping/vsyscall: Provide vdso_update_begin/end()
vdso/treewide: Add vdso_data pointer argument to __arch_get_hw_counter()
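A hedged sketch of the new counter-read hook signature named in the shortlog, on an architecture that ignores the extra argument; the body is illustrative:

  static __always_inline u64 __arch_get_hw_counter(s32 clock_mode,
  						 const struct vdso_data *vd)
  {
  	/* vd is unused here; S390 uses it for its clock readout */
  	return read_arch_counter(clock_mode);	/* illustrative */
  }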
|