|
This patch switches MIPS to make use of generically implemented queued
read/write locks, rather than the custom implementation used previously.
This allows us to drop a whole load of inline assembly, share more
generic code, and is also a performance win.
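Mechanically, opting in to the generic qrwlock code is a Kconfig select
plus wiring up the asm-generic header; a minimal sketch of the Kconfig
side (exact placement among the other selects assumed):

  # arch/mips/Kconfig (surrounding selects elided)
  config MIPS
          bool
          select ARCH_USE_QUEUED_RWLOCKS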
Results from running the AIM7 short workload on a MIPS Creator Ci40 (i.e.
a 2 core, 2 thread interAptiv CPU clocked at 546MHz) with v4.12-rc4
pistachio_defconfig, with ftrace disabled due to a current bug, both
with & without use of queued rwlocks & spinlocks:
Forks | v4.12-rc4 | +qlocks | Change
-------|-----------|----------|--------
10 | 52630.32 | 53316.31 | +1.01%
20 | 51777.80 | 52623.15 | +1.02%
30 | 51645.92 | 52517.26 | +1.02%
40 | 51634.88 | 52419.89 | +1.02%
50 | 51506.75 | 52307.81 | +1.02%
60 | 51500.74 | 52322.72 | +1.02%
70 | 51434.81 | 52288.60 | +1.02%
80 | 51423.22 | 52434.85 | +1.02%
90 | 51428.65 | 52410.10 | +1.02%
The kernels used for these tests also had my "MIPS: Hardcode cpu_has_*
where known at compile time due to ISA" patch applied, which allows the
kernel_uses_llsc checks in cmpxchg() & xchg() to be optimised away at
compile time.
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16357/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The __xchg() function declares its first 2 arguments in reverse order
compared to the xchg() macro, which is confusing & serves no purpose.
Reorder the arguments such that __xchg() & xchg() match.
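That is, the pointer now comes first in both, matching the xchg(ptr, x)
macro; a sketch of the resulting declaration:

  /* Previously: __xchg(unsigned long x, volatile void *ptr, int size) */
  static inline unsigned long __xchg(volatile void *ptr, unsigned long x,
                                     int size);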
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16356/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Implement support for 1 & 2 byte cmpxchg() using read-modify-write atop
a 4 byte cmpxchg(). This allows us to support these atomic operations
despite the MIPS ISA only providing 4 & 8 byte atomic operations.
This is required in order to support queued rwlocks (qrwlock) in a later
patch, since these make use of a 1 byte cmpxchg() in their slow path.
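The general shape of the emulation (a simplified sketch, not the exact
code from the patch) is to perform the 4 byte cmpxchg() on the aligned
word containing the value, with the old & new values shifted & masked
into place:

  static unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
                                       unsigned long new, unsigned int size)
  {
          /* The aligned 32-bit word containing the 1 or 2 byte value */
          volatile u32 *ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3);
          /*
           * Bit offset of the value within that word; shown for little
           * endian, a big endian kernel must flip the byte offset.
           */
          unsigned int shift = ((unsigned long)ptr & 0x3) * BITS_PER_BYTE;
          u32 mask = GENMASK(size * BITS_PER_BYTE - 1, 0) << shift;
          u32 load32, load, new32;

          /* old & new are assumed already truncated to size bytes */
          do {
                  load32 = *ptr32;
                  load = (load32 & mask) >> shift;
                  if (load != old)
                          return load;
                  new32 = (load32 & ~mask) | ((u32)new << shift);
          } while (cmpxchg(ptr32, load32, new32) != load32);

          return old;
  }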
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16355/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Implement 1 & 2 byte xchg() using read-modify-write atop a 4 byte
cmpxchg(). This allows us to support these atomic operations despite the
MIPS ISA only providing for 4 & 8 byte atomic operations.
This is required in order to support queued spinlocks (qspinlock) in a
later patch, since these make use of a 2 byte xchg() in their slow path.
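The xchg() emulation follows the same pattern as the cmpxchg() sketch
above, but retries unconditionally until the containing word is swapped
cleanly (again a sketch, with shift & mask computed as before):

  static unsigned long __xchg_small(volatile void *ptr, unsigned long val,
                                    unsigned int size)
  {
          volatile u32 *ptr32 = (volatile u32 *)((unsigned long)ptr & ~0x3);
          unsigned int shift = ((unsigned long)ptr & 0x3) * BITS_PER_BYTE;
          u32 mask = GENMASK(size * BITS_PER_BYTE - 1, 0) << shift;
          u32 old32, new32, load32 = *ptr32;

          do {
                  old32 = load32;
                  new32 = (old32 & ~mask) | (((u32)val << shift) & mask);
                  load32 = cmpxchg(ptr32, old32, new32);
          } while (load32 != old32);

          /* Return the value previously held in our bytes */
          return (load32 & mask) >> shift;
  }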
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16354/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Replace the macro definition of __cmpxchg() with an inline function,
which is easier to read & modify. The cmpxchg() & cmpxchg_local() macros
are adjusted to call the new __cmpxchg() function.
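A sketch of the shape this takes (dispatch by size, with the asm
delegated to the existing __cmpxchg_asm() macro; body illustrative):

  static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
                                        unsigned long new, unsigned int size)
  {
          switch (size) {
          case 4:
                  return __cmpxchg_asm("ll", "sc", (volatile u32 *)ptr,
                                       old, new);
          case 8:
                  /* lld/scd are only available on 64-bit kernels */
                  if (!IS_ENABLED(CONFIG_64BIT))
                          return __cmpxchg_called_with_bad_pointer();
                  return __cmpxchg_asm("lld", "scd", (volatile u64 *)ptr,
                                       old, new);
          default:
                  return __cmpxchg_called_with_bad_pointer();
          }
  }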
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16353/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The __xchg_u32() & __xchg_u64() functions now add very little value.
This patch therefore removes them, by:
- Moving memory barriers out of them & into xchg(), which also removes
the duplication & readies us to support xchg_relaxed() if we wish to.
- Calling __xchg_asm() directly from __xchg().
- Performing the check for CONFIG_64BIT being enabled in the size=8
case of __xchg().
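With the barriers hoisted out of the helpers, the macro ends up looking
roughly like this (a sketch using the MIPS barrier primitives; argument
order shown post-reordering, as in the patch above):

  #define xchg(ptr, x)                                                  \
  ({                                                                    \
          __typeof__(*(ptr)) __res;                                     \
                                                                        \
          smp_mb__before_llsc();                                        \
          __res = (__typeof__(*(ptr)))                                  \
                  __xchg((ptr), (unsigned long)(x), sizeof(*(ptr)));    \
          smp_llsc_mb();                                                \
                                                                        \
          __res;                                                        \
  })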
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16352/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
xchg() has up until now simply returned the x parameter in cases where
it is called with a pointer to a value of an unsupported size. This will
often cause the calling code to hit a failure path, presuming that the
value of x differs from the content of the memory pointed at by ptr, but
we can do better by producing a compile-time or link-time error such
that unsupported calls to xchg() are detectable earlier than runtime.
This patch does this in the same way as is already done for cmpxchg(),
using a call to a missing function annotated with __compiletime_error().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16351/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Our cmpxchg() implementation relies upon generating a call to a function
which doesn't really exist (__cmpxchg_called_with_bad_pointer) to create
a link failure in cases where cmpxchg() is called with a pointer to a
value of an unsupported size.
The __compiletime_error macro can be used to decorate a function such
that a call to it generates a compile-time, rather than a link-time,
error. This patch uses __compiletime_error to cause bad cmpxchg() calls
to error out at compile time rather than link time, allowing errors to
occur more quickly & making it easier to spot where the problem comes
from.
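The pattern is just a declaration that is never defined anywhere,
decorated with the attribute (a sketch; the name is the one cited
above):

  /*
   * Calling this is a compile-time error when the compiler supports
   * __compiletime_error(), and still a link-time error when it doesn't.
   */
  extern unsigned long __cmpxchg_called_with_bad_pointer(void)
          __compiletime_error("Bad argument size for cmpxchg");

Any cmpxchg() call whose operand size falls through to the default case
resolves to this declaration and errors out during the build.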
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16350/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Use a macro to generate the 32 & 64 bit variants of the backing code for
xchg(), much as is already done for cmpxchg(). This removes the
duplication that could previously be found in __xchg_u32() &
__xchg_u64().
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16349/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Prior to this patch the xchg & cmpxchg functions have duplicated code
which is for all intents & purposes identical apart from use of a
branch-likely instruction in the R10000_LLSC_WAR case & a regular branch
instruction in the non-R10000_LLSC_WAR case.
This patch removes the duplication, declaring a __scbeqz macro to select
the branch instruction suitable for use when checking the result of an
sc instruction & making use of it to unify the 2 cases.
In __xchg_u{32,64}() this means writing the branch in asm, where it was
previously being done in C as a do...while loop for the
non-R10000_LLSC_WAR case. As this is a single instruction, and adds
consistency with the R10000_LLSC_WAR cases & the cmpxchg() code, this
seems worthwhile.
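The macro itself is small (a sketch matching the description above):

  /*
   * R10000 needs the branch-likely form to work around silicon errata;
   * everyone else gets a plain branch.
   */
  #if R10000_LLSC_WAR
  # define __scbeqz "beqzl"
  #else
  # define __scbeqz "beqz"
  #endif

It is then spliced into the inline asm string at the point where the
result of the sc instruction is tested, so a single asm body serves
both cases.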
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16348/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add handling of misaligned accesses for the DSP load instructions
lwx & lhx.
Since DSP instructions share SPECIAL3 opcode with other non-DSP
instructions, necessary logic was inserted for distinguishing
between instructions with SPECIAL3 opcode. For that purpose,
the instruction format for DSP instructions is added to
arch/mips/include/uapi/asm/inst.h.
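The added format follows the usual union mips_instruction bitfield
style; a sketch (field names & widths illustrative, and the real header
wraps each field in __BITFIELD_FIELD to cope with endianness):

  struct dsp_format {                     /* SPECIAL3 DSP indexed load */
          unsigned int opcode : 6;        /* SPECIAL3 */
          unsigned int base : 5;
          unsigned int index : 5;
          unsigned int rd : 5;
          unsigned int op : 5;            /* distinguishes lwx, lhx, ... */
          unsigned int func : 6;          /* LX function group */
  };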
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: Goran.Ferenc@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16511/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Disable use of the PREF instruction in memcpy for MIPS R6.
MIPS R6 redefines the PREF instruction with a smaller offset field than
ordinary MIPS. However, the memcpy code uses PREF instructions
with offsets outside the ±256 byte range.
Malta kernels already disable usage of PREF for memcpy.
This was found during adaptation of MIPS R6 for virtual board
used by Android emulator.
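Malta already does this with a preprocessor override at the top of
arch/mips/lib/memcpy.S, and the natural fix is to extend that (sketch;
the exact condition used by the patch is assumed):

  /* arch/mips/lib/memcpy.S */
  #ifdef CONFIG_MIPS_MALTA
  #undef CONFIG_CPU_HAS_PREFETCH
  #endif
  #ifdef CONFIG_CPU_MIPSR6
  #undef CONFIG_CPU_HAS_PREFETCH
  #endif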
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16510/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Add "-modd-spreg" when compiling the kernel for mips32r6 target.
This makes sure the kernel builds properly even with toolchains that
use "-mno-odd-spreg" by default. This is the case with Android gcc.
Prior to this patch, kernel builds using gcc for Android failed with
following error messages, if target architecture is set to mips32r6:
arch/mips/kernel/r4k_switch.S: Assembler messages:
.../r4k_switch.S:210: Error: float register should be even, was 1
.../r4k_switch.S:212: Error: float register should be even, was 3
.../r4k_switch.S:214: Error: float register should be even, was 5
.../r4k_switch.S:216: Error: float register should be even, was 7
.../r4k_switch.S:218: Error: float register should be even, was 9
.../r4k_switch.S:220: Error: float register should be even, was 11
.../r4k_switch.S:222: Error: float register should be even, was 13
.../r4k_switch.S:224: Error: float register should be even, was 15
.../r4k_switch.S:226: Error: float register should be even, was 17
.../r4k_switch.S:228: Error: float register should be even, was 19
.../r4k_switch.S:230: Error: float register should be even, was 21
.../r4k_switch.S:232: Error: float register should be even, was 23
.../r4k_switch.S:234: Error: float register should be even, was 25
.../r4k_switch.S:236: Error: float register should be even, was 27
.../r4k_switch.S:238: Error: float register should be even, was 29
.../r4k_switch.S:240: Error: float register should be even, was 31
make[2]: *** [arch/mips/kernel/r4k_switch.o] Error 1
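The change amounts to adding the flag to the mips32r6 cflags line in
arch/mips/Makefile; a sketch (surrounding flags elided):

  # arch/mips/Makefile
  cflags-$(CONFIG_CPU_MIPS32_R6)  += -march=mips32r6 -modd-spreg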
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16509/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Implement support for parsing the 'memmap' kernel command line parameter.
This patch covers parsing of the following two formats for 'memmap'
parameter values:
- nn[KMG]@ss[KMG]
- nn[KMG]$ss[KMG]
([KMG] = K, M or G (kilo, mega, giga))
These two allowed formats for the parameter value are already documented
in Documentation/admin-guide/kernel-parameters.txt.
Some architectures already support them, but MIPS did not prior to
this patch.
Excerpt from Documentation/admin-guide/kernel-parameters.txt:
memmap=nn[KMG]@ss[KMG]
[KNL] Force usage of a specific region of memory.
Region of memory to be used is from ss to ss+nn.
memmap=nn[KMG]$ss[KMG]
Mark specific memory as reserved.
Region of memory to be reserved is from ss to ss+nn.
Example: Exclude memory from 0x18690000-0x1869ffff
memmap=64K$0x18690000
or
memmap=0x10000$0x18690000
There is no need to update this documentation file with respect to
this patch.
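Parsing like this conventionally hangs off early_param() & memparse();
a condensed sketch of the idea (error handling trimmed, helper names as
in the MIPS boot code):

  static int __init early_parse_memmap(char *p)
  {
          u64 start_at, mem_size;

          mem_size = memparse(p, &p);             /* nn[KMG] */
          if (*p == '@') {                        /* force usage */
                  start_at = memparse(p + 1, &p);
                  add_memory_region(start_at, mem_size, BOOT_MEM_RAM);
          } else if (*p == '$') {                 /* reserve */
                  start_at = memparse(p + 1, &p);
                  add_memory_region(start_at, mem_size, BOOT_MEM_RESERVED);
          } else {
                  pr_err("\"memmap\" invalid format!\n");
                  return -EINVAL;
          }
          return 0;
  }
  early_param("memmap", early_parse_memmap);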
Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Goran Ferenc <goran.ferenc@imgtec.com>
Signed-off-by: Aleksandar Markovic <aleksandar.markovic@imgtec.com>
Cc: James.Hogan@imgtec.com
Cc: Paul.Burton@imgtec.com
Cc: Raghu.Gandham@imgtec.com
Cc: Leonid.Yegoshin@imgtec.com
Cc: Douglas.Leung@imgtec.com
Cc: Petar.Jovanovic@imgtec.com
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16508/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Sort enum loongson_cpu_type in a more reasonable manner; this makes the
CPU names clearer and more extensible. The previously defined enum
values are renamed to Legacy_* for compatibility.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16591/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
With this patch we can set IRQ affinity via procfs, so as to improve
network performance.
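For /proc/irq/*/smp_affinity writes to take effect, the platform
irq_chip has to implement the .irq_set_affinity callback; a minimal
sketch of the shape (names illustrative, dispatch-path details elided):

  static cpumask_t irq_affinity[NR_IRQS]; /* consulted by the dispatcher */

  static int loongson_set_irq_affinity(struct irq_data *d,
                                       const struct cpumask *affinity,
                                       bool force)
  {
          if (!cpumask_and(&irq_affinity[d->irq], affinity, cpu_online_mask))
                  return -EINVAL;         /* no online CPU in the mask */
          return IRQ_SET_MASK_OK;
  }

  static struct irq_chip loongson_irq_chip = {
          .name             = "Loongson",
          /* .irq_mask/.irq_unmask etc. elided */
          .irq_set_affinity = loongson_set_irq_affinity,
  };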
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16590/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
IRQ0 (HPET), IRQ1 (Keyboard), IRQ2 (Cascade), IRQ7 (SCI), IRQ8 (RTC)
and IRQ12 (Mouse) are handled by core-0 locally. Other PCI IRQs (3, 4,
5, 6, 14, 15) are balanced by all cores from Node-0. This can improve
I/O performance significantly.
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16589/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: Huacai Chen <chenhc@lemote.com>
Cc: John Crispin <john@phrozen.org>
Cc: Steven J . Hill <Steven.Hill@cavium.com>
Cc: Fuxin Zhang <zhangfx@lemote.com>
Cc: Zhangjin Wu <wuzhangjin@gmail.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/16587/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The only user of thread_saved_pc() in non-arch-specific code was removed
in commit 8243d5597793 ("sched/core: Remove pointless printout in
sched_show_task()"). Remove the implementations as well.
Some architectures use thread_saved_pc() in their arch-specific code.
Leave their thread_saved_pc() intact.
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Neither soft poweroff (transition to ACPI power state S5) nor
suspend-to-RAM (transition to state S3) works on the MacBook Pro 11,4
and 11,5.
The problem is related to the [mem 0x7fa00000-0x7fbfffff] space. When we
use that space, e.g., by assigning it to the 00:1c.0 Root Port, the ACPI
Power Management 1 Control Register (PM1_CNT) at [io 0x1804] doesn't work
anymore.
Linux does a soft poweroff (transition to S5) by writing to PM1_CNT. The
theory about why this doesn't work is:
- The write to PM1_CNT causes an SMI
- The BIOS SMI handler depends on something in
[mem 0x7fa00000-0x7fbfffff]
- When Linux assigns [mem 0x7fa00000-0x7fbfffff] to the 00:1c.0 Port, it
covers up whatever the SMI handler uses, so the SMI handler no longer
works correctly
Reserve the [mem 0x7fa00000-0x7fbfffff] space so we don't assign it to
anything.
This is voodoo programming, since we don't know what the real conflict is,
but we've failed to find the root cause.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=103211
Tested-by: thejoe@gmail.com
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Lukas Wunner <lukas@wunner.de>
Cc: Chen Yu <yu.c.chen@intel.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD
KVM: s390: fixes and features for 4.13
- initial machine check forwarding
- migration support for the CMMA page hinting information
- cleanups
- fixes
|
|
The memory operand fetched for INVVPID is 128 bits. Bits 63:16 are
reserved and must be zero. Otherwise, the instruction fails with
VMfail(Invalid operand to INVEPT/INVVPID). If the INVVPID_TYPE is 0
(individual address invalidation), then bits 127:64 must be in
canonical form, or the instruction fails with VMfail(Invalid operand
to INVEPT/INVVPID).
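In terms of the handler, the checks read roughly as follows (a sketch
against the helpers in arch/x86/kvm/vmx.c at the time; operand decode
elided):

  struct {
          u64 vpid;       /* bits 63:16 reserved, must be zero */
          u64 gla;        /* must be canonical for type 0 */
  } operand;

  /* ... after reading the 128-bit operand from guest memory ... */
  if (operand.vpid >> 16) {
          nested_vmx_failValid(vcpu,
                  VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
          return kvm_skip_emulated_instruction(vcpu);
  }
  if (type == VMX_VPID_EXTENT_INDIVIDUAL_ADDR &&
      is_noncanonical_address(operand.gla)) {
          nested_vmx_failValid(vcpu,
                  VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID);
          return kvm_skip_emulated_instruction(vcpu);
  }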
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
|
All x86 PCI configuration space accessors have either their own
serialization or can operate completely lockless (ECAM).
Disable the global lock in the generic PCI configuration space accessors.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <helgaas@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: linux-pci@vger.kernel.org
Link: http://lkml.kernel.org/r/20170316215057.295079391@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
x86 wants to get rid of the global pci_lock protecting the config space
accessors so ECAM mode can operate completely lockless, but the CE4100 PCI
code relies on that to protect the simulation registers.
Restructure the code so it uses the x86-specific pci_config_lock to
serialize the inner workings of the CE4100 PCI magic. That allows the
global locking via pci_lock to be removed later.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <helgaas@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: linux-pci@vger.kernel.org
Link: http://lkml.kernel.org/r/20170316215057.126873574@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
If the legacy PCI init fails, then there are no PCI config space
accessors available, but the code continues and tries to scan the buses,
which fails due to the lack of config space accessors.
Return right away if the last init fallback fails.
Switch the few printks to pr_info while at it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <helgaas@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: linux-pci@vger.kernel.org
Link: http://lkml.kernel.org/r/20170316215057.047576516@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
For some historic reason these defines are duplicated and also available
in arch/x86/include/asm/pci_x86.h.
Remove them.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Bjorn Helgaas <helgaas@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: linux-pci@vger.kernel.org
Link: http://lkml.kernel.org/r/20170316215056.967808646@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
The introduction of pci_scan_root_bus_bridge() provides a PCI core API to
scan a PCI root bus backed by an already initialized struct pci_host_bridge
object, which simplifies the bus scan interface and makes the PCI scan root
bus interface easier to generalize as members are added to the struct
pci_host_bridge.
Convert ARM bios32 code to pci_scan_root_bus_bridge() to improve the PCI
root bus scanning interface.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
[bhelgaas: fold in warning fix from Arnd Bergmann <arnd@arndb.de>:
http://lkml.kernel.org/r/20170621215323.3921382-1-arnd@arndb.de]
[bhelgaas: set bridge->ops for mv78xx0]
[bhelgaas: fold in fixes from Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>:
http://lkml.kernel.org/r/20170701135457.GB8977@red-moon]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Andrew Lunn <andrew@lunn.ch>
|
|
Many subsystems will not use refcount_t unless there is a way to build the
kernel so that there is no regression in speed compared to atomic_t. This
adds CONFIG_REFCOUNT_FULL to enable the full refcount_t implementation
which has the validation but is slightly slower. When not enabled,
refcount_t uses the basic unchecked atomic_t routines, which results in
no code changes compared to just using atomic_t directly.
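At the header level the split looks roughly like this (a sketch of the
<linux/refcount.h> arrangement; two representative operations shown):

  #ifdef CONFIG_REFCOUNT_FULL
  /* Out-of-line, saturating, WARN-on-overflow implementations */
  extern void refcount_inc(refcount_t *r);
  extern bool refcount_dec_and_test(refcount_t *r);
  #else
  /* No checking: compiles straight down to the atomic_t operations */
  static inline void refcount_inc(refcount_t *r)
  {
          atomic_inc(&r->refs);
  }
  static inline bool refcount_dec_and_test(refcount_t *r)
  {
          return atomic_dec_and_test(&r->refs);
  }
  #endif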
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: David Windsor <dwindsor@gmail.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Hans Liljestrand <ishkamiel@gmail.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Jann Horn <jannh@google.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Serge E. Hallyn <serge@hallyn.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arozansk@redhat.com
Cc: axboe@kernel.dk
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/20170621200026.GA115679@beast
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Besides removing the last instance of the set_dma_mask method, this
also reduces code duplication.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
By the time cell_pci_dma_dev_setup calls cell_dma_dev_setup no device can
have the fixed map_ops set yet as it's only set by the set_dma_mask
method. So move the setup for the fixed case to be only called in that
place instead of indirecting through cell_dma_dev_setup.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
These just duplicate the default behavior if no method is provided.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Same behavior, less code duplication.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Same behavior, less code duplication.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
And instead wire it up as a method for all the dma_map_ops instances.
Note that this also means the arch-specific check will be applied fully
instead of partially in the AMD iommu driver.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
And instead wire it up as a method for all the dma_map_ops instances.
Note that the code seems a little fishy for dmabounce and iommu, but
for now I'd like to preserve the existing behavior 1:1.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
This implementation is simply bogus - openrisc only has a simple
direct mapped DMA implementation and thus doesn't care about the
address.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
This implementation is simply bogus - hexagon only has a simple
direct mapped DMA implementation and thus doesn't care about the
address.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Richard Kuo <rkuo@codeaurora.org>
|
|
Usually dma_supported decisions are made by the dma_map_ops instance.
Switch sparc to that model by providing a ->dma_supported instance for
sbus that always returns false, providing implementations tailored to
the sun4u and sun4v cases for sparc64, and leaving it unimplemented for
PCI on sparc32, which means it is always supported.
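Each ops instance now carries its own policy; the sbus case described
above is the simplest (sketch):

  static int sbus_dma_supported(struct device *dev, u64 mask)
  {
          /* DMA through sbus is never advertised as supported */
          return 0;
  }

  static const struct dma_map_ops sbus_dma_ops = {
          /* map/unmap methods elided */
          .dma_supported  = sbus_dma_supported,
  };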
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
|
|
We can just use pci32_dma_ops directly.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
|
|
DMA_ERROR_CODE is going to go away, so don't rely on it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
All dma_map_ops instances now handle their errors through
->mapping_error.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
DMA_ERROR_CODE is going to go away, so don't rely on it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
DMA_ERROR_CODE is going to go away, so don't rely on it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
DMA_ERROR_CODE is going to go away, so don't rely on it. Instead
define a ->mapping_error method for all IOMMU based dma operation
instances. The direct ops don't ever return an error and don't
need a ->mapping_error method.
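The per-ops pattern looks roughly like this (a sketch modelled on the
powerpc IOMMU ops; the error cookie name is illustrative):

  #define IOMMU_MAPPING_ERROR     (~(dma_addr_t)0)

  static int dma_iommu_mapping_error(struct device *dev, dma_addr_t dma_addr)
  {
          return dma_addr == IOMMU_MAPPING_ERROR;
  }

  const struct dma_map_ops dma_iommu_ops = {
          /* ... map_page() returns IOMMU_MAPPING_ERROR on failure ... */
          .mapping_error  = dma_iommu_mapping_error,
  };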
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
DMA_ERROR_CODE is going to go away, so don't rely on it.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
|
|
s390 can also use noop_dma_ops, and while that currently does not
return errors it will do so in the future. Implementing the
mapping_error method is the proper way to have per-ops error
conditions.
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
|
|
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Richard Kuo <rkuo@codeaurora.org>
|