Age | Commit message | Author |
|
devel-stable
Two different fixes for the same problem that has kept some ARM nommu configurations
from booting since 3.6-rc1: user_addr_max() returned the biggest available RAM
address, which made some copy_from_user variants fail to read from XIP memory.
Even in the presence of one of the two fixes the other still makes sense, so
both patches are included here.
This problem was the last one preventing efm32 from booting to a prompt with
mainline.
|
|
Under perf, the fp unwinding scheme requires access to user space memory
and can provoke a pagefault via a call to __copy_from_user_inatomic from
user_backtrace. This unwinding can take place in response to an interrupt
(__perf_event_overflow). This is undesirable as we may already have
mmap_sem held for write; one example being a process that calls mprotect
just as the PMU counters overflow.
An example that can provoke this behaviour:
perf record -e event:tocapture --call-graph fp ./application_to_test
This patch addresses the issue by briefly disabling pagefaults in
user_backtrace, as is done on other architectures (arm64, x86, sparc, etc.).
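A minimal sketch of the approach, assuming the variable names of the existing
user_backtrace() helper (not the verbatim patch):

	pagefault_disable();
	err = __copy_from_user_inatomic(&buftail, tail, sizeof(buftail));
	pagefault_enable();

	if (err)
		return NULL;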
Without the patch, a deadlock occurs when __perf_event_overflow is called
while reading data from user space:
[ INFO: possible recursive locking detected ]
3.16.0-rc2-00038-g0ed7ff6 #46 Not tainted
---------------------------------------------
stress/1634 is trying to acquire lock:
(&mm->mmap_sem){++++++}, at: [<c001dc04>] do_page_fault+0xa8/0x428
but task is already holding lock:
(&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock(&mm->mmap_sem);
lock(&mm->mmap_sem);
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by stress/1634:
#0: (&mm->mmap_sem){++++++}, at: [<c00f4098>] SyS_mprotect+0xa8/0x1c8
#1: (rcu_read_lock){......}, at: [<c00c29dc>] __perf_event_overflow+0x120/0x294
stack backtrace:
CPU: 1 PID: 1634 Comm: stress Not tainted 3.16.0-rc2-00038-g0ed7ff6 #46
[<c0017c8c>] (unwind_backtrace) from [<c0012eec>] (show_stack+0x20/0x24)
[<c0012eec>] (show_stack) from [<c04de914>] (dump_stack+0x7c/0x98)
[<c04de914>] (dump_stack) from [<c006a360>] (__lock_acquire+0x1484/0x1cf0)
[<c006a360>] (__lock_acquire) from [<c006b14c>] (lock_acquire+0xa4/0x11c)
[<c006b14c>] (lock_acquire) from [<c04e3880>] (down_read+0x40/0x7c)
[<c04e3880>] (down_read) from [<c001dc04>] (do_page_fault+0xa8/0x428)
[<c001dc04>] (do_page_fault) from [<c00084ec>] (do_DataAbort+0x44/0xa8)
[<c00084ec>] (do_DataAbort) from [<c0013a1c>] (__dabt_svc+0x3c/0x60)
Exception stack(0xed7c5ae0 to 0xed7c5b28)
5ae0: ed7c5b5c b6dadff4 ffffffec 00000000 b6dadff4 ebc08000 00000000 ebc08000
5b00: 0000007e 00000000 ed7c4000 ed7c5b94 00000014 ed7c5b2c c001a438 c0236c60
5b20: 00000013 ffffffff
[<c0013a1c>] (__dabt_svc) from [<c0236c60>] (__copy_from_user+0xa4/0x3a4)
Acked-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
An event may occur after the task's mm has already been released. Handle this
as per commit 20afc60f892d285fde179ead4b24e6a7938c2f1b
('x86, perf: Check that current->mm is alive before getting user callchain').
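A minimal sketch of the check, mirroring the x86 commit referenced above
(placement in the ARM user-callchain code is assumed):

	/* nothing to walk if the task's mm has already gone away */
	if (!current->mm)
		return;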
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Currently the krait_pmu_{enable,disable}_event functions use the global
cpu_pmu variable while all the other pmu enable/disable functions
derive this from the event argument.
This patch brings the Krait functions into line with the rest of the PMU
backends by deriving the address of the pmu from the event argument.
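The key line, as used by the other backends (a sketch, not the verbatim diff;
to_arm_pmu() is the existing container_of helper):

	struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);	/* instead of the file-scope cpu_pmu */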
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
When described in DT, PMUs are given specific compatible strings
(e.g. "arm,cortex-a15-pmu") which makes it very easy to reorganise the
way individual PMUs are handled (i.e. we can easily split them into
separate drivers). The same is not true of PMUs described in board
files, which all use the platform_device_id "arm-pmu" and must all
be handled by the same driver.
To enable splitting the ARMv6, ARMv7, and XScale PMU drivers we need
board files to identify which variant they provide. As a first step,
this patch adds new platform_device_id values: "armv6-pmu", "armv7-pmu",
and "xscale-pmu".
Once board files are moved over and all existing uses of "arm-pmu" are
gone, we can split the existing driver apart.
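A sketch of what the extended ID table might look like (the table name and the
omission of driver_data are assumptions; only the new name strings come from
the text above):

	static const struct platform_device_id cpu_pmu_plat_device_ids[] = {
		{ .name = "arm-pmu" },
		{ .name = "armv6-pmu" },
		{ .name = "armv7-pmu" },
		{ .name = "xscale-pmu" },
		{ },
	};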
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
The perf userspace tools can't handle dashes or spaces in PMU names,
which conflicts with the current naming scheme in the arm perf backend.
This prevents these PMUs from being accessed by name from the perf
tools. Additionally, the ARMv6 PMUs are named "v6", which does not fully
distinguish them in the /sys/bus/event_source namespace.
This patch renames the PMUs consistently to a lower case form with
underscores, e.g. "armv6_1176", "armv7_cortex_a9". This is both readily
accepted by today's perf tool, and far easier to type than the
(apparently unused) convention in use previously. The OProfile name
conversion code is updated to handle this.
Due to a copy-paste error involving two "xscale1" entries, "xscale2" has
never been matched by the OProfile name mapping. While we're
updating names, this is corrected.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[sachin: fixed missing semicolons in armv6 backend]
Signed-off-by: Sachin Kamat <sachin.kamat@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Now that we have macros for declaring fully invalid event maps, put them
to work for the XScale PMU event maps. While this necessitates repeating
common indices, we no longer need to refer to *_UNSUPPORTED events at
all, and it makes it possible for the event maps to fit on a single page
on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Now that we have macros for declaring fully invalid event maps, put them
to work for all the ARMv6 PMU event maps. While this necessitates
repeating common indices, we no longer need to refer to *_UNSUPPORTED
events at all, and it makes it possible for the event maps to fit on a
single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
Now that we have macros for declaring fully invalid event maps, put them
to work for all the ARMv7 PMU event maps. While this necessitates
repeating common indices, we no longer need to refer to *_UNSUPPORTED
events at all, and it makes it possible for the event maps to fit on a
single page on a reasonably sized monitor.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
We currently map from userspace-ABI standard event numbers to
hardware-specific IDs by use of two arrays, *_perf_map and
*_perf_cache_map. While we use designated initializers to initialize the
events we care about, zero is typically a valid hardware event number,
and thus we have to explicitly initialize unsupported event mappings to a
nonzero value ({HW,CACHE}_OP_UNSUPPORTED).
In the case of the *_cache_map, this means initialising almost every entry
in a 3-dimensional array to CACHE_OP_UNSUPPORTED, taking over a hundred
lines to add eleven supported events in the case of Cortex-A9.
So as to take up less space and make the tables easier to deal with,
this patch adds two new macros to initialize every entry in these tables
to the *_UNSUPPORTED values. Supported events can be overridden
individually through the use of designated initializers.
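A sketch of what such macros could look like (the macro names and exact layout
are assumptions based on the description above; C() is the usual
PERF_COUNT_HW_CACHE_ shorthand used by the event-map tables):

	#define PERF_MAP_ALL_UNSUPPORTED \
		[0 ... PERF_COUNT_HW_MAX - 1] = HW_OP_UNSUPPORTED

	#define PERF_CACHE_MAP_ALL_UNSUPPORTED					\
		[0 ... C(MAX) - 1] = {						\
			[0 ... C(OP_MAX) - 1] = {				\
				[0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
			},							\
		}

A map then overrides just the supported entries with designated initializers,
e.g. [PERF_COUNT_HW_CPU_CYCLES] = ARMV7_PERFCTR_CPU_CYCLES.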
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
A few PMU-related macros are looking a little lonely in
asm/perf_event.h now that all other PMU-specific structs, function
prototypes and macros live in pmu.h.
So as to make their placement consistent and to make it easier to build
atop of the current PMU functionality, let's reunite the entire family in
pmu.h.
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
|
|
With CONFIG_MMU=y, get_fs() returns current_thread_info()->addr_limit,
which is initialized as USER_DS (which in turn is defined to TASK_SIZE)
for userspace processes. At least theoretically,
current_thread_info()->addr_limit is changeable by set_fs() to a
different limit, so checking for KERNEL_DS is more robust.
With !CONFIG_MMU, get_fs() returns KERNEL_DS. To see what the old variant
did you'd have to know that USER_DS == KERNEL_DS there, which isn't needed
any more with the variant this patch introduces. So it's a bit easier to
understand, too.
Also, if the limit was changed, that limit should be returned, not
TASK_SIZE.
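A sketch of the definition this description implies (not necessarily the
verbatim patch):

	#define user_addr_max() \
		(segment_eq(get_fs(), KERNEL_DS) ? ~0UL : get_fs())

i.e. return the unrestricted range for kernel-mode accesses and the current
(possibly set_fs()-adjusted) limit otherwise.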
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
|
|
With TASK_SIZE set to the maximal RAM address, booting fails in some XIP
configurations (e.g. on the efm32 DK3750). The problem is that
strncpy_from_user et al. check for the address not being above TASK_SIZE
(since 8c56cc8be5b3 (ARM: 7449/1: use generic strnlen_user and
strncpy_from_user functions)) and this makes booting fail if the XIP
flash is above the RAM address space.
This change is in line with blackfin, frv and m68k which also use
0xffffffff for TASK_SIZE with !MMU.
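A sketch of the resulting !MMU definition (placement in the ARM memory headers
is assumed):

	#ifndef CONFIG_MMU
	/* no MMU: let user accesses reach the whole 4GB space, including XIP flash */
	#define TASK_SIZE	UL(0xffffffff)
	#endif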
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
|
|
Pull ARM fixes from Russell King:
"Another round of ARM fixes. The largest change here is the L2 changes
to work around problems for the Armada 37x/380 devices, where most of
the size comes down to comments rather than code.
The other significant fix here is for the ptrace code, to ensure that
rewritten syscalls work as intended. This was pointed out by Kees
Cook, but Will Deacon reworked the patch to be more elegant.
The remainder are fairly trivial changes"
* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
ARM: 8087/1: ptrace: reload syscall number after secure_computing() check
ARM: 8086/1: Set memblock limit for nommu
ARM: 8085/1: sa1100: collie: add top boot mtd partition
ARM: 8084/1: sa1100: collie: revert back to cfi_probe
ARM: 8080/1: mcpm.h: remove unused variable declaration
ARM: 8076/1: mm: add support for HW coherent systems in PL310 cache
|
|
On the syscall tracing path, we call out to secure_computing() to allow
seccomp to check the syscall number being attempted. As part of this, a
SIGTRAP may be sent to the tracer and the syscall could be re-written by
a subsequent SET_SYSCALL ptrace request. Unfortunately, this new syscall
is ignored by the current code unless TIF_SYSCALL_TRACE is also set on
the current thread.
This patch slightly reworks the enter path of the syscall tracing code
so that we always reload the syscall number from
current_thread_info()->syscall after the potential ptrace traps.
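A sketch of the reworked enter path (helper names and error handling are
approximate; the essential part is the reload after the traps):

	if (secure_computing(scno) == -1)
		return -1;			/* seccomp denied the syscall */

	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);

	/* the tracer may have rewritten the number via a SET_SYSCALL request */
	scno = current_thread_info()->syscall;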
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Commit 1c2f87c (ARM: 8025/1: Get rid of meminfo) changed find_limits
to use memblock_get_current_limit for calculating the max_low pfn.
nommu targets never actually set a limit on memblock though which
means memblock_get_current_limit will just return the default
value. Set the memblock_limit to be the end of DDR to make sure
bounds are calculated correctly.
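A minimal sketch of the idea, assuming the limit is set from the nommu
memory-init path:

	/* nommu: give memblock an explicit limit so find_limits() gets a real bound */
	memblock_set_current_limit(memblock_end_of_DRAM());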
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The CFI mapping is now perfect so we can expose the top block, read only.
There isn't much to read, though, just the sharpsl_params values.
Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
Reverts commit d26b17edafc45187c30cae134a5e5429d58ad676
ARM: sa1100: collie.c: fall back to jedec_probe flash detection
Unfortunately the jedec_probe detection proved unreliable on the defective unit
used for tests: one of the NOR chips did not respond to the CFI query.
Moreover, that bad device needed extra delays on erase-suspend/resume cycles.
Tested personally on 3 different units and with feedback of two other users.
Signed-off-by: Andrea Adami <andrea.adami@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
The sync_phys variable has been replaced by link time computation in
mcpm_head.S before the code was submitted upstream.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
When a PL310 cache is used on a system that provides hardware
coherency, the outer cache sync operation is useless, and can be
skipped. Moreover, on some systems, it is harmful as it causes
deadlocks between the Marvell coherency mechanism, the Marvell PCIe
controller and the Cortex-A9.
To avoid this, this commit introduces a new Device Tree property
'arm,io-coherent' for the L2 cache controller node, valid only for the
PL310 cache. It identifies the usage of the PL310 cache in an I/O
coherent configuration. Internally, it makes the driver disable the
outer cache sync operation.
Note that technically speaking, a fully coherent system wouldn't
require any of the other .outer_cache operations. However, in
practice, when booting secondary CPUs, these are not yet coherent, and
therefore a set of cache maintenance operations are necessary at this
point. This explains why we keep the other .outer_cache operations and
only ->sync is disabled.
While in theory any write to a PL310 register could cause the
deadlock, in practice, disabling ->sync is sufficient to work around
the deadlock, since the other cache maintenance operations are only
used in very specific situations.
Contrary to previous versions of this patch, this new version does not
simply NULL-ify the ->sync member, because the l2c_init_data
structures are now 'const' and therefore cannot be modified, which is
a good thing. Therefore, this patch introduces a separate
l2c_init_data instance, called of_l2c310_coherent_data.
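A sketch of how the property might be consumed during L2 init
(of_l2c310_coherent_data is named above; the surrounding parse code is an
assumption):

	if (of_property_read_bool(np, "arm,io-coherent"))
		data = &of_l2c310_coherent_data;	/* same ops, but without .outer_cache.sync */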
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Peter Anvin:
"A pile of fixes related to the VDSO, EFI and 32-bit badsys handling.
It turns out that removing the section headers from the VDSO breaks
gdb, so this puts back most of them. A very simple typo broke
rt_sigreturn on some versions of glibc, with obviously disastrous
results. The rest is pretty much fixes for the corresponding fallout.
The EFI fixes fix an arithmetic overflow on 32-bit systems and
quiet some build warnings.
Finally, when invoking an invalid system call number on x86-32, we
bypass a bunch of handling, which can make the audit code oops"
* 'x86/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi-pstore: Fix an overflow on 32-bit builds
x86/vdso: Error out in vdso2c if DT_RELA is present
x86/vdso: Move DISABLE_BRANCH_PROFILING into the vdso makefile
x86_32, signal: Fix vdso rt_sigreturn
x86_32, entry: Do syscall exit work on badsys (CVE-2014-4508)
x86/vdso: Create .build-id links for unstripped vdso files
x86/vdso: Remove some redundant in-memory section headers
x86/vdso: Improve the fake section headers
x86/vdso2c: Use better macros for ELF bitness
x86/vdso: Discard the __bug_table section
efi: Fix compiler warnings (unused, const, type)
|
|
Pull MIPS fixes from Ralf Baechle:
"This is dominated by a large number of changes necessary for the MIPS
BPF code. Aside from that there are
- a fix for the MSC system controller support code.
- a Turbochannel fix.
- a recordmcount fix that's MIPS-specific.
- barrier fixes to smp-cps / pm-cps after unrelated changes elsewhere
in the kernel.
- revert support for MSA registers in the signal frames. The
reverted patch did modify the signal stack frame which of course is
unacceptable.
- fix math-emu build breakage with older compilers.
- some related cleanup.
- fix Lasat build error if CONFIG_CRC32 isn't set to y by the user"
* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (27 commits)
MIPS: Lasat: Fix build error if CRC32 is not enabled.
TC: Handle device_register() errors.
MIPS: MSC: Prevent out-of-bounds writes to MIPS SC ioremap'd region
MIPS: bpf: Fix stack space allocation for BPF memwords on MIPS64
MIPS: BPF: Use 32 or 64-bit load instruction to load an address to register
MIPS: bpf: Fix PKT_TYPE case for big-endian cores
MIPS: BPF: Prevent kernel fall over for >=32bit shifts
MIPS: bpf: Drop update_on_xread and always initialize the X register
MIPS: bpf: Fix is_range() semantics
MIPS: bpf: Use pr_debug instead of pr_warn for unhandled opcodes
MIPS: bpf: Fix return values for VLAN_TAG_PRESENT case
MIPS: bpf: Use correct mask for VLAN_TAG case
MIPS: bpf: Fix branch conditional for BPF_J{GT/GE} cases
MIPS: bpf: Add SEEN_SKB to flags when looking for the PKT_TYPE
MIPS: bpf: Use 'andi' instead of 'and' for the VLAN cases
MIPS: bpf: Return error code if the offset is a negative number
MIPS: bpf: Use the LO register to get division's quotient
MIPS: mm: uasm: Fix lh micro-assembler instruction
MIPS: uasm: Add SLT uasm instruction
MIPS: uasm: Add s3s1s2 instruction builder
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fixes from Vineet Gupta:
"Some SMP changes, a ptrace request for NPTL debugging, bunch of build
breakages/warnings"
* tag 'arc-fixes-for-3.16' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: [SMP] Enable icache coherency
ARC: [SMP] Fix IPI IRQ registration
ARC: Implement ptrace(PTRACE_GET_THREAD_AREA)
ARC: optimize kernel bss clearing in early boot code
ARC: Fix build breakage for !CONFIG_ARC_DW2_UNWIND
ARC: fix build warning in devtree
ARC: remove checks for CONFIG_ARC_MMU_V4
|
|
Kconfig doesn't select CRC32 so it's possible to build a Lasat kernel
without CONFIG_CRC32 resulting in a build error:
LD vmlinux
arch/mips/built-in.o: In function `lasat_init_board_info':
(.text+0x22c): undefined reference to `crc32_le'
arch/mips/built-in.o: In function `lasat_write_eeprom_info':
(.text+0x7fc): undefined reference to `crc32_le'
make: *** [vmlinux] Error 1
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Previously, the lower limit for the MIPS SC initialization loop was
set incorrectly, allowing one extra iteration and leading to writes
beyond the MSC ioremap'd space. More precisely, the value of 'imp'
in the last iteration moved beyond the msc_irqmap_t boundaries, and
as a result the 'n' variable was loaded with an incorrect value. This
value was later used to calculate the offset into MSC01_IC_SUP, which
led to random crashes like the following one:
CPU 0 Unable to handle kernel paging request at virtual address e75c0200,
epc == 8058dba4, ra == 8058db90
[...]
Call Trace:
[<8058dba4>] init_msc_irqs+0x104/0x154
[<8058b5bc>] arch_init_irq+0xd8/0x154
[<805897b0>] start_kernel+0x220/0x36c
Kernel panic - not syncing: Attempted to kill the idle task!
This patch fixes the problem.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reviewed-by: James Hogan <james.hogan@imgtec.com>
Cc: stable@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7118/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When allocating stack space for BPF memwords we need to use the
appropriate 32 or 64-bit instruction to avoid losing the top 32 bits
of the stack pointer.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7135/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
When loading a pointer to a register we need to use the appropriate
32 or 64-bit instruction to preserve the pointer's top 32 bits.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7180/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The skb->pkt_type field is defined as follows:
u8 pkt_type:3,
fclone:2,
ipvs_property:1,
peeked:1,
nf_trace:1
resulting in the following layout on big-endian systems:
[pkt_type][fclone][ipvs_property][peeked][nf_trace]
^                                                  ^
|                                                  |
LSB                                              MSB
As a result, the existing code did not work because it was trying to
match pkt_type == 7 whereas in reality it is 7<<5 on big-endian
systems.
This has been fixed in the interpreter in
0dcceabb0c1bf2d4c12a748df9933fad303072a7
"net: filter: fix SKF_AD_PKTTYPE extension on big-endian"
The fix is to look for 7<<5 on big-endian systems for the pkt_type
field, and shift by 5 so the packet type will be at the lower 3 bits
of the A register.
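A sketch of the endian-dependent mask this implies (the PKT_TYPE_MAX name is
illustrative, not taken from the text):

	#ifdef __BIG_ENDIAN_BITFIELD
	/* pkt_type occupies the top 3 bits of the byte on big-endian */
	#define PKT_TYPE_MAX	(7 << 5)
	#else
	#define PKT_TYPE_MAX	7
	#endif

On big-endian the JIT then shifts the masked value right by 5 so the packet
type ends up in the low 3 bits of A, as described above.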
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7132/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Remove the BUG_ON() when the shift immediate is >= 32 to avoid kernel crashes
due to malicious user input. If the shift immediate is >= 32,
we simply load the destination register with 0. Since only
32-bit instructions are used by the JIT, this does the
correct thing even on MIPS64.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7179/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Previously, update_on_xread() only set the reset flag if SEEN_X had not
been set already. However, SEEN_X is used to indicate that X is used
as a destination or source register, so there are some cases where X
is only used as a source register and we really need to make sure that it
has been initialized in time. Therefore, drop this function and
always set X to zero if it is used by any of the opcodes.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7133/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
is_range() was meant to check whether a number is within the s16 range
or not. However, its return value was the opposite of what its callers
expected. Fix that by inverting the logic so that the function returns
'true' when the number fits in the s16 range and 'false' when it does not.
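In other words, after the fix the helper should behave like this sketch
(the argument type is an assumption):

	static inline bool is_range(s32 imm)
	{
		/* true when 'imm' fits in a signed 16-bit immediate field */
		return imm >= -32768 && imm <= 32767;
	}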
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Reported-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7131/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
We should prevent spamming the logs during normal execution of bpf-jit.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Suggested-by: Alexei Starovoitov <ast@plumgrid.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7129/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
If VLAN_TAG_PRESENT is not zero, then return 1 as expected by
classic BPF. Otherwise return 0.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7128/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Using VLAN_VID_MASK to get the vlan tag is not correct. Use
~VLAN_PRESENT_MASK instead, and make sure it is a u16 so the top 16 bits
will be removed. This ensures that the emit_andi() code will not
treat this as a big 32-bit unsigned value.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7127/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The sltiu and sltu instructions will set the scratch register
to 1 if A <= X|K, so fix the emitted branch conditional to check
for scratch != zero rather than scratch >= zero, which would complicate
the resulting branch logic given that MIPS does not have BGT or BGE
instructions to compare general-purpose registers directly.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7126/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The SKF_AD_PKTTYPE case uses the skb pointer, so make sure SEEN_SKB is in the
flags so that the pointer will be initialized in time.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7125/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
VLAN_VID_MASK and VLAN_TAG_PRESENT are immediates, so using
'and', which expects three registers, will produce wrong results. Fix
this by using the 'andi' instruction.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7124/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Previously, a negative offset was not checked, leading to failures
caused by trying to load data beyond the skb struct boundaries. Until we
have proper asm helpers in place, it's best if we return ENOTSUPP if K
is negative when trying to JIT the filter, or 0 at runtime if we
do an indirect load where the value of X is unknown at build time.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7123/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Reading from the HI register to get the division result is wrong.
The quotient is placed in the LO register.
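A sketch of the corrected sequence (the emit helper names are illustrative,
not necessarily the exact ones in bpf_jit.c):

	emit_div(r_A, r_X, ctx);	/* LO = A / X, HI = A % X */
	emit_mflo(r_A, ctx);		/* the quotient lives in LO, not HI */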
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Daniel Borkmann <dborkman@redhat.com>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: netdev@vger.kernel.org
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7122/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Commit d6b3314b49e12e8c349deb4ca28e7028db00728f "MIPS: uasm: Add lh uam
instruction" added the 'lh' micro-assembler instruction but it used the
'lw' opcode for it. Fix it by using the correct 'lh' opcode.
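The fix amounts to correcting the opcode in the uasm opcode table; roughly
(the entry format and field macros are approximate):

	{ insn_lh, M(lh_op, 0, 0, 0, 0, 0), RS | RT | SIMM },	/* was lw_op */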
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7121/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
It will be used later on by the BPF JIT.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: Markos Chandras <markos.chandras@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/7120/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
It will be used later on by the SLT instruction.
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/7119/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
mips: allmodconfig fails in 3.16-rc1 with lots of undefined symbols.
arch/mips/net/bpf_jit.c: In function 'is_load_to_a':
arch/mips/net/bpf_jit.c:559:7: error: 'BPF_S_LD_W_LEN' undeclared (first use in this function)
arch/mips/net/bpf_jit.c:559:7: note: each undeclared identifier is reported only once for each function it appears in
arch/mips/net/bpf_jit.c:560:7: error: 'BPF_S_LD_W_ABS' undeclared (first use in this function)
[...]
The reason behind this is that 3480593131e0 ("net: filter: get rid of
BPF_S_* enum") was routed via net-next tree, that takes all BPF-related
changes, at a time where MIPS BPF JIT was not part of net-next, while
c6610de353da ("MIPS: net: Add BPF JIT") was routed via mips arch tree
and went into mainline within the same merge window. Thus, fix it up by
converting BPF_S_* in a similar fashion as in 3480593131e0 for MIPS.
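For illustration, the conversion replaces the removed BPF_S_* values with the
raw opcode classes; a sketch of the affected is_load_to_a() check (not the
full list of cases):

	switch (code) {
	case BPF_LD | BPF_W | BPF_LEN:
	case BPF_LD | BPF_W | BPF_ABS:
	case BPF_LD | BPF_H | BPF_ABS:
	case BPF_LD | BPF_B | BPF_ABS:
		/* the first instruction writes A, so the prologue need not zero it */
		return true;
	default:
		return false;
	}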
Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Alexei Starovoitov <ast@plumgrid.com>
Cc: Markos Chandras <markos.chandras@imgtec.com>
Cc: linux-kernel@vger.kernel.org <linux-kernel@vger.kernel.org>
Cc: Linux MIPS Mailing List <linux-mips@linux-mips.org>
Patchwork: https://patchwork.linux-mips.org/patch/7099/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
This reverts commit eec43a224cf1 "MIPS: Save/restore MSA context around
signals" and the MSA parts of ca750649e08c "MIPS: kernel: signal:
Prevent save/restore FPU context in user memory" (the restore path of
which appears incorrect anyway...).
The reverted patch took care not to break compatibility with userland
users of struct sigcontext, but inadvertently changed the offset of the
uc_sigmask field of struct ucontext. Thus Linux v3.15 breaks the
userland ABI. The MSA context will need to be saved via some other
opt-in mechanism, but for now revert the change to reduce the fallout.
This will have minimal impact upon use of MSA since the only supported
CPU which includes it (the P5600) is 32-bit and therefore requires that
the experimental CONFIG_MIPS_O32_FP64_SUPPORT Kconfig option be selected
before the kernel will set FR=1 for a task, a requirement for MSA use.
Thus the users of MSA are limited to known small groups of people & this
patch won't be breaking any previously working MSA-using userland
outside of experimental settings.
[ralf@linux-mips.org: Fixed rejects.]
Cc: stable@vger.kernel.org
Reported-by: Joseph S. Myers <joseph@codesourcery.com>
Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/7107/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
The fix in the preceding commit did exactly the same thing in two
places, showing that some code cleanup was due.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Instruction caches are not snooped, hence not coherent, in SMP setups, which
means the kernel has to issue cross-core calls to keep them in sync.
The leaf routine __ic_line_inv_vaddr() now does cross-core calls.
__sync_icache_dcache() is affected by this:
* the local dcache line is flushed ahead of the remote icache inv requests
* interrupts can't be disabled any more, since
__ic_line_inv_vaddr()->on_each_cpu() can deadlock.
| WARNING: CPU: 0 PID: 1 at kernel/smp.c:374
| smp_call_function_many+0x25a/0x2c4()
|
| init_kprobes+0x90/0xc8
| register_kprobe+0x1d6/0x510
| __sync_icache_dcache+0x28/0x80
|
| DISABLE IRQ
|
| __ic_line_inv_vaddr
| on_each_cpu
| smp_call_function_many+0x25a/0x2c4 --> WARN
| __ic_line_inv_vaddr_local
| __dc_line_op
* TODO: Needs to use mask of relevant CPUs to avoid broadcasting
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Handle it just like the timer IRQ. The current request_percpu_irq() call would
fail on non-boot CPUs and thus the IRQ will remain unmasked on those CPUs.
[vgupta: fix changelog]
Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
This patch adds an implementation of the GET_THREAD_AREA ptrace request type. This
is required by GDB to debug NPTL applications.
Signed-off-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Use the ARC zero-overhead loop (ZOL), which reduces the total number of instructions by half.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|
|
Fixes: ec7ac6afd07b (ARC: switch to generic ENTRY/END assembler annotations)
Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
|