|
This adds code to the load and store emulation code to byte-swap
the data appropriately when the process being emulated is set to
the opposite endianness to that of the kernel.
This also enables the emulation for the multiple-register loads
and stores (lmw, stmw, lswi, stswi, lswx, stswx) to work for
little-endian. In little-endian mode, the partial word at the
end of a transfer for lsw*/stsw* (when the byte count is not a
multiple of 4) is loaded/stored at the least-significant end of
the register. Additionally, this fixes a bug in the previous
code in that it could call read_mem/write_mem with a byte count
that was not 1, 2, 4 or 8.
Note that this only works correctly on processors with "true"
little-endian mode, such as IBM POWER processors from POWER6 on, not
the so-called "PowerPC" little-endian mode that uses address swizzling
as implemented on the old 32-bit 603, 604, 740/750, 74xx CPUs.
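A minimal sketch of the cross-endian handling (the flag and helper names here are illustrative rather than the exact sstep.c code): after a load has fetched the value, its bytes are reversed when the MSR_LE setting of the emulated context differs from the kernel's endianness; stores do the same reversal on the register image before writing it out.

/* illustrative only: reverse the nb bytes of val for a cross-endian access */
static unsigned long byterev(unsigned long val, int nb)
{
        unsigned long ret = 0;
        int i;

        for (i = 0; i < nb; i++) {
                ret = (ret << 8) | (val & 0xff);
                val >>= 8;
        }
        return ret;
}

/* after read_mem() has fetched 'val' of size 'nb' bytes: */
if (unlikely(cross_endian))        /* regs' MSR_LE != kernel endianness */
        val = byterev(val, nb);
regs->gpr[rd] = val;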
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This adds code to the instruction emulation code to set regs->dar
to the address of any memory access that fails. This address is
not necessarily the same as the effective address of the instruction,
because if the memory access is unaligned, it might cross a page
boundary and fault on the second page.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This adds code to analyse_instr() and emulate_step() to understand the
dcbz (data cache block zero) instruction. The emulate_dcbz() function
is made public so it can be used by the alignment handler in future.
(The apparently unnecessary cropping of the address to 32 bits is
there because it will be needed in that situation.)
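A rough sketch of the emulation, with the block-size and zeroing helpers shown as placeholders:

int emulate_dcbz(unsigned long ea, struct pt_regs *regs)
{
        unsigned long size = l1_dcache_bytes();   /* cache block size */

        if (!(regs->msr & MSR_64BIT))
                ea &= 0xffffffffUL;        /* crop the EA in 32-bit mode */
        ea &= ~(size - 1);                 /* align down to the cache block */
        return zero_mem(ea, size, regs);   /* hypothetical zeroing helper */
}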
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This adds lfdp[x] and stfdp[x] to the set of instructions that
analyse_instr() and emulate_step() understand.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This adds code to analyse_instr() and emulate_step() to handle the
vector element loads and stores:
lvebx, lvehx, lvewx, stvebx, stvehx, stvewx.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
At present, the analyse_instr/emulate_step code checks for the
relevant MSR_FP/VEC/VSX bit being set when a FP/VMX/VSX load
or store is decoded, but doesn't recheck the bit before reading or
writing the relevant FP/VMX/VSX register in emulate_step().
Since we don't have preemption disabled, it is possible that we get
preempted between checking the MSR bit and doing the register access.
If that happened, then the registers would have been saved to the
thread_struct for the current process. Accesses to the CPU registers
would then potentially read stale values, or write values that would
never be seen by the user process.
Another way that the registers can become non-live is if a page
fault occurs when accessing user memory, and the page fault code
calls a copy routine that wants to use the VMX or VSX registers.
To fix this, the code for all the FP/VMX/VSX loads gets restructured
so that it forms an image in a local variable of the desired register
contents, then disables preemption, checks the MSR bit and either
sets the CPU register or writes the value to the thread struct.
Similarly, the code for stores disables preemption, checks the MSR bit,
copies either the CPU register or the thread struct to a local variable,
re-enables preemption, and then copies the register image to memory.
If the instruction being emulated is in the kernel, then we must not
use the register values in the thread_struct. In this case, if the
relevant MSR enable bit is not set, then emulate_step refuses to
emulate the instruction.
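The load side of the pattern ends up looking roughly like this (a sketch of the double-precision case; helper names are illustrative):

static int do_fp_load(int rn, unsigned long ea, int nb, struct pt_regs *regs)
{
        union {
                u8 b[8];
                double d;
                unsigned long l;
        } u;
        int err;

        err = copy_mem_in(u.b, ea, nb, regs);   /* may fault, may sleep */
        if (err)
                return err;

        preempt_disable();
        if (regs->msr & MSR_FP)
                put_fpr(rn, &u.d);                 /* registers live on this CPU */
        else
                current->thread.TS_FPR(rn) = u.l;  /* registers in thread_struct */
        preempt_enable();
        return 0;
}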
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
At the moment, emulation of loads and stores of up to 8 bytes to
unaligned addresses on a little-endian system uses a sequence of
single-byte loads or stores to memory. This is rather inefficient,
and the code is hard to follow because it has many ifdefs.
In addition, the Power ISA has requirements on how unaligned accesses
are performed, which are not met by doing all accesses as
sequences of single-byte accesses.
Emulation of VSX loads and stores uses __copy_{to,from}_user,
which means the emulation code has no control on the size of
accesses.
To simplify this, we add new copy_mem_in() and copy_mem_out()
functions for accessing memory. These use a sequence of the largest
possible aligned accesses, up to 8 bytes (or 4 on 32-bit systems),
to copy memory between a local buffer and user memory. We then
rewrite {read,write}_mem_unaligned and the VSX load/store
emulation using these new functions.
These new functions also simplify the code in do_fp_load() and
do_fp_store() for the unaligned cases.
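A simplified sketch of the store-side copy loop (the byte-packing helper is hypothetical; the real code switches on the chunk size):

static int copy_mem_out(u8 *dest, unsigned long ea, int nb, struct pt_regs *regs)
{
        int c;

        for (; nb > 0; nb -= c, dest += c, ea += c) {
                /* largest naturally aligned chunk at ea, capped by the
                 * remaining count (at most 8 bytes, or 4 on 32-bit) */
                c = max_align(ea);
                if (c > nb)
                        c = max_align(nb);
                if (write_mem_aligned(pack_bytes(dest, c), ea, c, regs))
                        return -EFAULT;
        }
        return 0;
}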
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The addpcis instruction puts the sum of the next instruction address
plus a constant into a register. Since the result depends on the
address of the instruction, it will give an incorrect result if it
is single-stepped out of line, which is what the *probes subsystem
will currently do if a probe is placed on an addpcis instruction.
This fixes the problem by adding emulation of it to analyse_instr().
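The emulation boils down to something like this sketch, under primary opcode 19 (the reassembly of the scattered d0/d1/d2 immediate fields is elided; imm stands for the sign-extended constant):

if (((instr >> 1) & 0x1f) == 2) {               /* addpcis */
        op->reg = rd;                           /* destination GPR */
        op->val = regs->nip + (imm << 16) + 4;  /* NIA = nip + 4 */
        goto compute_done;
}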
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The architecture shows the least-significant bit of the instruction
word as reserved for the popcnt[bwd], prty[wd] and bpermd
instructions, that is, these instructions never update CR0.
Therefore this changes the emulation of these instructions to
skip the CR0 update.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The case added for the isel instruction was added inside a switch
statement which uses the 10-bit minor opcode field in the 0x7fe
bits of the instruction word. However, for the isel instruction,
the minor opcode field is only the 0x3e bits, and the 0x7c0 bits
are used for the "BC" field, which indicates which CR bit to use
to select the result.
Therefore, for the isel emulation to work correctly when BC != 0,
we need to match on ((instr >> 1) & 0x1f) == 15. To do this, we
pull the isel case out of the switch statement and put it in an
if statement of its own.
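The resulting check, outside the switch, looks roughly like this (a sketch; ra, rb and rd are the usual decoded register fields):

/* isel: primary opcode 31, minor opcode occupies only bits 5..1 */
if (opcode == 31 && ((instr >> 1) & 0x1f) == 15) {
        int bc = (instr >> 6) & 0x1f;           /* CR bit selector */

        op->reg = rd;
        op->val = ((regs->ccr >> (31 - bc)) & 1) ?
                  (ra ? regs->gpr[ra] : 0) : regs->gpr[rb];
        goto compute_done;
}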
Fixes: e27f71e5ff3c ("powerpc/lib/sstep: Add isel instruction emulation")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
When a 64-bit processor is executing in 32-bit mode, the update forms
of load and store instructions are required by the architecture to
write the full 64-bit effective address into the RA register, though
only the bottom 32 bits are used to address memory. Currently,
the instruction emulation code writes the truncated address to the
RA register. This fixes it by keeping the full 64-bit EA in the
instruction_op structure, truncating the address in emulate_step()
where it is used to address memory, rather than in the address
computations in analyse_instr().
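Roughly, the split looks like this (a sketch using the existing sstep.c helpers; size stands for the access size):

/* analyse_instr(): record the untruncated EA */
op->ea = dform_ea(instr, regs);         /* full 64-bit value; this is what
                                           gets written back to RA */

/* emulate_step(): truncate only the address used to touch memory */
ea = truncate_if_32bit(regs->msr, op->ea);
err = read_mem(&regs->gpr[rd], ea, size, regs);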
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This extends the instruction emulation infrastructure in sstep.c to
handle all the load and store instructions defined in the Power ISA
v3.0, except for the atomic memory operations, ldmx (which was never
implemented), lfdp/stfdp, and the vector element load/stores.
The instructions added are:
Integer loads and stores: lbarx, lharx, lqarx, stbcx., sthcx., stqcx.,
lq, stq.
VSX loads and stores: lxsiwzx, lxsiwax, stxsiwx, lxvx, lxvl, lxvll,
lxvdsx, lxvwsx, stxvx, stxvl, stxvll, lxsspx, lxsdx, stxsspx, stxsdx,
lxvw4x, lxsibzx, lxvh8x, lxsihzx, lxvb16x, stxvw4x, stxsibx, stxvh8x,
stxsihx, stxvb16x, lxsd, lxssp, lxv, stxsd, stxssp, stxv.
These instructions are handled both in the analyse_instr phase and in
the emulate_step phase.
The code for lxvd2ux and stxvd2ux has been taken out, as those
instructions were never implemented in any processor and have been
taken out of the architecture, and their opcodes have been reused for
other instructions in POWER9 (lxvb16x and stxvb16x).
The emulation for the VSX loads and stores uses helper functions
which don't access registers or memory directly, which can hopefully
be reused by KVM later.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
This removes the checks for the FP/VMX/VSX enable bits in the MSR
from analyse_instr() and adds them to emulate_step() instead.
The reason for this is that we may want to use analyse_instr() in
a situation where the FP/VMX/VSX register values are stored in the
current thread_struct and the FP/VMX/VSX enable bits in the MSR
image in the pt_regs are zero. Since analyse_instr() doesn't make
any changes to register state, it is reasonable for it to indicate
what the effect of an instruction would be even though the relevant
enable bit is off.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
The analyse_instr function currently doesn't just work out what an
instruction does, it also executes those instructions whose effect
is only to update CPU registers that are stored in struct pt_regs.
This is undesirable because optprobes uses analyse_instr to work out
if an instruction could be successfully emulated in future.
This changes analyse_instr so it doesn't modify *regs; instead it
stores information in the instruction_op structure to indicate what
registers (GPRs, CR, XER, LR) would be set and what value they would
be set to. A companion function called emulate_update_regs() can
then use that information to update a pt_regs struct appropriately.
As a minor cleanup, this replaces inline asm using the cntlzw and
cntlzd instructions with calls to __builtin_clz() and __builtin_clzl().
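For example, the cntlzw case becomes roughly the following; the ternary guards against __builtin_clz(0), which is undefined:

case 26:        /* cntlzw */
        val = (unsigned int) regs->gpr[rd];
        op->val = (val ? __builtin_clz(val) : 32);
        goto logical_done;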
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
|
|
bitmap_resize() does not work for file-backed bitmaps.
The buffer_heads are allocated and initialized when
the bitmap is read from the file, but resize doesn't
read from the file, it loads from the internal bitmap.
When it comes time to write the new bitmap, the bh is
non-existent and we crash.
The common case when growing an array involves making the array larger,
and that normally means making the bitmap larger. Doing
that inside the kernel is possible, but would need more code.
It is probably easier to require people who use file-backed
bitmaps to remove them and re-add after a reshape.
So this patch disables the resizing of arrays which have
file-backed bitmaps. This is better than crashing.
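The guard added to bitmap_resize() is essentially the following (the log message is paraphrased):

if (bitmap->storage.file && !init) {
        pr_info("md: cannot resize file-backed bitmap\n");
        return -EINVAL;
}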
Reported-by: Zhilong Liu <zlliu@suse.com>
Fixes: d60b479d177a ("md/bitmap: add bitmap_resize function to allow bitmap resizing.")
Cc: stable@vger.kernel.org (v3.5+).
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
|
|
When mounting to older servers, such as Windows XP (or even Windows 7),
the limited error messages that can be passed back to user space can
get confusing since the default dialect has changed from SMB1 (CIFS) to
the more secure SMB3 dialect. Log additional information when the user chooses
to use the default dialects and when the server does not support the
dialect requested.
Signed-off-by: Steve French <smfrench@gmail.com>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
|
|
David Ahern says:
====================
bpf: Add option to set mark and priority in cgroup sock programs
Add option to set mark and priority in addition to bound device for newly
created sockets. Also, allow the bpf programs to use the get_current_uid_gid
helper, meaning socket mark, priority and device can be set based on the
uid/gid of the running process.
Sample programs are updated to demonstrate the new options.
v3
- no changes to Patches 1 and 2 which Alexei acked in previous versions
- dropped change related to recursive programs in a cgroup
- updated tests per dropped patch
v2
- added flag to control recursive behavior as requested by Alexei
- added comment to sock_filter_func_proto regarding use of
get_current_uid_gid helper
- updated test programs for recursive option
====================
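A minimal sketch of what such a program can now look like (restricted C in the samples/bpf style; the uid, mark, priority and ifindex values are illustrative):

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"        /* samples/bpf helper declarations */

SEC("cgroup/sock")
int sock_prog(struct bpf_sock *sk)
{
        __u32 uid = bpf_get_current_uid_gid();  /* low 32 bits = uid */

        if (uid == 1000) {              /* e.g. only for uid 1000 */
                sk->mark = 0x42;        /* like SO_MARK */
                sk->priority = 6;       /* like SO_PRIORITY */
                sk->bound_dev_if = 2;   /* like SO_BINDTODEVICE, by ifindex */
        }
        return 1;                       /* allow the socket */
}

char _license[] SEC("license") = "GPL";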
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Update cgrp2 bpf sock tests to check that device, mark and priority
can all be set on a socket via bpf programs attached to a cgroup.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add option to dump socket settings. Will be used in the next patch
to verify bpf programs are correctly setting mark, priority and
device based on the cgroup attachment for the program run.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add option to detach programs from a cgroup.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Update sock test to set mark and priority on socket create.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Allow BPF programs run at socket creation time to use the get_current_uid_gid
helper. IPv4 and IPv6 sockets are created in a process context, so there is
always a valid uid/gid.
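The hook-up in the socket-create program's helper table is roughly:

static const struct bpf_func_proto *
sock_filter_func_proto(enum bpf_func_id func_id)
{
        switch (func_id) {
        /* sockets are created in process context, so current's
         * uid/gid is always meaningful here */
        case BPF_FUNC_get_current_uid_gid:
                return &bpf_get_current_uid_gid_proto;
        default:
                return bpf_base_func_proto(func_id);
        }
}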
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add socket mark and priority to fields that can be set by
an ebpf program when a socket is created.
Signed-off-by: David Ahern <dsahern@gmail.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
One of the two scsi-mq functions that requeue a request, scsi_io_completion(),
unprepares the request before requeueing it, but the other,
__scsi_queue_insert(), does not. Make sure that a request is unprepared
before it is requeued.
Fixes: commit d285203cf647 ("scsi: add support for a blk-mq based I/O path.")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Make these two member variables available in debugfs such that their
value can be verified by kernel developers. An example of the new
output:
ffff8804a513d480 {.op=READ, .cmd_flags=META|PRIO, .rq_flags=MQ_INFLIGHT|DONTPREP|IO_STAT|STATS, .atomic_flags=STARTED, .tag=17, .internal_tag=-1, .cmd=Read(10) 28 00 08 81 32 38 00 00 08 00, .retries=0, allocated 0.010 s ago}
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
Requests are unprepared and reprepared when being requeued. Avoid that
requeuing resets .jiffies_at_alloc and .retries by initializing these
two member variables from inside scsi_initialize_rq() and by preserving
both member variables when preparing a request. This patch affects the
requeuing behavior of both the legacy scsi and the scsi-mq code paths.
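After this change scsi_initialize_rq() looks roughly like:

void scsi_initialize_rq(struct request *rq)
{
        struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);

        scsi_req_init(&cmd->req);
        cmd->jiffies_at_alloc = jiffies;        /* set once, at allocation */
        cmd->retries = 0;                       /* preserved across requeues */
}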
Reported-by: Brian King <brking@linux.vnet.ibm.com>
References: https://lkml.org/lkml/2017/8/18/923 ("Re: [BUG][bisected 270065e] linux-next fails to boot on powerpc")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Brian King <brking@linux.vnet.ibm.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
If a pass-through request is submitted then blk_get_request()
initializes that request by calling scsi_initialize_rq(). Also call this
function for filesystem requests. Introduce CMD_INITIALIZED to keep
track of whether or not a request has already been initialized.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Brian King <brking@linux.vnet.ibm.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
|
|
git://git.samba.org/sfrench/cifs-2.6
Pull cifs fixes from Steve French:
"Two cifs bug fixes for stable"
* tag 'cifs-fixes-for-4.13-rc7-and-stable' of git://git.samba.org/sfrench/cifs-2.6:
CIFS: remove endian related sparse warning
CIFS: Fix maximum SMB2 header size
|
|
strrchr() can return NULL, so the following strlen() on the NULL pointer
can cause a NULL dereference. Add a check that the postfix string is not
NULL before calling strlen().
Detected by CoverityScan, CID#1452039 ("Dereference null return")
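The pattern being fixed is essentially (variable names are illustrative):

postfix = strrchr(name, '.');
if (postfix)            /* strrchr() returns NULL when there is no match */
        len = strlen(postfix);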
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Acked-by: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add a check for an error returned by the divider value calculation to avoid
writing an error code into the hardware register.
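The added check is along these lines (based on the generic clk-divider code):

value = divider_get_val(rate, parent_rate, divider->table,
                        divider->width, divider->flags);
if (value < 0)
        return value;   /* don't program an error code into the register */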
Signed-off-by: Alex Frid <afrid@nvidia.com>
Reviewed-by: Peter De Schrijver <pdeschrijver@nvidia.com>
Reviewed-by: Jon Mayo <jmayo@nvidia.com>
Fixes: bca9690b9426 ("clk: divider: Make generic for usage elsewhere")
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Pull block fixes from Jens Axboe:
"Unfortunately a few issues that warrant sending another pull request,
even if I had hoped to avoid it. This contains:
- A fix for multiqueue xen-blkback, on tear down / disconnect.
- A few fixups for NVMe, including a wrong bit definition, fix for
host memory buffers, and an nvme rdma page size fix"
* 'for-linus' of git://git.kernel.dk/linux-block:
nvme: fix the definition of the doorbell buffer config support bit
nvme-pci: use dma memory for the host memory buffer descriptors
nvme-rdma: default MR page size to 4k
xen-blkback: stop blkback thread of every queue in xen_blkif_disconnect
|
|
Add a clock for the video input subsystem (EXIV) on
UniPhier LD11/LD20 SoCs.
Signed-off-by: Katsuhiro Suzuki <suzuki.katsuhiro@socionext.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add clocks for the audio subsystem (AIO) and the SoC-internal audio codec
(EVEA) on UniPhier LD11/LD20 SoCs.
Signed-off-by: Katsuhiro Suzuki <suzuki.katsuhiro@socionext.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper fixes from Mike Snitzer:
- A couple fixes for bugs introduced as part of the blk_status_t block
layer changes during the 4.13 merge window
- A printk throttling fix to use discrete rate limiting state for each
DM log level
- A stable@ fix for DM multipath that delays request requeueing to
avoid CPU lockup if/when the request queue is "dying"
* tag 'for-4.13/dm-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm mpath: do not lock up a CPU with requeuing activity
dm: fix printk() rate limiting code
dm mpath: retry BLK_STS_RESOURCE errors
dm: fix the second dec_pending() argument in __split_and_process_bio()
|
|
This patch enables clocks for STM32H743 boards.
Signed-off-by: Gabriel Fernandez <gabriel.fernandez@st.com>
for MFD changes:
Acked-by: Lee Jones <lee.jones@linaro.org>
for DT-Bindings:
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
This patch exposes clk_gate_ops::is_enabled as a function that can be
called directly and assigned in other clock drivers, so we don't need
wrapper functions that do nothing besides forward the call.
Signed-off-by: Gabriel Fernandez <gabriel.fernandez@st.com>
Suggested-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
We need to export clk_gate_is_enabled() from the clk framework, so to
avoid a compilation issue we have to rename the local clk_gate_is_enabled()
in the NXP LPC32xx clock driver.
All gate ops are renamed with an 'lpc32xx_' prefix:
lpc32xx_clk_gate_enable(),
lpc32xx_clk_gate_disable(),
lpc32xx_clk_gate_is_enabled().
Signed-off-by: Gabriel Fernandez <gabriel.fernandez@st.com>
Acked-by: Vladimir Zapolskiy <vz@mleia.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Add basic clock data for Socionext's new SoC PXs3.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
The old code uses tcxo (19.2MHz) as the watchdog clock, but the watchdog
actually uses a 32K clock; as a result the watchdog timeout cannot be set
correctly and it takes a long time to reset the SoC.
So this patch uses 'ref32k' as the clock source for the watchdog.
Fixes: 72ea48610d43 ("clk: hi6220: Clock driver support for Hisilicon hi6220 SoC")
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
|
|
Don't populate the array seq on the stack; instead make it static.
This makes the object code smaller by over 1100 bytes:
Before:
text data bss dec hex filename
6152 1216 64 7432 1d08 drivers/input/mouse/byd.o
After:
text data bss dec hex filename
4974 1280 64 6318 18ae drivers/input/mouse/byd.o
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
There are three concepts representing backlight in the int3406_thermal driver:
1. the raw brightness value from the native graphics driver.
2. the percentage numbers from the ACPI _BCL control method.
3. the consecutive numbers representing cooling states.
The int3406_thermal driver
1. uses the values from DDDL/DDPC as the lower/upper limit, which is
consistent with the ACPI _BCL control method.
2. reads the current and maximum brightness from the native graphics driver.
3. exposes them to the thermal sysfs I/F.
This patch fixes the code that converts between the raw brightness value
and the cooling state, which previously produced bogus values in the
thermal sysfs I/F.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
|
|
Merge more fixes from Andrew Morton:
"6 fixes"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
scripts/dtc: fix '%zx' warning
include/linux/compiler.h: don't perform compiletime_assert with -O0
mm, madvise: ensure poisoned pages are removed from per-cpu lists
mm, uprobes: fix multiple free of ->uprobes_state.xol_area
kernel/kthread.c: kthread_worker: don't hog the cpu
mm,page_alloc: don't call __node_reclaim() with oom_lock held.
|
|
Merge mmu_notifier fixes from Jérôme Glisse:
"The invalidate_page callback suffered from 2 pitfalls. First it used
to happen after page table lock was release and thus a new page might
have been setup for the virtual address before the call to
invalidate_page().
This is in a weird way fixed by commit c7ab0d2fdc84 ("mm: convert
try_to_unmap_one() to use page_vma_mapped_walk()") which moved the
callback under the page table lock. This also broke several existing
users of the mmu_notifier API that assumed they could sleep inside this
callback.
The second pitfall was that invalidate_page was the only callback that did
not take an address range for the invalidation; it was given a single
address and a page instead. Many of the callback implementers assumed this
could never be a THP and thus failed to invalidate the appropriate range
for THP pages.
By killing this callback we unify the mmu_notifier callback API to
always take a virtual address range as input.
There are now two clear APIs (I am not mentioning the young-ness API,
which is seldom used):
- the invalidate_range_start()/end() callbacks (which allow you to sleep)
- invalidate_range(), where you cannot sleep but which happens right after
the page table update, under the page table lock
Note that a lot of existing users look broken with respect to
range_start/range_end. Many users only have a range_start() callback, but
there is nothing preventing them from undoing what was invalidated in
their range_start() callback after it returns but before any CPU page
table update takes place.
The code pattern used in kvm or umem odp is an example of how to properly
avoid such a race. In a nutshell, use some kind of sequence number and an
active range invalidation counter to block anything that might undo what
the range_start() callback did.
If you do not care about keeping fully in sync with the CPU page table
(i.e. you can live with the CPU page table pointing to a new, different
page for a given virtual address) then you can take a reference on the
pages inside the range_start callback and drop it in range_end, or when
your driver is done with those pages.
The last alternative is to use invalidate_range() if you can do the
invalidation without sleeping, as the invalidate_range() callback happens
under the CPU page table spinlock right after the page table is updated.
The first two patches convert existing mmu_notifier_invalidate_page()
calls to mmu_notifier_invalidate_range() and bracket those calls with
calls to mmu_notifier_invalidate_range_start()/end().
The next ten patches remove the existing invalidate_page() callbacks as
they can no longer happen.
Finally the last patch removes the invalidate_page() callback completely
so it can RIP.
Changes since v1:
- remove more dead code in kvm (no testing impact)
- more accurate end address computation (patch 2) in page_mkclean_one
and try_to_unmap_one
- added tested-by/reviewed-by gotten so far"
* emailed patches from Jérôme Glisse <jglisse@redhat.com>:
mm/mmu_notifier: kill invalidate_page
KVM: update to new mmu_notifier semantic v2
xen/gntdev: update to new mmu_notifier semantic
sgi-gru: update to new mmu_notifier semantic
misc/mic/scif: update to new mmu_notifier semantic
iommu/intel: update to new mmu_notifier semantic
iommu/amd: update to new mmu_notifier semantic
IB/hfi1: update to new mmu_notifier semantic
IB/umem: update to new mmu_notifier semantic
drm/amdgpu: update to new mmu_notifier semantic
powerpc/powernv: update to new mmu_notifier semantic
mm/rmap: update to new mmu_notifier semantic v2
dax: update to new mmu_notifier semantic
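The bracketing pattern that the first two patches introduce looks roughly like this sketch (start/end are whatever range the call site invalidates):

mmu_notifier_invalidate_range_start(mm, start, end);
/* ... update the CPU page tables for [start, end) ... */
mmu_notifier_invalidate_range_end(mm, start, end);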
|
|
We do ctx = kzalloc(sizeof(*ctx), GFP_KERNEL) and then later on call
anon_inode_getfd(), but if that fails we don't free ctx, so that
memory gets leaked. To fix it, this adds kfree(ctx) in the failure
path.
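The shape of the fix (names abridged):

ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
if (!ctx)
        return -ENOMEM;
/* ... set up ctx ... */
ret = anon_inode_getfd(name, fops, ctx, flags);
if (ret < 0) {
        kfree(ctx);     /* previously leaked on this error path */
        return ret;
}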
Signed-off-by: nixiaoming <nixiaoming@huawei.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
|
|
The common code needs to export the probe and remove symbols in order
for the SMEM and RPM drivers to access them when compiled as a module.
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
|
|
jfs had previously avoided the use of MAX_LFS_FILESIZE because it hadn't
accounted for the whole 32-bit index range on 32-bit systems. That has
been fixed by commit 0cc3b0ec23ce ("Clarify (and fix) MAX_LFS_FILESIZE
macros"), so we can simplify the code now.
Suggested by Andreas Dilger.
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Cc: jfs-discussion@lists.sourceforge.net
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
When an ftrace filter contains a module function and that module is removed,
the filter still holds that function's address and treats it as enabled.
This can cause interesting side effects: nothing dangerous, but unwanted
functions can be traced because of it.
# cd /sys/kernel/tracing
# echo ':mod:snd_seq' > set_ftrace_filter
# cat set_ftrace_filter
snd_use_lock_sync_helper [snd_seq]
check_event_type_and_length [snd_seq]
snd_seq_ioctl_pversion [snd_seq]
snd_seq_ioctl_client_id [snd_seq]
snd_seq_ioctl_get_queue_tempo [snd_seq]
update_timestamp_of_queue [snd_seq]
snd_seq_ioctl_get_queue_status [snd_seq]
snd_seq_set_queue_tempo [snd_seq]
snd_seq_ioctl_set_queue_tempo [snd_seq]
snd_seq_ioctl_get_queue_timer [snd_seq]
seq_free_client1 [snd_seq]
[..]
# rmmod snd_seq
# cat set_ftrace_filter
# modprobe kvm
# cat set_ftrace_filter
kvm_set_cr4 [kvm]
kvm_emulate_hypercall [kvm]
kvm_set_dr [kvm]
This is because removing the snd_seq module while it was being filtered left
the addresses of the snd_seq functions in the hash. When the kvm module was
loaded, some of its functions were loaded at the same addresses that the
snd_seq functions had occupied, so they matched the stale hash entries and
were filtered and traced.
We don't want to clear the hash completely: if a module's functions were the
only entries in the filter, removing that module would leave an empty filter,
and an empty filter means trace all functions. Instead, just set the hash
entry's ip address to zero; then it will never match any function.
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
|
|
We have lots of dead defines and macros in drivers; let's offer users a way
to detect and eventually remove them.
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
|
|
Kbuild conventionally uses the $(shell cd ... && /bin/pwd) idiom to get
the absolute path of a directory because GNU Make 3.80, the minimal
supported version at that time, did not support $(abspath ...) or
$(realpath ...).
Commit 37d69ee30808 ("docs: bump minimal GNU Make version to 3.81")
dropped the GNU Make 3.80 support, so we are now allowed to use those
make-builtin helpers.
This conversion will provide better portability without relying on
the pwd command or its location /bin/pwd.
I am intentionally using $(realpath ...) instead of $(abspath ...) in
some places. The difference between the two is that $(realpath ...)
returns an empty string if the given path does not exist. It is
convenient in places where we need to error-out if the makefile fails
to create an output directory.
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Thierry Reding <treding@nvidia.com>
|