|
<mirq-linux@rere.qmqm.pl>:
Three simple patches to aid in debugging regulators.
Michał Mirosław (3):
regulator: print state at boot
regulator: print symbolic errors in kernel messages
regulator: resolve supply after creating regulator
drivers/regulator/core.c | 124 ++++++++++++++++++++++-----------------
1 file changed, 69 insertions(+), 55 deletions(-)
--
2.20.1
|
|
Add DT-binding document for Richtek RTMV20
Signed-off-by: ChiYuan Huang <cy_huang@richtek.com>
Link: https://lore.kernel.org/r/1601277584-5526-2-git-send-email-u0084500@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Add support for Richtek RTMV20 load switch regulator.
Signed-off-by: ChiYuan Huang <cy_huang@richtek.com>
Link: https://lore.kernel.org/r/1601277584-5526-1-git-send-email-u0084500@gmail.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Pull NFS client bugfixes from Trond Myklebust:
"Highlights include:
- NFSv4.2: copy_file_range needs to invalidate caches on success
- NFSv4.2: Fix security label length not being reset
- pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly
on read
- pNFS/flexfiles: Fix signed/unsigned type issues with mirror
indices"
* tag 'nfs-for-5.9-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
pNFS/flexfiles: Be consistent about mirror index types
pNFS/flexfiles: Ensure we initialise the mirror bsizes correctly on read
NFSv4.2: fix client's attribute cache management for copy_file_range
nfs: Fix security label length not being reset
|
|
When creating a new regulator, its supply cannot create the sysfs link
because the device is not yet published. Remove the early supply
resolution, since it will be done later anyway. This makes the following
error disappear, and the symlinks get created instead.
DCDC_REG1: supplied by VSYS
VSYS: could not add device link regulator.3 err -2
Note: It doesn't fix the problem for bypassed regulators, though.
Fixes: 45389c47526d ("regulator: core: Add early supply resolution for regulators")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Link: https://lore.kernel.org/r/ba09e0a8617ffeeb25cb4affffe6f3149319cef8.1601155770.git.mirq-linux@rere.qmqm.pl
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Change all error-printing messages to include the error name via %pe
instead of a numeric error or nothing.
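As an illustration (not a hunk from the patch; the call site and variable
names are made up), %pe takes an ERR_PTR() and, with CONFIG_SYMBOLIC_ERRNAME
enabled, prints a symbolic name such as -EPROBE_DEFER instead of a raw number:

  ret = regulator_enable(reg);
  if (ret)
          dev_err(dev, "failed to enable regulator: %pe\n", ERR_PTR(ret));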
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Link: https://lore.kernel.org/r/1dcf25f39188882eb56918a9aa281ab17b792aa5.1601155770.git.mirq-linux@rere.qmqm.pl
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Show the initial state of the regulator when debugging.
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Link: https://lore.kernel.org/r/53c4f3d394d68f0989174f89e3b0882cebbbd787.1601155770.git.mirq-linux@rere.qmqm.pl
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Add required PMU interrupt operations for NMIs. Request interrupt lines as
NMIs when possible, otherwise fall back to normal interrupts.
NMIs are only supported on the arm64 architecture with a GICv3 irqchip.
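A rough sketch of the request-time fallback (the handler and per-cpu cookie
names here are illustrative, not the driver's actual symbols):

  err = request_percpu_nmi(irq, handle_irq, "arm-pmu", &cpu_armpmu);
  if (err) {
          /* pseudo-NMI not available: fall back to a normal per-cpu IRQ */
          err = request_percpu_irq(irq, handle_irq, "arm-pmu", &cpu_armpmu);
  }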
[Alexandru E.: Added that NMIs only work on arm64 + GICv3, print message
when PMU is using NMIs]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-8-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently the PMU interrupt can either be a normal irq or a percpu irq.
Supporting NMI will introduce two cases for each existing one. It becomes
a mess of 'if's when managing the interrupt.
Define sets of callbacks for operations commonly done on the interrupt. The
appropriate set of callbacks is selected at interrupt request time and
simplifies interrupt enabling/disabling and freeing.
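The rough shape of the idea, as a sketch (member names are illustrative):
one ops structure per interrupt flavour (IRQ, per-cpu IRQ, NMI, per-cpu
NMI), chosen once when the line is requested:

  struct pmu_irq_ops {
          void (*enable_pmuirq)(unsigned int irq);
          void (*disable_pmuirq)(unsigned int irq);
          void (*free_pmuirq)(unsigned int irq, int cpu, void __percpu *devid);
  };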
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-7-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
kvm_vcpu_kick() is not NMI safe. When the overflow handler is called from
NMI context, defer waking the vcpu to an irq_work queue.
A vcpu can be freed while it's not running by kvm_destroy_vm(). Prevent
running the irq_work for a non-existent vcpu by calling irq_work_sync() on
the PMU destroy path.
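A minimal sketch of the deferral, assuming an irq_work member
('overflow_work') on the vcpu's PMU state whose callback performs the kick:

  if (in_nmi())
          irq_work_queue(&vcpu->arch.pmu.overflow_work);
  else
          kvm_vcpu_kick(vcpu);

  /* on the PMU destroy path */
  irq_work_sync(&vcpu->arch.pmu.overflow_work);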
[Alexandru E.: Added irq_work_sync()]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Suzuki K Pouloze <suzuki.poulose@arm.com>
Cc: kvm@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu
Link: https://lore.kernel.org/r/20200924110706.254996-6-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
When handling events, armv8pmu_handle_irq() calls perf_event_overflow(),
and subsequently calls irq_work_run() to handle any work queued by
perf_event_overflow(). As perf_event_overflow() raises IPI_IRQ_WORK when
queuing the work, this isn't strictly necessary and the work could be
handled as part of the IPI_IRQ_WORK handler.
In the common case the IPI handler will run immediately after the PMU IRQ
handler, and where the PE is heavily loaded with interrupts other handlers
may run first, widening the window where some counters are disabled.
In practice this window is unlikely to be a significant issue, and removing
the call to irq_work_run() would make the PMU IRQ handler NMI safe in
addition to making it simpler, so let's do that.
[Alexandru E.: Reworded commit message]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-5-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The PMU is disabled and enabled, and the counters are programmed from
contexts where interrupts or preemption is disabled.
The functions to toggle the PMU and to program the PMU counters access the
registers directly and don't access data modified by the interrupt handler.
That, and the fact that they're always called from non-preemptible
contexts, means that we don't need to disable interrupts or use a spinlock.
[Alexandru E.: Explained why locking is not needed, removed WARN_ONs]
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-4-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Currently we access the counter registers and their respective type
registers indirectly. This requires us to write to PMSELR, issue an ISB,
then access the relevant PMXEV* registers.
This is unfortunate, because:
* Under virtualization, accessing one register requires two traps to
the hypervisor, even though we could access the register directly with
a single trap.
* We have to issue an ISB which we could otherwise avoid the cost of.
* When we use NMIs, the NMI handler will have to save/restore the select
register in case the code it preempted was attempting to access a
counter or its type register.
We can avoid these issues by directly accessing the relevant registers.
This patch adds helpers to do so.
In armv8pmu_enable_event() we still need the ISB to prevent the PE from
reordering the write to PMINTENSET_EL1 register. If the interrupt is
enabled before we disable the counter and the new event is configured,
we might get an interrupt triggered by the previously programmed event
overflowing, but which we wrongly attribute to the event that we are
enabling. Execute an ISB after we disable the counter.
In the process, remove the comment that refers to the ARMv7 PMU.
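To contrast the two access patterns, a sketch for counter 0 (the patch
generates one case per counter index rather than hard-coding it):

  /* indirect: select the counter, then go through PMXEVCNTR_EL0 */
  write_sysreg(0, pmselr_el0);
  isb();
  val = read_sysreg(pmxevcntr_el0);

  /* direct: per-counter register, no select register, no ISB */
  val = read_sysreg(pmevcntr0_el0);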
[Julien T.: Don't inline read/write functions to avoid big code-size
increase, remove unused read_pmevtypern function,
fix counter index issue.]
[Alexandru E.: Removed comment, removed trailing semicolons in macros,
added ISB]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-3-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Writes to the PMXEVTYPER_EL0 register are not self-synchronising. In
armv8pmu_enable_event(), the PE can reorder configuring the event type
after we have enabled the counter and the interrupt. This can lead to an
interrupt being asserted because of the previous event type that we were
counting using the same counter, not the one that we've just configured.
The same rationale applies to writes to the PMINTENSET_EL1 register. The PE
can reorder enabling the interrupt at any point in the future after we have
enabled the event.
Prevent both situations from happening by adding an ISB just before we
enable the event counter.
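The resulting ordering in armv8pmu_enable_event() is roughly as follows
(treat this as a sketch using the current helper names, not the exact hunk):

  armv8pmu_disable_event_counter(event);
  armv8pmu_write_event_type(event);
  armv8pmu_enable_event_irq(event);
  isb();                                  /* make the configuration visible ... */
  armv8pmu_enable_event_counter(event);   /* ... before counting starts */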
Fixes: 030896885ade ("arm64: Performance counters support")
Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
Tested-by: Sumit Garg <sumit.garg@linaro.org> (Developerbox)
Cc: Julien Thierry <julien.thierry.kdev@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20200924110706.254996-2-alexandru.elisei@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Initial driver for PMU event counting on the Arm CMN-600 interconnect.
CMN sports an obnoxiously complex distributed PMU system as part of
its debug and trace features, which can do all manner of things like
sampling, cross-triggering and generating CoreSight trace. This driver
covers the PMU functionality, plus the relevant aspects of watchpoints
for simply counting matching flits.
Tested-by: Tsahi Zidenberg <tsahee@amazon.com>
Tested-by: Tuan Phan <tuanphan@os.amperecomputing.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Document the requirements for the CMN-600 DT binding. The internal
topology is almost entirely discoverable by walking a tree of ID
registers, but sadly both the starting point for that walk and the
exact format of those registers are configuration-dependent and not
discoverable from some sane fixed location. Oh well.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
|
|
Julia Lawall <Julia.Lawall@inria.fr>:
These patches replace commas by semicolons. This was done using the
Coccinelle semantic patch (http://coccinelle.lip6.fr/) shown below.
This semantic patch ensures that commas inside for loop headers will not be
transformed. It also doesn't touch macro definitions.
Coccinelle ensures that braces are added as needed when a single-statement
branch turns into a multi-statement one.
This semantic patch has a few false positives, for variable declarations
such as:
LIST_HEAD(x), *y;
The semantic patch could be improved to avoid these, but for the moment
they have been removed manually (2 occurrences).
// <smpl>
@initialize:ocaml@
@@
let infunction p =
(* avoid macros *)
(List.hd p).current_element <> "something_else"
let combined p1 p2 =
(List.hd p1).line_end = (List.hd p2).line ||
(((List.hd p1).line_end < (List.hd p2).line) &&
((List.hd p1).col < (List.hd p2).col))
@bad@
statement S;
declaration d;
position p;
@@
S@p
d
// special cases where newlines are needed (hope for no more than 5)
@@
expression e1,e2;
statement S;
position p != bad.p;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && combined p1 p2 };
@@
- e1@p1,@S@p e2@p2;
+ e1; e2;
@@
expression e1,e2;
statement S;
position p != bad.p;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && combined p1 p2 };
@@
- e1@p1,@S@p e2@p2;
+ e1; e2;
@@
expression e1,e2;
statement S;
position p != bad.p;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && combined p1 p2 };
@@
- e1@p1,@S@p e2@p2;
+ e1; e2;
@@
expression e1,e2;
statement S;
position p != bad.p;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && combined p1 p2 };
@@
- e1@p1,@S@p e2@p2;
+ e1; e2;
@@
expression e1,e2;
statement S;
position p != bad.p;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && combined p1 p2 };
@@
- e1@p1,@S@p e2@p2;
+ e1; e2;
@r@
expression e1,e2;
statement S;
position p != bad.p;
@@
e1 ,@S@p e2;
@@
expression e1,e2;
position p1;
position p2 :
script:ocaml(p1) { infunction p1 && not(combined p1 p2) };
statement S;
position r.p;
@@
e1@p1
-,@S@p
+;
e2@p2
... when any
// </smpl>
---
drivers/acpi/processor_idle.c | 4 +++-
drivers/ata/pata_icside.c | 21 +++++++++++++--------
drivers/base/regmap/regmap-debugfs.c | 2 +-
drivers/bcma/driver_pci_host.c | 4 ++--
drivers/block/drbd/drbd_receiver.c | 6 ++++--
drivers/char/agp/amd-k7-agp.c | 2 +-
drivers/char/agp/nvidia-agp.c | 2 +-
drivers/char/agp/sworks-agp.c | 2 +-
drivers/char/hw_random/iproc-rng200.c | 8 ++++----
drivers/char/hw_random/mxc-rnga.c | 6 +++---
drivers/char/hw_random/stm32-rng.c | 8 ++++----
drivers/char/ipmi/bt-bmc.c | 6 +++---
drivers/clk/meson/meson-aoclk.c | 2 +-
drivers/clk/mvebu/ap-cpu-clk.c | 2 +-
drivers/clk/uniphier/clk-uniphier-cpugear.c | 2 +-
drivers/clk/uniphier/clk-uniphier-mux.c | 2 +-
drivers/clocksource/mps2-timer.c | 6 +++---
drivers/clocksource/timer-armada-370-xp.c | 8 ++++----
drivers/counter/ti-eqep.c | 2 +-
drivers/crypto/amcc/crypto4xx_alg.c | 2 +-
drivers/crypto/atmel-tdes.c | 2 +-
drivers/crypto/hifn_795x.c | 4 ++--
drivers/crypto/talitos.c | 8 ++++----
23 files changed, 60 insertions(+), 51 deletions(-)
|
|
Srinivas Kandagatla <srinivas.kandagatla@linaro.org>:
Using regmap_field_alloc becomes a lot of overhead once the number of
fields exceeds three or so. Such drivers end up almost buried in these
alloc/free calls, which makes the code very hard to read! One such driver
is the QCOM LPASS driver, which has been extensively converted to use
regmap_fields.
This patchset adds the new bulk API and a user of it.
Allocating the fields with the new bulk API makes the code much cleaner to read!
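For example (a sketch; the register fields below are made up, only the API
shape matters):

  struct reg_field fields[] = {
          REG_FIELD(0x00, 0, 3),
          REG_FIELD(0x04, 0, 7),
          REG_FIELD(0x08, 4, 5),
  };
  struct regmap_field *rm_fields[ARRAY_SIZE(fields)];

  ret = devm_regmap_field_bulk_alloc(dev, map, rm_fields,
                                     fields, ARRAY_SIZE(fields));
  if (ret)
          return ret;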
Changes since v1:
- Fix lot of spelling! No code changes!
Srinivas Kandagatla (2):
regmap: add support to regmap_field_bulk_alloc/free apis
ASoC: lpass-platform: use devm_regmap_field_bulk_alloc
drivers/base/regmap/regmap.c | 100 ++++++++++++++++++++++++++++++++
include/linux/regmap.h | 11 ++++
sound/soc/qcom/lpass-platform.c | 31 +++-------
3 files changed, 118 insertions(+), 24 deletions(-)
--
2.21.0
base-commit: f75aef392f869018f78cfedf3c320a6b3fcfda6b
|
|
While not destroying mutexes doesn't lead to memory leaks, destroying
them is still the correct thing to do for mutex debugging accounting.
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Link: https://lore.kernel.org/r/20200928120614.23172-1-brgl@bgdev.pl
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
Replace commas with semicolons. What is done is essentially described by
the following Coccinelle semantic patch (http://coccinelle.lip6.fr/):
// <smpl>
@@ expression e1,e2; @@
e1
-,
+;
e2
... when any
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Link: https://lore.kernel.org/r/1601233948-11629-15-git-send-email-Julia.Lawall@inria.fr
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
It seems likely this block was pasted from internal_get_user_pages_fast,
which is not passed an mm struct and therefore uses current's. But
__get_user_pages_locked is passed an explicit mm, and current->mm is not
always valid. This was hit when being called from i915, which uses:
pin_user_pages_remote->
__get_user_pages_remote->
__gup_longterm_locked->
__get_user_pages_locked
Before, this would lead to an OOPS:
BUG: kernel NULL pointer dereference, address: 0000000000000064
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
CPU: 10 PID: 1431 Comm: kworker/u33:1 Tainted: P S U O 5.9.0-rc7+ #140
Hardware name: LENOVO 20QTCTO1WW/20QTCTO1WW, BIOS N2OET47W (1.34 ) 08/06/2020
Workqueue: i915-userptr-acquire __i915_gem_userptr_get_pages_worker [i915]
RIP: 0010:__get_user_pages_remote+0xd7/0x310
Call Trace:
__i915_gem_userptr_get_pages_worker+0xc8/0x260 [i915]
process_one_work+0x1ca/0x390
worker_thread+0x48/0x3c0
kthread+0x114/0x130
ret_from_fork+0x1f/0x30
CR2: 0000000000000064
This commit fixes the problem by using the mm pointer passed to the
function rather than the bogus one in current.
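The shape of the fix, as a sketch:

  if (flags & FOLL_PIN)
          atomic_set(&mm->has_pinned, 1);   /* was: &current->mm->has_pinned */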
Fixes: 008cfe4418b3 ("mm: Introduce mm_struct.has_pinned")
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-by: Harald Arnesen <harald@skogtun.org>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
syzbot reports a potential lock deadlock between the normal IO path and
->show_fdinfo():
======================================================
WARNING: possible circular locking dependency detected
5.9.0-rc6-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/19710 is trying to acquire lock:
ffff888098ddc450 (sb_writers#4){.+.+}-{0:0}, at: io_write+0x6b5/0xb30 fs/io_uring.c:3296
but task is already holding lock:
ffff8880a11b8428 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter+0xe9a/0x1bd0 fs/io_uring.c:8348
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&ctx->uring_lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:956 [inline]
__mutex_lock+0x134/0x10e0 kernel/locking/mutex.c:1103
__io_uring_show_fdinfo fs/io_uring.c:8417 [inline]
io_uring_show_fdinfo+0x194/0xc70 fs/io_uring.c:8460
seq_show+0x4a8/0x700 fs/proc/fd.c:65
seq_read+0x432/0x1070 fs/seq_file.c:208
do_loop_readv_writev fs/read_write.c:734 [inline]
do_loop_readv_writev fs/read_write.c:721 [inline]
do_iter_read+0x48e/0x6e0 fs/read_write.c:955
vfs_readv+0xe5/0x150 fs/read_write.c:1073
kernel_readv fs/splice.c:355 [inline]
default_file_splice_read.constprop.0+0x4e6/0x9e0 fs/splice.c:412
do_splice_to+0x137/0x170 fs/splice.c:871
splice_direct_to_actor+0x307/0x980 fs/splice.c:950
do_splice_direct+0x1b3/0x280 fs/splice.c:1059
do_sendfile+0x55f/0xd40 fs/read_write.c:1540
__do_sys_sendfile64 fs/read_write.c:1601 [inline]
__se_sys_sendfile64 fs/read_write.c:1587 [inline]
__x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1587
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
-> #1 (&p->lock){+.+.}-{3:3}:
__mutex_lock_common kernel/locking/mutex.c:956 [inline]
__mutex_lock+0x134/0x10e0 kernel/locking/mutex.c:1103
seq_read+0x61/0x1070 fs/seq_file.c:155
pde_read fs/proc/inode.c:306 [inline]
proc_reg_read+0x221/0x300 fs/proc/inode.c:318
do_loop_readv_writev fs/read_write.c:734 [inline]
do_loop_readv_writev fs/read_write.c:721 [inline]
do_iter_read+0x48e/0x6e0 fs/read_write.c:955
vfs_readv+0xe5/0x150 fs/read_write.c:1073
kernel_readv fs/splice.c:355 [inline]
default_file_splice_read.constprop.0+0x4e6/0x9e0 fs/splice.c:412
do_splice_to+0x137/0x170 fs/splice.c:871
splice_direct_to_actor+0x307/0x980 fs/splice.c:950
do_splice_direct+0x1b3/0x280 fs/splice.c:1059
do_sendfile+0x55f/0xd40 fs/read_write.c:1540
__do_sys_sendfile64 fs/read_write.c:1601 [inline]
__se_sys_sendfile64 fs/read_write.c:1587 [inline]
__x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1587
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
-> #0 (sb_writers#4){.+.+}-{0:0}:
check_prev_add kernel/locking/lockdep.c:2496 [inline]
check_prevs_add kernel/locking/lockdep.c:2601 [inline]
validate_chain kernel/locking/lockdep.c:3218 [inline]
__lock_acquire+0x2a96/0x5780 kernel/locking/lockdep.c:4441
lock_acquire+0x1f3/0xaf0 kernel/locking/lockdep.c:5029
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write+0x228/0x450 fs/super.c:1672
io_write+0x6b5/0xb30 fs/io_uring.c:3296
io_issue_sqe+0x18f/0x5c50 fs/io_uring.c:5719
__io_queue_sqe+0x280/0x1160 fs/io_uring.c:6175
io_queue_sqe+0x692/0xfa0 fs/io_uring.c:6254
io_submit_sqe fs/io_uring.c:6324 [inline]
io_submit_sqes+0x1761/0x2400 fs/io_uring.c:6521
__do_sys_io_uring_enter+0xeac/0x1bd0 fs/io_uring.c:8349
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
other info that might help us debug this:
Chain exists of:
sb_writers#4 --> &p->lock --> &ctx->uring_lock
Possible unsafe locking scenario:
CPU0 CPU1
---- ----
lock(&ctx->uring_lock);
lock(&p->lock);
lock(&ctx->uring_lock);
lock(sb_writers#4);
*** DEADLOCK ***
1 lock held by syz-executor.2/19710:
#0: ffff8880a11b8428 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter+0xe9a/0x1bd0 fs/io_uring.c:8348
stack backtrace:
CPU: 0 PID: 19710 Comm: syz-executor.2 Not tainted 5.9.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x198/0x1fd lib/dump_stack.c:118
check_noncircular+0x324/0x3e0 kernel/locking/lockdep.c:1827
check_prev_add kernel/locking/lockdep.c:2496 [inline]
check_prevs_add kernel/locking/lockdep.c:2601 [inline]
validate_chain kernel/locking/lockdep.c:3218 [inline]
__lock_acquire+0x2a96/0x5780 kernel/locking/lockdep.c:4441
lock_acquire+0x1f3/0xaf0 kernel/locking/lockdep.c:5029
percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
__sb_start_write+0x228/0x450 fs/super.c:1672
io_write+0x6b5/0xb30 fs/io_uring.c:3296
io_issue_sqe+0x18f/0x5c50 fs/io_uring.c:5719
__io_queue_sqe+0x280/0x1160 fs/io_uring.c:6175
io_queue_sqe+0x692/0xfa0 fs/io_uring.c:6254
io_submit_sqe fs/io_uring.c:6324 [inline]
io_submit_sqes+0x1761/0x2400 fs/io_uring.c:6521
__do_sys_io_uring_enter+0xeac/0x1bd0 fs/io_uring.c:8349
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45e179
Code: 3d b2 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 0b b2 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f1194e74c78 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 00000000000082c0 RCX: 000000000045e179
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000004
RBP: 000000000118cf98 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000118cf4c
R13: 00007ffd1aa5756f R14: 00007f1194e759c0 R15: 000000000118cf4c
Fix this by just not diving into details if we fail to trylock the
io_uring mutex. We know the ctx isn't going away during this operation,
but we cannot safely iterate buffers/files/personalities if we don't
hold the io_uring mutex.
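A sketch of the approach in the fdinfo path (details elided):

  bool has_lock = mutex_trylock(&ctx->uring_lock);

  if (has_lock) {
          /* walk and print files, buffers and personalities */
  }
  ...
  if (has_lock)
          mutex_unlock(&ctx->uring_lock);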
Reported-by: syzbot+2f8fa4e860edc3066aba@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
for-5.10/drivers
Pull NVMe updates from Christoph:
"nvme updates for 5.10
- fix keep alive timer modification (Amit Engel)
- order the PCI ID list more sensibly (Andy Shevchenko)
- cleanup the open by controller helper (Chaitanya Kulkarni)
- use an xarray for the CSE log lookup (Chaitanya Kulkarni)
- support ZNS in nvmet passthrough mode (Chaitanya Kulkarni)
- fix nvme_ns_report_zones (me)
- add a sanity check to nvmet-fc (James Smart)
- fix interrupt allocation when too many polled queues are specified
(Jeffle Xu)
- small nvmet-tcp optimization (Mark Wunderlich)"
* tag 'nvme-5.10-2020-09-27' of git://git.infradead.org/nvme:
nvme-pci: allocate separate interrupt for the reserved non-polled I/O queue
nvme: fix error handling in nvme_ns_report_zones
nvmet-fc: fix missing check for no hostport struct
nvmet: add passthru ZNS support
nvmet: handle keep-alive timer when kato is modified by a set features cmd
nvmet-tcp: have queue io_work context run on sock incoming cpu
nvme-pci: Move enumeration by class to be last in the table
nvme: use an xarray to lookup the Commands Supported and Effects log
nvme: lift the file open code from nvme_ctrl_get_by_path
|
|
We found blk_mq_alloc_rq_maps() takes more time in kernel space when
testing nvme device hot-plugging. The test and analysis are as below.
Debug code,
1, blk_mq_alloc_rq_maps():

        u64 start, end;

        depth = set->queue_depth;
        start = ktime_get_ns();
        pr_err("[%d:%s switch:%ld,%ld] queue depth %d, nr_hw_queues %d\n",
               current->pid, current->comm, current->nvcsw, current->nivcsw,
               set->queue_depth, set->nr_hw_queues);

        do {
                err = __blk_mq_alloc_rq_maps(set);
                if (!err)
                        break;

                set->queue_depth >>= 1;
                if (set->queue_depth < set->reserved_tags + BLK_MQ_TAG_MIN) {
                        err = -ENOMEM;
                        break;
                }
        } while (set->queue_depth);

        end = ktime_get_ns();
        pr_err("[%d:%s switch:%ld,%ld] all hw queues init cost time %lld ns\n",
               current->pid, current->comm,
               current->nvcsw, current->nivcsw, end - start);

2, __blk_mq_alloc_rq_maps():

        u64 start, end;

        for (i = 0; i < set->nr_hw_queues; i++) {
                start = ktime_get_ns();
                if (!__blk_mq_alloc_rq_map(set, i))
                        goto out_unwind;
                end = ktime_get_ns();
                pr_err("hw queue %d init cost time %lld ns\n", i, end - start);
        }
Testing nvme hot-plugging with the above debug code, we found it takes more
than 3ms in kernel space, without being scheduled out, to allocate rqs for
all 16 hw queues with depth 1023; each hw queue takes about 140-250us. The
cost grows as the number of hw queues and the queue depth increase. And in
an extreme case, if __blk_mq_alloc_rq_maps() returns -ENOMEM, it will retry
with "queue_depth >>= 1", and even more time will be consumed.
[ 428.428771] nvme nvme0: pci function 10000:01:00.0
[ 428.428798] nvme 10000:01:00.0: enabling device (0000 -> 0002)
[ 428.428806] pcieport 10000:00:00.0: can't derive routing for PCI INT A
[ 428.428809] nvme 10000:01:00.0: PCI INT A: no GSI
[ 432.593374] [4688:kworker/u33:8 switch:663,2] queue depth 30, nr_hw_queues 1
[ 432.593404] hw queue 0 init cost time 22883 ns
[ 432.593408] [4688:kworker/u33:8 switch:663,2] all hw queues init cost time 35960 ns
[ 432.595953] nvme nvme0: 16/0/0 default/read/poll queues
[ 432.595958] [4688:kworker/u33:8 switch:700,2] queue depth 1023, nr_hw_queues 16
[ 432.596203] hw queue 0 init cost time 242630 ns
[ 432.596441] hw queue 1 init cost time 235913 ns
[ 432.596659] hw queue 2 init cost time 216461 ns
[ 432.596877] hw queue 3 init cost time 215851 ns
[ 432.597107] hw queue 4 init cost time 228406 ns
[ 432.597336] hw queue 5 init cost time 227298 ns
[ 432.597564] hw queue 6 init cost time 224633 ns
[ 432.597785] hw queue 7 init cost time 219954 ns
[ 432.597937] hw queue 8 init cost time 150930 ns
[ 432.598082] hw queue 9 init cost time 143496 ns
[ 432.598231] hw queue 10 init cost time 147261 ns
[ 432.598397] hw queue 11 init cost time 164522 ns
[ 432.598542] hw queue 12 init cost time 143401 ns
[ 432.598692] hw queue 13 init cost time 148934 ns
[ 432.598841] hw queue 14 init cost time 147194 ns
[ 432.598991] hw queue 15 init cost time 148942 ns
[ 432.598993] [4688:kworker/u33:8 switch:700,2] all hw queues init cost time 3035099 ns
[ 432.602611] nvme0n1: p1
So use this patch to trigger a reschedule between each hw queue init, to
avoid other threads getting stuck. We are not in atomic context when
executing __blk_mq_alloc_rq_maps(), so it is safe to call cond_resched().
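Roughly, the per-queue allocation loop becomes (a sketch of the idea, not
the exact hunk):

  for (i = 0; i < set->nr_hw_queues; i++) {
          if (!__blk_mq_alloc_rq_map(set, i))
                  goto out_unwind;
          cond_resched();
  }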
Signed-off-by: Xianting Tian <tian.xianting@h3c.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
syzbot reports a crash with tty polling, which is using the double poll
handling:
general protection fault, probably for non-canonical address 0xdffffc0000000009: 0000 [#1] PREEMPT SMP KASAN
KASAN: null-ptr-deref in range [0x0000000000000048-0x000000000000004f]
CPU: 0 PID: 6874 Comm: syz-executor749 Not tainted 5.9.0-rc6-next-20200924-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:io_poll_get_single fs/io_uring.c:4778 [inline]
RIP: 0010:io_poll_double_wake+0x51/0x510 fs/io_uring.c:4845
Code: fc ff df 48 c1 ea 03 80 3c 02 00 0f 85 9e 03 00 00 48 b8 00 00 00 00 00 fc ff df 49 8b 5d 08 48 8d 7b 48 48 89 fa 48 c1 ea 03 <0f> b6 04 02 84 c0 74 06 0f 8e 63 03 00 00 0f b6 6b 48 bf 06 00 00
RSP: 0018:ffffc90001c1fb70 EFLAGS: 00010006
RAX: dffffc0000000000 RBX: 0000000000000000 RCX: 0000000000000004
RDX: 0000000000000009 RSI: ffffffff81d9b3ad RDI: 0000000000000048
RBP: dffffc0000000000 R08: ffff8880a3cac798 R09: ffffc90001c1fc60
R10: fffff52000383f73 R11: 0000000000000000 R12: 0000000000000004
R13: ffff8880a3cac798 R14: ffff8880a3cac7a0 R15: 0000000000000004
FS: 0000000001f98880(0000) GS:ffff8880ae400000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f18886916c0 CR3: 0000000094c5a000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
__wake_up_common+0x147/0x650 kernel/sched/wait.c:93
__wake_up_common_lock+0xd0/0x130 kernel/sched/wait.c:123
tty_ldisc_hangup+0x1cf/0x680 drivers/tty/tty_ldisc.c:735
__tty_hangup.part.0+0x403/0x870 drivers/tty/tty_io.c:625
__tty_hangup drivers/tty/tty_io.c:575 [inline]
tty_vhangup+0x1d/0x30 drivers/tty/tty_io.c:698
pty_close+0x3f5/0x550 drivers/tty/pty.c:79
tty_release+0x455/0xf60 drivers/tty/tty_io.c:1679
__fput+0x285/0x920 fs/file_table.c:281
task_work_run+0xdd/0x190 kernel/task_work.c:141
tracehook_notify_resume include/linux/tracehook.h:188 [inline]
exit_to_user_mode_loop kernel/entry/common.c:165 [inline]
exit_to_user_mode_prepare+0x1e2/0x1f0 kernel/entry/common.c:192
syscall_exit_to_user_mode+0x7a/0x2c0 kernel/entry/common.c:267
entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x401210
which is due to a failure in removing the double poll wait entry if we
hit a wakeup match. This can cause multiple invocations of the wakeup,
which isn't safe.
Cc: stable@vger.kernel.org # v5.8
Reported-by: syzbot+81b3883093f772addf6d@syzkaller.appspotmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Commit 8234f6734c5d ("PM-runtime: Switch autosuspend over to using
hrtimers") switched PM runtime autosuspend to use hrtimers and all
related time accounting in ns, but missed to update the timer_expires
data type in struct dev_pm_info to u64.
This causes the timer_expires value to be truncated on 32-bit
architectures when assignment is done from u64 values:
rpm_suspend()
|- dev->power.timer_expires = expires;
Fix it by changing the timer_expires type to u64.
Fixes: 8234f6734c5d ("PM-runtime: Switch autosuspend over to using hrtimers")
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: 5.0+ <stable@vger.kernel.org> # 5.0+
[ rjw: Subject and changelog edits ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
|
|
The unconditional selection of PCI_MSI_ARCH_FALLBACKS has an unmet
dependency because PCI_MSI_ARCH_FALLBACKS is defined in an 'if PCI' clause.
As it is only relevant when PCI_MSI is enabled, update the affected
architecture Kconfigs to make the selection of PCI_MSI_ARCH_FALLBACKS
depend on 'if PCI_MSI'.
Fixes: 077ee78e3928 ("PCI/MSI: Make arch_.*_msi_irq[s] fallbacks selectable")
Reported-by: Qian Cai <cai@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Link: https://lore.kernel.org/r/cdfd63305caa57785b0925dd24c0711ea02c8527.camel@redhat.com
|
|
Hibernate and resume process submits individual IO requests for each page
of the data, so use blk_plug to improve the batching of these requests.
Testing this change with hibernate and resume consistently shows merging
of the IO requests, and more than an order of magnitude improvement in
hibernate and resume speed is observed.
One hibernate and resume cycle for 16GB RAM out of 32GB in use takes
around 21 minutes before the change, and 1 minute after the change, on
a system with limited storage IOPS.
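The pattern is simply to bracket the per-page submission loop with a plug so
the block layer can merge the requests (a sketch; submit_one_page() stands in
for the swsusp I/O helper):

  struct blk_plug plug;

  blk_start_plug(&plug);
  for (i = 0; i < nr_pages; i++)
          submit_one_page(i);      /* one bio per page of the image */
  blk_finish_plug(&plug);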
Signed-off-by: Xiaoyi Chen <cxiaoyi@amazon.com>
Co-Developed-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
[ rjw: Subject and changelog edits, white space damage fixes ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
ARMv8.4-PMU introduces the PMMIR_EL1 register, and some new PMU events,
like STALL_SLOT etc., are related to it. Let's add a caps directory to
/sys/bus/event_source/devices/armv8_pmuv3_0/ and expose the slots from
the PMMIR_EL1 register in this entry. User programs can then get the
slots from sysfs directly.
/sys/bus/event_source/devices/armv8_pmuv3_0/caps/slots is exposed
under sysfs. If both ARMv8.4-PMU and the STALL_SLOT event are implemented,
it returns the slots from PMMIR_EL1; otherwise it returns 0.
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1600754025-53535-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
|
|
The current binding for the RPi firmware uses the simple-bus compatible as
a fallback to benefit from its automatic probing of child nodes.
However, simple-bus also comes with some constraints, like having the ranges
property, that don't really apply to our case.
Let's switch to simple-mfd that provides the same probing logic without
those constraints.
Signed-off-by: Maxime Ripard <maxime@cerno.tech>
Link: https://lore.kernel.org/r/20200924082642.18144-1-maxime@cerno.tech
Signed-off-by: Rob Herring <robh@kernel.org>
|
|
'origin/irq/owl' into irq/irqchip-next
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
As SMP-on-UP is a valid configuration on 32bit ARM, do not assume that
IPIs are populated in show_ipi_list().
Reported-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Reported-by: kernelci.org bot <bot@kernelci.org>
Tested-by: Guillaume Tucker <guillaume.tucker@collabora.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
|
|
According to the SW tuning App note[1], tuning is required for all
UHS speed modes. Tuning for SDR50 is not enabled in Capabilities by
default so enable it from the CTL_CFG registers.
[1] https://www.ti.com/lit/pdf/spract9
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-7-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
With the new SW tuning App note[1], a custom tuning algorithm is
required for eMMC HS200, HS400 and SD card UHS modes. The algorithm
involves running through the 32 possible input tap delay values and
sending the appropriate tuning command (CMD19/21) for each of them
to get a fail or pass result for each of the values. Typically, the
range will have a small contiguous failing window. Considering the
tuning range as a circular buffer, the algorithm then sets a final
tuned value directly opposite to the failing window.
[1] https://www.ti.com/lit/pdf/spract9
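The selection step can be sketched as follows (simplified: a single failing
window that does not wrap around the end of the range is assumed, and the
array/variable names are illustrative):

  int i, start = -1, end = -1, itap;

  /* failed[i] == true if the tuning command at input tap delay i failed */
  for (i = 0; i < 32; i++) {
          if (failed[i]) {
                  if (start < 0)
                          start = i;
                  end = i;
          }
  }

  /* midpoint of the failing window, then the point opposite on the circle */
  itap = start < 0 ? 16 : (((start + end) / 2) + 16) % 32;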
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Reviewed-by: Kishon Vijay Abraham I <kishon@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-6-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
DLL need only be enabled for speed modes and clock frequencies at or
above 50 MHz. For speed modes that don't enable the DLL, we need to
configure a static input delay value. This involves reading an optional
itap-del-sel-* value from the device tree and configuring it for the
appropriate speed mode.
With this addition, make sure that DLL is always switched off at the
beginning of the set_clock() call to simplify configuration. This also
removes the need for the dll_on member in struct sdhci_am654_data.
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-5-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Change the hard-coded array size to instead depend on the size of the
struct timing_data array.
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-4-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
Add documentation for input tap delay bindings.
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-3-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
to json schema
Convert sdhci-am654 documentation to yaml format. The new file
sdhci-am654.yaml will inherit from mmc-controller.yaml.
Signed-off-by: Faiz Abbas <faiz_abbas@ti.com>
Link: https://lore.kernel.org/r/20200923105206.7988-2-faiz_abbas@ti.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The ESDHC_PERIPHERAL_CLK_SEL bit, which selects between the peripheral
clock and the platform clock, is not reset by SDHCI_RESET_ALL.
So the driver needs to initialize it to 1 or 0 once, to override whatever
value may have been configured in the bootloader.
Signed-off-by: Yangbo Lu <yangbo.lu@nxp.com>
Link: https://lore.kernel.org/r/20200927082304.9232-1-yangbo.lu@nxp.com
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
The original commit appears to have the logic reversed in
amd_fch_gpio_get_direction. Also confirmed by observing the value of
"direction" in the sys tree.
Signed-off-by: Ed Wildgoose <lists@wildgooses.com>
Fixes: e09d168f13f0 ("gpio: AMD G-Series PCH gpio driver")
Cc: stable@vger.kernel.org
Signed-off-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
|
|
Fix build errors for meson-gx-mmc.c when CONFIG_COMMON_CLK is not
set/enabled. This can happen when COMPILE_TEST is set/enabled.
ERROR: modpost: "clk_divider_ops" [drivers/mmc/host/meson-gx-mmc.ko] undefined!
ERROR: modpost: "devm_clk_register" [drivers/mmc/host/meson-gx-mmc.ko] undefined!
ERROR: modpost: "clk_mux_ops" [drivers/mmc/host/meson-gx-mmc.ko] undefined!
ERROR: modpost: "__clk_get_name" [drivers/mmc/host/meson-gx-mmc.ko] undefined!
Fixes: 54d8454436a2 ("mmc: host: Enable compile testing of multiple drivers")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20200925164323.29843-1-rdunlap@infradead.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
|
|
Commit bedf9fc01ff1 ("mmc: sdhci: Workaround broken command queuing on
Intel GLK"), disabled command-queuing on Intel GLK based LENOVO models
because of it being broken due to what is believed to be a bug in
the BIOS.
It seems that the BIOS of some IRBIS models, including the IRBIS NB111
model, has the same issue, so disable command queuing there too.
Fixes: bedf9fc01ff1 ("mmc: sdhci: Workaround broken command queuing on Intel GLK")
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=209397
Reported-and-tested-by: RussianNeuroMancer <russianneuromancer@ya.ru>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Link: https://lore.kernel.org/r/20200927104821.5676-1-hdegoede@redhat.com
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
After commit 6827ca573c03 ("memstick: rtsx_usb_ms: Support runtime power
management"), removing module rtsx_usb_ms will be stuck.
The deadlock is caused by powering on and powering off at the same time,
the former one is when memstick_check() is flushed, and the later is called
by memstick_remove_host().
Soe let's skip allocating card to prevent this issue.
Fixes: 6827ca573c03 ("memstick: rtsx_usb_ms: Support runtime power management")
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Link: https://lore.kernel.org/r/20200925084952.13220-1-kai.heng.feng@canonical.com
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
|
|
When selecting function_graph tracer with the command:
# echo function_graph > /sys/kernel/debug/tracing/current_tracer
The kernel crashes with the following stack trace:
[69703.122389] BUG: stack guard page was hit at 000000001056545c (stack is 00000000fa3f8fed..0000000005d39503)
[69703.122403] kernel stack overflow (double-fault): 0000 [#1] SMP PTI
[69703.122413] CPU: 0 PID: 16982 Comm: bash Kdump: loaded Not tainted 4.18.0-236.el8.x86_64 #1
[69703.122420] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.0 12/17/2019
[69703.122433] RIP: 0010:prepare_ftrace_return+0xa/0x110
[69703.122458] Code: 05 00 0f 0b 48 c7 c7 10 ca 69 ae 0f b6 f0 e8 4b 52 0c 00 31 c0 eb ca 66 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 56 41 55 41 54 <53> 48 83 ec 18 65 48 8b 04 25 28 00 00 00 48 89 45 d8 31 c0 48 85
[69703.122467] RSP: 0018:ffffbd6d01118000 EFLAGS: 00010086
[69703.122476] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000003
[69703.122484] RDX: 0000000000000000 RSI: ffffbd6d011180d8 RDI: ffffffffadce7550
[69703.122491] RBP: ffffbd6d01118018 R08: 0000000000000000 R09: ffff9d4b09266000
[69703.122498] R10: ffff9d4b0fc04540 R11: ffff9d4b0fc20a00 R12: ffff9d4b6e42aa90
[69703.122506] R13: ffff9d4b0fc20ab8 R14: 00000000000003e8 R15: ffffbd6d0111837c
[69703.122514] FS: 00007fd5f2588740(0000) GS:ffff9d4b6e400000(0000) knlGS:0000000000000000
[69703.122521] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[69703.122528] CR2: ffffbd6d01117ff8 CR3: 00000000565d8001 CR4: 00000000003606f0
[69703.122538] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[69703.122545] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[69703.122552] Call Trace:
[69703.122568] ftrace_graph_caller+0x6b/0xa0
[69703.122589] ? read_hv_sched_clock_tsc+0x5/0x20
[69703.122599] read_hv_sched_clock_tsc+0x5/0x20
[69703.122611] sched_clock+0x5/0x10
[69703.122621] sched_clock_local+0x12/0x80
[69703.122631] sched_clock_cpu+0x8c/0xb0
[69703.122644] trace_clock_global+0x21/0x90
[69703.122655] ring_buffer_lock_reserve+0x100/0x3c0
[69703.122671] trace_buffer_lock_reserve+0x16/0x50
[69703.122683] __trace_graph_entry+0x28/0x90
[69703.122695] trace_graph_entry+0xfd/0x1a0
[69703.122705] ? read_hv_clock_tsc_cs+0x10/0x10
[69703.122714] ? sched_clock+0x5/0x10
[69703.122723] prepare_ftrace_return+0x99/0x110
[69703.122734] ? read_hv_clock_tsc_cs+0x10/0x10
[69703.122743] ? sched_clock+0x5/0x10
[69703.122752] ftrace_graph_caller+0x6b/0xa0
[69703.122768] ? read_hv_clock_tsc_cs+0x10/0x10
[69703.122777] ? sched_clock+0x5/0x10
[69703.122786] ? read_hv_sched_clock_tsc+0x5/0x20
[69703.122796] ? ring_buffer_unlock_commit+0x1d/0xa0
[69703.122805] read_hv_sched_clock_tsc+0x5/0x20
[69703.122814] ftrace_graph_caller+0xa0/0xa0
[ ... recursion snipped ... ]
Setting the notrace attribute for read_hv_sched_clock_msr() and
read_hv_sched_clock_tsc() fixes it.
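The change is just the annotation on the two read functions, roughly
(assuming their current signatures; the bodies are unchanged):

  -static u64 read_hv_sched_clock_msr(void)
  +static u64 notrace read_hv_sched_clock_msr(void)

  -static u64 read_hv_sched_clock_tsc(void)
  +static u64 notrace read_hv_sched_clock_tsc(void)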
Fixes: bd00cd52d5be ("clocksource/drivers/hyperv: Add Hyper-V specific sched clock function")
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Mohammed Gamal <mgamal@redhat.com>
Link: https://lore.kernel.org/r/20200924151117.767442-1-mgamal@redhat.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
In the architecture independent version of hyperv-tlfs.h, commit c55a844f46f958b
removed the "X64" in the symbol names so they would make sense for both x86 and
ARM64. That commit added aliases with the "X64" in the x86 version of hyperv-tlfs.h
so that existing x86 code would continue to compile.
As a cleanup, update the x86 code to use the symbols without the "X64", then remove
the aliases. There's no functional change.
Signed-off-by: Joseph Salisbury <joseph.salisbury@microsoft.com>
Link: https://lore.kernel.org/r/1601130386-11111-1-git-send-email-jsalisbury@linux.microsoft.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
Add missing documentation for the parameter "version" and "num_version"
of the hv_pci_protocol_negotiation() function and resolve build time
kernel-doc warnings:
drivers/pci/controller/pci-hyperv.c:2535: warning: Function parameter
or member 'version' not described in 'hv_pci_protocol_negotiation'
drivers/pci/controller/pci-hyperv.c:2535: warning: Function parameter
or member 'num_version' not described in 'hv_pci_protocol_negotiation'
No change to functionality intended.
Signed-off-by: Krzysztof Wilczyński <kw@linux.com>
Link: https://lore.kernel.org/r/20200925234753.1767227-1-kw@linux.com
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
Hyper-V always use 4k page size (HV_HYP_PAGE_SIZE), so when
communicating with Hyper-V, a guest should always use HV_HYP_PAGE_SIZE
as the unit for page related data. For storvsc, the data is
vmbus_packet_mpb_array. And since scsi_cmnd uses an sglist of pages (in
units of PAGE_SIZE), we need to convert the pages in the sglist of
scsi_cmnd into Hyper-V pages in vmbus_packet_mpb_array.
This patch does the conversion by dividing the pages in the sglist into
Hyper-V pages; the offset and indexes in vmbus_packet_mpb_array are
recalculated accordingly.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20200916034817.30282-12-boqun.feng@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|
|
For a Hyper-V vmbus, the size of the ringbuffer has two requirements:
1) it has to take one PAGE_SIZE for the header
2) it has to be PAGE_SIZE aligned so that double-mapping can work
VMBUS_RING_SIZE() could calculate a correct ringbuffer size which
fulfills both requirements, therefore use it to make sure vmbus work
when PAGE_SIZE != HV_HYP_PAGE_SIZE (4K).
Note that since the argument to VMBUS_RING_SIZE() is the size of the
payload (data part), it is 4k (the size of the header when PAGE_SIZE = 4k)
less than the original value, to keep the total ringbuffer size unchanged
when PAGE_SIZE = 4k.
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20200916034817.30282-11-boqun.feng@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
|