|
Function "for_each_available_child_of_node" should have of_node_put()
before return around line 1352.
Generated by: scripts/coccinelle/iterators/for_each_child.cocci
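For reference, a minimal sketch of the pattern the script flags (the
per-child work below is hypothetical; only the reference handling matters):

  struct device_node *child;
  int ret;

  for_each_available_child_of_node(parent, child) {
          ret = do_something(child);      /* hypothetical per-child work */
          if (ret) {
                  /* the iterator still holds a reference on 'child' */
                  of_node_put(child);
                  return ret;             /* early return flagged around line 1352 */
          }
  }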
CC: Sumera Priyadarsini <sylphrenadin@gmail.com>
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: kernel test robot <lkp@intel.com>
Signed-off-by: Julia Lawall <julia.lawall@inria.fr>
Link: https://lore.kernel.org/r/alpine.DEB.2.22.394.2103281651320.2854@hadrien
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The struct clocksource callbacks enable() and disable() are described as a
way to allow clock sources to enter a power save mode. See commit
4614e6adafa2 ("clocksource: add enable() and disable() callbacks")
But using runtime PM from these callbacks triggers a cyclic lockdep warning when
switching clock source using change_clocksource().
# echo e60f0000.timer > /sys/devices/system/clocksource/clocksource0/current_clocksource
======================================================
WARNING: possible circular locking dependency detected
------------------------------------------------------
migration/0/11 is trying to acquire lock:
ffff0000403ed220 (&dev->power.lock){-...}-{2:2}, at: __pm_runtime_resume+0x40/0x74
but task is already holding lock:
ffff8000113c8f88 (tk_core.seq.seqcount){----}-{0:0}, at: multi_cpu_stop+0xa4/0x190
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (tk_core.seq.seqcount){----}-{0:0}:
ktime_get+0x28/0xa0
hrtimer_start_range_ns+0x210/0x2dc
generic_sched_clock_init+0x70/0x88
sched_clock_init+0x40/0x64
start_kernel+0x494/0x524
-> #1 (hrtimer_bases.lock){-.-.}-{2:2}:
hrtimer_start_range_ns+0x68/0x2dc
rpm_suspend+0x308/0x5dc
rpm_idle+0xc4/0x2a4
pm_runtime_work+0x98/0xc0
process_one_work+0x294/0x6f0
worker_thread+0x70/0x45c
kthread+0x154/0x160
ret_from_fork+0x10/0x20
-> #0 (&dev->power.lock){-...}-{2:2}:
_raw_spin_lock_irqsave+0x7c/0xc4
__pm_runtime_resume+0x40/0x74
sh_cmt_start+0x1c4/0x260
sh_cmt_clocksource_enable+0x28/0x50
change_clocksource+0x9c/0x160
multi_cpu_stop+0xa4/0x190
cpu_stopper_thread+0x90/0x154
smpboot_thread_fn+0x244/0x270
kthread+0x154/0x160
ret_from_fork+0x10/0x20
other info that might help us debug this:
Chain exists of:
&dev->power.lock --> hrtimer_bases.lock --> tk_core.seq.seqcount
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(tk_core.seq.seqcount);
                               lock(hrtimer_bases.lock);
                               lock(tk_core.seq.seqcount);
  lock(&dev->power.lock);
*** DEADLOCK ***
2 locks held by migration/0/11:
#0: ffff8000113c9278 (timekeeper_lock){-.-.}-{2:2}, at: change_clocksource+0x2c/0x160
#1: ffff8000113c8f88 (tk_core.seq.seqcount){----}-{0:0}, at: multi_cpu_stop+0xa4/0x190
Rework change_clocksource() so it enables the new clocksource and disables
the old clocksource outside of the timekeeper_lock and seqcount write held
region. There is no requirement that these callbacks are invoked from the
lock held region.
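A sketch of the reworked flow (helper names follow the existing code in
kernel/time/timekeeping.c and may differ in detail):

  static int change_clocksource(void *data)
  {
          struct timekeeper *tk = &tk_core.timekeeper;
          struct clocksource *new = data, *old = NULL;
          unsigned long flags;
          bool change = false;

          /* Enable the new clocksource (may use runtime PM), no locks held */
          if (try_module_get(new->owner)) {
                  if (!new->enable || new->enable(new) == 0)
                          change = true;
                  else
                          module_put(new->owner);
          }

          raw_spin_lock_irqsave(&timekeeper_lock, flags);
          write_seqcount_begin(&tk_core.seq);

          timekeeping_forward_now(tk);
          if (change) {
                  old = tk->tkr_mono.clock;
                  tk_setup_internals(tk, new);
          }
          timekeeping_update(tk, TK_CLEAR_NTP | TK_MIRROR | TK_CLOCK_WAS_SET);

          write_seqcount_end(&tk_core.seq);
          raw_spin_unlock_irqrestore(&timekeeper_lock, flags);

          /* Disable the old clocksource after dropping the locks as well */
          if (old) {
                  if (old->disable)
                          old->disable(old);
                  module_put(old->owner);
          }
          return 0;
  }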
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Link: https://lore.kernel.org/r/20210211134318.323910-1-niklas.soderlund+renesas@ragnatech.se
|
|
general protection fault, probably for non-canonical address
0xdffffc0000000018: 0000 [#1] KASAN: null-ptr-deref
in range [0x00000000000000c0-0x00000000000000c7]
RIP: 0010:io_commit_cqring+0x37f/0xc10 fs/io_uring.c:1318
Call Trace:
io_kill_timeouts+0x2b5/0x320 fs/io_uring.c:8606
io_ring_ctx_wait_and_kill+0x1da/0x400 fs/io_uring.c:8629
io_uring_create fs/io_uring.c:9572 [inline]
io_uring_setup+0x10da/0x2ae0 fs/io_uring.c:9599
do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
entry_SYSCALL_64_after_hwframe+0x44/0xae
Setup can get into io_ring_ctx_wait_and_kill() before ctx->rings is set up,
and hence io_commit_cqring() blows up. Mimic poll cancel and do the
post-completion chore only when we actually completed events; there can't be
any requests if setup failed before initialising the rings.
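A sketch of the idea in io_kill_timeouts() (surrounding helper names are
approximate):

  int canceled = 0;

  spin_lock_irq(&ctx->completion_lock);
  list_for_each_entry_safe(req, tmp, &ctx->timeout_list, timeout.list) {
          if (io_match_task(req, tsk, files)) {
                  io_kill_timeout(req, -ECANCELED);
                  canceled++;
          }
  }
  /* if ring setup failed there are no requests, so canceled stays 0 and
   * io_commit_cqring() is never called on the missing ctx->rings */
  if (canceled)
          io_commit_cqring(ctx);
  spin_unlock_irq(&ctx->completion_lock);
  if (canceled)
          io_cqring_ev_posted(ctx);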
Fixes: 80c4cbdb5ee60 ("io_uring: do post-completion chore on t-out cancel")
Reported-by: syzbot+0e905eb8228070c457a0@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/660261a48f0e7abf260c8e43c87edab3c16736fa.1617014345.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Before this patch, someone who wants to use VMAP_STACK with
KASAN_GENERIC enabled must explicitly select KASAN_VMALLOC.
From Will's suggestion [1]:
> I would _really_ like to move to VMAP stack unconditionally, and
> that would effectively force KASAN_VMALLOC to be set if KASAN is in use
Because VMAP_STACK now depends on either HW_TAGS or KASAN_VMALLOC if
KASAN is enabled, in order to make VMAP_STACK selected unconditionally,
we bind KASAN_GENERIC and KASAN_VMALLOC together.
Note that SW_TAGS currently supports neither VMAP_STACK nor KASAN_VMALLOC,
so this is the first step towards making VMAP_STACK selected unconditionally.
Binding KASAN_GENERIC and KASAN_VMALLOC together is expected to cost more
memory at runtime; the alternative is to use SW_TAGS KASAN instead.
[1]: https://lore.kernel.org/lkml/20210204150100.GE20815@willie-the-truck/
Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-6-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Now that KASAN_VMALLOC works on arm64, we can randomize the module region
into the vmalloc area.
Test:
VMALLOC area ffffffc010000000 fffffffdf0000000
before the patch:
module_alloc_base/end ffffffc008b80000 ffffffc010000000
after the patch:
module_alloc_base/end ffffffdcf4bed000 ffffffc010000000
And insmod of some modules works fine.
Suggested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Link: https://lore.kernel.org/r/20210324040522.15548-5-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
We can back shadow memory in the vmalloc area now that the vmalloc area is
no longer populated at kasan_init(), thus make KASAN_VMALLOC selectable.
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-4-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
arm64 provides the KERNEL_START and KERNEL_END macros, so use these
abstractions instead of _text and _end.
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-3-lecopzer.chen@mediatek.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Linux has supported KASAN for vmalloc since commit 3c5c3cfb9ef4da9
("kasan: support backing vmalloc space with real shadow memory").
Like what is done for MODULES_VADDR now, simply don't early-populate the
shadow for the region between VMALLOC_START and VMALLOC_END.
Before:
MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: backed with zero shadow at init
After:
MODULE_VADDR: no mapping, no zero shadow at init
VMALLOC_VADDR: no mapping, no zero shadow at init
Thus the mapping will get allocated on demand by the core function
of KASAN_VMALLOC.
----------- vmalloc_shadow_start
| |
| |
| | <= non-mapping
| |
| |
|-----------|
|///////////|<- kimage shadow with page table mapping.
|-----------|
| |
| | <= non-mapping
| |
------------- vmalloc_shadow_end
|00000000000|
|00000000000| <= Zero shadow
|00000000000|
------------- KASAN_SHADOW_END
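Roughly, in kasan_init_shadow() (variable names approximate; this is also
where the build check mentioned below fits in):

  void *vmalloc_shadow_end = kasan_mem_to_shadow((void *)VMALLOC_END);

  if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
          BUILD_BUG_ON(VMALLOC_START != MODULES_END);
          /* leave the vmalloc shadow unmapped; KASAN_VMALLOC backs it with
           * real shadow on demand */
          kasan_populate_early_shadow(vmalloc_shadow_end,
                                      (void *)KASAN_SHADOW_END);
  }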
Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@gmail.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210324040522.15548-2-lecopzer.chen@mediatek.com
[catalin.marinas@arm.com: add a build check on VMALLOC_START != MODULES_END]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Logitech ConferenceCam Connect is a compound USB device with UVC and
UAC. Not 100% reproducible, but sometimes it keeps responding with STALL to
every control transfer once it receives a get_freq request.
This patch adds 046d:084c to the snd_usb_get_sample_rate_quirk() list.
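The change boils down to one new entry in the quirk switch (sketch; the
surrounding cases in sound/usb/quirks.c are omitted):

  /* in snd_usb_get_sample_rate_quirk(), switch (chip->usb_id) { ... } */
  case USB_ID(0x046d, 0x084c): /* Logitech ConferenceCam Connect */
          return true;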
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203419
Signed-off-by: Ikjoon Jang <ikjn@chromium.org>
Cc: <stable@vger.kernel.org>
Link: https://lore.kernel.org/r/20210324105153.2322881-1-ikjn@chromium.org
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
This driver doesn't reference of_gpio.h, so drop it.
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
|
|
In commit eb631bb5bf5b
("arm64: Support arch_irq_work_raise() via self IPIs") a new
function "arch_irq_work_raise" was added without a prototype.
In commit d914d4d49745
("arm64: Implement panic_smp_self_stop()") a new
function "panic_smp_self_stop" was added without a prototype.
We get the following warnings on W=1:
arch/arm64/kernel/smp.c:842:6: warning: no previous prototype
for ‘arch_irq_work_raise’ [-Wmissing-prototypes]
arch/arm64/kernel/smp.c:862:6: warning: no previous prototype
for ‘panic_smp_self_stop’ [-Wmissing-prototypes]
Fix the warnings by:
1. Adding the prototype for 'arch_irq_work_raise' in irq_work.h
2. Adding the prototype for 'panic_smp_self_stop' in smp.h
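The additions are two one-line declarations (sketch):

  /* arch/arm64/include/asm/irq_work.h */
  extern void arch_irq_work_raise(void);

  /* arch/arm64/include/asm/smp.h */
  extern void panic_smp_self_stop(void);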
Signed-off-by: Chen Lifu <chenlifu@huawei.com>
Link: https://lore.kernel.org/r/20210329034343.183974-1-chenlifu@huawei.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In commit ea7800565a12 ("can: add optional DLC element to Classical
CAN frame structure") the struct can_frame::can_dlc was put into an
anonymous union with another u8 variable.
For various reasons some members in struct can_frame and struct canfd_frame,
including the first 8 bytes of data, are expected to have the same memory
layout. This is enforced by a BUILD_BUG_ON check in af_can.c.
Since the above mentioned commit this check fails on ARM kernels compiled
with the ARM OABI (i.e. with CONFIG_AEABI not set). In this case
-mabi=apcs-gnu is passed to the compiler, which leads to a structure size
boundary of 32 bits instead of the 8 bits used when CONFIG_AEABI is enabled.
This means the anonymous union in struct can_frame takes 4 bytes instead of
the expected 1.
Rong Chen illustrates the problem with pahole in the ARM OABI case:
| struct can_frame {
| canid_t can_id; /* 0 4 */
| union {
| __u8 len; /* 4 1 */
| __u8 can_dlc; /* 4 1 */
| }; /* 4 4 */
| __u8 __pad; /* 8 1 */
| __u8 __res0; /* 9 1 */
| __u8 len8_dlc; /* 10 1 */
|
| /* XXX 5 bytes hole, try to pack */
|
| __u8 data[8]
| __attribute__((__aligned__(8))); /* 16 8 */
|
| /* size: 24, cachelines: 1, members: 6 */
| /* sum members: 19, holes: 1, sum holes: 5 */
| /* forced alignments: 1, forced holes: 1, sum forced holes: 5 */
| /* last cacheline: 24 bytes */
| } __attribute__((__aligned__(8)));
Marking the anonymous union as __attribute__((packed)) fixes the
BUILD_BUG_ON problem on these compilers.
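A sketch of the resulting layout in include/uapi/linux/can.h (comments
abbreviated):

  struct can_frame {
          canid_t can_id;         /* 32 bit CAN_ID + EFF/RTR/ERR flags */
          union {
                  __u8 len;       /* CAN frame payload length in byte */
                  __u8 can_dlc;   /* deprecated */
          } __attribute__((packed)); /* disable padding added in some ABIs */
          __u8 __pad;             /* padding */
          __u8 __res0;            /* reserved / padding */
          __u8 len8_dlc;          /* optional DLC for 8 byte payload (9 .. 15) */
          __u8 data[CAN_MAX_DLEN] __attribute__((aligned(8)));
  };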
Fixes: ea7800565a12 ("can: add optional DLC element to Classical CAN frame structure")
Reported-by: kernel test robot <lkp@intel.com>
Suggested-by: Rong Chen <rong.a.chen@intel.com>
Link: https://lore.kernel.org/linux-can/2c82ec23-3551-61b5-1bd8-178c3407ee83@hartkopp.net/
Link: https://lore.kernel.org/r/20210325125850.1620-3-socketcan@hartkopp.net
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Since commit f5223e9eee65 ("can: extend sockaddr_can to include j1939
members") struct sockaddr_can has been extended in size and a new
CAN_REQUIRED_SIZE macro has been introduced to calculate the
protocol-specific required size.
The ABI for msg_name and msg_namelen has not been adapted to the new
CAN_REQUIRED_SIZE macro for the other CAN protocols, which leads to a
problem when an existing binary reads the (increased) struct sockaddr_can
from msg_name.
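A sketch of the kind of change this implies, for example in isotp_recvmsg()
(macro and helper names approximate):

  #define ISOTP_MIN_NAMELEN CAN_REQUIRED_SIZE(struct sockaddr_can, can_addr.tp)

  if (msg->msg_name) {
          __sockaddr_check_size(ISOTP_MIN_NAMELEN);
          /* report only the protocol-required part of sockaddr_can */
          msg->msg_namelen = ISOTP_MIN_NAMELEN;
          memcpy(msg->msg_name, skb->cb, msg->msg_namelen);
  }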
Fixes: e057dd3fc20f ("can: add ISO 15765-2:2016 transport protocol")
Reported-by: Richard Weinberger <richard@nod.at>
Acked-by: Kurt Van Dijck <dev.kurt@vandijck-laurijssen.be>
Link: https://lore.kernel.org/linux-can/1135648123.112255.1616613706554.JavaMail.zimbra@nod.at/T/#t
Link: https://lore.kernel.org/r/20210325125850.1620-2-socketcan@hartkopp.net
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Since commit f5223e9eee65 ("can: extend sockaddr_can to include j1939
members") struct sockaddr_can has been extended in size and a new
CAN_REQUIRED_SIZE macro has been introduced to calculate the
protocol-specific required size.
The ABI for msg_name and msg_namelen has not been adapted to the new
CAN_REQUIRED_SIZE macro for the other CAN protocols, which leads to a
problem when an existing binary reads the (increased) struct sockaddr_can
from msg_name.
Fixes: f5223e9eee65 ("can: extend sockaddr_can to include j1939 members")
Reported-by: Richard Weinberger <richard@nod.at>
Tested-by: Richard Weinberger <richard@nod.at>
Acked-by: Kurt Van Dijck <dev.kurt@vandijck-laurijssen.be>
Link: https://lore.kernel.org/linux-can/1135648123.112255.1616613706554.JavaMail.zimbra@nod.at/T/#t
Link: https://lore.kernel.org/r/20210325125850.1620-1-socketcan@hartkopp.net
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
|
|
Commit 94579ac3f6d0 ("xfrm: Fix double ESP trailer insertion in IPsec
crypto offload.") added a XFRM_XMIT flag to avoid duplicate ESP trailer
insertion on HW offload. This flag is set on the secpath that is shared
amongst segments. This led to a situation where some segments are
not transformed correctly when segmentation happens at layer 3.
Fix this by using private skb extensions for segmented and hw offloaded
ESP packets.
Fixes: 94579ac3f6d0 ("xfrm: Fix double ESP trailer insertion in IPsec crypto offload.")
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
|
|
Drop unnecessary zero-checking of ABS_MT_TOUCH_MAJOR resolution since
there is no difference between setting resolution to 0 vs not setting
it at all. This makes the code a tad cleaner.
Suggested-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20210328235507.19240-1-digetx@gmail.com
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Fix address of the pad control register
(IOMUXC_SW_PAD_CTL_PAD_SD1_DATA0) for SD1_DATA0_GPIO2_IO2. This seems
to be a typo, but it leads to an exception when the pinctrl configuration is
applied, due to a wrong memory address being accessed.
Signed-off-by: Oliver Stäbler <oliver.staebler@bytesatwork.ch>
Reviewed-by: Fabio Estevam <festevam@gmail.com>
Acked-by: Rob Herring <robh@kernel.org>
Fixes: c1c9d41319c3 ("dt-bindings: imx: Add pinctrl binding doc for imx8mm")
Fixes: 748f908cc882 ("arm64: add basic DTS for i.MX8MQ")
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
|
|
In pvc_xmit(), if __skb_pad(skb, pad, false) fails, the skb is freed a first
time and we goto drop, where the same skb is freed a second time by
kfree_skb(skb).
Keeping the rest of the function unchanged, this patch adds a new label 'out'
to avoid the double free when __skb_pad() fails.
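Sketch of the resulting error handling (the rest of pvc_xmit() is unchanged):

  if (pad > 0) { /* pad the frame with zeros */
          if (__skb_pad(skb, pad, false))
                  goto out;       /* skb already freed, as described above */
          skb_put(skb, pad);
  }
  ...
  drop:
          kfree_skb(skb);         /* single free on this path */
  out:
          dev->stats.tx_dropped++;
          return NETDEV_TX_OK;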
Fixes: f5083d0cee08a ("drivers/net/wan/hdlc_fr: Improvements to the code of pvc_xmit")
Signed-off-by: Lv Yunlong <lyl2019@mail.ustc.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Always try to do cancellation in __io_uring_task_cancel() at least once,
so it actually goes and cleans its sqpoll tasks (i.e. via
io_sqpoll_cancel_sync()), otherwise sqpoll task may submit new requests
after cancellation and it's racy for many reasons.
Fixes: 521d6a737a31c ("io_uring: cancel sqpoll via task_work")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0a21bd6d794bb1629bc906dd57a57b2c2985a8ac.1616839147.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Since #DB for bus lock detect changes the split_lock_detect parameter,
update the documentation for the changes.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Link: https://lore.kernel.org/r/20210322135325.682257-4-fenghua.yu@intel.com
|
|
Bus locks degrade performance for the whole system, not just for the CPU
that requested the bus lock. Two CPU features "#AC for split lock" and
"#DB for bus lock" provide hooks so that the operating system may choose
one of several mitigation strategies.
#AC for split lock is already implemented. Add code to use the #DB for
bus lock feature to cover additional situations with new options to
mitigate.
split_lock_detect=  #AC for split lock          #DB for bus lock

off                 Do nothing                  Do nothing

warn                Kernel OOPs.                Warn once per task and
                    Warn once per task and      continue to run.
                    disable future checking.
                    When both features are
                    supported, warn in #AC.

fatal               Kernel OOPs.                Send SIGBUS to user.
                    Send SIGBUS to user.
                    When both features are
                    supported, fatal in #AC.

ratelimit:N         Do nothing                  Limit bus lock rate to N per
                                                second in the current
                                                non-root user.
Default option is "warn".
Hardware only generates #DB for bus lock detect when CPL>0 to avoid
nested #DB from multiple bus locks while the first #DB is being handled.
So there is no need to handle #DB for bus locks detected in the kernel.
#DB for bus lock is enabled by bus lock detection bit 2 in DEBUGCTL MSR
while #AC for split lock is enabled by split lock detection bit 29 in
TEST_CTRL MSR.
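A minimal sketch of the enable path (the feature flag and MSR bit names are
the ones introduced by this series, assumed here):

  static void bus_lock_init(void)
  {
          u64 val;

          if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
                  return;

          rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
          val |= DEBUGCTLMSR_BUS_LOCK_DETECT;     /* bit 2 */
          wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
  }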
Both breakpoint and bus lock in the same instruction can trigger one #DB.
The bus lock is handled before the breakpoint in the #DB handler.
Delivery of #DB for bus lock in userspace clears DR6[11], which is set by
the #DB handler right after reading DR6.
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lore.kernel.org/r/20210322135325.682257-3-fenghua.yu@intel.com
|
|
A bus lock is acquired through either a split locked access to writeback
(WB) memory or any locked access to non-WB memory. This is typically >1000
cycles slower than an atomic operation within a cache line. It also
disrupts performance on other cores.
Some CPUs have the ability to notify the kernel by a #DB trap after a user
instruction acquires a bus lock and is executed. This allows the kernel to
enforce user application throttling or mitigation. Both breakpoint and bus
lock can trigger the #DB trap in the same instruction and the ordering of
handling them is the kernel #DB handler's choice.
The CPU feature flag to be shown in /proc/cpuinfo will be "bus_lock_detect".
Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Tony Luck <tony.luck@intel.com>
Link: https://lore.kernel.org/r/20210322135325.682257-2-fenghua.yu@intel.com
|
|
cpu_current_top_of_stack is currently stored in TSS.sp1. TSS is exposed
through the cpu_entry_area which is visible with user CR3 when PTI is
enabled and active.
This makes it a coveted fruit for attackers. An attacker can fetch the
kernel stack top from it and continue next steps of actions based on the
kernel stack.
But it actually does not need to be stored in the TSS. It is only
accessed after the entry code switched to kernel CR3 and kernel GS_BASE
which means it can be in any regular percpu variable.
The reason why it is in TSS is historical (pre PTI) because TSS is also
used as scratch space in SYSCALL_64 and therefore cache hot.
A syscall also needs the per CPU variable current_task and eventually
__preempt_count, so placing cpu_current_top_of_stack next to them makes it
likely that they end up in the same cache line which should avoid
performance regressions. This is not enforced as the compiler is free to
place these variables, so these entry relevant variables should move into
a data structure to make this enforceable.
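A sketch of the change (x86_64; the percpu definition sits next to
current_task and __preempt_count for the cache-line reason above):

  /* arch/x86/kernel/cpu/common.c */
  DEFINE_PER_CPU(unsigned long, cpu_current_top_of_stack) = TOP_OF_INIT_STACK;

  /* arch/x86/include/asm/processor.h */
  static __always_inline unsigned long current_top_of_stack(void)
  {
          /* was: this_cpu_read_stable(cpu_tss_rw.x86_tss.sp1) */
          return this_cpu_read_stable(cpu_current_top_of_stack);
  }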
The seccomp_benchmark doesn't show any performance loss in the "getpid
native" test result. Actually, the result changes from 93ns before to 92ns
with this change when KPTI is disabled. The test is very stable and
although the test doesn't show a higher degree of precision it gives enough
confidence that moving cpu_current_top_of_stack does not cause a
regression.
[ tglx: Removed unneeded export. Massaged changelog ]
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210125173444.22696-2-jiangshanlai@gmail.com
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull perf tooling fixes from Arnaldo Carvalho de Melo:
- Avoid write of uninitialized memory when generating PERF_RECORD_MMAP*
records.
- Fix 'perf top' BPF support related crash with perf_event_paranoid=3 +
kptr_restrict.
- Validate raw event with sysfs exported format bits.
- Fix waitpid on SIGCHLD delivery bugs in 'perf daemon'.
- Change to use bash for daemon test on Debian, where the default is
dash and thus fails for use of bashisms in this test.
- Fix memory leak in vDSO found using ASAN.
- Remove now useless (due to the fact that BPF now supports static
vars) failing sub test "BPF relocation checker".
- Fix auxtrace queue conflict.
- Sync linux/kvm.h with the kernel sources.
* tag 'perf-tools-fixes-for-v5.12-2020-03-28' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux:
perf test: Change to use bash for daemon test
perf record: Fix memory leak in vDSO found using ASAN
perf test: Remove now useless failing sub test "BPF relocation checker"
perf daemon: Return from kill functions
perf daemon: Force waipid for all session on SIGCHLD delivery
perf top: Fix BPF support related crash with perf_event_paranoid=3 + kptr_restrict
perf pmu: Validate raw event with sysfs exported format bits
perf synthetic events: Avoid write of uninitialized memory when generating PERF_RECORD_MMAP* records
tools headers UAPI: Sync linux/kvm.h with the kernel sources
perf synthetic-events: Fix uninitialized 'kernel_thread' variable
perf auxtrace: Fix auxtrace queue conflict
|
|
Pull auxdisplay fix from Miguel Ojeda:
"Remove in_interrupt() usage (Sebastian Andrzej Siewior)"
* tag 'auxdisplay-for-linus-v5.12-rc6' of git://github.com/ojeda/linux:
auxdisplay: Remove in_interrupt() usage.
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
"Two fixes:
- Fix build failure on Ubuntu with new GCC packages that turn
on -fcf-protection
- Fix SME memory encryption PTE encoding bug - AFAICT the code
worked on 4K page sizes (level 1) but had the wrong shift at
higher page level orders (level 2 and higher)"
* tag 'x86-urgent-2021-03-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/build: Turn off -fcf-protection for realmode targets
x86/mem_encrypt: Correct physical address calculation in __set_clr_pte_enc()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull locking fix from Ingo Molnar:
"Fix the non-debug mutex_lock_io_nested() method to map to
mutex_lock_io() instead of mutex_lock().
Right now nothing uses this API explicitly, but this is an
accident waiting to happen"
* tag 'locking-urgent-2021-03-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
locking/mutex: Fix non debug version of mutex_lock_io_nested()
|
|
When the TSC frequency is known because it is retrieved from the
hypervisor, skip TSC refined calibration by setting X86_FEATURE_TSC_KNOWN_FREQ.
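A minimal sketch of the mechanism (the exact call site is
hypervisor-specific; the condition below is hypothetical):

  if (tsc_khz_from_hypervisor) {
          tsc_khz = tsc_khz_from_hypervisor;
          /* tells tsc.c to skip the refined calibration work */
          setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
  }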
Signed-off-by: Alexey Makhalov <amakhalov@vmware.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210105004752.131069-1-amakhalov@vmware.com
|
|
Pull cifs fixes from Steve French:
"Five cifs/smb3 fixes, two for stable.
Includes an important fix for encryption and an ACL fix, as well as a
fix for possible reflink data corruption"
* tag '5.12-rc4-smb3' of git://git.samba.org/sfrench/cifs-2.6:
smb3: fix cached file size problems in duplicate extents (reflink)
cifs: Silently ignore unknown oplock break handle
cifs: revalidate mapping when we open files for SMB1 POSIX
cifs: Fix chmod with modefromsid when an older ACE already exists.
cifs: Adjust key sizes and key generation routines for AES256 encryption
|
|
Pull io_uring fixes from Jens Axboe:
- Use thread info versions of flag testing, as discussed last week.
- The series enabling PF_IO_WORKER to just take signals, instead of
needing to special case that they do not in a bunch of places. Ends
up being pretty trivial to do, and then we can revert all the special
casing we're currently doing.
- Kill dead pointer assignment
- Fix hashed part of async work queue trace
- Fix sign extension issue for IORING_OP_PROVIDE_BUFFERS
- Fix a link completion ordering regression in this merge window
- Cancellation fixes
* tag 'io_uring-5.12-2021-03-27' of git://git.kernel.dk/linux-block:
io_uring: remove unsued assignment to pointer io
io_uring: don't cancel extra on files match
io_uring: don't cancel-track common timeouts
io_uring: do post-completion chore on t-out cancel
io_uring: fix timeout cancel return code
Revert "signal: don't allow STOP on PF_IO_WORKER threads"
Revert "kernel: freezer should treat PF_IO_WORKER like PF_KTHREAD for freezing"
Revert "kernel: treat PF_IO_WORKER like PF_KTHREAD for ptrace/signals"
Revert "signal: don't allow sending any signals to PF_IO_WORKER threads"
kernel: stop masking signals in create_io_thread()
io_uring: handle signals for IO threads like a normal thread
kernel: don't call do_exit() for PF_IO_WORKER threads
io_uring: maintain CQE order of a failed link
io-wq: fix race around pending work on teardown
io_uring: do ctx sqd ejection in a clear context
io_uring: fix provide_buffers sign extension
io_uring: don't skip file_end_write() on reissue
io_uring: correct io_queue_async_work() traces
io_uring: don't use {test,clear}_tsk_thread_flag() for current
|
|
Pull block fixes from Jens Axboe:
- Fix regression from this merge window with the xarray partition
change, which allowed partition counts that overflow the u8 that
holds the partition number (Ming)
- Fix zone append warning (Johannes)
- Segmentation count fix for multipage bvecs (David)
- Partition scan fix (Chris)
* tag 'block-5.12-2021-03-27' of git://git.kernel.dk/linux-block:
block: don't create too many partitions
block: support zone append bvecs
block: recalculate segment count for multi-segment discards correctly
block: clear GD_NEED_PART_SCAN later in bdev_disk_changed
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Seven fixes, all in drivers (qla2xxx, mkt3sas, qedi, target,
ibmvscsi).
The most serious are the target pscsi oom and the qla2xxx revert which
can otherwise cause a use after free"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
scsi: target: pscsi: Clean up after failure in pscsi_map_sg()
scsi: target: pscsi: Avoid OOM in pscsi_map_sg()
scsi: mpt3sas: Fix error return code of mpt3sas_base_attach()
scsi: qedi: Fix error return code of qedi_alloc_global_queues()
scsi: Revert "qla2xxx: Make sure that aborted commands are freed"
scsi: ibmvfc: Make ibmvfc_wait_for_ops() MQ aware
scsi: ibmvfc: Fix potential race in ibmvfc_wait_for_ops()
|
|
Add a backup for s390 vfio-pci, an additional backup for vfio-ccw
and replace the backup for vfio-ap as Pierre is focusing on other
areas.
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Acked-by: Eric Farman <farman@linux.ibm.com>
Acked-by: Halil Pasic <pasic@linux.ibm.com>
Acked-by: Pierre Morel <pmorel@linux.ibm.com>
Acked-by: Jason J. Herne <jjherne@linux.ibm.com>
Link: https://lore.kernel.org/r/1616679712-7139-1-git-send-email-mjrosato@linux.ibm.com
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
|
|
In __cpu_setup we conditionally manipulate the TCR_EL1 value in x10
after previously using x10 as a scratch register for unrelated temporary
variables.
To make this a bit clearer, let's move the TCR_EL1 value into a named
register `tcr`. To simplify the register allocation, this is placed in
the highest available caller-saved scratch register.
Following the example of `mair`, we initialise the register with the
default value prior to any feature discovery, and write it to TCR_EL1
after all feature discovery is complete, which allows us to simplify the
feature discovery code.
The existing `mte_tcr` register is no longer needed, and is replaced by
the use of x10 as a temporary, matching the rest of the MTE feature
discovery assembly in __cpu_setup. As x20 is no longer used, the
function is now AAPCS compliant, as we've generally aimed for in our
assembly functions.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210326180137.43119-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
In __cpu_setup we conditionally manipulate the MAIR_EL1 value in x5
before later reusing x5 as a scratch register for unrelated temporary
variables.
To make this a bit clearer, let's move the MAIR_EL1 value into a named
register `mair`. To simplify the register allocation, this is placed in
the highest available caller-saved scratch register, x17. As it is no
longer clobbered by other usage, we can write the value to MAIR_EL1 at
the end of the function as we do for TCR_EL1 rather than part-way through
feature discovery.
There should be no functional change as a result of this patch.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210326180137.43119-2-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
Currently start_backtrace() is a static inline function in the header.
Since it really shouldn't be performance critical enough to need inlining,
move it into a C file; this will save anyone else scratching their head
about why it is defined in the header.
As far as I can see it's only there because it was factored out of the
various callers.
Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210319174022.33051-1-broonie@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
|
|
This patch adds a description of UIO support for DFL devices on the DFL
bus.
Reviewed-by: Tom Rix <trix@redhat.com>
Reviewed-by: Wu Hao <hao.wu@intel.com>
Signed-off-by: Xu Yilun <yilun.xu@intel.com>
Link: https://lore.kernel.org/r/1615168776-8553-3-git-send-email-yilun.xu@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
This patch supports DFL drivers written in userspace. This is realized by
exposing the userspace I/O (UIO) device interfaces.
The driver currently only binds the ether group feature, which has no IRQ,
so IRQ support is not implemented yet.
Reviewed-by: Tom Rix <trix@redhat.com>
Signed-off-by: Xu Yilun <yilun.xu@intel.com>
Link: https://lore.kernel.org/r/1615168776-8553-2-git-send-email-yilun.xu@intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Add support for the pvpanic PCI device added in QEMU [1]. At probe time,
obtain the address at which to read/write pvpanic events and pass it to the
generic handling code. The driver follows the same logic as the pvpanic MMIO
device driver. At remove time, unmap the base address and disable the PCI
device.
[1] https://github.com/qemu/qemu/commit/9df52f58e76e904fb141b10318362d718f470db2
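A rough sketch of the probe path (the pvpanic_probe() hand-off is an
assumption based on this series, not a final API):

  static int pvpanic_pci_probe(struct pci_dev *pdev,
                               const struct pci_device_id *ent)
  {
          void __iomem *base;
          int ret;

          ret = pcim_enable_device(pdev);
          if (ret < 0)
                  return ret;

          base = pci_iomap(pdev, 0, 0);   /* BAR 0 holds the event register */
          if (!base)
                  return -ENOMEM;

          return pvpanic_probe(base);     /* hand off to the generic code */
  }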
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Link: https://lore.kernel.org/r/1616597356-20696-4-git-send-email-mihai.carabas@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Create the mechanism that allows multiple pvpanic instances to call
pvpanic_probe() and receive panic events. A global list retains all the
mapped addresses to which panic events are written.
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Link: https://lore.kernel.org/r/1616597356-20696-3-git-send-email-mihai.carabas@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Split up generic and platform-dependent code in order to be able to re-use
the generic event handling code in the pvpanic PCI device driver in the next
patches.
The code from pvpanic.c was split in two new files:
- pvpanic.c: generic code that handles pvpanic events
- pvpanic-mmio.c: platform/bus dependent code
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Link: https://lore.kernel.org/r/1616597356-20696-2-git-send-email-mihai.carabas@oracle.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
When cmd > 6 or copy_to_user() fails, the variable 'ret' is not returned.
Fix 'ret' being set but not used.
Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Xu Jia <xujia39@huawei.com>
Link: https://lore.kernel.org/r/20210324072031.941791-1-xujia39@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
s/Initilize/Initialize/
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Link: https://lore.kernel.org/r/20210326091443.26525-1-unixbhaskar@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
delete unneeded variable initialization.
Signed-off-by: Kai Ye <yekai13@huawei.com>
Link: https://lore.kernel.org/r/1616749747-3882-1-git-send-email-yekai13@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Currently kgdbts can get stuck waiting for do_sys_open() to be called
in some of the current tests. This is because C compilers often
automatically inline this function, which is a very thin wrapper around
do_sys_openat2(), into some of its callers. gcc-10 does this on (at least)
both x86 and arm64.
We can fix the test suite by placing the breakpoints on do_sys_openat2()
instead since that isn't (currently) inlined. However do_sys_openat2() is
a static function so we cannot simply use an addressof. Since we are
testing debug machinery it is acceptable to use kallsyms to lookup a
suitable address because this is more or less what kdb does in the same
circumstances. Re-implement lookup_addr() to be based on kallsyms rather
than function pointers.
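A rough sketch of the kallsyms-based lookup (the exact set of symbols
handled in drivers/misc/kgdbts.c may differ):

  static unsigned long lookup_addr(char *arg)
  {
          unsigned long addr = 0;

          if (!strcmp(arg, "kgdbts_break_test"))
                  addr = (unsigned long)kgdbts_break_test;
          else if (!strcmp(arg, "do_sys_openat2"))
                  /* static function: can't take its address, ask kallsyms */
                  addr = kallsyms_lookup_name("do_sys_openat2");

          return addr;
  }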
Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Link: https://lore.kernel.org/r/20210325094807.3546702-1-daniel.thompson@linaro.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
s/requsted/requested/
s/equests/requests/
s/occured/occurred/
s/conditon/condition/
s/emtpy/empty/
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Bhaskar Chowdhury <unixbhaskar@gmail.com>
Link: https://lore.kernel.org/r/20210325015513.9373-1-unixbhaskar@gmail.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
The sparse tool complains as follows:
drivers/misc/sgi-xp/xp_main.c:24:22: warning:
symbol 'xp_dbg_name' was not declared. Should it be static?
drivers/misc/sgi-xp/xp_main.c:28:15: warning:
symbol 'xp_dbg_subname' was not declared. Should it be static?
These symbols are not used outside of xp_main.c, so this
commit marks them static.
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Link: https://lore.kernel.org/r/20210324084823.7393-1-zou_wei@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Touchscreen firmware of ASUS Transformer TF700T reports zeros for the phys
size. Hence check whether the size is zero and don't set the resolution in
this case.
Reported-by: Jasper Korten <jja2000@gmail.com>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20210302100824.3423-1-digetx@gmail.com
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
|
|
Since commit 3bfe6106693b6b4b ("io-wq: fork worker threads from original
task") stopped using PF_KTHREAD flag for the io_uring PF_IO_WORKER threads,
tomoyo_kernel_service() no longer needs to check PF_IO_WORKER flag.
(This is a 5.12+ patch. Please don't send to stable kernels.)
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
|