Age | Commit message | Author |
|
The current implementation does not handle timeouts for commands with a
callback request, and this can lead to a deadlock if the command never
gets a fw response.
Add a delayed callback-timeout work item before posting the command to fw.
In case of a real fw command completion we cancel the delayed work.
In case of a fw command timeout the callback timeout handler is called
and simulates a fw completion with a timeout error.
Fixes: e126ba97dba9 ('mlx5: Add driver for Mellanox Connect-IB adapters')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
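A minimal sketch of this kind of timeout guard using the generic
delayed-work API; all structure and function names below are
hypothetical illustrations, not the mlx5 code:

#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/workqueue.h>

struct example_cmd {
	struct delayed_work cb_timeout_work;
	void (*callback)(struct example_cmd *cmd, int status);
};

static void example_cb_timeout_handler(struct work_struct *work)
{
	struct example_cmd *cmd = container_of(work, struct example_cmd,
					       cb_timeout_work.work);

	/* fw never answered: simulate a completion with a timeout error */
	cmd->callback(cmd, -ETIMEDOUT);
}

static void example_post_cmd(struct example_cmd *cmd)
{
	INIT_DELAYED_WORK(&cmd->cb_timeout_work, example_cb_timeout_handler);
	/* arm the timeout before the command is posted to fw */
	schedule_delayed_work(&cmd->cb_timeout_work, msecs_to_jiffies(60000));
	/* ... post the command to fw here ... */
}

static void example_real_completion(struct example_cmd *cmd, int status)
{
	/* the real fw completion arrived in time: drop the pending timeout */
	cancel_delayed_work(&cmd->cb_timeout_work);
	cmd->callback(cmd, status);
}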
|
|
Call the command completion handler in case of a timeout when working in
interrupt mode.
Avoid flushing the commands workqueue after acquiring the semaphores to
prevent a potential deadlock.
Fixes: e126ba97dba9 ('mlx5: Add driver for Mellanox Connect-IB adapters')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The device ID for VFs is in a different location than for PFs. This results
in the poll always timing out for VFs. There's no good way to read the
VF device ID without using the PF's configuration space, so switch to waiting
for the health poll counter to start incrementing. Also remove the 1s sleep
at the beginning.
Fixes: 89d44f0a6c73 ('net/mlx5_core: Add pci error handlers to mlx5_core driver')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Change the page cleanup flow when in internal error to properly decrement
the page counts when reclaiming pages. This prevents timing out while waiting
for extra pages that were actually cleaned up previously.
Fixes: 89d44f0a6c73 ('net/mlx5_core: Add pci error handlers to mlx5_core driver')
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In internal error state the health poll thread will eventually call
synchronize_irq() (to safely trigger command completions), which might
sleep, so we would be calling a sleeping function from atomic context,
which is invalid.
Move trigger_cmd_completions(dev) to the entry into the error state,
which is the earliest stage of error-state handling. This way we won't
need to wait for the next health poll to trigger command completions,
and the scheduling-while-atomic issue is solved.
mlx5_enter_error_state() can be called from two contexts, so protect it
with dev->intf_state_lock.
Fixes: 89d44f0a6c73 ('net/mlx5_core: Add pci error handlers to mlx5_core driver')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In internal error state we simulate the command status through the
return-value translation function, but we need to simulate all teardown
fw commands as successful so that we don't print fw command failures.
This also fixes memory leaks that happened because we skipped teardown
stages due to failed fw commands.
Fixes: 89d44f0a6c73 ('net/mlx5_core: Add pci error handlers to mlx5_core driver')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
ether_addr_equal_64bits() requires some care about its arguments:
8 bytes might be read, even though the last 2 bytes are not used.
KASan detected a violation with null_mac_addr and lacpdu_mcast_addr
in bond_3ad.c.
The same problem exists with mac_bcast[] and mac_v6_allmcast[] in
bond_alb.c: although the 8-byte alignment was there, KASan would still
detect out-of-bounds accesses.
Fixes: 815117adaf5b ("bonding: use ether_addr_equal_unaligned for bond addr compare")
Fixes: bb54e58929f3 ("bonding: Verify RX LACPDU has proper dest mac-addr")
Fixes: 885a136c52a8 ("bonding: use compare_ether_addr_64bits() in ALB")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Acked-by: Ding Tianhong <dingtianhong@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
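A sketch of the resulting pattern: each 6-byte address is backed by at
least 8 readable, long-aligned bytes, so the 64-bit compare never reads
past the object (the exact padding/attribute used here is an assumption,
not necessarily the patch's code):

#include <linux/etherdevice.h>

static const u8 example_mac_bcast[ETH_ALEN + 2] __aligned(8) = {
	0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
};

static bool example_is_broadcast(const u8 *addr)
{
	/* 'addr' must also point into at least 8 readable bytes,
	 * e.g. the source/destination field of an Ethernet header */
	return ether_addr_equal_64bits(addr, example_mac_bcast);
}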
|
|
The default value of reg-2f in codec rt5650 is 0x5002, not 0x1002.
Signed-off-by: Bard Liao <bardliao@realtek.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
If mvneta_mdio_probe() fails, a kernel warning is triggered due to
missing cleanup in the error path. Add the necessary cleanup.
------------[ cut here ]------------
WARNING: CPU: 1 PID: 281 at kernel/irq/manage.c:1814 __free_percpu_irq+0xfc/0x130
percpu IRQ 38 still enabled on CPU0!
Modules linked in: bnep bluetooth xhci_plat_hcd xhci_hcd marvell_cesa armada_thermal des_generic ehci_orion mcp3021 spi_orion sfp mdio_i2c evbug fuse
CPU: 1 PID: 281 Comm: connmand Not tainted 4.7.0-rc2+ #53
Hardware name: Marvell Armada 380/385 (Device Tree)
Backtrace:
[<c0013488>] (dump_backtrace) from [<c00137d0>] (show_stack+0x18/0x1c)
r6:60010093 r5:ffffffff r4:00000000 r3:dc8ba500
[<c00137b8>] (show_stack) from [<c02c6fe0>] (dump_stack+0xa4/0xdc)
[<c02c6f3c>] (dump_stack) from [<c002d4ec>] (__warn+0xd8/0x104)
r6:c081e6a0 r5:00000000 r4:edfe5d50 r3:dc8ba500
[<c002d414>] (__warn) from [<c002d5d0>] (warn_slowpath_fmt+0x40/0x48)
r10:a0010013 r8:c09356f8 r7:00000026 r6:ef11a260 r5:edd7b980 r4:ef11a200
[<c002d594>] (warn_slowpath_fmt) from [<c008c8e0>] (__free_percpu_irq+0xfc/0x130)
r3:00000026 r2:c081e7ac
[<c008c7e4>] (__free_percpu_irq) from [<c008c95c>] (free_percpu_irq+0x48/0x74)
r10:00008914 r8:00000000 r7:ffffffed r6:c09356f8 r5:00000026 r4:ef11a200
[<c008c914>] (free_percpu_irq) from [<c043dd70>] (mvneta_open+0x118/0x134)
r6:ffffffed r5:ef01e640 r4:ef01e000 r3:ef01e000
[<c043dc58>] (mvneta_open) from [<c055f5b4>] (__dev_open+0xa4/0x108)
r7:ef01e030 r6:c06ff3d8 r5:ffff9003 r4:ef01e000
[<c055f510>] (__dev_open) from [<c055f844>] (__dev_change_flags+0x94/0x150)
r7:00001002 r6:00000001 r5:ffff9003 r4:ef01e000
[<c055f7b0>] (__dev_change_flags) from [<c055f938>] (dev_change_flags+0x20/0x50)
r8:00000000 r7:c09334c8 r6:00001002 r5:00000148 r4:ef01e000 r3:00008914
[<c055f918>] (dev_change_flags) from [<c05de044>] (devinet_ioctl+0x6f4/0x7e0)
r8:00000000 r7:c09334c8 r6:00000000 r5:ee87200c r4:00000000 r3:00008914
[<c05dd950>] (devinet_ioctl) from [<c05e0168>] (inet_ioctl+0x1b8/0x1c8)
r10:beb4499c r9:edfe4000 r8:ecf13280 r7:c096cf00 r6:beb4499c r5:eef7c240
r4:00008914
[<c05dffb0>] (inet_ioctl) from [<c053c898>] (sock_ioctl+0x78/0x300)
[<c053c820>] (sock_ioctl) from [<c0155ecc>] (do_vfs_ioctl+0x98/0xa60)
r7:00000011 r6:00008914 r5:00000011 r4:c01568d0
[<c0155e34>] (do_vfs_ioctl) from [<c01568d0>] (SyS_ioctl+0x3c/0x60)
r10:00000000 r9:edfe4000 r8:beb4499c r7:00000011 r6:00008914 r5:ecf13280
r4:ecf13280
[<c0156894>] (SyS_ioctl) from [<c000fe60>] (ret_fast_syscall+0x0/0x1c)
r8:c0010004 r7:00000036 r6:00000011 r5:000a2978 r4:00000000 r3:00009003
---[ end trace 711f625d5b04b3a7 ]---
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Tested-by: Jon Nettleton <jon@solid-run.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
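A minimal sketch (not the driver's actual code) of the error-path rule
the fix restores in .ndo_open(): whatever was acquired before a failing
step is released again, so a failed probe does not leave the per-CPU IRQ
requested. All example_* names are hypothetical:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct example_pcpu_port;			/* hypothetical per-CPU state */

struct example_priv {
	unsigned int irq;
	struct example_pcpu_port __percpu *ports;
};

static irqreturn_t example_isr(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

int example_mdio_probe(struct example_priv *pp);	/* step that may fail */

static int example_open(struct net_device *dev)
{
	struct example_priv *pp = netdev_priv(dev);
	int ret;

	ret = request_percpu_irq(pp->irq, example_isr, dev->name, pp->ports);
	if (ret)
		return ret;

	ret = example_mdio_probe(pp);
	if (ret)
		goto err_free_irq;		/* the previously missing cleanup */

	return 0;

err_free_irq:
	free_percpu_irq(pp->irq, pp->ports);
	return ret;
}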
|
|
Convert a call to init_timer() and the accompanying initializations of
the timer's data and function fields to a call to setup_timer().
The Coccinelle semantic patch that makes this change is as follows:
@@
expression t,d,f,e1;
identifier x1;
statement S1;
@@
(
-t.data = d;
|
-t.function = f;
|
-init_timer(&t);
+setup_timer(&t,f,d);
|
-init_timer_on_stack(&t);
+setup_timer_on_stack(&t,f,d);
)
<... when != S1
t.x1 = e1;
...>
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
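For illustration, the shape of the transformation on a hypothetical
driver, using the timer API as it existed before the later timer_setup()
rework:

#include <linux/timer.h>

struct example_priv {
	struct timer_list timer;
};

static void example_timeout(unsigned long data)
{
	struct example_priv *priv = (struct example_priv *)data;

	/* ... handle the timeout using priv ... */
	(void)priv;
}

static void example_init_timer(struct example_priv *priv)
{
	/* Before:
	 *	init_timer(&priv->timer);
	 *	priv->timer.function = example_timeout;
	 *	priv->timer.data = (unsigned long)priv;
	 * After:
	 */
	setup_timer(&priv->timer, example_timeout, (unsigned long)priv);
}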
|
|
LINK_OFF_WAKE_EN should be cleared after autoresume; otherwise, after
system suspend, the system would wake up when a link-off event occurs.
Signed-off-by: Hayes Wang <hayeswang@realtek.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We're making all reset line users specify whether their lines are
shared with other IP or they operate them exclusively. In this case
the line is exclusively used only by this IP, so use the *_exclusive()
API accordingly.
Acked-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
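On the consumer side the change amounts to requesting the line with the
explicit exclusive variant; a minimal sketch, with error handling reduced
to the essentials:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int example_probe_reset(struct device *dev)
{
	struct reset_control *rstc;

	/* this IP is the only user of the line: ask for exclusive access */
	rstc = devm_reset_control_get_exclusive(dev, NULL);
	if (IS_ERR(rstc))
		return PTR_ERR(rstc);

	return reset_control_deassert(rstc);
}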
|
|
We're making all reset line users specify whether their lines are
shared with other IP or they operate them exclusively. In this case
the line is exclusively used only by this IP, so use the *_exclusive()
API accordingly.
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
|
|
On the STiH410 B2120 development board the MiPHY28lp shares its reset
line with the Synopsys DWC3 SuperSpeed (SS) USB 3.0 Dual-Role-Device
(DRD). New functionality in the reset subsystems forces consumers to
be explicit when requesting shared/exclusive reset lines.
Acked-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
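Since this line is shared between the PHY and the DWC3 controller, the
request uses the shared variant instead; a minimal sketch:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/reset.h>

static int example_get_shared_reset(struct device *dev)
{
	struct reset_control *rstc;

	/* the line is shared with another IP: use the *_shared variant */
	rstc = devm_reset_control_get_shared(dev, NULL);
	if (IS_ERR(rstc))
		return PTR_ERR(rstc);

	/* assert/deassert are reference counted for shared lines */
	return reset_control_deassert(rstc);
}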
|
|
Manish Chopra says:
====================
qede: Enhancements
This patch series adds a few small fastpath features and some code
refactoring.
Note regarding get/set tunable configuration via ethtool: surprisingly,
there is NO ethtool application support for such configuration even
though we have kernel support. Do let us know if we should add support
for that in userspace ethtool.
Please consider applying this series to "net-next".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch uses the xmit_more optimization to reduce the
number of TX doorbell writes per packet.
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
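A sketch of the usual xmit_more pattern on a 4.7-era kernel, where the
hint lives in skb->xmit_more; the doorbell register and producer index
here are illustrative, not the qede code:

#include <linux/io.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void example_maybe_ring_doorbell(struct net_device *dev,
					struct sk_buff *skb, int txq_index,
					void __iomem *doorbell, u16 producer)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, txq_index);

	/* skip the doorbell when the stack says more packets are coming,
	 * unless the queue is stopped and nothing else would ring it */
	if (!skb->xmit_more || netif_xmit_stopped(txq))
		writel(producer, doorbell);
}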
|
|
This patch cleans up the qede_poll() routine a bit
and limits qede_poll() to a single iteration of TX completion
handling [under heavy TX load qede_poll() might otherwise
run for an indefinite time in its while(1) TX completion
loop and cause a CPU stall].
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
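A sketch of the resulting poll shape: one bounded TX-completion pass per
invocation instead of a while(1) loop, with RX bounded by the NAPI
budget. All example_* helpers are hypothetical:

#include <linux/netdevice.h>

struct example_fp {
	struct napi_struct napi;
	/* ... ring state ... */
};

bool example_tx_has_work(struct example_fp *fp);
void example_clean_tx(struct example_fp *fp);
bool example_rx_has_work(struct example_fp *fp);
int example_clean_rx(struct example_fp *fp, int budget);
bool example_has_more_work(struct example_fp *fp);
void example_enable_int(struct example_fp *fp);

static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_fp *fp = container_of(napi, struct example_fp, napi);
	int rx_done = 0;

	if (example_tx_has_work(fp))
		example_clean_tx(fp);			/* single pass */

	if (example_rx_has_work(fp))
		rx_done = example_clean_rx(fp, budget);

	if (rx_done < budget && !example_has_more_work(fp)) {
		napi_complete(napi);
		example_enable_int(fp);
	}

	return rx_done;
}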
|
|
When handling IP-fragmented packets whose csum lives in the
transport header, the csum isn't updated as part of the
fragmentation. As a result, the fragment containing the
transport header carries the correct csum of the original
packet, but one that mismatches the actual packet that
passes on the wire. The receive-path HW would therefore
indicate that the packet has an incorrect csum, which would
cause qede to discard the incoming packet.
Since the HW also delivers a notification of IP fragments,
change the driver behavior to pass such incoming packets
to the stack and let it make the decision whether they need
to be dropped.
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
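A sketch of the receive-path decision described above; the completion
descriptor tests are hypothetical, only the skb checksum states are real:

#include <linux/skbuff.h>

bool example_cqe_is_ip_fragment(const void *cqe);	/* hypothetical */
bool example_cqe_csum_ok(const void *cqe);		/* hypothetical */

static void example_set_rx_csum(struct sk_buff *skb, const void *cqe)
{
	if (example_cqe_is_ip_fragment(cqe)) {
		/* the HW csum verdict is unreliable for fragments: let the
		 * stack validate (and possibly drop) the packet itself */
		skb->ip_summed = CHECKSUM_NONE;
	} else if (example_cqe_csum_ok(cqe)) {
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	} else {
		skb->ip_summed = CHECKSUM_NONE;
	}
}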
|
|
Jason Wang says:
====================
switch to use tx skb array in tun
This series tries to switch to using an skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion is straightforward: just introduce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is keeping the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the ability to resize multiple rings at once to avoid handling
partial resize failure for multiple rings.
- add support for zero-length rings.
- introduce a notifier which is triggered when tx_queue_len is
changed for a netdev.
- resize all queues when tx_queue_len is changed.
Tests show about 15% improvement on guest rx pps:
Before: ~1300000pps
After : ~1500000pps
Changes from V3:
- fix kbuild warnings
- call NETDEV_CHANGE_TX_QUEUE_LEN on IFLA_TXQLEN
Changes from V2:
- add multiple rings resizing support for ptr_ring/skb_array
- add zero length ring support
- introduce a NETDEV_CHANGE_TX_QUEUE_LEN
- drop new flags
Changes from V1:
- switch to use skb array instead of a customized circular buffer
- add non-blocking support
- rename .peek to .peek_len
- drop lockless peeking since tests show only a very minor improvement
====================
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-from-altitude: 34697 feet.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
We used to queue tx packets in sk_receive_queue. This is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- switch from sk_receive_queue to an skb_array, and resize it when
tx_queue_len is changed.
- introduce a new proto_ops, peek_len, which is used to peek at the
skb length.
- implement a tun version of peek_len for vhost_net to use, and convert
vhost_net to use peek_len where possible.
Pktgen test shows about 15.3% improvement on guest receiving pps for small
buffers:
Before: ~1300000pps
After : ~1500000pps
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
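A minimal sketch of the producer/consumer pattern the patch switches to,
using the skb_array API introduced for this series (error handling
reduced to the essentials):

#include <linux/errno.h>
#include <linux/skb_array.h>
#include <linux/skbuff.h>

static int example_queue_tx(struct skb_array *ring, struct sk_buff *skb)
{
	if (skb_array_produce(ring, skb)) {
		/* ring full: drop instead of blocking the producer */
		kfree_skb(skb);
		return -ENOBUFS;
	}
	return 0;
}

static struct sk_buff *example_dequeue_tx(struct skb_array *ring)
{
	return skb_array_consume(ring);		/* NULL when empty */
}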
|
|
This patch introduces a new event, NETDEV_CHANGE_TX_QUEUE_LEN, which
will be triggered when tx_queue_len is changed. It can be used by net
devices that want to do some processing at that time. An example is tun,
which may want to resize its tx array when tx_queue_len is changed.
Cc: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: John Fastabend <john.r.fastabend@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
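A sketch of how a driver might consume the new event; the resize helper
is hypothetical:

#include <linux/netdevice.h>
#include <linux/notifier.h>

void example_resize_rings(struct net_device *dev, unsigned long len);

static int example_netdev_event(struct notifier_block *nb,
				unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	if (event == NETDEV_CHANGE_TX_QUEUE_LEN)
		example_resize_rings(dev, dev->tx_queue_len);

	return NOTIFY_DONE;
}

static struct notifier_block example_netdev_nb = {
	.notifier_call = example_netdev_event,
};
/* registered once with register_netdevice_notifier(&example_netdev_nb) */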
|
|
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sometimes we need to resize multiple queues at once. This is
because it is not easy to recover from a partial failure when
resizing multiple queues one by one.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Sometimes we need a zero-length ring, but the current code will crash since
we don't do any check before accessing the ring. This patch fixes this.
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Michal Soltys says:
====================
HFSC patches, part 1
This is a revised version of part of the patches I submitted a really,
really long time ago (back then I asked Patrick to ignore them as I found
some issues shortly after submitting).
Anyway, this is the first set with very simple fixes/changes, though some
of them are relatively subtle (I tried to write very exhaustive commit
messages explaining what and why for those).
The patches are against the net-next tree.
The second set will be heavier - or rather will have more complex
explanations. Among those I have:
- a fix to a subtle issue introduced in
http://permalink.gmane.org/gmane.linux.kernel.commits.2-4/8281
along with simplifying the related code
- update times to 96 bits (which allows "just" using 32-bit shifts and
improves curve definition accuracy at more extreme low/high speeds)
- add curve "merging" instead of just selecting in the convex case (the
computations mirror those from the concave intersection)
But these are eventually for later.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
cl->cl_vt alone is relative only to the current backlog period, while
the curve operates on cumulative virtual time. This patch adds the
missing cl->cl_vtoff.
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When a class goes passive, it should update its cl_vt first
to be consistent with the last dequeue operation.
Otherwise its cl_vt will be one packet behind and the parent's cvtmax
might not be updated either.
One possible side effect: if some class goes passive and subsequently
goes active /without/ its parent going passive - with cl_vt lagging one
packet behind - the comparison made in init_vf() will be affected (same
period).
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This is an update to commit a09ceb0e08140a ("sched: remove qdisc->drop").
That commit removed qdisc->drop, but left alone dlist and droplist,
which no longer serve any meaningful purpose.
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The condition can only succeed on wrong configurations.
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Realtime scheduling implemented in HFSC uses the head of the queue to make
the decision about which packet to schedule next. But in case of any
head drop, the deadline calculated for the previous head is not
necessarily correct for the next head (unless both packets have the same
length).
Thanks to the peek() call used during dequeue - which internally is a
dequeue operation - HFSC is almost safe from this issue, as peek()
dequeues and isolates the head, storing it temporarily until the real
dequeue happens.
But there is one exception: if a drop happens after the class activation
but before the first dequeue operation, there's never a chance to do the
peek().
Adding a peek() call in enqueue - if this is the first packet in a new
backlog period AND the scheduler has a realtime curve defined - fixes
that one corner case. The first hfsc_dequeue() will use that peeked
packet, just as every subsequent hfsc_dequeue() call uses the packet
peeked by the previous call.
Signed-off-by: Michal Soltys <soltys@ziu.info>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Call wmb() to ensure writes are complete before
hardware fetches updated Tx descriptors.
Signed-off-by: Sony Chacko <sony.chacko@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
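A sketch of the ordering this enforces; the descriptor/doorbell details
are illustrative, not the qlcnic code:

#include <linux/io.h>
#include <asm/barrier.h>

static void example_ring_tx_doorbell(u16 producer, u16 *shadow_producer,
				     void __iomem *doorbell)
{
	*shadow_producer = producer;	/* last descriptor-side memory write */

	/* order all descriptor writes before the MMIO doorbell, so the HW
	 * never fetches stale Tx descriptors */
	wmb();

	writew(producer, doorbell);
}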
|
|
Some arches have virtually mapped kernel stacks, or will soon have.
tcp_md5_hash_header() uses an automatic variable to copy tcp header
before mangling th->check and calling crypto function, which might
be problematic on such arches.
David says that using percpu storage is also problematic on non-SMP
builds.
Just use kmalloc() to allocate scratch areas.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
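A sketch of the general shape of such a change: the header copy the
crypto code hashes lives on the heap instead of on the stack (the pool
structure and helpers here are illustrative, not the actual tcp_md5
code):

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/tcp.h>

struct example_md5_pool {
	void *scratch;			/* kmalloc'ed, not an on-stack copy */
};

static int example_pool_alloc_scratch(struct example_md5_pool *pool)
{
	pool->scratch = kmalloc(sizeof(struct tcphdr), GFP_KERNEL);
	return pool->scratch ? 0 : -ENOMEM;
}

static void example_hash_header(struct example_md5_pool *pool,
				const struct tcphdr *th)
{
	struct tcphdr *hp = pool->scratch;

	memcpy(hp, th, sizeof(*hp));
	hp->check = 0;			/* mangle the copy, not the original */
	/* ... feed 'hp' to the crypto API instead of a stack variable ... */
}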
|
|
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: corbet@lwn.net
Cc: linux-doc@vger.kernel.org
Link: http://lkml.kernel.org/r/20160701034601.30308-1-standby24x7@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
Fix a boot crash that triggers if this driver is built into a kernel and
run on non-AMD systems.
Users of AMD northbridges call amd_cache_northbridges(), which should
return a negative value to signal that it wasn't able to cache/detect
any northbridges on the system.
At least, that is what all its callers expect it to do, but currently it
returns a negative value only when kmalloc() fails.
Fix it to return -ENODEV if there are no NBs cached, as otherwise amd_nb
users like amd64_edac, for example, which relies on it to know whether
it should load or not, gets loaded on systems like Intel Xeons where it
shouldn't.
Reported-and-tested-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: <stable@vger.kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1466097230-5333-2-git-send-email-bp@alien8.de
Link: https://lkml.kernel.org/r/5761BEB0.9000807@cybernetics.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
When a jumbo packet is being split up and processed, the crypto checksum
for each split-out packet is in the jumbo header and needs placing in the
reconstructed packet header.
When the code was changed to keep the stored copy of the packet header in
host byte order, this reconstruction was missed.
Found with sparse with CF=-D__CHECK_ENDIAN__:
../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
../net/rxrpc/input.c:479:33: expected unsigned short [unsigned] [usertype] _rsvd
../net/rxrpc/input.c:479:33: got restricted __be16 [addressable] [usertype] _rsvd
Fixes: 0d12f8a4027d021c9cc942f09f38d28288020c5d ("rxrpc: Keep the skb private record of the Rx header in host byte order")
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
- m_start() in fs/namespace.c expects that ns->event is incremented each
time a mount is added to or removed from ns->list.
- umount_tree() removes items from the list but does not increment the
event counter, expecting that this has been done before the function is
called.
- There are some codepaths that call umount_tree() without updating the
"event" counter, e.g. from __detach_mounts().
- When this happens, m_start() may reuse a cached mount structure that no
longer belongs to ns->list (i.e. a use-after-free which usually leads
to an infinite loop).
This change fixes the above problem by incrementing the global event
counter before invoking umount_tree().
Change-Id: I622c8e84dcb9fb63542372c5dbf0178ee86bb589
Cc: stable@vger.kernel.org
Signed-off-by: Andrey Ulanov <andreyu@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
|
|
v9fs may be used as lower layer of overlayfs and accessing f_path.dentry
can lead to a crash. In this case it's a NULL pointer dereference in
p9_fid_create().
Fix by replacing direct access of file->f_path.dentry with the
file_dentry() accessor, which will always return a native object.
Reported-by: Alessio Igor Bogani <alessioigorbogani@gmail.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Tested-by: Alessio Igor Bogani <alessioigorbogani@gmail.com>
Fixes: 4bacc9c9234c ("overlayfs: Make f_path always point to the overlay and f_inode to the underlay")
Cc: <stable@vger.kernel.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
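A minimal sketch of the accessor pattern applied here:

#include <linux/fs.h>

static struct dentry *example_native_dentry(struct file *filp)
{
	/* file_dentry() returns the dentry of the underlying (native)
	 * filesystem even when the file was opened through overlayfs,
	 * whereas filp->f_path.dentry may point to the overlay dentry */
	return file_dentry(filp);
}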
|
|
Logan Gunthorpe reports that hibernation stopped working reliably for
him after commit ab76f7b4ab23 (x86/mm: Set NX on gap between __ex_table
and rodata).
That turns out to be a consequence of a long-standing issue with the
64-bit image restoration code on x86, which is that the temporary
page tables set up by it to avoid page tables corruption when the
last bits of the image kernel's memory contents are copied into
their original page frames re-use the boot kernel's text mapping,
but that mapping may very well get corrupted just like any other
part of the page tables. Of course, if that happens, the final
jump to the image kernel's entry point will go to nowhere.
The exact reason why commit ab76f7b4ab23 matters here is that it
sometimes causes a PMD of a large page to be split into PTEs
that are allocated dynamically and get corrupted during image
restoration as described above.
To fix that issue note that the code copying the last bits of the
image kernel's memory contents to the page frames occupied by them
previously doesn't use the kernel text mapping, because it runs from
a special page covered by the identity mapping set up for that code
from scratch. Hence, the kernel text mapping is only needed before
that code starts to run and then it will only be used just for the
final jump to the image kernel's entry point.
Accordingly, the temporary page tables set up in swsusp_arch_resume()
on x86-64 need to contain the kernel text mapping too. That mapping
is only going to be used for the final jump to the image kernel, so
it only needs to cover the image kernel's entry point, because the
first thing the image kernel does after getting control back is to
switch over to its own original page tables. Moreover, the virtual
address of the image kernel's entry point in that mapping has to be
the same as the one mapped by the image kernel's page tables.
With that in mind, modify the x86-64's arch_hibernation_header_save()
and arch_hibernation_header_restore() routines to pass the physical
address of the image kernel's entry point (in addition to its virtual
address) to the boot kernel (a small piece of assembly code involved
in passing the entry point's virtual address to the image kernel is
not necessary any more after that, so drop it). Update RESTORE_MAGIC
too to reflect the image header format change.
Next, in set_up_temporary_mappings(), use the physical and virtual
addresses of the image kernel's entry point passed in the image
header to set up a minimum kernel text mapping (using memory pages
that won't be overwritten by the image kernel's memory contents) that
will map those addresses to each other as appropriate.
This makes the concern about the possible corruption of the original
boot kernel text mapping go away and, if the minimum kernel text
mapping used for the final jump marks the image kernel's entry point
memory as executable, the jump to it is guaranteed to succeed.
Fixes: ab76f7b4ab23 (x86/mm: Set NX on gap between __ex_table and rodata)
Link: http://marc.info/?l=linux-pm&m=146372852823760&w=2
Reported-by: Logan Gunthorpe <logang@deltatee.com>
Reported-and-tested-by: Borislav Petkov <bp@suse.de>
Tested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
If the lockd service fails to start up then we need to be sure that the
notifier blocks are not registered, otherwise a subsequent start of the
service could cause the same notifier to be registered twice, leading to
soft lockups.
Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 0751ddf77b6a "lockd: Register callbacks on the inetaddr_chain..."
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
|
|
Asynchronous wb switching of inodes takes an additional refcount on an
inode to make sure the inode remains valid until the switchover is
completed. However, anyone calling ihold() must already hold a reference
on the inode, and in this case inode->i_count may already be zero:
------------[ cut here ]------------
WARNING: CPU: 1 PID: 917 at fs/inode.c:397 ihold+0x2b/0x30
CPU: 1 PID: 917 Comm: kworker/u4:5 Not tainted 4.7.0-rc2+ #49
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs
01/01/2011
Workqueue: writeback wb_workfn (flush-8:16)
0000000000000000 ffff88007ca0fb58 ffffffff805990af 0000000000000000
0000000000000000 ffff88007ca0fb98 ffffffff80268702 0000018d000004e2
ffff88007cef40e8 ffff88007c9b89a8 ffff880079e3a740 0000000000000003
Call Trace:
[<ffffffff805990af>] dump_stack+0x4d/0x6e
[<ffffffff80268702>] __warn+0xc2/0xe0
[<ffffffff802687d8>] warn_slowpath_null+0x18/0x20
[<ffffffff8035b4ab>] ihold+0x2b/0x30
[<ffffffff80367ecc>] inode_switch_wbs+0x11c/0x180
[<ffffffff80369110>] wbc_detach_inode+0x170/0x1a0
[<ffffffff80369abc>] writeback_sb_inodes+0x21c/0x530
[<ffffffff80369f7e>] wb_writeback+0xee/0x1e0
[<ffffffff8036a147>] wb_workfn+0xd7/0x280
[<ffffffff80287531>] ? try_to_wake_up+0x1b1/0x2b0
[<ffffffff8027bb09>] process_one_work+0x129/0x300
[<ffffffff8027be06>] worker_thread+0x126/0x480
[<ffffffff8098cde7>] ? __schedule+0x1c7/0x561
[<ffffffff8027bce0>] ? process_one_work+0x300/0x300
[<ffffffff80280ff4>] kthread+0xc4/0xe0
[<ffffffff80335578>] ? kfree+0xc8/0x100
[<ffffffff809903cf>] ret_from_fork+0x1f/0x40
[<ffffffff80280f30>] ? __kthread_parkme+0x70/0x70
---[ end trace aaefd2fd9f306bc4 ]---
Signed-off-by: Tahsin Erdogan <tahsin@google.com>
Acked-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
|
|
So far we failed to send the vblank event when disabling the CRTC, so the
last vblank event was never reported.
This was causing a timeout on the page flip, which should be solved now.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
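A sketch of the common pattern for completing the pending event when a
CRTC is turned off, as many atomic drivers do; it assumes the disable
hook has the CRTC's atomic state at hand:

#include <drm/drmP.h>
#include <drm/drm_crtc.h>

static void example_crtc_complete_event(struct drm_crtc *crtc)
{
	if (!crtc->state->event)
		return;

	spin_lock_irq(&crtc->dev->event_lock);
	drm_crtc_send_vblank_event(crtc, crtc->state->event);
	spin_unlock_irq(&crtc->dev->event_lock);

	crtc->state->event = NULL;
}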
|
|
The sun4i display engine doesn't have any vblank counter. Use the proper
helper for that.
Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
|
|
Pull KVM fixes from Paolo Bonzini:
"ARM and x86 fixes"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: nVMX: VMX instructions: fix segment checks when L1 is in long mode.
KVM: LAPIC: cap __delay at lapic_timer_advance_ns
KVM: x86: move nsec_to_cycles from x86.c to x86.h
pvclock: Get rid of __pvclock_read_cycles in function pvclock_read_flags
pvclock: Cleanup to remove function pvclock_get_nsec_offset
pvclock: Add CPU barriers to get correct version value
KVM: arm/arm64: Stop leaking vcpu pid references
arm64: KVM: fix build with CONFIG_ARM_PMU disabled
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc
Pull ARC fix from Vineet Gupta:
"Reinstate dwarf unwinder/loadable-modules with new gnu tools"
* tag 'arc-4.7-rc6-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
arc: unwind: warn only once if DW2_UNWIND is disabled
ARC: unwind: ensure that .debug_frame is generated (vs. .eh_frame)
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm
Pull pwm fixes from Thierry Reding:
"One more fix for some fallout observed after the introduction of the
atomic API"
* tag 'pwm/for-4.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/thierry.reding/linux-pwm:
pwm: Fix pwm_apply_args()
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd
Pull MFD fixes from Lee Jones:
"Contained are some standard fixes and unusually an extension to the
Reset API. Some of those changes are required to fix a bug introduced
in -rc1, which introduces extra 'reset line checks' i.e. whether the
line is shared or not. If a line is shared and the new *_shared() API
is not used, the request fails with an error. This breaks USB in v4.7
for ST's platforms.
Admittedly, there are some patches contained in our (MFD/Reset)
immutable branch which are not true -fixes, but there isn't anything I
can do about that. Rest assured though, there aren't any API
'changes'. Everything is the same from the consumer's perspective.
- Use new reset_*_get_shared() variant to prevent reset line
obtainment failure (Fixes commit 0b52297f2288: "reset: Add support
for shared reset controls")
- Fix unintentional switch() fall-through into error path
- Fix uninitialised variable compiler warning"
* tag 'mfd-fixes-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd:
mfd: da9053: Fix compiler warning message for uninitialised variable
mfd: max77620: Fix FPS switch statements
phy: phy-stih407-usb: Inform the reset framework that our reset line may be shared
usb: dwc3: st: Inform the reset framework that our reset line may be shared
usb: host: ehci-st: Inform the reset framework that our reset line may be shared
usb: host: ohci-st: Inform the reset framework that our reset line may be shared
reset: TRIVIAL: Add line break at same place for similar APIs
reset: Supply *_shared variant calls when using *_optional APIs
reset: Supply *_shared variant calls when using of_* API
reset: Ensure drivers are explicit when requesting reset lines
reset: Reorder inline reset_control_get*() wrappers
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into kvm-master
KVM/ARM Fixes for v4.7-rc6:
Fixes a build issue without CONFIG_ARM_PMU and plugs pid leak on arm/arm64.
|