|
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly. Since the callback is called from
both a timer and a tasklet, adjust the tasklet to pass the timer address
too. Once tasklets have their .data field removed, this can be
refactored so that separate timer and tasklet callbacks each resolve the
correct container_of() and then call a central function.
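As a reference, the conversion pattern shared by this series looks
roughly like the sketch below; the structure and callback names are
illustrative only, not taken from any particular driver:

#include <linux/timer.h>

struct my_priv {
        struct timer_list timer;        /* timer embedded in the private struct */
        unsigned long events;
};

/* New-style callback: receives the timer_list pointer, not an unsigned long. */
static void my_timer_fn(struct timer_list *t)
{
        struct my_priv *priv = from_timer(priv, t, timer);

        priv->events++;
        mod_timer(&priv->timer, jiffies + HZ);
}

static void my_priv_init(struct my_priv *priv)
{
        /* Replaces setup_timer(&priv->timer, my_timer_fn, (unsigned long)priv) */
        timer_setup(&priv->timer, my_timer_fn, 0);
}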
Cc: Oliver Neukum <oneukum@suse.com>
Cc: netdev@vger.kernel.org
Cc: linux-usb@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: Samuel Chessman <chessman@tux.org>
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-hams@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Hans Liljestrand <ishkamiel@gmail.com>
Cc: "Reshetova, Elena" <elena.reshetova@intel.com>
Cc: linux-x25@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In preparation for unconditionally passing the struct timer_list pointer to
all timer callbacks, switch to using the new timer_setup() and from_timer()
to pass the timer pointer explicitly.
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Johannes Berg <johannes.berg@intel.com>
Cc: David Ahern <dsa@cumulusnetworks.com>
Cc: linux-decnet-user@lists.sourceforge.net
Cc: netdev@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Vivien Didelot says:
====================
net: dsa: master and slave helpers
This patch series adds a few helpers to DSA core for clarity and
readability but brings no functional changes.
A dsa_slave_notify helper calls the DSA notifiers when (un)registering a
slave device.
Most of the DSA slave code only needs to access the dsa_port structure,
not the dsa_slave_priv (which only contains a few PHY-specific members).
Thus a dsa_slave_to_port helper returns a dsa_port structure of a slave
device.
A dsa_slave_to_master returns the master device of a slave device.
After that the netdev member of the dsa_port structure is split into two
explicit master and slave members to avoid confusion, and a dsa_to_port
helper is added for switch drivers to get a const reference to a port.
Changes in v2:
- prefer dsa_slave_to_master instead of dsa_slave_get_master
- rename dsa_master_get_slave to dsa_master_find_slave
- pack master and slave net devices into an anonymous union
- add dsa_to_port public helper for switch drivers
- add Reviewed-by tags from Florian
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The dsa_port structure is part of DSA core data and must only be updated
by the latter. It is OK and sometimes necessary for DSA drivers to
access this data, but such access has to be read-only.
For that purpose, add a dsa_to_port() helper which returns a const
pointer to a dsa_port structure which must be used by DSA drivers from
now on instead of digging into ds->ports[] themselves.
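As a sketch, such a helper can be as small as the following (treat this
as an illustration of the const contract rather than the exact code in
include/net/dsa.h):

static inline const struct dsa_port *dsa_to_port(struct dsa_switch *ds, int p)
{
        return &ds->ports[p];
}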
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The dsa_port structure has a "netdev" member, which can be used for
either the master device, or the slave device, depending on its type.
It is true that today, CPU ports are not exposed to userspace, thus the
port's netdev member can be used to point to its master interface.
But it is still slightly confusing, so split it into more explicit
"master" and "slave" members inside an anonymous union.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The dsa_master_get_slave name is slightly confusing since the idiomatic
"get" term often suggests reference counting, in symmetry to "put".
Rename it to dsa_master_find_slave to make the lookup operation clear.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Many parts of the DSA slave code need to get the master device assigned
to a slave device. Remove dsa_master_netdev() in favor of a
dsa_slave_to_master() helper which does that.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Many portions of the DSA core code need to get the dsa_port structure
corresponding to a slave net_device. For this purpose, introduce a
dsa_slave_to_port() helper.
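A hedged sketch of the helper's shape, assuming the slave netdev's
private data (dsa_slave_priv) keeps a pointer to its dsa_port (the field
name below is an assumption):

static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
{
        struct dsa_slave_priv *p = netdev_priv(dev);

        return p->dp;
}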
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Both the DSA slave create and destroy functions call call_dsa_notifiers
with DSA_PORT_REGISTER and DSA_PORT_UNREGISTER respectively, and the
same dsa_notifier_register_info structure.
Wrap this in a dsa_slave_notify helper to avoid cluttering these
functions.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When dsa_slave_create is called, the related port already has a CPU port
assigned to it, available in its cpu_dp member. Use it instead of the
unique tree cpu_dp.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It seems that it's possible to toggle NETLINK_F_EXT_ACK
through setsockopt() while another thread/CPU is building
a message inside netlink_ack(), which could then trigger
the WARN_ON()s I added: if the flag goes from being turned
off to being turned on between allocating and filling the
message, the skb could end up being too small.
Avoid this whole situation by storing the value of this
flag in a separate variable and using that throughout the
function instead.
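The shape of the fix, as a hedged sketch rather than the actual
netlink_ack() hunk (the netlink_sock type and NETLINK_F_EXT_ACK flag are
internal to af_netlink; the helper name here is made up):

#include <net/netlink.h>

static struct sk_buff *build_err_skb(struct netlink_sock *nlk,
                                     const struct netlink_ext_ack *extack)
{
        /* nlk->flags can be flipped concurrently via setsockopt(); sample it once. */
        bool has_extack = nlk->flags & NETLINK_F_EXT_ACK;
        size_t payload = sizeof(struct nlmsgerr);

        if (has_extack && extack && extack->_msg)
                payload += nla_total_size(strlen(extack->_msg) + 1);

        /* When filling the extack attributes later, test has_extack again
         * instead of re-reading nlk->flags, so sizing and filling agree.
         */
        return nlmsg_new(payload, GFP_KERNEL);
}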
Fixes: 2d4bc93368f5 ("netlink: extended ACK reporting")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch correctly sets the number of additional header descriptors
that will be sent in an indirect SCRQ entry.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When netlink_ack() reports an allocation error to the sending
socket, there's no need to look up the sending socket since
it's available in the SKB's CB. Use that instead of going to
the trouble of looking it up.
Note that the pointer is only available since Eric Biederman's
commit 3fbc290540a1 ("netlink: Make the sending netlink socket availabe in NETLINK_CB")
which is far newer than the original lookup code (Oct 2003)
(though the field was called 'ssk' in that commit and only got
renamed to 'sk' later, I'd actually argue 'ssk' was better - or
perhaps it should've been 'source_sk' - since there are so many
different 'sk's involved.)
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
When an EMAD is transmitted, a timeout work item is scheduled with a
delay of 200ms, so that the EMAD will be retransmitted, up to a maximum
of five retries.
In certain situations, it's possible for the function waiting on the
EMAD to be associated with a work item that is queued on the same
workqueue (`mlxsw_core`) as the timeout work item. This results in
flushing a work item on the same workqueue.
According to commit e159489baa71 ("workqueue: relax lockdep annotation
on flush_work()") the above may lead to a deadlock in case the workqueue
has only one worker active or if the system is under memory pressure and
the rescue worker is in use. The latter explains the very rare and
random nature of the lockdep splats we have been seeing:
[ 52.730240] ============================================
[ 52.736179] WARNING: possible recursive locking detected
[ 52.742119] 4.14.0-rc3jiri+ #4 Not tainted
[ 52.746697] --------------------------------------------
[ 52.752635] kworker/1:3/599 is trying to acquire lock:
[ 52.758378] (mlxsw_core_driver_name){+.+.}, at: [<ffffffff811c4fa4>] flush_work+0x3a4/0x5e0
[ 52.767837]
but task is already holding lock:
[ 52.774360] (mlxsw_core_driver_name){+.+.}, at: [<ffffffff811c65c4>] process_one_work+0x7d4/0x12f0
[ 52.784495]
other info that might help us debug this:
[ 52.791794] Possible unsafe locking scenario:
[ 52.798413] CPU0
[ 52.801144] ----
[ 52.803875] lock(mlxsw_core_driver_name);
[ 52.808556] lock(mlxsw_core_driver_name);
[ 52.813236]
*** DEADLOCK ***
[ 52.819857] May be due to missing lock nesting notation
[ 52.827450] 3 locks held by kworker/1:3/599:
[ 52.832221] #0: (mlxsw_core_driver_name){+.+.}, at: [<ffffffff811c65c4>] process_one_work+0x7d4/0x12f0
[ 52.842846] #1: ((&(&bridge->fdb_notify.dw)->work)){+.+.}, at: [<ffffffff811c65c4>] process_one_work+0x7d4/0x12f0
[ 52.854537] #2: (rtnl_mutex){+.+.}, at: [<ffffffff822ad8e7>] rtnl_lock+0x17/0x20
[ 52.863021]
stack backtrace:
[ 52.867890] CPU: 1 PID: 599 Comm: kworker/1:3 Not tainted 4.14.0-rc3jiri+ #4
[ 52.875773] Hardware name: Mellanox Technologies Ltd. "MSN2100-CB2F"/"SA001017", BIOS 5.6.5 06/07/2016
[ 52.886267] Workqueue: mlxsw_core mlxsw_sp_fdb_notify_work [mlxsw_spectrum]
[ 52.894060] Call Trace:
[ 52.909122] __lock_acquire+0xf6f/0x2a10
[ 53.025412] lock_acquire+0x158/0x440
[ 53.047557] flush_work+0x3c4/0x5e0
[ 53.087571] __cancel_work_timer+0x3ca/0x5e0
[ 53.177051] cancel_delayed_work_sync+0x13/0x20
[ 53.182142] mlxsw_reg_trans_bulk_wait+0x12d/0x7a0 [mlxsw_core]
[ 53.194571] mlxsw_core_reg_access+0x586/0x990 [mlxsw_core]
[ 53.225365] mlxsw_reg_query+0x10/0x20 [mlxsw_core]
[ 53.230882] mlxsw_sp_fdb_notify_work+0x2a3/0x9d0 [mlxsw_spectrum]
[ 53.237801] process_one_work+0x8f1/0x12f0
[ 53.321804] worker_thread+0x1fd/0x10c0
[ 53.435158] kthread+0x28e/0x370
[ 53.448703] ret_from_fork+0x2a/0x40
[ 53.453017] mlxsw_spectrum 0000:01:00.0: EMAD retries (2/5) (tid=bf4549b100000774)
[ 53.453119] mlxsw_spectrum 0000:01:00.0: EMAD retries (5/5) (tid=bf4549b100000770)
[ 53.453132] mlxsw_spectrum 0000:01:00.0: EMAD reg access failed (tid=bf4549b100000770,reg_id=200b(sfn),type=query,status=0(operation performed))
[ 53.453143] mlxsw_spectrum 0000:01:00.0: Failed to get FDB notifications
Fix this by creating another workqueue for EMAD timeouts, thereby
preventing the situation of a work item trying to flush a work item
queued on the same workqueue.
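In outline, the fix amounts to something like the following sketch (the
workqueue name and the field names are assumptions for illustration, not
the exact patch):

        /* A queue dedicated to EMAD timeout/retry work, so flushing it can
         * never mean flushing a work item on the workqueue we are running on.
         */
        mlxsw_core->emad_wq = alloc_workqueue("mlxsw_core_emad",
                                              WQ_MEM_RECLAIM, 0);
        if (!mlxsw_core->emad_wq)
                return -ENOMEM;

        /* ...and the 200ms timeout work gets queued there instead of on the
         * main mlxsw_core workqueue:
         */
        queue_delayed_work(mlxsw_core->emad_wq, &trans->timeout_dw,
                           msecs_to_jiffies(200));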
Fixes: caf7297e7ab5f ("mlxsw: core: Introduce support for asynchronous EMAD register access")
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Reported-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Jesper Dangaard Brouer says:
====================
net: New bpf cpumap type for XDP_REDIRECT
Introducing a new way to redirect XDP frames. Notice how no driver
changes are necessary given the design of XDP_REDIRECT.
This redirect map type is called 'cpumap', as it allows redirecting
XDP frames to remote CPUs. The remote CPU will do the SKB allocation
and start the network stack invocation on that CPU.
This is a scalability and isolation mechanism that allows separating
the early driver network XDP layer from the rest of the netstack, and
assigning dedicated CPUs for this stage. The sysadmin controls and
configures the RX-CPU to NIC-RX-queue mapping (as usual) via procfs
smp_affinity, and how many queues are configured via ethtool
--set-channels. Benchmarks show that a single CPU can handle approx
11Mpps. Thus, assigning only two NIC RX-queues (and two CPUs) is
sufficient for handling 10Gbit/s wirespeed at the smallest packet size,
14.88Mpps. Reducing the number of queues has the advantage that more
packets are available in "bulk" per hard interrupt[1].
[1] https://www.netdevconf.org/2.1/papers/BusyPollingNextGen.pdf
Use-cases:
1. End-host based pre-filtering for DDoS mitigation. This is fast
enough to allow software to see and filter all packets wirespeed.
Thus, no packets getting silently dropped by hardware.
2. Given NIC HW unevenly distributes packets across RX queues, this
mechanism can be used for redistributing load across CPUs. This
usually happens when HW is unaware of a new protocol. This
resembles RPS (Receive Packet Steering), just faster, but with more
responsibility placed on the BPF program for correct steering.
3. Auto-scaling or power saving via only activating the appropriate
number of remote CPUs for handling the current load. The cpumap
tracepoints can function as a feedback loop for this purpose.
In V7, a --stress-mode was implemented for the samples program, which
between each stats update, adds + removes CPUs from the map
concurrently with traffic. I did find and fix some concurrency issues
in the tear-down path; details are in the patch descriptions. The
stress test has now been running for 15 hours without any issues,
while being
bombarded with 11.6 Mpps via pktgen_sample04_many_flows.sh.
See individual patches for patchset-version changes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This sample program shows how to use cpumap and the associated
tracepoints.
It provides command line stats, which show how the XDP-RX process,
cpumap enqueue and cpumap kthread dequeue cooperate on a per-CPU
basis. It also utilizes the xdp_exception and xdp_redirect_err
tracepoints to allow users to quickly identify setup issues.
One issue with the ixgbe driver is that it resets the link when
loading XDP, which resets the procfs smp_affinity settings. Thus,
after loading the program, these must be reconfigured. The easiest
workaround is to reduce the number of RX queues to e.g. two via:
# ethtool --set-channels ixgbe1 combined 2
And then add CPUs above 0 and 1, like:
# xdp_redirect_cpu --dev ixgbe1 --prog 2 --cpu 2 --cpu 3 --cpu 4
Another issue with ixgbe is that the page recycle mechanism is tied to
the RX-ring size. And the default setting of 512 elements is too
small. This is the same issue with regular devmap XDP_REDIRECT.
To overcome this I've been using 1024 rx-ring size:
# ethtool -G ixgbe1 rx 1024 tx 1024
V3:
- whitespace cleanups
- bpf tracepoint cannot access top part of struct
V4:
- report on kthread sched events, according to tracepoint change
- report average bulk enqueue size
V5:
- bpf_map_lookup_elem on cpumap not allowed from bpf_prog
use separate map to mark CPUs not available
V6:
- correct kthread sched summary output
V7:
- Added a --stress-mode for concurrently changing underlying cpumap
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This adds two tracepoints to the cpumap. One for the enqueue side
trace_xdp_cpumap_enqueue() and one for the kthread dequeue side
trace_xdp_cpumap_kthread().
To mitigate the tracepoint overhead, these are invoked during the
enqueue/dequeue bulking phases, thus amortizing the cost.
The obvious use-cases are for debugging and monitoring. The
non-intuitive use-case is using these as a feedback loop to know the
system load. One can imagine auto-scaling by reducing, adding or
activating more worker CPUs on demand.
V4: tracepoint remove time_limit info, instead add sched info
V8: intro struct bpf_cpu_map_entry members cpu+map_id in this patch
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch makes cpumap functional, by adding SKB allocation and
invoking the network stack on the dequeuing CPU.
For constructing the SKB on the remote CPU, the xdp_buff is converted
into a struct xdp_pkt, which is mapped into the top of the packet
headroom to avoid allocating separate memory. For now, struct xdp_pkt
is just a cpumap-internal data structure, with info carried from
enqueue to dequeue.
If a driver doesn't have enough headroom, the packet is simply dropped
with return code -EOVERFLOW. This will be picked up by the xdp
tracepoint infrastructure, to allow users to catch this.
V2: take into account xdp->data_meta
V4:
- Drop busypoll tricks, keeping it more simple.
- Skip RPS and Generic-XDP-recursive-reinjection, suggested by Alexei
V5: correct RCU read protection around __netif_receive_skb_core.
V6: Setting TASK_RUNNING vs TASK_INTERRUPTIBLE based on talk with Rik van Riel
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch connects cpumap to the xdp_do_redirect_map infrastructure.
Still, no SKB allocations are done yet. The XDP frames are transferred
to the other CPU, but they are simply refcnt-decremented on the remote
CPU. This served as a good benchmark for measuring the overhead of the
remote refcnt decrement. If the driver's page-recycle cache is not
efficient, this exposes a bottleneck in the page allocator.
A shout-out to MST's ptr_ring, which is the secret behind this being so
efficient at transferring memory pointers between CPUs without
constantly bouncing cache lines between CPUs.
V3: Handle !CONFIG_BPF_SYSCALL pointed out by kbuild test robot.
V4: Make Generic-XDP aware of cpumap type, but don't allow redirect yet,
as implementation require a separate upstream discussion.
V5:
- Fix a maybe-uninitialized pointed out by kbuild test robot.
- Restrict bpf-prog side access to cpumap, open when use-cases appear
- Implement cpu_map_enqueue() as a more simple void pointer enqueue
V6:
- Allow cpumap type for usage in helper bpf_redirect_map,
general bpf-prog side restriction moved to earlier patch.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The 'cpumap' is primarily used as a backend map for XDP BPF helper
call bpf_redirect_map() and XDP_REDIRECT action, like 'devmap'.
This patch implements the main part of the map. It is not connected to
the XDP redirect system yet, and no SKB allocations are done yet.
The main concern in this patch is to ensure that the datapath can run
without any locking. This adds complexity to the setup and tear-down
procedure, whose assumptions are documented extra carefully in the
code comments.
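For orientation, a minimal XDP program using such a map might look like
the sketch below (samples/bpf-era map definition style; the fixed
destination CPU and the map sizing are purely illustrative):

#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"        /* samples/bpf helper macros (SEC, bpf_map_def) */

struct bpf_map_def SEC("maps") cpu_map = {
        .type           = BPF_MAP_TYPE_CPUMAP,
        .key_size       = sizeof(__u32),
        .value_size     = sizeof(__u32),        /* per-CPU queue size (qsize) */
        .max_entries    = 64,
};

SEC("xdp_cpumap_demo")
int xdp_redirect_to_cpu(struct xdp_md *ctx)
{
        __u32 dest_cpu = 2;     /* illustrative: steer everything to CPU 2 */

        /* Enqueue the frame for the kthread bound to dest_cpu; the SKB is
         * allocated and the stack invoked over there, not here.
         */
        return bpf_redirect_map(&cpu_map, dest_cpu, 0);
}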
V2:
- make sure array isn't larger than NR_CPUS
- make sure CPUs added is a valid possible CPU
V3: fix nitpicks from Jakub Kicinski <kubakici@wp.pl>
V5:
- Restrict map allocation to root / CAP_SYS_ADMIN
- WARN_ON_ONCE if queue is not empty on tear-down
- Return -EPERM on memlock limit instead of -ENOMEM
- Error code in __cpu_map_entry_alloc() also handle ptr_ring_cleanup()
- Moved cpu_map_enqueue() to next patch
V6: all notice by Daniel Borkmann
- Fix err return code in cpu_map_alloc() introduced in V5
- Move cpu_possible() check after max_entries boundary check
- Forbid usage initially in check_map_func_compatibility()
V7:
- Fix alloc error path spotted by Daniel Borkmann
- Did stress test adding+removing CPUs from the map concurrently
- Fixed refcnt issue on cpu_map_entry, kthread started too soon
- Make sure packets are flushed during tear-down, involved use of
rcu_barrier() and kthread_run only exit after queue is empty
- Fix alloc error path in __cpu_map_entry_alloc() for ptr_ring
V8:
- Nitpicking comments and grammar fixes by Edward Cree
- Fix missing semi-colon introduced in V7 due to rebasing
- Move struct bpf_cpu_map_entry members cpu+map_id to tracepoint patch
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
According to the ASPEED datasheet, gigabit speeds require a clock of
100MHz or higher. Other speeds require 25MHz or higher. This patch
configures a 100MHz clock if the system has a direct-attached PHY, or
25MHz if the system is running NC-SI, which is limited to 100Mbit/s.
As there appear to be no other upstream users of the FTGMAC100 driver,
it is hard to know the clocking requirements of other platforms.
Therefore a conservative approach was taken with enabling clocks: if
the platform is not ASPEED, both requesting the clock and configuring
the speed are skipped.
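A sketch of the clock handling described above, using the common clk
API (the function and field names, and the exact error handling, are
assumptions rather than the final driver code):

#include <linux/clk.h>

static int ftgmac100_setup_clk(struct platform_device *pdev,
                               struct ftgmac100 *priv)
{
        priv->clk = devm_clk_get(&pdev->dev, NULL);
        if (IS_ERR(priv->clk))
                return PTR_ERR(priv->clk);

        clk_prepare_enable(priv->clk);

        /* Gigabit needs >= 100 MHz; NC-SI tops out at 100 Mbit/s, so 25 MHz
         * is sufficient for that case.
         */
        return clk_set_rate(priv->clk, priv->use_ncsi ? 25000000 : 100000000);
}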
Signed-off-by: Joel Stanley <joel@jms.id.au>
Tested-by: Andrew Jeffery <andrew@aj.id.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Merge tag 'enforcement-4.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core
Pull enforcement policy update from Greg KH:
"Documentation: Add a file explaining the requested Linux kernel
license enforcement policy
Here's a new file to the kernel's Documentation directory. It adds a
short document describing the views of how the Linux kernel community
feels about enforcing the license of the kernel.
The patch has been reviewed by a large number of kernel developers
already, as seen by their acks on the patch, and their agreement of
the statement with their names on it. The location of the file was
also agreed upon by the Documentation maintainer, so all should be
good there.
For some background information about this statement, see this article
written by some of the kernel developers involved in drafting it:
http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement/
and this article that answers a number of questions that came up in
the discussion of this statement with the kernel developer community:
http://kroah.com/log/blog/2017/10/16/linux-kernel-community-enforcement-statement-faq/
If anyone has any further questions about it, please let me, and the
TAB members, know and we will be glad to help answer them"
* tag 'enforcement-4.14-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
Documentation: Add a file explaining the Linux kernel license enforcement policy
|
|
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Pull s390 fixes from Martin Schwidefsky:
"Two bug fixes:
- A fix for cputime accounting vs CPU hotplug
- Add two options to zfcpdump_defconfig to make SCSI dump work again"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
s390: fix zfcpdump-config
s390/cputime: fix guest/irq/softirq times after CPU hotplug
|
|
Merge tag 'trace-v4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
Pull tracing fix from Steven Rostedt:
"Testing a new trace event format, I triggered a bug by doing:
# modprobe trace-events-sample
# echo 1 > /sys/kernel/debug/tracing/events/sample-trace/enable
# rmmod trace-events-sample
This would cause an oops. The issue is that I added another trace
event sample that reused a reg function of another trace event to
create a thread to call the tracepoints. The problem was that the reg
function couldn't handle nested calls (reg; reg; unreg; unreg;) and
created two threads (instead of one) and only removed one on exit.
This isn't a critical bug as the bug is only in sample code. But
sample code should be free of known bugs to prevent others from
copying it. This is why this is also marked for stable"
* tag 'trace-v4.14-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
tracing/samples: Fix creation and deletion of simple_thread_fn creation
|
|
Make AF_RXRPC accept MSG_WAITALL as a flag to sendmsg() to tell it to
ignore signals whilst loading up the message queue, provided progress is
being made in emptying the queue at the other side.
Progress is defined as the base of the transmit window having been
advanced within 2 RTT periods. If the period is exceeded with no progress,
sendmsg() will return anyway, indicating how much data has been copied, if
any.
Once the supplied buffer is entirely decanted, the sendmsg() will return.
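From a userspace caller's point of view, the change is just the flag;
a hedged sketch (the AF_RXRPC control-message setup, such as the call
ID, is omitted for brevity):

#include <sys/socket.h>
#include <sys/uio.h>

static ssize_t rxrpc_send_blob(int fd, const void *buf, size_t len)
{
        struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
        struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

        /* With MSG_WAITALL, sendmsg() keeps queueing across signals as long
         * as the peer keeps draining the transmit window; if no progress is
         * made for ~2 RTTs, it returns how much has been copied so far.
         */
        return sendmsg(fd, &msg, MSG_WAITALL);
}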
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Provide a couple of functions to allow cleaner handling of signals in a
kernel service. They are:
(1) rxrpc_kernel_get_rtt()
This allows the kernel service to find out the RTT time for a call, so
as to better judge how large a timeout to employ.
Note, though, that whilst this returns a value in nanoseconds, the
timeouts can only actually be in jiffies.
(2) rxrpc_kernel_check_life()
This returns a number that is updated when ACKs are received from the
peer (notably including PING RESPONSE ACKs which we can elicit by
sending PING ACKs to see if the call still exists on the server).
The caller should compare the numbers of two calls to see if the call
is still alive.
These can be used to provide an extending timeout rather than returning
immediately in the case that a signal occurs that would otherwise abort an
RPC operation. The timeout would be extended if the server is still
responsive and the call is still apparently alive on the server.
For most operations this isn't that necessary - but for FS.StoreData it is:
OpenAFS writes the data to storage as it comes in without making a backup,
so if we immediately abort it when partially complete on a CTRL+C, say, we
have no idea of the state of the file after the abort.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Provide support for a kernel service to make use of the service upgrade
facility. This involves:
(1) Pass an upgrade request flag to rxrpc_kernel_begin_call().
(2) Make rxrpc_kernel_recv_data() return the call's current service ID so
that the caller can detect service upgrade and see what the service
was upgraded to.
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
The commit 99b5c5bb9a54 ("ALSA: hda - Remove the use of set_fs()")
converted the get_kctl_0dB_offset() call for killing set_fs() usage in
HD-audio codec code. The conversion assumed that the only TLV callback
used in the HD-audio code is snd_hda_mixer_amp_tlv(), which applies the
TLV calculation locally.
Although this assumption is correct, and all slave kctls do actually
use that callback, the current code is still utterly buggy; it
doesn't hit this condition and falls back to the next check. That's
because the function gets called after the slave kctls have been added
to the vmaster. By assigning a slave kctl, the slave kctl object is
faked inside the vmaster code, and the whole kctl ops are overridden.
Thus the callback op points to a different value from what we've
assumed.
Worse, as reported by the KERNEXEC and UDEREF features of PaX, the
code flow falls into an unexpected pitfall. The next fallback
check is SNDRV_CTL_ELEM_ACCESS_TLV_READ access bit, and this always
hits for each kctl with TLV. Then it evaluates the callback function
pointer wrongly as if it were a TLV array. Although currently its
side-effect is fairly limited, this incorrect reference may lead to an
unpleasant result.
For addressing the regression, this patch introduces a new helper to
vmaster code, snd_ctl_apply_vmaster_slaves(). This works similarly
to the existing map_slaves() in hda_codec.c: it loops over the slave
list of the given master, and applies the given function to each
slave. Then the initializer function receives the right kctl object
and we can compare the correct pointer instead of the faked one.
Also, for catching the similar breakage in future, give an error
message when the unexpected TLV callback is found and bail out
immediately.
Fixes: 99b5c5bb9a54 ("ALSA: hda - Remove the use of set_fs()")
Reported-by: PaX Team <pageexec@freemail.hu>
Cc: <stable@vger.kernel.org> # v4.13
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
While converting the error messages to the standard macros in the
commit 4e76a8833fac ("ALSA: hda - Replace with standard printk"), a
superfluous '-' slipped into the code mistakenly. Its influence is
almost negligible; it merely shows a dB value as a negative integer
instead of a positive one (or vice versa) in a rare error message.
So let's kill this embarrassing byte and show the correct value.
Fixes: 4e76a8833fac ("ALSA: hda - Replace with standard printk")
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
The loop in snd_hdac_bus_parse_capabilities() may go to nirvana when
it hits an invalid register value read:
BUG: unable to handle kernel paging request at ffffad5dc41f3fff
IP: pci_azx_readl+0x5/0x10 [snd_hda_intel]
Call Trace:
snd_hdac_bus_parse_capabilities+0x3c/0x1f0 [snd_hda_core]
azx_probe_continue+0x7d5/0x940 [snd_hda_intel]
.....
This happened on a new Intel machine, and we need to check the value
and abort the loop accordingly.
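The added guard is essentially a sanity check on the register read
inside the capability-walk loop; roughly (variable names illustrative,
not the exact hunk):

        unsigned int cur_cap = _snd_hdac_chip_readl(bus, offset);

        /* An all-ones read means the hardware returned garbage; stop walking
         * the capability list instead of chasing a bogus pointer.
         */
        if (cur_cap == -1) {
                dev_dbg(bus->dev, "Invalid capability reg read\n");
                break;
        }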
[Note: the fixes tag below indicates only the commit where this patch
can be applied; the original problem was introduced even before that
commit]
Fixes: 6720b38420a0 ("ALSA: hda - move bus_parse_capabilities to core")
Cc: <stable@vger.kernel.org>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
The ASN.1 parser does not necessarily set the sinfo field,
this patch prevents a NULL pointer dereference on broken
input.
Fixes: 99db44350672 ("PKCS#7: Appropriately restrict authenticated attributes and content type")
Signed-off-by: Eric Sesterhenn <eric.sesterhenn@x41-dsec.de>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org # 4.3+
|
|
In proc_keys_show(), the key semaphore is not held, so the key ->flags
and ->expiry can be changed concurrently. We therefore should read them
atomically just once.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Similar to the case for key_validate(), we should load the key ->expiry
once atomically in keyring_search_iterator(), since it can be changed
concurrently with the flags whenever the key semaphore isn't held.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
In key_validate(), load the flags and expiry time once atomically, since
these can change concurrently if key_validate() is called without the
key semaphore held. And we don't want to get inconsistent results if a
variable is referenced multiple times. For example, key->expiry was
referenced in both 'if (key->expiry)' and in 'if (now.tv_sec >=
key->expiry)', making it theoretically possible to see a spurious
EKEYEXPIRED while the expiration time was being removed, i.e. set to 0.
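The pattern, sketched (not the exact key_validate() diff): take one
snapshot of each racy field and test only the snapshot.

        unsigned long flags = READ_ONCE(key->flags);
        time_t expiry = READ_ONCE(key->expiry);

        if (flags & (1 << KEY_FLAG_INVALIDATED))
                return -ENOKEY;

        /* Both uses of the expiry now see the same value, so a concurrent
         * "clear the expiry" can no longer yield a spurious EKEYEXPIRED.
         */
        if (expiry && now.tv_sec >= expiry)
                return -EKEYEXPIRED;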
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Currently, when passed a key that already exists, add_key() will call the
key's ->update() method if such exists. But this is heavily broken in the
case where the key is uninstantiated because it doesn't call
__key_instantiate_and_link(). Consequently, it doesn't do most of the
things that are supposed to happen when the key is instantiated, such as
setting the instantiation state, clearing KEY_FLAG_USER_CONSTRUCT and
awakening tasks waiting on it, and incrementing key->user->nikeys.
It also never takes key_construction_mutex, which means that
->instantiate() can run concurrently with ->update() on the same key. In
the case of the "user" and "logon" key types this causes a memory leak, at
best. Maybe even worse, the ->update() methods of the "encrypted" and
"trusted" key types actually just dereference a NULL pointer when passed an
uninstantiated key.
Change key_create_or_update() to wait interruptibly for the key to finish
construction before continuing.
This patch only affects *uninstantiated* keys. For now we still allow a
negatively instantiated key to be updated (thereby positively
instantiating it), although that's broken too (the next patch fixes it)
and I'm not sure that anyone actually uses that functionality either.
Here is a simple reproducer for the bug using the "encrypted" key type
(requires CONFIG_ENCRYPTED_KEYS=y), though as noted above the bug
pertained to more than just the "encrypted" key type:
#include <stdlib.h>
#include <unistd.h>
#include <keyutils.h>

int main(void)
{
        int ringid = keyctl_join_session_keyring(NULL);

        if (fork()) {
                for (;;) {
                        const char payload[] = "update user:foo 32";

                        usleep(rand() % 10000);
                        add_key("encrypted", "desc", payload, sizeof(payload), ringid);
                        keyctl_clear(ringid);
                }
        } else {
                for (;;)
                        request_key("encrypted", "desc", "callout_info", ringid);
        }
}
It causes:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
IP: encrypted_update+0xb0/0x170
PGD 7a178067 P4D 7a178067 PUD 77269067 PMD 0
PREEMPT SMP
CPU: 0 PID: 340 Comm: reproduce Tainted: G D 4.14.0-rc1-00025-g428490e38b2e #796
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8a467a39a340 task.stack: ffffb15c40770000
RIP: 0010:encrypted_update+0xb0/0x170
RSP: 0018:ffffb15c40773de8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8a467a275b00 RCX: 0000000000000000
RDX: 0000000000000005 RSI: ffff8a467a275b14 RDI: ffffffffb742f303
RBP: ffffb15c40773e20 R08: 0000000000000000 R09: ffff8a467a275b17
R10: 0000000000000020 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: ffff8a4677057180 R15: ffff8a467a275b0f
FS: 00007f5d7fb08700(0000) GS:ffff8a467f200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000018 CR3: 0000000077262005 CR4: 00000000001606f0
Call Trace:
key_create_or_update+0x2bc/0x460
SyS_add_key+0x10c/0x1d0
entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x7f5d7f211259
RSP: 002b:00007ffed03904c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000f8
RAX: ffffffffffffffda RBX: 000000003b2a7955 RCX: 00007f5d7f211259
RDX: 00000000004009e4 RSI: 00000000004009ff RDI: 0000000000400a04
RBP: 0000000068db8bad R08: 000000003b2a7955 R09: 0000000000000004
R10: 000000000000001a R11: 0000000000000246 R12: 0000000000400868
R13: 00007ffed03905d0 R14: 0000000000000000 R15: 0000000000000000
Code: 77 28 e8 64 34 1f 00 45 31 c0 31 c9 48 8d 55 c8 48 89 df 48 8d 75 d0 e8 ff f9 ff ff 85 c0 41 89 c4 0f 88 84 00 00 00 4c 8b 7d c8 <49> 8b 75 18 4c 89 ff e8 24 f8 ff ff 85 c0 41 89 c4 78 6d 49 8b
RIP: encrypted_update+0xb0/0x170 RSP: ffffb15c40773de8
CR2: 0000000000000018
Cc: <stable@vger.kernel.org> # v2.6.12+
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Eric Biggers <ebiggers@google.com>
|
|
Consolidate KEY_FLAG_INSTANTIATED, KEY_FLAG_NEGATIVE and the rejection
error into one field such that:
(1) The instantiation state can be modified/read atomically.
(2) The error can be accessed atomically with the state.
(3) The error isn't stored unioned with the payload pointers.
This deals with the problem that the state is spread over three different
objects (two bits and a separate variable) and reading or updating them
atomically isn't practical, given that not only can uninstantiated keys
change into instantiated or rejected keys, but rejected keys can also turn
into instantiated keys - and someone accessing the key might not be using
any locking.
The main side effect of this problem is that what was held in the payload
may change, depending on the state. For instance, you might observe the
key to be in the rejected state. You then read the cached error, but if
the key semaphore wasn't locked, the key might've become instantiated
between the two reads - and you might now have something in hand that isn't
actually an error code.
The state is now KEY_IS_UNINSTANTIATED, KEY_IS_POSITIVE or a negative error
code if the key is negatively instantiated. The key_is_instantiated()
function is replaced with key_is_positive() to avoid confusion as negative
keys are also 'instantiated'.
Additionally, barriering is included:
(1) Order payload-set before state-set during instantiation.
(2) Order state-read before payload-read when using the key.
Further separate barriering is necessary if RCU is being used to access the
payload content after reading the payload pointers.
Fixes: 146aa8b1453b ("KEYS: Merge the type-specific data with the payload data")
Cc: stable@vger.kernel.org # v4.4+
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
|
|
When finding an asymmetric key, the input id_0 and id_1 parameters
cannot both be NULL at the same time. This patch adds a BUG_ON() check
for id_0 and id_1.
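The added check boils down to the following assertion (a sketch, not
the full hunk):

        /* At least one identifier must be supplied to search by. */
        BUG_ON(!id_0 && !id_1);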
Cc: David Howells <dhowells@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Chun-Yi Lee <jlee@suse.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
Fix the wrong index number used when checking for the existence of the
second id in the asymmetric key finding function. id_1 is the second
id, so its index in the array must be 1, not 0.
Fixes: 9eb029893ad5 (KEYS: Generalise x509_request_asymmetric_key())
Cc: David Howells <dhowells@redhat.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Chun-Yi Lee <jlee@suse.com>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
The recent rework introduced a possible randconfig build failure
when CONFIG_CRYPTO is configured to only allow modules:
security/keys/big_key.o: In function `big_key_crypt':
big_key.c:(.text+0x29f): undefined reference to `crypto_aead_setkey'
security/keys/big_key.o: In function `big_key_init':
big_key.c:(.init.text+0x1a): undefined reference to `crypto_alloc_aead'
big_key.c:(.init.text+0x45): undefined reference to `crypto_aead_setauthsize'
big_key.c:(.init.text+0x77): undefined reference to `crypto_destroy_tfm'
crypto/gcm.o: In function `gcm_hash_crypt_remain_continue':
gcm.c:(.text+0x167): undefined reference to `crypto_ahash_finup'
crypto/gcm.o: In function `crypto_gcm_exit_tfm':
gcm.c:(.text+0x847): undefined reference to `crypto_destroy_tfm'
When we 'select CRYPTO' like the other users, we always get a
configuration that builds.
Fixes: 428490e38b2e ("security/keys: rewrite all of big_key crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Howells <dhowells@redhat.com>
|
|
The 'use' locking macros are no-ops if neither SMP nor SND_DEBUG is
enabled. This might once have been OK in non-preemptible
configurations, but even in that case snd_seq_read() may sleep while
relying on a 'use' lock. So always use the proper implementations.
Cc: stable@vger.kernel.org
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
|
|
This reverts commit c91fc8519d87715a3a173475ea3778794c139996.
That change caused a C6 and PC6 residency regression on large idle systems.
Users also complained about new output indicating jitter:
turbostat: cpu6 jitter 3794 9142
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: 4.13+ <stable@vger.kernel.org> # v4.13+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Otherwise we can get the following if the fck alias is missing:
Unable to handle kernel paging request at virtual address fffffffe
...
PC is at clk_get_rate+0x8/0x10
LR is at omap_i2c_probe+0x278/0x6ec
...
[<c056eb08>] (clk_get_rate) from [<c06f4f08>] (omap_i2c_probe+0x278/0x6ec)
[<c06f4f08>] (omap_i2c_probe) from [<c0610944>] (platform_drv_probe+0x50/0xb0)
[<c0610944>] (platform_drv_probe) from [<c060e900>] (driver_probe_device+0x264/0x2ec)
[<c060e900>] (driver_probe_device) from [<c060cda0>] (bus_for_each_drv+0x70/0xb8)
[<c060cda0>] (bus_for_each_drv) from [<c060e5b0>] (__device_attach+0xcc/0x13c)
[<c060e5b0>] (__device_attach) from [<c060db10>] (bus_probe_device+0x88/0x90)
[<c060db10>] (bus_probe_device) from [<c060df68>] (deferred_probe_work_func+0x4c/0x14c)
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
|
|
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Four mostly error leg fixes and one more important regression in a
prior commit (the qla2xxx one)"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
scsi: fc: check for rport presence in fc_block_scsi_eh
scsi: qla2xxx: Fix uninitialized work element
scsi: libiscsi: fix shifting of DID_REQUEUE host byte
scsi: libfc: fix a deadlock in fc_rport_work
scsi: fixup kernel warning during rmmod()
|
|
Commit 7496946a8 ("tracing: Add samples of DECLARE_EVENT_CLASS() and
DEFINE_EVENT()") added template examples for all the events. It created a
DEFINE_EVENT_FN() example which reused the foo_bar_reg and foo_bar_unreg
functions.
Enabling both the TRACE_EVENT_FN() and DEFINE_EVENT_FN() example trace
events caused the foo_bar_reg to be called twice, creating the test thread
twice. The foo_bar_unreg would remove it only once, even if it was called
multiple times, leaving a thread still running when the module is
unloaded, causing an oops.
Add a ref count and allow foo_bar_reg() and foo_bar_unreg() to be called
by multiple trace events.
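The shape of the fix, as a hedged sketch (the names follow the sample
code, but details such as locking and exact signatures may differ from
the actual patch):

static DEFINE_MUTEX(thread_mutex);
static int simple_thread_cnt;   /* how many events currently need the thread */

int foo_bar_reg(void)
{
        mutex_lock(&thread_mutex);
        if (!simple_thread_cnt++)       /* first user starts the kthread */
                simple_tsk_fn = kthread_run(simple_thread_fn, NULL,
                                            "event-sample-fn");
        mutex_unlock(&thread_mutex);
        return 0;
}

void foo_bar_unreg(void)
{
        mutex_lock(&thread_mutex);
        if (!--simple_thread_cnt) {     /* last user stops it */
                kthread_stop(simple_tsk_fn);
                simple_tsk_fn = NULL;
        }
        mutex_unlock(&thread_mutex);
}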
Cc: stable@vger.kernel.org
Fixes: 7496946a8 ("tracing: Add samples of DECLARE_EVENT_CLASS() and DEFINE_EVENT()")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
|
|
The latest dtc warns about an extraneous cell in the interrupt
property of two of the iommu device nodes:
Warning (interrupts_property): interrupts size is (16), expected multiple of 12 in /iommu@ff373f00
Warning (interrupts_property): interrupts size is (16), expected multiple of 12 in /iommu@ff900800
This removes the typo.
Fixes: cede4c79de28 ("arm64: dts: rockchip: add rk3368 iommu nodes")
Fixes: 49c82f2b7c5d ("arm64: dts: rockchip: add rk3328 iommu nodes")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
|
|
The vcc_sd or vcc_sdio supplies used for the IO voltage of the sdmmc
and sdio interfaces on the rk3399 platform have a limitation: they
can't be larger than 3.0V, otherwise there is a potential risk to the
chip.
Correct all of them.
Fixes: 171582e00db1 ("arm64: dts: rockchip: add support for firefly-rk3399 board")
Fixes: 2c66fc34e945 ("arm64: dts: rockchip: add RK3399-Q7 (Puma) SoM")
Fixes: 8164a84cca12 ("arm64: dts: rockchip: Add support for rk3399 sapphire SOM")
Cc: stable@vger.kernel.org
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
Tested-by: Klaus Goger <klaus.goger@theobroma-systems.com>
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
|
|
Commit 52eb1ff93e98 ("i40e: Add support setting TC max bandwidth rates")
and commit 1ea6f21ae530 ("i40e: Refactor VF BW rate limiting") add some
needed functionality for TC bandwidth rate limiting. Unfortunately they
introduce several usages of unsigned 64-bit division, which needs to be
handled specially by the kernel to support all architectures.
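The usual way to express such divisions portably is via the helpers
from linux/math64.h; a short sketch with made-up names:

#include <linux/math64.h>

static u64 credits_for_rate(u64 max_tx_rate, u64 divisor)
{
        /* div64_u64() works on every architecture; a plain '/' on two u64
         * values would emit a libgcc call that 32-bit kernels don't provide.
         */
        u64 credits = div64_u64(max_tx_rate, divisor);

        /* do_div() is the variant for a 64-bit dividend and a 32-bit divisor:
         * it divides 'credits' in place and returns the remainder.
         */
        do_div(credits, 2);

        return credits;
}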
Fixes: 52eb1ff93e98 ("i40e: Add support setting TC max bandwidth rates")
Fixes: 1ea6f21ae530 ("i40e: Refactor VF BW rate limiting")
Signed-off-by: Alan Brady <alan.brady@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
|