2018-10-08  rxrpc: Carry call state out of locked section in rxrpc_rotate_tx_window() (David Howells)
Carry the call state out of the locked section in rxrpc_rotate_tx_window() rather than sampling it afterwards. This is only used to select tracepoint data, but could have changed by the time we do the tracepoint. Signed-off-by: David Howells <dhowells@redhat.com>
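As a rough illustration of the pattern (not the actual rxrpc code; the example_* names are invented), the state is sampled while the lock is still held and that snapshot is what gets used afterwards:

    #include <linux/spinlock.h>

    struct example_call {
            spinlock_t lock;
            int state;
    };

    static int example_rotate(struct example_call *call)
    {
            int state;

            spin_lock(&call->lock);
            /* ... rotate packets out of the Tx window ... */
            state = call->state;            /* sampled under the lock */
            spin_unlock(&call->lock);

            return state;   /* caller traces this snapshot, not call->state */
    }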
2018-10-08  rxrpc: Don't check RXRPC_CALL_TX_LAST after calling rxrpc_rotate_tx_window() (David Howells)
We should only call the function to end a call's Tx phase if we rotated the marked-last packet out of the transmission buffer. Make rxrpc_rotate_tx_window() return an indication of whether it just rotated the packet marked as the last out of the transmit buffer, carrying the information out of the locked section in that function. We can then check the return value instead of examining RXRPC_CALL_TX_LAST. Fixes: 70790dbe3f66 ("rxrpc: Pass the last Tx packet marker in the annotation buffer") Signed-off-by: David Howells <dhowells@redhat.com>
2018-10-08  rxrpc: Don't need to take the RCU read lock in the packet receiver (David Howells)
We don't need to take the RCU read lock in the rxrpc packet receive function because it's held further up the stack in the IP input routine around the UDP receive routines. Fix this by dropping the RCU read lock calls from rxrpc_input_packet(). This simplifies the code. Fixes: 70790dbe3f66 ("rxrpc: Pass the last Tx packet marker in the annotation buffer") Signed-off-by: David Howells <dhowells@redhat.com>
2018-10-08  rxrpc: Use the UDP encap_rcv hook (David Howells)
Use the UDP encap_rcv hook to cut the bit out of the rxrpc packet reception in which a packet is placed onto the UDP receive queue and then immediately removed again by rxrpc. Going via the queue in this manner seems like it should be unnecessary. This does, however, require the invention of a value to place in encap_type as that's one of the conditions to switch packets out to the encap_rcv hook. Possibly the value doesn't actually matter for anything other than sockopts on the UDP socket, which aren't accessible outside of rxrpc anyway. This seems to cut a bit of time out of the time elapsed between each sk_buff being timestamped and turning up in rxrpc (the final number in the following trace excerpts). I measured this by making the rxrpc_rx_packet trace point print the time elapsed between the skb being timestamped and the current time (in ns), e.g.:

    ... 424.278721: rxrpc_rx_packet: ... ACK 25026

So doing a 512MiB DIO read from my test server, with an unmodified kernel:

    N       min    max    sum          mean     stddev
    27605   2626   7581   7.83992e+07  2840.04  181.029

and with the patch applied:

    N       min    max    sum          mean     stddev
    27547   1895   12165  6.77461e+07  2459.29  255.02

Signed-off-by: David Howells <dhowells@redhat.com>
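As a rough sketch of how an encap_rcv hook is wired onto a kernel UDP socket (illustrative only; the example_* names are invented and this is not the rxrpc code), returning 0 tells UDP the skb was consumed, while a positive return value hands the packet back to normal UDP processing:

    #include <net/udp.h>
    #include <linux/udp.h>
    #include <linux/skbuff.h>

    static int example_encap_rcv(struct sock *sk, struct sk_buff *skb)
    {
            /* ... feed the skb straight into the protocol's input path ... */
            return 0;       /* consumed: never queued on the UDP socket */
    }

    static void example_hook_udp_socket(struct sock *sk)
    {
            udp_sk(sk)->encap_type = 1;     /* any non-zero value enables the hook */
            udp_sk(sk)->encap_rcv = example_encap_rcv;
            udp_encap_enable();
    }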
2018-10-08  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc (Greg Kroah-Hartman)
David writes:
 "Sparc fixes:
  1) Minor fallthru comment tweaks from Gustavo A. R. Silva.
  2) VLA removal from Kees Cook.
  3) Make sparc vdso Makefile match x86, from Masahiro Yamada.
  4) Fix clock divider programming in mach64 driver, from Mikulas Patocka."
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: fix fall-through annotation
  sparc32: fix fall-through annotation
  sparc: vdso: clean-up vdso Makefile
  oradax: remove redundant null check before kfree
  sparc64: viohs: Remove VLA usage
  sbus: Use of_get_child_by_name helper
  sparc: Convert to using %pOFn instead of device_node.name
  mach64: detect the dot clock divider correctly on sparc
2018-10-08  bpf: do not blindly change rlimit in reuseport net selftest (Eric Dumazet)
If the current process has unlimited RLIMIT_MEMLOCK, we should leave it as is. Fixes: 941ff6f11c02 ("bpf: fix rlimit in reuseport net selftest") Signed-off-by: John Sperbeck <jsperbeck@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
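A minimal userspace sketch of the idea (not the selftest code itself; the function name is invented): only raise RLIMIT_MEMLOCK when the current limit is not already unlimited.

    #include <sys/resource.h>

    static int bump_memlock_if_needed(void)
    {
            struct rlimit rlim;

            if (getrlimit(RLIMIT_MEMLOCK, &rlim))
                    return -1;
            if (rlim.rlim_cur == RLIM_INFINITY)
                    return 0;               /* already unlimited: leave it as is */

            rlim.rlim_cur = rlim.rlim_max = RLIM_INFINITY;
            return setrlimit(RLIMIT_MEMLOCK, &rlim);
    }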
2018-10-08  Merge branch 'bpf-to-bpf-calls-nfp' (Daniel Borkmann)
Quentin Monnet says: ==================== This patch series adds support for hardware offload of programs containing BPF-to-BPF function calls. First, a new callback is added to the kernel verifier, to collect information after the main part of the verification has been performed. Then support for BPF-to-BPF calls is incrementally added to the nfp driver, before offloading programs containing such calls is eventually allowed by lifting the restriction in the kernel verifier, in the last patch. Please refer to individual patches for details. Many thanks to Jiong and Jakub for their precious help and contribution on the main patches for the JIT-compiler, and everything related to stack accesses. ==================== Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  bpf: allow offload of programs with BPF-to-BPF function calls (Quentin Monnet)
Now that there is at least one driver supporting BPF-to-BPF function calls, lift the restriction, in the verifier, on hardware offload of eBPF programs containing such calls. But prevent jit_subprogs(), still in the verifier, from being run for offloaded programs. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
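The shape of such a gate might look roughly like the sketch below (hypothetical; the actual verifier code may differ, and the function name is invented): device-bound programs skip the host JIT of subprograms, since the offload driver translates the calls itself.

    #include <linux/bpf.h>
    #include <linux/bpf_verifier.h>

    static int example_fixup_call_args(struct bpf_verifier_env *env)
    {
            if (bpf_prog_is_dev_bound(env->prog->aux))
                    return 0;       /* offloaded: the device's JIT handles subprog calls */

            /* ... otherwise fall through to the host JIT of subprograms ... */
            return 0;
    }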
2018-10-08  nfp: bpf: support pointers to other stack frames for BPF-to-BPF calls (Quentin Monnet)
Mark instructions that use pointers to areas in the stack outside of the current stack frame, and process them accordingly in mem_op_stack(). This way, we also support BPF-to-BPF calls where the caller passes a pointer to data in its own stack frame to the callee (typically, when the caller passes an address to one of its local variables located in the stack, as an argument). Thanks to Jakub and Jiong for figuring out how to deal with this case, I just had to turn their email discussion into this patch. Suggested-by: Jiong Wang <jiong.wang@netronome.com> Suggested-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: optimise save/restore for R6~R9 based on register usage (Quentin Monnet)
When pre-processing the instructions, it is trivial to detect what subprograms are using R6, R7, R8 or R9 as destination registers. If a subprogram uses none of those, then we do not need to jump to the subroutines dedicated to saving and restoring callee-saved registers in its prologue and epilogue. This patch introduces detection of callee-saved registers in subprograms and prevents the JIT from adding calls to those subroutines whenever we can: we save some instructions in the translated program, and some time at runtime on BPF-to-BPF calls and returns. If no subprogram needs to save those registers, we can avoid appending the subroutines at the end of the program. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
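As an illustrative sketch of such a detection pass (not the nfp driver code; the function name is invented), a subprogram only needs the save/restore subroutines if one of its instructions writes R6..R9:

    #include <linux/filter.h>

    static bool example_uses_callee_saved_regs(const struct bpf_insn *insns, int cnt)
    {
            int i;

            for (i = 0; i < cnt; i++)
                    if (insns[i].dst_reg >= BPF_REG_6 &&
                        insns[i].dst_reg <= BPF_REG_9)
                            return true;

            return false;
    }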
2018-10-08  nfp: bpf: fix return address from register-saving subroutine to callee (Quentin Monnet)
On performing a BPF-to-BPF call, we first jump to a subroutine that pushes callee-saved registers (R6~R9) to the stack, and from there we go to the start of the callee next. In order to do so, the caller must pass to the subroutine the address of the NFP instruction to jump to at the end of that subroutine. This cannot be reliably implemented when translating the caller, as we do not always know the start offset of the callee yet. This patch implements the required fixup step for passing the start offset in the callee via the register used by the subroutine to hold its return address. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: update fixup function for BPF-to-BPF calls support (Quentin Monnet)
Relocations for targets of BPF-to-BPF calls are required at the end of translation. Update the nfp_fixup_branches() function in that regard. When checking that the last instruction of each block is a branch, we must account for the length of the instructions required to pop the return address from the stack. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: account for additional stack usage when checking stack limit (Quentin Monnet)
Offloaded programs using BPF-to-BPF calls use the stack to store the return address when calling into a subprogram. Callees also need some space to save eBPF registers R6 to R9. And contrary to the kernel verifier, we align stack frames on 64 bytes (and not 32). Account for all this when checking the stack size limit before JIT-ing the program. This means we have to recompute the maximum stack usage for the program; we cannot get the value from the kernel. In addition to adapting the checks on stack usage, move them to the finalize() callback, now that we have it and because such checks are part of the verification step rather than translation. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
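A rough sketch of that accounting (illustrative only; the constant for the return-address and R6-R9 spill area is an assumption, not the real figure): each frame's eBPF stack depth is padded and rounded up to the 64-byte alignment used on the NFP before being summed along the call chain.

    #include <linux/kernel.h>

    #define EXAMPLE_FRAME_ALIGN     64      /* NFP frame alignment (vs. 32 in the verifier) */
    #define EXAMPLE_RET_AND_REGS    40      /* hypothetical: return address + 4 x 8-byte regs */

    static unsigned int example_frame_size(unsigned int bpf_stack_depth)
    {
            return round_up(bpf_stack_depth + EXAMPLE_RET_AND_REGS,
                            EXAMPLE_FRAME_ALIGN);
    }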
2018-10-08  nfp: bpf: add main logics for BPF-to-BPF calls support in nfp driver (Quentin Monnet)
This is the main patch for the logic of BPF-to-BPF calls in the nfp driver. The functions called on BPF_JUMP | BPF_CALL and BPF_JUMP | BPF_EXIT were used to call helpers and exit from the program, respectively; make them usable for calling into, or returning from, a BPF subprogram as well. For all calls, push the return address as well as the callee-saved registers (R6 to R9) to the stack, and pop them upon returning from the calls. In order to limit the overhead in terms of instruction number, this is done through dedicated subroutines. Jumping to the callee actually consists of jumping to the subroutine, which "returns" to the callee: this will require some fixup for passing the address in a later patch. Similarly, returning consists of jumping to the subroutine, which pops registers and then returns directly to the caller (but no fixup is needed here). Return to the caller is performed with the RTN instruction newly added to the JIT. For the few steps where we need to know what subprogram an instruction belongs to, the struct nfp_insn_meta is extended with a new subprog_idx field. Note that checks on the available stack size, to take into account the additional requirements associated with BPF-to-BPF calls (storing R6-R9 and return addresses), are added in a later patch. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: account for BPF-to-BPF calls when preparing nfp JIT (Quentin Monnet)
Similarly to "exit" or "helper call" instructions, BPF-to-BPF calls will require additional processing before translation starts, in order to record and mark jump destinations. We also mark the instructions where each subprogram begins. This will be used in a following commit to determine where to add prologues for subprograms. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: ignore helper-related checks for BPF calls in nfp verifier (Quentin Monnet)
The checks related to eBPF helper calls are performed each time the nfp driver meets a BPF_JUMP | BPF_CALL instruction. However, these checks are not relevant for BPF-to-BPF calls (same instruction code, different value in source register), so just skip the checks for such calls. While at it, rename the function that runs those checks to make it clear they apply to _helper_ calls only. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: copy eBPF subprograms information from kernel verifier (Quentin Monnet)
In order to support BPF-to-BPF calls in offloaded programs, the nfp driver must collect information about the distinct subprograms: namely, the number of subprograms composing the complete program and the stack depth of those subprograms. The latter in particular is non-trivial to collect, so we copy those elements from the kernel verifier via the newly added post-verification hook. The struct nfp_prog is extended to store this information. Stack depths are stored in an array of dedicated structs. Subprogram start indexes are not collected. Instead, meta instructions associated to the start of a subprogram will be marked with a flag in a later patch. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  nfp: bpf: rename nfp_prog->stack_depth as nfp_prog->stack_frame_depth (Quentin Monnet)
In preparation for support for BPF to BPF calls in offloaded programs, rename the "stack_depth" field of the struct nfp_prog as "stack_frame_depth". This is to make it clear that the field refers to the maximum size of the current stack frame (as opposed to the maximum size of the whole stack memory). Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  bpf: add verifier callback to get stack usage info for offloaded progs (Quentin Monnet)
In preparation for BPF-to-BPF calls in offloaded programs, add a new callback to the struct bpf_prog_offload_ops so that drivers supporting eBPF offload can hook at the end of program verification, and potentially extract information collected by the verifier. Implement a minimal callback (returning 0) in the drivers providing the structs, namely netdevsim and nfp. This will be useful in the nfp driver, in later commits, to extract the number of subprograms as well as the stack depth for those subprograms. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jiong Wang <jiong.wang@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  bpf, doc: Document Jump X addressing mode (Arthur Fabre)
bpf_asm and the other classic BPF tools support jump conditions comparing register A to register X, in addition to comparing register A with constant K. Only the latter was documented in filter.txt; add two new addressing modes that describe the former. Signed-off-by: Arthur Fabre <arthur@arthurfabre.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
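For reference, the two addressing modes written with the classic-BPF C macros from linux/filter.h (only an illustration of the distinction, not the filter.txt text): the first jump compares the accumulator A with the constant K, the second compares A with register X.

    #include <linux/filter.h>

    static struct sock_filter example_prog[] = {
            /* if (A == 0x0800) fall through, else jump 2 insns ahead to "drop" */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x0800, 0, 2),
            /* if (A == X) fall through to "accept", else jump 1 insn to "drop" */
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 1),
            BPF_STMT(BPF_RET | BPF_K, 0xffff),      /* accept */
            BPF_STMT(BPF_RET | BPF_K, 0),           /* drop */
    };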
2018-10-08  libbpf: relicense libbpf as LGPL-2.1 OR BSD-2-Clause (Alexei Starovoitov)
libbpf is maturing as a library and gaining features that no other bpf libraries support (BPF Type Format, bpf to bpf calls, etc.). Many Apache2-licensed projects (like bcc, bpftrace, gobpf, cilium, etc.) would like to use libbpf, but cannot do this yet, since the Apache Foundation explicitly states that LGPL is incompatible with Apache2. Hence let's relicense libbpf as dual license LGPL-2.1 or BSD-2-Clause, since BSD-2 is compatible with Apache2. Dual LGPL or Apache2 is an invalid combination. Fix the license mistake in the Makefile as well. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrey Ignatov <rdna@fb.com> Acked-by: Arnaldo Carvalho de Melo <acme@kernel.org> Acked-by: Björn Töpel <bjorn.topel@intel.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: David Beckett <david.beckett@netronome.com> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Acked-by: Joe Stringer <joe@ovn.org> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Thomas Graf <tgraf@suug.ch> Acked-by: Roman Gushchin <guro@fb.com> Acked-by: Wang Nan <wangnan0@huawei.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-10-08  xsk: proper AF_XDP socket teardown ordering (Björn Töpel)
The AF_XDP socket struct can exist in three different, implicit states: setup, bound and released. Setup is before the socket has been bound to a device. Bound is when the socket is active for receive and send. Released is when the process/userspace side of the socket is released, but the sock object is still lingering, e.g. when there is a reference to the socket in an XSKMAP after process termination. The Rx fast-path code uses the "dev" member of struct xdp_sock to check whether a socket is bound or released, and the Tx code uses the struct xdp_umem "xsk_list" member in conjunction with "dev" to determine the state of a socket. However, the transition from bound to released did not tear the socket down in the correct order. On the Rx side "dev" was cleared after synchronize_net(), making the synchronization useless. On the Tx side, the internal queues were destroyed prior to removing them from the "xsk_list". This commit corrects the cleanup order, and by doing so xdp_del_sk_umem() can be simplified and one synchronize_net() can be removed. Fixes: 965a99098443 ("xsk: add support for bind for Rx") Fixes: ac98d8aab61b ("xsk: wire upp Tx zero-copy functions") Reported-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Björn Töpel <bjorn.topel@intel.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
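A simplified sketch of the corrected ordering (field and helper names are invented; this is not the actual xsk code): make the socket invisible to the Rx and Tx fast paths first, wait out concurrent softirq users with synchronize_net(), and only then free the queues.

    #include <linux/netdevice.h>
    #include <linux/rcupdate.h>

    struct example_xdp_sock {
            struct net_device *dev;
            /* ... Rx/Tx queues, umem xsk_list linkage ... */
    };

    static void example_xsk_release(struct example_xdp_sock *xs)
    {
            WRITE_ONCE(xs->dev, NULL);      /* Rx path now treats the socket as released */
            /* ... unlink xs from the umem's xsk_list so the Tx path can't find it ... */

            synchronize_net();              /* wait for in-flight softirq users */

            /* ... only now destroy the internal Rx/Tx queues ... */
    }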
2018-10-08  iwlwifi: mvm: kill INACTIVE queue state (Johannes Berg)
We don't really need this state: instead of having an inactive state where we can awaken zombie queues again if needed, just keep them in their normal state unless a new queue is actually needed and there's no other way of getting one. We do this here by making the inactivity check not free queues unless instructed that we now really need to allocate one to a specific station, and in that case it'll just free the queue immediately, without doing any inactivity step in between. The only downside is a little bit more processing in this case, but the code complexity is lower. Additionally, this fixes a corner case: due to the way the code worked, we could only ever reuse an inactive queue if it was the reserved queue for a station, as iwl_mvm_find_free_queue() would never consider returning an inactive queue. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: move iwl_mvm_sta_alloc_queue() down (Johannes Berg)
We want to call iwl_mvm_inactivity_check() from here in the next patch, so we need to move the code down to be able to. Fix a minor checkpatch complaint while at it. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  mac80211_hwsim: drop now unused work-queue from hwsim (Martin Willi)
The work-queue was used for deferred destruction of hwsim radios; this does not work well with namespaces about to exit. The one remaining user has been migrated, so drop the now unused work-queue instance. Signed-off-by: Martin Willi <martin@strongswan.org> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2018-10-08  iwlwifi: mvm: make iwl_mvm_scd_queue_redirect() static (Johannes Berg)
This function is only used in the file where it's declared, so just make it static. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: make queue TID change more explicit (Johannes Berg)
Instead of iterating over all the queues after having potentially changed some queue configurations and rechecking whether that was done, explicitly mark the ones that do need a TID change in a bitmap and use that to send the change to the firmware. While at it, also rename iwl_mvm_change_queue_owner() to iwl_mvm_change_queue_tid() since that's more obvious - the "kind" of owner isn't immediately clear right now. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  Merge remote-tracking branch 'net-next/master' into mac80211-next (Johannes Berg)
Merge net-next, which pulled in net, so I can merge a few more patches that would otherwise conflict. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2018-10-08  iwlwifi: mvm: remove RECONFIGURING queue state (Johannes Berg)
We set the queue to this state, only to pretty much immediately move it out of it again. However, we can't even hit any of the code that checks if the queue is reconfiguring, because all of this happens under mvm->mutex and we hold it all the way from marking the queue as RECONFIGURING to marking it as READY again. Additionally, the queue that became RECONFIGURING would've been in SHARED state before, and it can safely stay in that state. In case of errors, it previously would have stayed in RECONFIGURING, which it could never have left again. Remove the state entirely and just track the queues that need to be reconfigured in a separate, local, bitmap. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: reconfigure queues during inactivity check (Johannes Berg)
We currently reconfigure the queues after the inactivity check, but only in one of the two callers. This might leave queues in a state where the TID owner is wrong, if called when reserving a queue for a new station. Clean this up and do the reconfiguration inside the inactivity check function. This requires changing the locking, but one of the two places already holds the mvm mutex and the other easily can. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: move queue reconfiguration into new function (Johannes Berg)
If TVQM is used we skip over this code; move it into a new function to get rid of the label. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: clean up iteration in iwl_mvm_inactivity_check() (Johannes Berg)
There's no need to build a bitmap first and then iterate, just do the iteration with the right locking directly. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: remove per-queue hw refcount (Johannes Berg)
There's no need to have a hw refcount if we just mark the command queue with a (fake) TID; at that point, the refcount becomes equivalent to the hweight() of the TID bitmap. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: move queue management into sta.c (Johannes Berg)
None of these functions really needs to be separate; they're all only used in sta.c, so move them there and make them static. Fix a small typo in related code while at it. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: give TX queue info struct a name (Johannes Berg)
Make this a named struct rather than an anonymous one, we'll want to refer to it by name later. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: move rt status check to the start of the resume flow (Shahar S Matityahu)
Move the rt status checking to the start of the resume flow in order to avoid sending D0I3_END_CMD to the FW. Also, collect a dump if an assert was encountered. Signed-off-by: Shahar S Matityahu <shahar.s.matityahu@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: dump debug data before stop device (Shahar S Matityahu)
Debug data dump does not work in flows where stopping the device is used as part of the error handling. During these flows the op mode mutex is locked until the device stops. Because of that, any assert generated from the firmware can be handled only after the device has already stopped. Since dumping cannot occur after stopping the device, split the dump function into two parts: a part that handles locking, and a part that starts the actual dumping, and call the second part in the op mode stop device function. Signed-off-by: Shahar S Matityahu <shahar.s.matityahu@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: use fast balance scan in case of DCM mode with P2P GO (Ayala Beker)
Currently, in case of DCM with a P2P GO where the BSS DTIM interval < 220 msec, the fw fails to allocate events for the P2P GO dtim due to long passive scan events. Fix this by requesting all scans in this scenario to be fragmented with fast balance scan time settings. The only exception is if a fragmented scan was already planned due to low latency or high throughput reasons; in that case, set the scan timing as planned. Signed-off-by: Ayala Beker <ayala.beker@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: introduce a new fragmented scan type: fast balance (Ayala Beker)
Fast balance scan is similar to SCAN_TYPE_MILD, but this scan is fragmented and has a shorter out-of-operating-channel time, and therefore better matches low latency scenarios. Signed-off-by: Ayala Beker <ayala.beker@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: trace: change trace to trace one TB at a time (Sara Sharon)
Split TX tracing to be per TB. This is needed now that AMSDUs can be sent and the skb can be larger than the trace limit. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: pcie: don't pad AMSDU packets (Sara Sharon)
When we TX an AMSDU, we shouldn't pad the packet. In the past, we were building AMSDUs only in the transport layer, and the gen2 functions were built based on this. However, now that the op mode may build AMSDUs, we need to take care of padding also in the gen2 "non-pcie-amsdu" path. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  iwlwifi: mvm: don't send keys when entering D3 (Sara Sharon)
In the past, we needed to program the keys when entering D3. This was because we replaced the image. However, now that there is a single image, this is no longer needed. Note that RSC is sent separately in a new command. This solves issues with newer devices that support PN offload. Since the driver re-sent the keys, the PN got zeroed and the receiver dropped the next packets, until the PN caught up again. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2018-10-08  Merge tag 'vfio-ccw-20181002' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/vfio-ccw into fixes (Martin Schwidefsky)
Pull vfio-ccw from Cornelia Huck with the following changes:
 - Another fix for vfio-ccw: make sure it accesses the correct entries in the pfn_array_table arrays when checking pinned pages.
2018-10-08  Merge branch 'linux-4.19' of git://github.com/skeggsb/linux into drm-fixes (Dave Airlie)
runtime refcount fix for mst connectors. Signed-off-by: Dave Airlie <airlied@redhat.com> From: Ben Skeggs <bskeggs@redhat.com> Link: https://patchwork.freedesktop.org/patch/msgid/CABDvA=nydWjs26=TZHqistLXjCwm-vHmrisbP6K=FMZ5gW1wnQ@mail.gmail.com
2018-10-08  xfrm: use correct size to initialise sp->ovec (Li RongQing)
This place wants to initialize the whole array, not a single element, so it should use sizeof(array) instead of sizeof(element). Since this array currently has only one element (XFRM_MAX_OFFLOAD_DEPTH is 1), there is no error in practice. Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
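A generic illustration of the bug class (with a stand-in struct, not the actual sec_path definition): with sizeof of a single element, memset() clears only the first slot, while sizeof of the array clears them all.

    #include <string.h>

    #define EXAMPLE_DEPTH 4                 /* stand-in for XFRM_MAX_OFFLOAD_DEPTH */

    struct example_sec_path {
            int ovec[EXAMPLE_DEPTH];
    };

    static void example_init(struct example_sec_path *sp)
    {
            /* buggy: clears sizeof(int) bytes, i.e. only the first element */
            /* memset(sp->ovec, 0, sizeof(sp->ovec[0])); */

            /* correct: clears the whole array */
            memset(sp->ovec, 0, sizeof(sp->ovec));
    }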
2018-10-08  xfrm: remove unnecessary check in xfrmi_get_stats64 (Li RongQing)
If the tstats of a device are not allocated, the device is not registered correctly and cannot be used. Signed-off-by: Li RongQing <lirongqing@baidu.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2018-10-07  sparc64: fix fall-through annotation (Gustavo A. R. Silva)
Replace "fallthru" with a proper "fall through" annotation. This fix is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-07  sparc32: fix fall-through annotation (Gustavo A. R. Silva)
Replace "fallthru" with a proper "fall through" annotation. This fix is part of the ongoing efforts to enable -Wimplicit-fallthrough. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-07  sparc: vdso: clean-up vdso Makefile (Masahiro Yamada)
arch/sparc/vdso/Makefile is a replica of arch/x86/entry/vdso/Makefile. Clean up the Makefile in the same way as I did for x86:
 - Remove unnecessary export
 - Put the generated linker script to $(obj)/ instead of $(src)/
 - Simplify cmd_vdso2c
The corresponding x86 commits are:
 - 61615faf0a89 ("x86/build/vdso: Remove unnecessary export in Makefile")
 - 1742ed2088cc ("x86/build/vdso: Put generated linker scripts to $(obj)/")
 - c5fcdbf15523 ("x86/build/vdso: Simplify 'cmd_vdso2c'")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-07  oradax: remove redundant null check before kfree (Colin Ian King)
A null check before a kfree is redundant, so remove it. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
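The pattern being removed, in generic form (illustrative only): kfree(NULL) is defined to be a no-op, so the guard adds nothing.

    #include <linux/slab.h>

    static void example_before(void *p)
    {
            if (p)          /* redundant check */
                    kfree(p);
    }

    static void example_after(void *p)
    {
            kfree(p);       /* kfree(NULL) is a no-op */
    }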