2018-03-17  mlxsw: spectrum_buffers: Set a minimum quota for CPU port traffic  (Ido Schimmel)
In commit 9ffcc3725f09 ("mlxsw: spectrum: Allow packets to be trapped from any PG") I fixed a problem where packets could not be trapped to the CPU due to exceeded shared buffer quotas. The mentioned commit explains the problem in detail. The problem was fixed by assigning a minimum quota for the CPU port and the traffic class used for scheduling traffic to the CPU. However, commit 117b0dad2d54 ("mlxsw: Create a different trap group list for each device") assigned different traffic classes to different packet types and rendered the fix useless. Fix the problem by assigning a minimum quota for the CPU port and all the traffic classes that are currently in use. Fixes: 117b0dad2d54 ("mlxsw: Create a different trap group list for each device") Signed-off-by: Ido Schimmel <idosch@mellanox.com> Reported-by: Eddie Shklaer <eddies@mellanox.com> Tested-by: Eddie Shklaer <eddies@mellanox.com> Acked-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  cxgb4: Fix queue free path of ULD drivers  (Arjun Vynipadath)
Set sge_uld_rxq_info to NULL in free_queues_uld(), since sge_uld_rxq_info is referenced in cxgb_up(). This fixes a panic when the interface is brought up after a ULD queue creation failure. Fixes: 94cdb8bb993a ("cxgb4: Add support for dynamic allocation of resources for ULD") Signed-off-by: Arjun Vynipadath <arjun@chelsio.com> Signed-off-by: Casey Leedom <leedom@chelsio.com> Signed-off-by: Ganesh Goudhar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  rds: tcp: must use spin_lock_irq* and not spin_lock_bh with rds_tcp_conn_lock  (Sowmini Varadhan)
rds_tcp_connection allocation/free management has the potential to be called from __rds_conn_create after IRQs have been disabled, so spin_[un]lock_bh cannot be used with rds_tcp_conn_lock. Bottom-halves that need to synchronize for critical sections protected by rds_tcp_conn_lock should instead use rds_destroy_pending() correctly. Reported-by: syzbot+c68e51bb5e699d3f8d91@syzkaller.appspotmail.com Fixes: ebeeb1ad9b8a ("rds: tcp: use rds_destroy_pending() to synchronize netns/module teardown and rds connection/workq management") Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
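The locking shape this implies, as a rough sketch (the list and member names follow net/rds/tcp.c, but the struct below is a cut-down illustration and the helper name is made up):

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* cut-down stand-in for the real struct in net/rds/tcp.h */
    struct rds_tcp_connection {
            struct list_head t_tcp_node;
    };

    static LIST_HEAD(rds_tcp_conn_list);
    static DEFINE_SPINLOCK(rds_tcp_conn_lock);

    /* Before: spin_lock_bh(&rds_tcp_conn_lock) -- not allowed once this can be
     * reached from __rds_conn_create() with IRQs already disabled.
     */
    static void rds_tcp_conn_track(struct rds_tcp_connection *tc)
    {
            unsigned long flags;

            spin_lock_irqsave(&rds_tcp_conn_lock, flags);
            list_add_tail(&tc->t_tcp_node, &rds_tcp_conn_list);
            spin_unlock_irqrestore(&rds_tcp_conn_lock, flags);
    }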
2018-03-17  Merge branch 'tipc-obsolete-zone-concept'  (David S. Miller)
Jon Maloy says: ==================== tipc: obsolete zone concept Functionality related to the 'zone' concept was never implemented in TIPC. In this series we eliminate the remaining traces of it in the code, and can hence take a first step in reducing the footprint and complexity of the binding table. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  tipc: some name changes  (Jon Maloy)
We rename some lists and fields in struct publication both to make the naming more consistent and to better reflect their roles. We also update the descriptions of those lists.
  node_list    -> local_publ
  cluster_list -> all_publ
  pport_list   -> binding_sock
  ref          -> port
There are no functional changes in this commit. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  tipc: merge two lists in struct publication  (Jon Maloy)
The size of struct publication can be reduced further. Membership in the lists 'nodesub_list' and 'local_list' is mutually exclusive, in that remote publications use the former and local publications the latter. We replace the two lists with a single one, named 'binding_node', which reflects what it really is. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  tipc: remove zone_list member in struct publication  (Jon Maloy)
As a further consequence of the previous commits, we can also remove the member 'zone_list' in struct name_info and struct publication. Instead, we now let the member cluster_list take over the role as container of all publications of a given <type, lower, upper>. We also remove the counters for the size of those lists, since they don't serve any purpose. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  tipc: remove zone publication list in name table  (Jon Maloy)
As a consequence of the previous commit we can now eliminate the zone scope related lists in the name table. We start with name_table::publ_list[3], which can now be replaced with two lists, one for node scope publications and one for cluster scope publications. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  tipc: obsolete TIPC_ZONE_SCOPE  (Jon Maloy)
Publications for TIPC_CLUSTER_SCOPE and TIPC_ZONE_SCOPE are in all aspects handled the same way, both on the publishing node and on the receiving nodes. Despite previous ambitions to the contrary, this is never going to change, so we take the consequence of this and obsolete TIPC_ZONE_SCOPE and related macros/functions. Whenever a user is doing a bind() or a sendmsg() attempt using ZONE_SCOPE, we translate this internally to CLUSTER_SCOPE, while we remain compatible with users and remote nodes still using ZONE_SCOPE. Furthermore, the non-formalized scope value 0 has always been permitted for use during lookup, with the same meaning as ZONE_SCOPE/CLUSTER_SCOPE. We now permit it even as binding scope, but for compatibility reasons we choose not to change the value of TIPC_CLUSTER_SCOPE. Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
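A minimal sketch of the translation described above; the scope values match include/uapi/linux/tipc.h, but the helper and its placement are illustrative (the real check lives in the TIPC socket/binding code):

    /* Scope values as defined in include/uapi/linux/tipc.h */
    #define TIPC_ZONE_SCOPE     1
    #define TIPC_CLUSTER_SCOPE  2
    #define TIPC_NODE_SCOPE     3

    /* Illustrative helper: fold the obsolete ZONE_SCOPE (and the informal
     * scope value 0) into CLUSTER_SCOPE so existing binaries keep working.
     */
    static int tipc_effective_scope(int scope)
    {
            if (scope == TIPC_ZONE_SCOPE || scope == 0)
                    return TIPC_CLUSTER_SCOPE;
            return scope;
    }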
2018-03-17  Merge branch 'pernet-convert-part8'  (David S. Miller)
Kirill Tkhai says: ==================== Converting pernet_operations (part #8) This series continues the review and conversion of pernet_operations, making it possible to execute them in parallel for several net namespaces at the same time. The operations touched here are spread over the tree; most of them are ipvs. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: Convert ip_vs_ftp_ops  (Kirill Tkhai)
These pernet_operations register and unregister ipvs app. register_ip_vs_app(), unregister_ip_vs_app() and register_ip_vs_app_inc() modify per-net structures, and there are no global structures touched. So, this looks safe to be marked as async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: Convert ipvs_core_dev_ops  (Kirill Tkhai)
Exit method stops two per-net threads and cancels delayed work. Everything looks nicely per-net divided. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: Convert ipvs_core_ops  (Kirill Tkhai)
These pernet_operations register and unregister nf hooks, /proc entries, sysctl and percpu statistics. There are several global lists, and the only list modified without exclusive locks is ip_vs_conn_tab in ip_vs_conn_flush(). We iterate the list and force the timers to expire immediately. Since several timer expirations were already possible before this patch, and since they are safe, the patch does not introduce new parallelism in their destruction. These pernet_operations look safe to be converted. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: Convert ovs_net_ops  (Kirill Tkhai)
These pernet_operations initialize and destroy the net_generic() data pointed to by ovs_net_id. The exit method destroys vports from alive nets to the exiting net. Since these are the only pernet_operations interested in this data, and the exit method is executed under the exclusive global lock (ovs_mutex), they are safe to be executed in parallel. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
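Concretely, "marking as async" in this series amounts to setting the .async flag on the pernet_operations; a sketch using the openvswitch instance as an example (field values as they appear in net/openvswitch/datapath.c around this time, reproduced from memory):

    static struct pernet_operations ovs_net_ops = {
            .init   = ovs_init_net,
            .exit   = ovs_exit_net,
            .id     = &ovs_net_id,
            .size   = sizeof(struct ovs_net),
            .async  = true,   /* setup/teardown may run in parallel with
                               * other net namespaces */
    };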
2018-03-17  net: Convert mpls_net_ops  (Kirill Tkhai)
These pernet_operations register and unregister a sysctl table. The exit method frees platform_labels from net::mpls::platform_label. Everything is per-net, and they look safe to be marked async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: Convert l2tp_net_ops  (Kirill Tkhai)
The init method is rather simple. The exit method queues del_work for every tunnel on the per-net list. This seems to be safe to be marked async. Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Acked-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-17  net: sched: fix uses after free  (Eric Dumazet)
syzbot reported one use-after-free in pfifo_fast_enqueue() [1] Issue here is that we can not reuse skb after a successful skb_array_produce() since another cpu might have consumed it already. I believe a similar problem exists in try_bulk_dequeue_skb_slow() in case we put an skb into qdisc_enqueue_skb_bad_txq() for lockless qdisc.

[1]
BUG: KASAN: use-after-free in qdisc_pkt_len include/net/sch_generic.h:610 [inline]
BUG: KASAN: use-after-free in qdisc_qstats_cpu_backlog_inc include/net/sch_generic.h:712 [inline]
BUG: KASAN: use-after-free in pfifo_fast_enqueue+0x4bc/0x5e0 net/sched/sch_generic.c:639
Read of size 4 at addr ffff8801cede37e8 by task syzkaller717588/5543

CPU: 1 PID: 5543 Comm: syzkaller717588 Not tainted 4.16.0-rc4+ #265
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 print_address_description+0x73/0x250 mm/kasan/report.c:256
 kasan_report_error mm/kasan/report.c:354 [inline]
 kasan_report+0x23c/0x360 mm/kasan/report.c:412
 __asan_report_load4_noabort+0x14/0x20 mm/kasan/report.c:432
 qdisc_pkt_len include/net/sch_generic.h:610 [inline]
 qdisc_qstats_cpu_backlog_inc include/net/sch_generic.h:712 [inline]
 pfifo_fast_enqueue+0x4bc/0x5e0 net/sched/sch_generic.c:639
 __dev_xmit_skb net/core/dev.c:3216 [inline]

Fixes: c5ad119fb6c0 ("net: sched: pfifo_fast use skb_array") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot+ed43b6903ab968b16f54@syzkaller.appspotmail.com Cc: John Fastabend <john.fastabend@gmail.com> Cc: Jamal Hadi Salim <jhs@mojatatu.com> Cc: Cong Wang <xiyou.wangcong@gmail.com> Cc: Jiri Pirko <jiri@resnulli.us> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
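The shape of the fix, as a simplified sketch of pfifo_fast_enqueue() rather than the verbatim hunk: cache whatever is needed from the skb before skb_array_produce(), since the skb may be consumed and freed by another CPU the moment it is produced.

    static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
                                  struct sk_buff **to_free)
    {
            int band = prio2band[skb->priority & TC_PRIO_MAX];
            struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
            struct skb_array *q = band2list(priv, band);
            unsigned int pkt_len = qdisc_pkt_len(skb);      /* read before produce */
            int err;

            err = skb_array_produce(q, skb);
            if (unlikely(err))
                    return qdisc_drop_cpu(skb, qdisc, to_free);

            qdisc_qstats_cpu_qlen_inc(qdisc);
            /* skb may already be gone here: account with the cached length
             * instead of qdisc_qstats_cpu_backlog_inc(qdisc, skb).
             */
            this_cpu_add(qdisc->cpu_qstats->backlog, pkt_len);
            return NET_XMIT_SUCCESS;
    }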
2018-03-17  parisc: Handle case where flush_cache_range is called with no context  (John David Anglin)
Just when I had decided that flush_cache_range() was always called with a valid context, Helge reported two cases where the "BUG_ON(!vma->vm_mm->context);" was hit on the phantom buildd:

 kernel BUG at /mnt/sdb6/linux/linux-4.15.4/arch/parisc/kernel/cache.c:587!
 CPU: 1 PID: 3254 Comm: kworker/1:2 Tainted: G D 4.15.0-1-parisc64-smp #1 Debian 4.15.4-1+b1
 Workqueue: events free_ioctx
 IAOQ[0]: flush_cache_range+0x164/0x168
 IAOQ[1]: flush_cache_page+0x0/0x1c8
 RP(r2): unmap_page_range+0xae8/0xb88
 Backtrace:
  [<00000000404a6980>] unmap_page_range+0xae8/0xb88
  [<00000000404a6ae0>] unmap_single_vma+0xc0/0x188
  [<00000000404a6cdc>] zap_page_range_single+0x134/0x1f8
  [<00000000404a702c>] unmap_mapping_range+0x1cc/0x208
  [<0000000040461518>] truncate_pagecache+0x98/0x108
  [<0000000040461624>] truncate_setsize+0x9c/0xb8
  [<00000000405d7f30>] put_aio_ring_file+0x80/0x100
  [<00000000405d803c>] aio_free_ring+0x8c/0x290
  [<00000000405d82c0>] free_ioctx+0x80/0x180
  [<0000000040284e6c>] process_one_work+0x21c/0x668
  [<00000000402854c4>] worker_thread+0x20c/0x778
  [<0000000040291d44>] kthread+0x2d4/0x2e0
  [<0000000040204020>] end_fault_vector+0x20/0xc0

This indicates that we need to handle the no context case in flush_cache_range() as we do in flush_cache_mm(). In thinking about this, I realized that we don't need to flush the TLB when there is no context. So, I added context checks to the large flush cases in flush_cache_mm() and flush_cache_range(). The large flush case occurs frequently in flush_cache_mm() and the change should improve fork performance. The v2 version of this change removes the BUG_ON from flush_cache_page() by skipping the TLB flush when there is no context. I also added code to flush the TLB in flush_cache_mm() and flush_cache_range() when we have a context that's not current. Now all three routines handle TLB flushes in a similar manner.

Signed-off-by: John David Anglin <dave.anglin@bell.net> Cc: stable@vger.kernel.org # 4.9+ Signed-off-by: Helge Deller <deller@gmx.de>
2018-03-17  drm/tegra: prime: Implement ->{begin,end}_cpu_access()  (Thierry Reding)
These callbacks allow the exporter to swap in and pin the backing storage for buffers as well as invalidate the cache in preparation for accessing the buffer from the CPU, and flush the cache and unpin the backing storage when the CPU is done modifying the buffer. Signed-off-by: Thierry Reding <treding@nvidia.com>
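Roughly what the begin-access side looks like when wired into the exporter's dma_buf_ops; the callback signature comes from include/linux/dma-buf.h, while the tegra-specific helpers and fields below are illustrative rather than the exact implementation:

    static int tegra_gem_prime_begin_cpu_access(struct dma_buf *buf,
                                                enum dma_data_direction direction)
    {
            struct tegra_bo *bo = to_tegra_bo(buf->priv);   /* illustrative */
            struct drm_device *drm = bo->gem.dev;

            /* make device writes visible before the CPU touches the buffer */
            if (bo->sgt)
                    dma_sync_sg_for_cpu(drm->dev, bo->sgt->sgl, bo->sgt->nents,
                                        direction);

            return 0;
    }

The matching end_cpu_access() callback does the reverse (dma_sync_sg_for_device()) so the device sees any CPU writes once access is finished.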
2018-03-17  drm/tegra: gem: Map pages via the DMA API  (Thierry Reding)
When allocating pages, map them with the DMA API in order to invalidate caches. This is the correct usage of the API and works just as well as faking up the SG table and using the dma_sync_sg_for_device() function. Signed-off-by: Thierry Reding <treding@nvidia.com>
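Per allocated page this amounts to something along these lines (a sketch of the idea, not the exact tegra hunk; the direction argument in particular is illustrative):

    for (i = 0; i < num_pages; i++) {
            dma_addr_t iova = dma_map_page(drm->dev, pages[i], 0, PAGE_SIZE,
                                           DMA_FROM_DEVICE);

            /* the DMA core performs the cache maintenance as part of the
             * mapping, so no hand-rolled dma_sync_sg_for_device() is needed
             */
            if (dma_mapping_error(drm->dev, iova))
                    goto unmap;

            /* remember iova for the later dma_unmap_page() */
    }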
2018-03-17  drm/tegra: hub: Use private object for global state  (Thierry Reding)
Rather than subclass the global atomic state to store the hub display clock and rate, create a private object and store this data in its state. Signed-off-by: Thierry Reding <treding@nvidia.com>
2018-03-16  drm/vc4_validate: Remove VLA usage  (Gustavo A. R. Silva)
In preparation to enabling -Wvla, remove VLA. In this particular case use the macro ARRAY_SIZE so the length of array _bo_ can be computed at compile time. The use of stack Variable Length Arrays needs to be avoided, as they can be a vector for stack exhaustion, which can be both a runtime bug and a security flaw. Also, in general, as code evolves it is easy to lose track of how big a VLA can get. Thus, we can end up having runtime failures that are hard to debug. Also, fixed as part of the directive to remove all VLAs from the kernel: https://lkml.org/lkml/2018/3/7/621 Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Eric Anholt <eric@anholt.net> Reviewed-by: Eric Anholt <eric@anholt.net> Link: https://patchwork.freedesktop.org/patch/msgid/20180313143151.GA27486@embeddedgus
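The general shape of such a change, as a generic illustration (not the actual vc4 hunk; the table name is made up):

    #include <linux/kernel.h>       /* ARRAY_SIZE() */
    #include <linux/types.h>

    static const u32 reloc_offsets[] = { 0x00, 0x04, 0x08 };   /* made-up table */

    static void validate_example(void)
    {
            /* Before: void *bo[n]; with n only known at run time (a VLA).
             * After: size the array from a compile-time constant instead.
             */
            void *bo[ARRAY_SIZE(reloc_offsets)];

            (void)bo;   /* placeholder for the real validation work */
    }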
2018-03-16  amdgpu/dm: Default PRE_VEGA ASIC support to 'y'  (Harry Wentland)
Even though we default PRE_VEGA support to 'n' upstream, in amd-staging we want to keep it enabled by default. Signed-off-by: Harry Wentland <harry.wentland@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-03-16  drm/amd/pp: Remove the cgs wrapper for notify smu version on APU  (Rex Zhu)
Refine commit f49e9bac191b ("drm/amd/pp: Get and save Rv smu version") Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-03-16  drm/amd/display: fix dereferencing possible ERR_PTR()  (Shirish S)
This patch fixes a static checker warning caused by commit 36cc549d5986 ("drm/amd/display: disable CRTCs with NULL FB on their primary plane (V2)"). Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Shirish S <shirish.s@amd.com> Reviewed-by: Harry Wentland <harry.wentland@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-03-16  drm/amd/display: Refine disable VGA  (Clark Zheng)
A bad case doesn't follow the normal pattern: VGA1 is not enabled as usual, but VGA2, VGA3 and VGA4 are on. Signed-off-by: Clark Zheng <clark.zheng@amd.com> Reviewed-by: Tony Cheng <tony.cheng@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-03-16  Merge tag 'for-4.16-rc5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux  (Linus Torvalds)
Pull btrfs fixes from David Sterba: "There's an important revert in this pull request that needs to go to stable as it causes a corruption on big endian machines. The other fix is for FIEMAP incorrectly reporting shared extents before a sync and one fix for a crash in raid56. So far we got only one report about the BE corruption, the stable kernels were out for like a week, so hopefully the scope of the damage is low"

* tag 'for-4.16-rc5-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  Revert "btrfs: use proper endianness accessors for super_copy"
  btrfs: add missing initialization in btrfs_check_shared
  btrfs: Fix NULL pointer exception in find_bio_stripe
2018-03-16  Merge tag 'microblaze-4.16-rc6' of git://git.monstr.eu/linux-2.6-microblaze  (Linus Torvalds)
Pull microblaze fixes from Michal Simek:
 - Use NO_BOOTMEM to fix boot issue
 - Fix opt lib endian dependencies

* tag 'microblaze-4.16-rc6' of git://git.monstr.eu/linux-2.6-microblaze:
  microblaze: switch to NO_BOOTMEM
  microblaze: remove unused alloc_maybe_bootmem
  microblaze: Setup dependencies for ASM optimized lib functions
2018-03-16  x86/microcode: Fix CPU synchronization routine  (Borislav Petkov)
Emanuel reported an issue with a hang during microcode update because my dumb idea to use one atomic synchronization variable for both rendezvous - before and after update - was simply bollocks:

  microcode: microcode_reload_late: late_cpus: 4
  microcode: __reload_late: cpu 2 entered
  microcode: __reload_late: cpu 1 entered
  microcode: __reload_late: cpu 3 entered
  microcode: __reload_late: cpu 0 entered
  microcode: __reload_late: cpu 1 left
  microcode: Timeout while waiting for CPUs rendezvous, remaining: 1

CPU1 above would finish, leave and the others will still spin waiting for it to join. So do two synchronization atomics instead, which makes the code a lot more straightforward. Also, since the update is serialized and it also takes quite some time per microcode engine, increase the exit timeout by the number of CPUs on the system. That's ok because the moment all CPUs are done, that timeout will be cut short. Furthermore, panic when some of the CPUs time out when returning from a microcode update: we can't allow a system with not all cores updated. Also, as an optimization, do not do the exit sync if microcode wasn't updated. Reported-by: Emanuel Czirai <xftroxgpx@protonmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Emanuel Czirai <xftroxgpx@protonmail.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20180314183615.17629-2-bp@alien8.de
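The two-counter rendezvous then looks roughly like this; the sketch is reconstructed from the description above (the SPINUNIT value and exact message are approximate), with __reload_late() waiting on late_cpus_in before the update and on late_cpus_out after it, the latter with a timeout scaled by the number of CPUs:

    static atomic_t late_cpus_in;
    static atomic_t late_cpus_out;

    #define SPINUNIT 100000         /* 100 us, illustrative */

    static int __wait_for_cpus(atomic_t *t, long long timeout)
    {
            int all_cpus = num_online_cpus();

            atomic_inc(t);

            while (atomic_read(t) < all_cpus) {
                    if (timeout < SPINUNIT) {
                            pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
                                   all_cpus - atomic_read(t));
                            return 1;
                    }

                    ndelay(SPINUNIT);
                    timeout -= SPINUNIT;

                    touch_nmi_watchdog();
            }

            return 0;
    }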
2018-03-16  x86/microcode: Attempt late loading only when new microcode is present  (Borislav Petkov)
Return UCODE_NEW from the scanning functions to denote that new microcode was found and only then attempt the expensive synchronization dance. Reported-by: Emanuel Czirai <xftroxgpx@protonmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Emanuel Czirai <xftroxgpx@protonmail.com> Tested-by: Ashok Raj <ashok.raj@intel.com> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lkml.kernel.org/r/20180314183615.17629-1-bp@alien8.de
2018-03-16  Merge tag 'drm-fixes-for-v4.16-rc6' of git://people.freedesktop.org/~airlied/linux  (Linus Torvalds)
Pull drm fixes from Dave Airlie: "i915, amd and nouveau fixes.
  i915:
   - backlight fix for some panels
   - pm fix
   - fencing fix
   - some GVT fixes
  amdgpu:
   - backlight fix across suspend/resume
   - object destruction ordering issue fix
   - displayport fix
  nouveau:
   - two backlight fixes
   - fix for some lockups
  Pretty quiet week, seems like everyone was fixing backlights"

* tag 'drm-fixes-for-v4.16-rc6' of git://people.freedesktop.org/~airlied/linux:
  drm/nouveau/bl: fix backlight regression
  drm/nouveau/bl: Fix oops on driver unbind
  drm/nouveau/mmu: ALIGN_DOWN correct variable
  drm/i915/gvt: fix user copy warning by whitelist workload rb_tail field
  drm/i915/gvt: Correct the privilege shadow batch buffer address
  drm/amdgpu/dce: Don't turn off DP sink when disconnected
  drm/amdgpu: save/restore backlight level in legacy dce code
  drm/radeon: fix prime teardown order
  drm/amdgpu: fix prime teardown order
  drm/i915: Kick the rps worker when changing the boost frequency
  drm/i915: Only prune fences after wait-for-all
  drm/i915: Enable VBT based BL control for DP
  drm/i915/gvt: keep oa config in shadow ctx
  drm/i915/gvt: Add runtime_pm_get/put into gvt_switch_mmio
2018-03-16  perf/core: Clear sibling list of detached events  (Mark Rutland)
When perf_group_detach() is called on a group leader, it updates each sibling's group_leader field to point to that sibling, effectively upgrading each sibling to a group leader. After perf_group_detach has completed, the caller may free the leader event. We only remove siblings from the group leader's sibling_list when the leader has a non-empty group_node. This was fine prior to commit: 8343aae66167df67 ("perf/core: Remove perf_event::group_entry") ... as the sibling's sibling_list would be empty. However, now that we use the sibling_list field as both the list head and the list entry, this leaves each sibling with a non-empty sibling list, including the stale leader event. If perf_group_detach() is subsequently called on a sibling, it will appear to be a group leader, and we'll walk the sibling_list, potentially dereferencing these stale events. In 0day testing, this has been observed to result in kernel panics. Let's avoid this by always removing siblings from the sibling list when we promote them to leaders. Fixes: 8343aae66167df67 ("perf/core: Remove perf_event::group_entry") Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: vincent.weaver@maine.edu Cc: Peter Zijlstra <peterz@infradead.org> Cc: torvalds@linux-foundation.org Cc: Alexey Budankov <alexey.budankov@linux.intel.com> Cc: valery.cherepennikov@intel.com Cc: linux-tip-commits@vger.kernel.org Cc: eranian@google.com Cc: acme@redhat.com Cc: alexander.shishkin@linux.intel.com Cc: davidcc@google.com Cc: kan.liang@intel.com Cc: Dmitry.Prohorov@intel.com Cc: Jiri Olsa <jolsa@redhat.com> Link: https://lkml.kernel.org/r/20180316131741.3svgr64yibc6vsid@lakrids.cambridge.arm.com
2018-03-16  perf: Fix sibling iteration  (Peter Zijlstra)
Mark noticed that the change to sibling_list changed some iteration semantics; because previously we used group_entry as the list entry, sibling events would always have an empty sibling_list. But because we now use sibling_list for both the list head and the list entry, siblings will report as having siblings. Fix this with a custom for_each_sibling_event() iterator. Fixes: 8343aae66167 ("perf/core: Remove perf_event::group_entry") Reported-by: Mark Rutland <mark.rutland@arm.com> Suggested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: vincent.weaver@maine.edu Cc: alexander.shishkin@linux.intel.com Cc: torvalds@linux-foundation.org Cc: alexey.budankov@linux.intel.com Cc: valery.cherepennikov@intel.com Cc: eranian@google.com Cc: acme@redhat.com Cc: linux-tip-commits@vger.kernel.org Cc: davidcc@google.com Cc: kan.liang@intel.com Cc: Dmitry.Prohorov@intel.com Cc: jolsa@redhat.com Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
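The iterator is essentially along these lines (quoted from memory of include/linux/perf_event.h, so treat it as approximate):

    /* Only iterate when 'event' really is a group leader, so that a sibling's
     * now non-empty sibling_list (which doubles as its list entry) is never
     * walked by mistake.
     */
    #define for_each_sibling_event(sibling, event)                          \
            if ((event)->group_leader == (event))                           \
                    list_for_each_entry((sibling), &(event)->sibling_list,  \
                                        sibling_list)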
2018-03-16  perf debug: Avoid setting 'quiet' to 'true' unnecessarily  (Yisheng Xie)
When using --quiet to disable messages, we will set the 'quiet' variable to 'true' first, then check that variable to decide whether we need to call perf_quiet_option(), so no need to set 'quiet' to 'true' once more in perf_quiet_option(). Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Kan Liang <kan.liang@linux.intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1520944274-37001-2-git-send-email-xieyisheng1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-16  perf mmap: Discard head in overwrite_rb_find_range()  (Yisheng Xie)
In overwrite mode, start will be set to head in perf_mmap__read_init(). Therefore, there is no need to set the start one more time in overwrite_rb_find_range() and *start can be used as head instead of passing head to overwrite_rb_find_range(). Signed-off-by: Yisheng Xie <xieyisheng1@huawei.com> Reviewed-by: Kan Liang <kan.liang@linux.intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/1520944274-37001-1-git-send-email-xieyisheng1@huawei.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-16  batman-adv: fix header size check in batadv_dbg_arp()  (Matthias Schiffer)
Checking for 0 is insufficient: when an SKB without a batadv header, but with a VLAN header is received, hdr_size will be 4, making the following code interpret the Ethernet header as a batadv header. Fixes: be1db4f6615b ("batman-adv: make the Distributed ARP Table vlan aware") Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
2018-03-16  batman-adv: update data pointers after skb_cow()  (Matthias Schiffer)
batadv_check_unicast_ttvn() calls skb_cow(), so pointers into the SKB data must be (re)set after calling it. The ethhdr variable is dropped altogether. Fixes: 7cdcf6dddc42 ("batman-adv: add UNICAST_4ADDR packet type") Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
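The pattern being fixed, as a sketch (the batadv_check_unicast_ttvn() specifics are simplified here):

    struct batadv_unicast_packet *unicast_packet;

    /* skb_cow() may reallocate and move skb->data ... */
    if (skb_cow(skb, ETH_HLEN) < 0)
            return NET_RX_DROP;

    /* ... so any pointer into the old buffer must be re-derived afterwards */
    unicast_packet = (struct batadv_unicast_packet *)skb->data;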
2018-03-16  net-tcp_bbr: set tp->snd_ssthresh to BDP upon STARTUP exit  (Yousuk Seung)
Set tp->snd_ssthresh to BDP upon STARTUP exit. This allows us to check if a BBR flow exited STARTUP and the BDP at the time of STARTUP exit with SCM_TIMESTAMPING_OPT_STATS. Since BBR does not use snd_ssthresh this fix has no impact on BBR's behavior. Signed-off-by: Yousuk Seung <ysseung@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  tcp: add snd_ssthresh stat in SCM_TIMESTAMPING_OPT_STATS  (Yousuk Seung)
This patch adds TCP_NLA_SND_SSTHRESH stat into SCM_TIMESTAMPING_OPT_STATS that reports tcp_sock.snd_ssthresh. Signed-off-by: Yousuk Seung <ysseung@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Priyaranjan Jha <priyarjha@google.com> Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
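On the kernel side this boils down to one new attribute in the existing stats dump, roughly (a sketch of the relevant additions; the enum placement and error handling are omitted or approximate):

    /* uapi: new value appended to enum tcp_nla */
    TCP_NLA_SND_SSTHRESH,           /* slow start size threshold */

    /* tcp_get_timestamping_opt_stats(): report it with the other fields */
    nla_put_u32(stats, TCP_NLA_SND_SSTHRESH, tp->snd_ssthresh);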
2018-03-16  selftests/txtimestamp: Add more configurable parameters  (Vinicius Costa Gomes)
Add a way to configure if poll() should wait forever for an event, the number of packets that should be sent for each and if there should be any delay between packets. Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  liquidio: Simplified napi poll  (Intiyaz Basha)
1) Moved interrupt-enable related code from octeon_process_droq_poll_cmd() to a separate function, octeon_enable_irq().
2) Removed the wrapper function octeon_process_droq_poll_cmd(), and directly used octeon_droq_process_poll_pkts().
3) Removed unused macros POLL_EVENT_XXX.
Signed-off-by: Intiyaz Basha <intiyaz.basha@cavium.com> Signed-off-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  net: dsa: mv88e6xxx: Fix binding documentation for MDIO busses  (Andrew Lunn)
The MDIO busses are switch properties and so should be inside the switch node. Fix the examples in the binding document. Reported-by: 尤晓杰 <yxj790222@163.com> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Fixes: a3c53be55c95 ("net: dsa: mv88e6xxx: Support multiple MDIO busses") Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  Merge branch 'net-smc-IPv6-support'  (David S. Miller)
Ursula Braun says: ==================== net/smc: IPv6 support these smc patches for the net-next tree add IPv6 support. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  net/smc: enable ipv6 support for smc  (Karsten Graul)
Add ipv6 support to the smc socket layer functions. Make use of the updated clc layer functions to retrieve and match ipv6 information. The indicator for ipv4 or ipv6 is the protocol constant that is provided in the socket() call with address family AF_SMC. Based-on-patch-by: Takanori Ueda <tkueda@jp.ibm.com> Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
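From user space the selection looks like this; a minimal sketch, where AF_SMC is 43 in the kernel headers and the SMCPROTO_* values are assumed from this series:

    #include <stdio.h>
    #include <sys/socket.h>

    #ifndef AF_SMC
    #define AF_SMC 43               /* from <linux/socket.h> */
    #endif
    #define SMCPROTO_SMC    0       /* SMC over IPv4 */
    #define SMCPROTO_SMC6   1       /* SMC over IPv6, added by this series */

    int main(void)
    {
            int fd4 = socket(AF_SMC, SOCK_STREAM, SMCPROTO_SMC);
            int fd6 = socket(AF_SMC, SOCK_STREAM, SMCPROTO_SMC6);

            if (fd4 < 0 || fd6 < 0)
                    perror("socket(AF_SMC, ...)");

            return 0;
    }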
2018-03-16  net/smc: add ipv6 support to CLC layer  (Karsten Graul)
The CLC layer is updated to support ipv6 proposal messages from peers and to match incoming proposal messages against the ipv6 addresses of the net device. struct smc_clc_ipv6_prefix is updated to provide the space for an ipv6 address (struct was not used before). SMC_CLC_MAX_LEN is updated to include the size of the proposal prefix. Existing code in net is not affected, the previous SMC_CLC_MAX_LEN value is large enough to hold ipv4 proposal messages. Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  net/smc: restructure netinfo for CLC proposal msgs  (Karsten Graul)
Introduce functions smc_clc_prfx_set to retrieve IP information for the CLC proposal msg and smc_clc_prfx_match to match the contents of a proposal message against the IP addresses of the net device. The new functions replace the functionality provided by smc_clc_netinfo_by_tcpsk, which is removed by this patch. The match functionality is extended to scan all ipv4 addresses of the net device for a match against the ipv4 subnet from the proposal msg. Signed-off-by: Karsten Graul <kgraul@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  cxgb4: notify fatal error to uld drivers  (Ganesh Goudar)
Notify ULD drivers if the adapter encounters a fatal error. Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-03-16  perf vendor events: Update POWER9 events  (Sukadev Bhattiprolu)
Cc: linuxppc-dev@lists.ozlabs.org Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com> Link: https://lkml.kernel.org/r/20180313224647.GA22960@us.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-16  perf report: Support forced leader feature in pipe mode  (Jiri Olsa)
Stephane reported a problem with forced leader in pipe mode, where report does not force the group output. The reason is that we don't force the leader in pipe mode. This patch adds HEADER_LAST_FEATURE mark to have a point where we have all events and features received, and force the group if requested.

  $ perf record --group -e '{cycles, instructions}' -o - kill | perf report -i - --group
  SNIP
  # Overhead        Command  Shared Object  Symbol
  # ..............  .......  .............  .......................
  #
     28.36%   0.00%  kill  libc-2.25.so  [.] __unregister_atfork
     26.32%   0.00%  kill  libc-2.25.so  [.] _dl_addr
     26.10%   0.00%  kill  ld-2.25.so    [.] _dl_relocate_object
     17.32%   0.00%  kill  ld-2.25.so    [.] __tunables_init
      1.70%   0.01%  kill  [unknown]     [k] 0xffffffffafa01a40
      0.20%   0.00%  kill  ld-2.25.so    [.] _start
      0.00%  48.77%  kill  ld-2.25.so    [.] do_lookup_x
      0.00%  42.97%  kill  libc-2.25.so  [.] _IO_getline
      0.00%   6.35%  kill  ld-2.25.so    [.] strcmp
      0.00%   1.71%  kill  ld-2.25.so    [.] _dl_sysdep_start
      0.00%   0.19%  kill  ld-2.25.so    [.] _dl_start

Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180314092205.23291-2-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-03-16  perf record: Synthesize features before events in pipe mode  (Jiri Olsa)
We need to synthesize events first, because some features work on top of them (on the report side). Signed-off-by: Jiri Olsa <jolsa@kernel.org> Tested-by: Stephane Eranian <eranian@google.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20180314092205.23291-1-jolsa@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>