2020-03-23net: phy: mscc: add support for VSC8502Vladimir Oltean
This is a dual copper PHY with support for MII/GMII/RGMII on MAC side, as well as a bunch of other features such as SyncE and Ring Resiliency. I haven't tested interrupts and WoL, but I am confident that they work since support is already present in the driver and the register map is no different for this PHY. PHY statistics work, PHY tunables appear to work, suspend/resume works. Signed-off-by: Wes Li <wes.li@nxp.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-23net: phy: mscc: configure both RX and TX internal delays for RGMIIVladimir Oltean
The driver appears to be secretly enabling the RX clock skew irrespective of PHY interface type, which is generally considered a big no-no. Make the delays configurable instead, and add TX internal delays when necessary too. While at it, configure a more canonical clock skew of 2.0 nanoseconds instead of the current default of 1.1 ns. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
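A rough sketch of the policy this implies (not the literal mscc driver code; the constant name follows the rgmii_clock_delay enum from the rename patch below and is used here illustratively):

    /* Hedged sketch: choose RX/TX internal delays from the RGMII variant
     * instead of always enabling the RX skew. */
    #include <linux/phy.h>

    static void rgmii_delays_sketch(struct phy_device *phydev,
                                    u16 *rx_delay, u16 *tx_delay)
    {
        *rx_delay = 0;
        *tx_delay = 0;

        if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID ||
            phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
            *rx_delay = RGMII_CLK_DELAY_2_0_NS;    /* 2.0 ns skew value */
        if (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID ||
            phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
            *tx_delay = RGMII_CLK_DELAY_2_0_NS;
    }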
2020-03-23net: phy: mscc: accept all RGMII species in vsc85xx_mac_if_setVladimir Oltean
The helper for configuring the pinout of the MII side of the PHY should do so irrespective of whether RGMII delays are used or not. So accept the ID, TXID and RXID variants as well, not just the no-delay RGMII variant. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
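The shape of the check is roughly the following (illustrative sketch, not the exact driver code):

    #include <linux/phy.h>

    /* Sketch: the MAC-side pinout should be programmed for every RGMII
     * variant; internal delays are handled separately. */
    static bool wants_rgmii_pinout(struct phy_device *phydev)
    {
        switch (phydev->interface) {
        case PHY_INTERFACE_MODE_RGMII:
        case PHY_INTERFACE_MODE_RGMII_ID:
        case PHY_INTERFACE_MODE_RGMII_RXID:
        case PHY_INTERFACE_MODE_RGMII_TXID:
            return true;
        default:
            return false;
        }
    }

Equivalently, phy_interface_is_rgmii(phydev) from <linux/phy.h> covers all four variants.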
2020-03-23net: phy: mscc: rename enum rgmii_rx_clock_delay to rgmii_clock_delayVladimir Oltean
There is nothing RX-specific about these clock skew values. So remove "RX" from the name in preparation for the next patch where TX delays are also going to be configured. Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-24KVM: PPC: Book3S HV: H_SVM_INIT_START must call UV_RETURNLaurent Dufour
When the call to UV_REGISTER_MEM_SLOT fails, for instance because there is not enough free secure memory, the Hypervisor (HV) has to call UV_RETURN to report the error to the Ultravisor (UV). Then the UV will call H_SVM_INIT_ABORT to abort the securing phase and go back to the calling VM. If kvm->arch.secure_guest is not set, rfid is called in the return path, but there is no valid context to get back to the SVM since the Hcall was routed by the Ultravisor. Move the setting of kvm->arch.secure_guest earlier in kvmppc_h_svm_init_start() so that in the return path, UV_RETURN is called instead of rfid. Cc: Bharata B Rao <bharata@linux.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Ram Pai <linuxram@us.ibm.com> Tested-by: Fabiano Rosas <farosas@linux.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2020-03-24KVM: PPC: Book3S HV: Check caller of H_SVM_* HcallsLaurent Dufour
The Hcalls named H_SVM_* are reserved to the Ultravisor. However, nothing prevents a malicious VM or SVM from calling them. This could lead to weird results and should be filtered out. Checking the Secure bit of the calling MSR ensures that the call is coming from either the Ultravisor or an SVM. But any system call made from an SVM goes through the Ultravisor, and the Ultravisor should filter out these malicious calls. This way, only the Ultravisor is able to make such an Hcall. Cc: Bharata B Rao <bharata@linux.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com> Reviewed-by: Ram Pai <linuxram@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
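A hedged sketch of the filter in the HV's hcall dispatch switch (the exact accessor used for the caller's MSR may differ):

    /* Only accept H_SVM_* when the caller's MSR has the Secure bit set,
     * i.e. the Hcall was routed through the Ultravisor. */
    case H_SVM_INIT_START:
        ret = H_UNSUPPORTED;
        if (kvmppc_get_srr1(vcpu) & MSR_S)
            ret = kvmppc_h_svm_init_start(vcpu->kvm);
        break;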
2020-03-24KVM: PPC: Book3S HV: Skip kvmppc_uvmem_free if Ultravisor is not supportedFabiano Rosas
kvmppc_uvmem_init checks for Ultravisor support and returns early if it is not present. Calling kvmppc_uvmem_free at module exit will cause an Oops: $ modprobe -r kvm-hv Oops: Kernel access of bad area, sig: 11 [#1] <snip> NIP: c000000000789e90 LR: c000000000789e8c CTR: c000000000401030 REGS: c000003fa7bab9a0 TRAP: 0300 Not tainted (5.6.0-rc6-00033-g6c90b86a745a-dirty) MSR: 9000000000009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 24002282 XER: 00000000 CFAR: c000000000dae880 DAR: 0000000000000008 DSISR: 40000000 IRQMASK: 1 GPR00: c000000000789e8c c000003fa7babc30 c0000000016fe500 0000000000000000 GPR04: 0000000000000000 0000000000000006 0000000000000000 c000003faf205c00 GPR08: 0000000000000000 0000000000000001 000000008000002d c00800000ddde140 GPR12: c000000000401030 c000003ffffd9080 0000000000000001 0000000000000000 GPR16: 0000000000000000 0000000000000000 000000013aad0074 000000013aaac978 GPR20: 000000013aad0070 0000000000000000 00007fffd1b37158 0000000000000000 GPR24: 000000014fef0d58 0000000000000000 000000014fef0cf0 0000000000000001 GPR28: 0000000000000000 0000000000000000 c0000000018b2a60 0000000000000000 NIP [c000000000789e90] percpu_ref_kill_and_confirm+0x40/0x170 LR [c000000000789e8c] percpu_ref_kill_and_confirm+0x3c/0x170 Call Trace: [c000003fa7babc30] [c000003faf2064d4] 0xc000003faf2064d4 (unreliable) [c000003fa7babcb0] [c000000000400e8c] dev_pagemap_kill+0x6c/0x80 [c000003fa7babcd0] [c000000000401064] memunmap_pages+0x34/0x2f0 [c000003fa7babd50] [c00800000dddd548] kvmppc_uvmem_free+0x30/0x80 [kvm_hv] [c000003fa7babd80] [c00800000ddcef18] kvmppc_book3s_exit_hv+0x20/0x78 [kvm_hv] [c000003fa7babda0] [c0000000002084d0] sys_delete_module+0x1d0/0x2c0 [c000003fa7babe20] [c00000000000b9d0] system_call+0x5c/0x68 Instruction dump: 3fc2001b fb81ffe0 fba1ffe8 fbe1fff8 7c7f1b78 7c9c2378 3bde4560 7fc3f378 f8010010 f821ff81 486249a1 60000000 <e93f0008> 7c7d1b78 712a0002 40820084 ---[ end trace 5774ef4dc2c98279 ]--- So this patch checks if kvmppc_uvmem_init actually allocated anything before running kvmppc_uvmem_free. Fixes: ca9f4942670c ("KVM: PPC: Book3S HV: Support for running secure guests") Cc: stable@vger.kernel.org # v5.5+ Reported-by: Greg Kurz <groug@kaod.org> Signed-off-by: Fabiano Rosas <farosas@linux.ibm.com> Tested-by: Greg Kurz <groug@kaod.org> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
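The fix boils down to an early-out guard; a minimal sketch (the exact symbol tested may differ from kvmppc_uvmem_bitmap):

    static unsigned long *kvmppc_uvmem_bitmap;  /* allocated by kvmppc_uvmem_init() */

    void kvmppc_uvmem_free(void)
    {
        /* Ultravisor absent: init never set anything up, nothing to tear down. */
        if (!kvmppc_uvmem_bitmap)
            return;

        /* ... real teardown: memunmap_pages(), release the region, free the bitmap ... */
    }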
2020-03-23Merge tag 'mlx5-fixes-2020-03-05' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linuxDavid S. Miller
Saeed Mahameed says: ==================== Mellanox, mlx5 fixes 2020-03-05 This series introduces some fixes to mlx5 driver. Please pull and let me know if there is any problem. For -stable v5.4 ('net/mlx5: DR, Fix postsend actions write length') For -stable v5.5 ('net/mlx5e: kTLS, Fix TCP seq off-by-1 issue in TX resync flow') ('net/mlx5e: Fix endianness handling in pedit mask') ==================== Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-23RDMA/siw: Suppress uninitialized var warningAndrew Morton
drivers/infiniband/sw/siw/siw_qp_rx.c: In function siw_proc_send: ./include/linux/spinlock.h:288:3: warning: flags may be used uninitialized in this function [-Wmaybe-uninitialized] _raw_spin_unlock_irqrestore(lock, flags); \ ^~~~~~~~~~~~~~~~~~~~~~~~~~~ drivers/infiniband/sw/siw/siw_qp_rx.c:335:16: note: flags was declared here unsigned long flags; Link: https://lore.kernel.org/r/20200323184627.ZWPg91uin%akpm@linux-foundation.org Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-23IB/hfi1: Ensure pq is not left on waitlistMike Marciniszyn
The following warning can occur when a pq is left on the dmawait list and the pq is then freed: WARNING: CPU: 47 PID: 3546 at lib/list_debug.c:29 __list_add+0x65/0xc0 list_add corruption. next->prev should be prev (ffff939228da1880), but was ffff939cabb52230. (next=ffff939cabb52230). Modules linked in: mmfs26(OE) mmfslinux(OE) tracedev(OE) 8021q garp mrp ib_isert iscsi_target_mod target_core_mod crc_t10dif crct10dif_generic opa_vnic rpcrdma ib_iser libiscsi scsi_transport_iscsi ib_ipoib(OE) bridge stp llc iTCO_wdt iTCO_vendor_support intel_powerclamp coretemp intel_rapl iosf_mbi kvm_intel kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd ast ttm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm pcspkr joydev drm_panel_orientation_quirks i2c_i801 mei_me lpc_ich mei wmi ipmi_si ipmi_devintf ipmi_msghandler nfit libnvdimm acpi_power_meter acpi_pad hfi1(OE) rdmavt(OE) rdma_ucm ib_ucm ib_uverbs ib_umad rdma_cm ib_cm iw_cm ib_core binfmt_misc numatools(OE) xpmem(OE) ip_tables nfsv3 nfs_acl nfs lockd grace sunrpc fscache igb ahci libahci i2c_algo_bit dca libata ptp pps_core crc32c_intel [last unloaded: i2c_algo_bit] CPU: 47 PID: 3546 Comm: wrf.exe Kdump: loaded Tainted: G W OE ------------ 3.10.0-957.41.1.el7.x86_64 #1 Hardware name: HPE.COM HPE SGI 8600-XA730i Gen10/X11DPT-SB-SG007, BIOS SBED1229 01/22/2019 Call Trace: [<ffffffff91f65ac0>] dump_stack+0x19/0x1b [<ffffffff91898b78>] __warn+0xd8/0x100 [<ffffffff91898bff>] warn_slowpath_fmt+0x5f/0x80 [<ffffffff91a1dabe>] ? ___slab_alloc+0x24e/0x4f0 [<ffffffff91b97025>] __list_add+0x65/0xc0 [<ffffffffc03926a5>] defer_packet_queue+0x145/0x1a0 [hfi1] [<ffffffffc0372987>] sdma_check_progress+0x67/0xa0 [hfi1] [<ffffffffc03779d2>] sdma_send_txlist+0x432/0x550 [hfi1] [<ffffffff91a20009>] ? kmem_cache_alloc+0x179/0x1f0 [<ffffffffc0392973>] ? user_sdma_send_pkts+0xc3/0x1990 [hfi1] [<ffffffffc0393e3a>] user_sdma_send_pkts+0x158a/0x1990 [hfi1] [<ffffffff918ab65e>] ? try_to_del_timer_sync+0x5e/0x90 [<ffffffff91a3fe1a>] ? __check_object_size+0x1ca/0x250 [<ffffffffc0395546>] hfi1_user_sdma_process_request+0xd66/0x1280 [hfi1] [<ffffffffc034e0da>] hfi1_aio_write+0xca/0x120 [hfi1] [<ffffffff91a4245b>] do_sync_readv_writev+0x7b/0xd0 [<ffffffff91a4409e>] do_readv_writev+0xce/0x260 [<ffffffff918df69f>] ? pick_next_task_fair+0x5f/0x1b0 [<ffffffff918db535>] ? sched_clock_cpu+0x85/0xc0 [<ffffffff91f6b16a>] ? __schedule+0x13a/0x860 [<ffffffff91a442c5>] vfs_writev+0x35/0x60 [<ffffffff91a4447f>] SyS_writev+0x7f/0x110 [<ffffffff91f78ddb>] system_call_fastpath+0x22/0x27 The issue happens when wait_event_interruptible_timeout() returns a value <= 0. In that case, the pq is left on the list. The code continues sending packets and potentially can complete the current request with the pq still on the dmawait list provided no descriptor shortage is seen. If the pq is torn down in that state, the sdma interrupt handler could find the now freed pq on the list with list corruption or memory corruption resulting. Fix by adding a flush routine to ensure that the pq is never on a list after processing a request. A follow-up patch series will address issues with seqlock surfaced in: https://lore.kernel.org/r/20200320003129.GP20941@ziepe.ca The seqlock use for sdma will then be converted to a spin lock since the list_empty() doesn't need the protection afforded by the sequence lock currently in use. 
Fixes: a0d406934a46 ("staging/rdma/hfi1: Add page lock limit check for SDMA requests") Link: https://lore.kernel.org/r/20200320200200.23203.37777.stgit@awfm-01.aw.intel.com Reviewed-by: Kaike Wan <kaike.wan@intel.com> Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2020-03-23kunit: add --make_optionsGreg Thelen
The kunit.py utility builds an ARCH=um kernel and then runs it. Add optional --make_options flag to kunit.py allowing for the operator to specify extra build options. This allows use of the clang compiler for kunit: tools/testing/kunit/kunit.py run --defconfig \ --make_options CC=clang --make_options HOSTCC=clang Signed-off-by: Greg Thelen <gthelen@google.com> Reviewed-by: Brendan Higgins <brendanhiggins@google.com> Reviewed-by: Nathan Chancellor <natechancellor@gmail.com> Tested-by: David Gow <davidgow@google.com> Signed-off-by: Brendan Higgins <brendanhiggins@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2020-03-23Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6Linus Torvalds
Pull crypto fix from Herbert Xu: "This fixes a correctness bug in the ARM64 version of ChaCha for lib/crypto used by WireGuard" * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: arm64/chacha - correctly walk through blocks
2020-03-23drm/vmwgfx: Use vmwgfx version 2.18 to signal SM5 compatibilityThomas Hellström (VMware)
Signed-off-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Charmaine Lee <charmainel@vmware.com> Reviewed-by: Brian Paul <brianp@vmware.com> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com> --- v2: Use 2.18 instead of 2.17
2020-03-23drm/vmwgfx: Add SM5 param for userspaceDeepak Rawat
Add a new param for user-space to determine if kernel module is SM5 capable. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Add surface define v4 commandDeepak Rawat
Surface define v4 added a new member, buffer_byte_stride. With this patch, add buffer_byte_stride to the surface metadata and create the surface using the new command when support is available. Also replace device-specific data types with kernel types. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Refactor surface_define to use vmw_surface_metadataDeepak Rawat
Makes surface_define cleaner by sending vmw_surface_metadata instead of all the arguments individually. v2: fix uninitialized return value, error message Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Brian Paul <brianp@vmware.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Split surface metadata from struct vmw_surfaceDeepak Rawat
Create a new structure, vmw_surface_metadata, representing the metadata used for creating a surface. With this, surface_define_priv can be made a bit cleaner. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Brian Paul <brianp@vmware.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Add support for streamoutput with mob commandsDeepak Rawat
With SM5 capability, a new version of streamoutput is supported by the device, which needs a backing MOB and a new field. With this change the new command is supported in the command buffer. v2: Also track streamoutput context binding in the binding manager. v3: Track only one streamoutput as only one can be set to a context. v4: Fix comment typos Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Signed-off-by: Neha Bhende <bhenden@vmware.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Rename stream output target binding tracker structDeepak Rawat
The previous name, vmw_ctx_bindinfo_so, is misleading because it actually represents an SO target, and stream output is a new resource type that needs tracking for SM5-capable devices. Also rename the binding type enum and internal functions to reflect that these belong to SO targets. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Add support for indirect and dispatch commandsDeepak Rawat
Validate indirect and dispatch commands in command buffer. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Add support for UA view commandsDeepak Rawat
The virtual device now supports new commands to manage unordered access views. Allow them as part of the user-space command buffer. This involves adding a UA view cotable, binding tracker info, a new view type, and command verifier functions. v2: fix comment typo v3: style fixes (don't use deprecated PTR_RET) Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Signed-off-by: Neha Bhende <bhenden@vmware.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Support SM5 shader type in command bufferDeepak Rawat
The virtual device now supports new shader types; allow them as valid shader types in the command buffer. Also add per-shader bind info in the binding manager state for the new shader types. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Read new register for GB memory when availableDeepak Rawat
The virtual device added a new register for suggested GB memory; read the new register when available. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Add a new enum for SM5 graphics context capabilityDeepak Rawat
A new enum to represent new SM5 graphics context capability in vmwgfx. v2: use new correct cap bits (merged several later commits into it). Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Signed-off-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Sync virtual device headers for new featureDeepak Rawat
Get the latest device headers for SM5 and other features development. v2: sync to newer bits (merge later commits) v3: sync to even newer bits Co-developed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Signed-off-by: Neha Bhende <bhenden@vmware.com> Signed-off-by: Charmaine Lee <charmainel@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org>
2020-03-23drm/vmwgfx: Use enum to represent graphics context capabilitiesDeepak Rawat
Instead of having different bools in the device private structure to represent incremental graphics context capabilities, add a new SM type enum. v2: Use enum instead of bit flag. v3: Incorporated review comments. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
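A hedged sketch of the enum-based capability model (names are approximate, not necessarily the exact vmwgfx ones):

    #include <linux/types.h>

    enum vmw_sm_type {
        VMW_SM_LEGACY = 0,
        VMW_SM_4,
        VMW_SM_4_1,
        VMW_SM_MAX
    };

    struct vmw_private_sketch {
        enum vmw_sm_type sm_type;   /* replaces several independent bools */
    };

    /* "at least SM4.1" becomes a single comparison */
    static inline bool has_sm4_1_context(const struct vmw_private_sketch *dev_priv)
    {
        return dev_priv->sm_type >= VMW_SM_4_1;
    }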
2020-03-23drm/vmwgfx: Deprecate logic ops commandsDeepak Rawat
Logic ops commands are marked as deprecated by the virtual device and were never used by vmwgfx. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Sync legacy multisampling device capabilityDeepak Rawat
In favor of the SM4.1 multisampling capability, the virtual device deprecated the old multisampling device capability. Mark the legacy multisampling device capability as dead, and rename the function that masks the legacy multisample capability to reflect that it is now masking a deprecated feature. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
2020-03-23drm/vmwgfx: Also check for SVGA_CAP_DX before reading DX context supportDeepak Rawat
The virtual device considers SVGA_CAP_DX and SVGA3D_DEVCAP_DXCONTEXT independent of each other. Some of the commands in cmd_buf depend on SVGA_CAP_DX, so it is better to check for that as well. Signed-off-by: Deepak Rawat <drawat.floss@gmail.com> Reviewed-by: Thomas Hellström (VMware) <thomas_os@shipmail.org> Reviewed-by: Roland Scheidegger <sroland@vmware.com> Signed-off-by: Roland Scheidegger <sroland@vmware.com>
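A hedged sketch of the combined check (the devcap helper name here is illustrative, not the driver's):

    /* Only report a DX-capable context when both the SVGA_CAP_DX capability
     * bit and the SVGA3D_DEVCAP_DXCONTEXT devcap are present. */
    if ((dev_priv->capabilities & SVGA_CAP_DX) &&
        vmw_devcap_true(dev_priv, SVGA3D_DEVCAP_DXCONTEXT))  /* illustrative helper */
        dev_priv->sm_type = VMW_SM_4;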
2020-03-23samples, bpf: Refactor perf_event user program with libbpf bpf_linkDaniel T. Lee
The bpf_program__attach of libbpf (using bpf_link) is much more intuitive than the previous method using ioctl. bpf_program__attach_perf_event manages the enabling of the perf_event and the attaching of BPF programs to it, so there's no need to do this directly with ioctl. In addition, bpf_link provides consistency in the use of the API because it allows disable (detach, destroy) for multiple events to be treated as one bpf_link__destroy. Also, bpf_link__destroy manages the close() of the perf_event fd. This commit refactors samples that attach the bpf program to perf_event by using libbpf instead of ioctl. Also the bpf_load in the samples was removed and migrated to use the libbpf API. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200321100424.1593964-3-danieltimlee@gmail.com
2020-03-23samples, bpf: Move read_trace_pipe to trace_helpersDaniel T. Lee
To reduce the reliance of trace samples (trace*_user) on bpf_load, move read_trace_pipe to trace_helpers. By moving this bpf_loader helper elsewhere, trace functions can be easily migrated to libbpf. Signed-off-by: Daniel T. Lee <danieltimlee@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200321100424.1593964-2-danieltimlee@gmail.com
2020-03-23dt-bindings: clk: fix example for single-output providerGiulio Benetti
As described above, a single-output clock provider should have a cell count of 0, so fix the example to use 0 as the number of cells. Signed-off-by: Giulio Benetti <giulio.benetti@benettiengineering.com> Signed-off-by: Rob Herring <robh@kernel.org>
2020-03-23dt-bindings: Add vendor prefix for ENELubomir Rintel
ENE Technology makes embedded controllers and perhaps other stuff. Their web site is http://www.ene.com.tw/. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Signed-off-by: Rob Herring <robh@kernel.org>
2020-03-23dt-bindings: Add vendor prefix for Dell Inc.Lubomir Rintel
Dell makes computers and perhaps other stuff. Their web site is http://www.dell.com/. Signed-off-by: Lubomir Rintel <lkundrak@v3.sk> Signed-off-by: Rob Herring <robh@kernel.org>
2020-03-23io-wq: handle hashed writes in chainsPavel Begunkov
We always punt async buffered writes to an io-wq helper, as the core kernel does not have IOCB_NOWAIT support for that. Most buffered async writes complete very quickly, as it's just a copy operation. This means that doing multiple locking roundtrips on the shared wqe lock for each buffered write is wasteful. Additionally, buffered writes are hashed work items, which means that any buffered write to a given file is serialized. Keep identically hashed work items contiguously in @wqe->work_list, and track a tail for each hash bucket. On dequeue of a hashed item, splice all of the same hash in one go using the tracked tail. Until the batch is done, the caller doesn't have to synchronize with the wqe or worker locks again. Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
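A simplified sketch of the enqueue side of that idea (field names, bucket count and list types are illustrative; io-wq uses its own singly linked work list):

    #include <linux/list.h>

    #define NR_HASH_BUCKETS 64                        /* illustrative size */

    struct wqe_sketch {
        struct list_head  work_list;
        struct list_head *hash_tail[NR_HASH_BUCKETS]; /* tail of each hash run, NULL if none */
    };

    static void insert_hashed(struct wqe_sketch *wqe, struct list_head *work,
                              unsigned int hash)
    {
        struct list_head *tail = wqe->hash_tail[hash];

        /* Queue right behind the previous work of the same hash so the run
         * stays contiguous; otherwise append at the end of the list. */
        if (tail)
            list_add(work, tail);
        else
            list_add_tail(work, &wqe->work_list);
        wqe->hash_tail[hash] = work;
    }

On dequeue, everything from the first item of a hash through hash_tail[hash] can then be spliced off in one step before the wqe lock is dropped.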
2020-03-23dt-bindings: Add vendor prefix for SG Micro CorpLuca Weiss
"SG Micro Corp (SGMICRO) specializes in high performance, high quality analog IC design, marketing and sales." (http://www.sg-micro.com/) Signed-off-by: Luca Weiss <luca@z3ntu.xyz> Signed-off-by: Rob Herring <robh@kernel.org>
2020-03-23rtc: class: support hctosys from modular RTC driversSteve Muckle
Due to distribution constraints it may not be possible to statically compile the required RTC driver into the kernel. Expand RTC_HCTOSYS support to cover all RTC devices (statically compiled or not) by checking at the end of RTC device registration whether the time should be synced. Signed-off-by: Steve Muckle <smuckle@google.com> Link: https://lore.kernel.org/r/20191106194625.116692-1-smuckle@google.com Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
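A minimal sketch of the registration-time check described above (the in-tree helper's exact name and signature may differ):

    #include <linux/rtc.h>
    #include <linux/string.h>

    /* At the end of RTC device registration: sync system time from this RTC
     * if it is the configured hctosys device. */
    static void rtc_hctosys_sketch(struct rtc_device *rtc)
    {
        if (!IS_ENABLED(CONFIG_RTC_HCTOSYS))
            return;
        if (strcmp(dev_name(&rtc->dev), CONFIG_RTC_HCTOSYS_DEVICE))
            return;
        rtc_hctosys(rtc);   /* read the time and feed do_settimeofday64() */
    }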
2020-03-23enetc: Remove unused variable 'enetc_drv_name'YueHaibing
commit ed0a72e0de16 ("net/freescale: Clean drivers from static versions") left this behind; remove it. Signed-off-by: YueHaibing <yuehaibing@huawei.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-23Crypto/chtls: add/delete TLS header in driverRohit Maheshwari
Kernel TLS forms the TLS header in the kernel during encryption and removes it during decryption before giving the packet back to the user application. Similar logic is introduced in the chtls code as well. v1->v2: - tls_process_cmsg() uses tls_handle_open_record(), which is not required in TOE-TLS. Don't mix TOE with other TLS types. Signed-off-by: Vinay Kumar Yadav <vinay.yadav@chelsio.com> Signed-off-by: Rohit Maheshwari <rohitm@chelsio.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-03-23tcp: repair: fix TCP_QUEUE_SEQ implementationEric Dumazet
When an application uses the TCP_QUEUE_SEQ socket option to change tp->rcv_nxt, we must also update tp->copied_seq. Otherwise, stuff relying on tcp_inq() being precise can eventually be confused. For example, tcp_zerocopy_receive() might crash because it does not expect tcp_recv_skb() to return NULL. We could add tests in various places to fix the issue, or simply make sure tcp_inq() won't return a random value, and leave the fast path as it is. Note that this fixes ioctl(fd, SIOCINQ, &val) at the same time. Fixes: ee9952831cfd ("tcp: Initial repair mode") Fixes: 05255b823a61 ("tcp: add TCP_ZEROCOPY_RECEIVE support for zerocopy receive") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: David S. Miller <davem@davemloft.net>
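The fix is essentially one extra store in the setsockopt handler; a hedged sketch of the receive-queue branch of TCP_QUEUE_SEQ (repair-mode checks omitted):

    /* Keep copied_seq consistent with the rewritten rcv_nxt so that
     * tcp_inq() (rcv_nxt - copied_seq) stays well defined. */
    if (sk->sk_state == TCP_CLOSE) {
        WRITE_ONCE(tp->rcv_nxt, val);
        WRITE_ONCE(tp->copied_seq, val);   /* the added update */
    }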
2020-03-23net: Make skb_segment not to compute checksum if network controller supports checksummingYadu Kishore
Problem: TCP checksum in the output path is not being offloaded during GSO in the following case: The network driver does not support scatter-gather but supports checksum offload with NETIF_F_HW_CSUM. Cause: skb_segment calls skb_copy_and_csum_bits if the network driver does not announce NETIF_F_SG. It does not check if the driver supports NETIF_F_HW_CSUM. So for devices which might want to offload checksum but do not support SG there is currently no way to do so if GSO is enabled. Solution: In skb_segment check if the network controller does checksum and if so call skb_copy_bits instead of skb_copy_and_csum_bits. Testing: Without the patch, ran iperf TCP traffic with NETIF_F_HW_CSUM enabled in the network driver. Observed the TCP checksum offload is not happening since the skbs received by the driver in the output path have skb->ip_summed set to CHECKSUM_NONE. With the patch ran iperf TCP traffic and observed that TCP checksum is being offloaded with skb->ip_summed set to CHECKSUM_PARTIAL. Also tested with the patch by disabling NETIF_F_HW_CSUM in the driver to cover the newly introduced if-else code path in skb_segment. Link: https://lore.kernel.org/netdev/CA+FuTSeYGYr3Umij+Mezk9CUcaxYwqEe5sPSuXF8jPE2yMFJAw@mail.gmail.com Signed-off-by: Yadu Kishore <kyk.segfault@gmail.com> Acked-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
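A hedged sketch of the new branch in skb_segment() (head_skb, nskb, offset, len and features are the variables already in that function; the real code decides per protocol, this only shows the shape):

    bool hw_csum = !!(features & NETIF_F_HW_CSUM);  /* simplified check */

    if (hw_csum) {
        /* device will checksum later: plain copy, leave it as CHECKSUM_PARTIAL */
        skb_copy_bits(head_skb, offset, skb_put(nskb, len), len);
        nskb->ip_summed = CHECKSUM_PARTIAL;
    } else {
        /* no SG and no HW csum: copy and compute the checksum in software */
        skb_copy_and_csum_bits(head_skb, offset, skb_put(nskb, len), len, 0);
        nskb->ip_summed = CHECKSUM_NONE;
    }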
2020-03-23bpf: Add tests for bpf_sk_storage to bpf_tcp_caMartin KaFai Lau
This patch adds test to exercise the bpf_sk_storage_get() and bpf_sk_storage_delete() helper from the bpf_dctcp.c. The setup and check on the sk_storage is done immediately before and after the connect(). This patch also takes this chance to move the pthread_create() after the connect() has been done. That will remove the need of the "wait_thread" label. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200320152107.2169904-1-kafai@fb.com
2020-03-23bpf: Add bpf_sk_storage support to bpf_tcp_caMartin KaFai Lau
This patch adds bpf_sk_storage_get() and bpf_sk_storage_delete() helper to the bpf_tcp_ca's struct_ops. That would allow bpf-tcp-cc to: 1) share sk private data with other bpf progs. 2) use bpf_sk_storage as a private storage for a bpf-tcp-cc if the existing icsk_ca_priv is not big enough. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20200320152101.2169498-1-kafai@fb.com
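A hedged sketch of what a bpf-tcp-cc program can now do (this is not the dctcp selftest itself; map, struct and program names are made up for illustration):

    /* SPDX-License-Identifier: GPL-2.0 */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    struct cc_state {
        __u64 conn_starts;
    };

    struct {
        __uint(type, BPF_MAP_TYPE_SK_STORAGE);
        __uint(map_flags, BPF_F_NO_PREALLOC);
        __type(key, int);
        __type(value, struct cc_state);
    } cc_storage SEC(".maps");

    SEC("struct_ops/sketch_init")
    void BPF_PROG(sketch_init, struct sock *sk)
    {
        struct cc_state *st;

        /* per-socket private state, shareable with other bpf programs */
        st = bpf_sk_storage_get(&cc_storage, sk, NULL,
                                BPF_SK_STORAGE_GET_F_CREATE);
        if (st)
            st->conn_starts++;
    }

    char _license[] SEC("license") = "GPL";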
2020-03-23KVM: VMX: Gracefully handle faults on VMXONSean Christopherson
Gracefully handle faults on VMXON, e.g. #GP due to VMX being disabled by BIOS, instead of letting the fault crash the system. Now that KVM uses cpufeatures to query support instead of reading MSR_IA32_FEAT_CTL directly, it's possible for a bug in a different subsystem to cause KVM to incorrectly attempt VMXON[*]. Crashing the system is especially annoying if the system is configured such that hardware_enable() will be triggered during boot. Opportunistically rename @addr to @vmxon_pointer and use a named param to reference it in the inline assembly. Print 0xdeadbeef in the ultra-"rare" case that reading MSR_IA32_FEAT_CTL also faults. [*] https://lkml.kernel.org/r/20200226231615.13664-1-sean.j.christopherson@intel.com Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-4-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
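The mechanism is an exception-table fixup around the VMXON instruction; a hedged sketch, close in shape to but not necessarily identical with the actual kvm_cpu_vmxon() change:

    #include <linux/types.h>
    #include <linux/compiler.h>
    #include <linux/printk.h>
    #include <asm/asm.h>

    /* Let a faulting VMXON land on the local "fault" label via the exception
     * table instead of crashing the machine. */
    static int cpu_vmxon_sketch(u64 vmxon_pointer)
    {
        asm_volatile_goto("1: vmxon %[vmxon_pointer]\n\t"
                          _ASM_EXTABLE(1b, %l[fault])
                          : : [vmxon_pointer] "m"(vmxon_pointer)
                          : : fault);
        return 0;
    fault:
        pr_warn_once("VMXON faulted\n");  /* the real code also dumps MSR_IA32_FEAT_CTL */
        return -EFAULT;
    }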
2020-03-23KVM: VMX: Fold loaded_vmcs_init() into alloc_loaded_vmcs()Sean Christopherson
Subsume loaded_vmcs_init() into alloc_loaded_vmcs(), its only remaining caller, and drop the VMCLEAR on the shadow VMCS, which is guaranteed to be NULL. loaded_vmcs_init() was previously used by loaded_vmcs_clear(), but loaded_vmcs_clear() also subsumed loaded_vmcs_init() to properly handle smp_wmb() with respect to VMCLEAR. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-3-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23KVM: VMX: Always VMCLEAR in-use VMCSes during crash with kexec supportSean Christopherson
VMCLEAR all in-use VMCSes during a crash, even if kdump's NMI shootdown interrupted a KVM update of the percpu in-use VMCS list. Because NMIs are not blocked by disabling IRQs, it's possible that crash_vmclear_local_loaded_vmcss() could be called while the percpu list of VMCSes is being modified, e.g. in the middle of list_add() in vmx_vcpu_load_vmcs(). This potential corner case was called out in the original commit[*], but the analysis of its impact was wrong. Skipping the VMCLEARs is wrong because it all but guarantees that a loaded, and therefore cached, VMCS will live across kexec and corrupt memory in the new kernel. Corruption will occur because the CPU's VMCS cache is non-coherent, i.e. not snooped, and so the writeback of VMCS memory on its eviction will overwrite random memory in the new kernel. The VMCS will live because the NMI shootdown also disables VMX, i.e. the in-progress VMCLEAR will #UD, and existing Intel CPUs do not flush the VMCS cache on VMXOFF. Furthermore, interrupting list_add() and list_del() is safe due to crash_vmclear_local_loaded_vmcss() using forward iteration. list_add() ensures the new entry is not visible to forward iteration unless the entire add completes, via WRITE_ONCE(prev->next, new). A bad "prev" pointer could be observed if the NMI shootdown interrupted list_del() or list_add(), but list_for_each_entry() does not consume ->prev. In addition to removing the temporary disabling of VMCLEAR, open code loaded_vmcs_init() in __loaded_vmcs_clear() and reorder VMCLEAR so that the VMCS is deleted from the list only after it's been VMCLEAR'd. Deleting the VMCS before VMCLEAR would allow a race where the NMI shootdown could arrive between list_del() and vmcs_clear() and thus neither flow would execute a successful VMCLEAR. Alternatively, more code could be moved into loaded_vmcs_init(), but that gets rather silly as the only other user, alloc_loaded_vmcs(), doesn't need the smp_wmb() and would need to work around the list_del(). Update the smp_*() comments related to the list manipulation, and opportunistically reword them to improve clarity. [*] https://patchwork.kernel.org/patch/1675731/#3720461 Fixes: 8f536b7697a0 ("KVM: VMX: provide the vmclear function and a bitmap to support VMCLEAR in kdump") Cc: stable@vger.kernel.org Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Message-Id: <20200321193751.24985-2-sean.j.christopherson@intel.com> Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23KVM: x86: Expose fast short REP MOV for supported cpuidZhenyu Wang
For CPUs supporting fast short REP MOV (X86_FEATURE_FSRM), e.g. Icelake and Tigerlake, expose it in the KVM supported cpuid as well. Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com> Message-Id: <20200323092236.3703-1-zhenyuw@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23tools/kvm_stat: add command line switch '-c' to log in csv formatStefan Raspl
Add an alternative format that can be more easily used for further processing later on. Note that we add a timestamp in the first column for both the regular and the new csv format. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Message-Id: <20200306114250.57585-5-raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23tools/kvm_stat: add command line switch '-s' to set update intervalStefan Raspl
This now controls both the refresh rate of the interactive mode and the interval of the logging mode. As a consequence, the default interval of the logging mode is now 3s, too (use command line switch '-s' to adjust to your liking). Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Message-Id: <20200306114250.57585-4-raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2020-03-23tools/kvm_stat: switch to argparseStefan Raspl
optparse has been deprecated for a while, hence switch over to argparse (which also works with python2). As a consequence, the help output has some subtle changes, the most significant one being that the options are all listed explicitly instead of a universal '[options]' indicator. Also, some of the error messages are phrased slightly differently. While at it, squashed a number of minor PEP8 issues. Signed-off-by: Stefan Raspl <raspl@linux.ibm.com> Message-Id: <20200306114250.57585-3-raspl@linux.ibm.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>