2018-09-19net: mvneta: fix the Rx desc buffer DMA unmappingAntoine Tenart
With CONFIG_DMA_API_DEBUG enabled we now get a warning when using the mvneta driver: mvneta d0030000.ethernet: DMA-API: device driver frees DMA memory with wrong function [device address=0x000000001165b000] [size=4096 bytes] [mapped as page] [unmapped as single] This is because when using the s/w buffer management, the Rx descriptor buffer is mapped with dma_map_page but unmapped with dma_unmap_single. This patch fixes this by using the right unmapping function. Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for SWBM") Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
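For context, a minimal sketch of the pairing the DMA-API debug code expects; the function and parameter names here are illustrative, not the driver's actual code:

    #include <linux/dma-mapping.h>

    /* Illustrative only: a buffer mapped with dma_map_page() must be
     * unmapped with dma_unmap_page(), never dma_unmap_single(), or
     * CONFIG_DMA_API_DEBUG flags a "mapped as page / unmapped as
     * single" mismatch like the one above. */
    static int rx_buf_map(struct device *dev, struct page *page,
                          size_t len, dma_addr_t *dma)
    {
            *dma = dma_map_page(dev, page, 0, len, DMA_FROM_DEVICE);
            if (dma_mapping_error(dev, *dma))
                    return -ENOMEM;
            return 0;
    }

    static void rx_buf_unmap(struct device *dev, dma_addr_t dma, size_t len)
    {
            /* Matches the dma_map_page() above. */
            dma_unmap_page(dev, dma, len, DMA_FROM_DEVICE);
    }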
2018-09-19ip6_tunnel: be careful when accessing the inner headerPaolo Abeni
the ip6 tunnel xmit ndo assumes that the processed skb always contains an ip[v6] header, but syzbot has found a way to send frames that fall short of this assumption, leading to the following splat: BUG: KMSAN: uninit-value in ip6ip6_tnl_xmit net/ipv6/ip6_tunnel.c:1307 [inline] BUG: KMSAN: uninit-value in ip6_tnl_start_xmit+0x7d2/0x1ef0 net/ipv6/ip6_tunnel.c:1390 CPU: 0 PID: 4504 Comm: syz-executor558 Not tainted 4.16.0+ #87 Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011 Call Trace: __dump_stack lib/dump_stack.c:17 [inline] dump_stack+0x185/0x1d0 lib/dump_stack.c:53 kmsan_report+0x142/0x240 mm/kmsan/kmsan.c:1067 __msan_warning_32+0x6c/0xb0 mm/kmsan/kmsan_instr.c:683 ip6ip6_tnl_xmit net/ipv6/ip6_tunnel.c:1307 [inline] ip6_tnl_start_xmit+0x7d2/0x1ef0 net/ipv6/ip6_tunnel.c:1390 __netdev_start_xmit include/linux/netdevice.h:4066 [inline] netdev_start_xmit include/linux/netdevice.h:4075 [inline] xmit_one net/core/dev.c:3026 [inline] dev_hard_start_xmit+0x5f1/0xc70 net/core/dev.c:3042 __dev_queue_xmit+0x27ee/0x3520 net/core/dev.c:3557 dev_queue_xmit+0x4b/0x60 net/core/dev.c:3590 packet_snd net/packet/af_packet.c:2944 [inline] packet_sendmsg+0x7c70/0x8a30 net/packet/af_packet.c:2969 sock_sendmsg_nosec net/socket.c:630 [inline] sock_sendmsg net/socket.c:640 [inline] ___sys_sendmsg+0xec0/0x1310 net/socket.c:2046 __sys_sendmmsg+0x42d/0x800 net/socket.c:2136 SYSC_sendmmsg+0xc4/0x110 net/socket.c:2167 SyS_sendmmsg+0x63/0x90 net/socket.c:2162 do_syscall_64+0x309/0x430 arch/x86/entry/common.c:287 entry_SYSCALL_64_after_hwframe+0x3d/0xa2 RIP: 0033:0x441819 RSP: 002b:00007ffe58ee8268 EFLAGS: 00000213 ORIG_RAX: 0000000000000133 RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000441819 RDX: 0000000000000002 RSI: 0000000020000100 RDI: 0000000000000003 RBP: 00000000006cd018 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000213 R12: 0000000000402510 R13: 00000000004025a0 R14: 0000000000000000 R15: 0000000000000000 Uninit was created at: kmsan_save_stack_with_flags mm/kmsan/kmsan.c:278 [inline] kmsan_internal_poison_shadow+0xb8/0x1b0 mm/kmsan/kmsan.c:188 kmsan_kmalloc+0x94/0x100 mm/kmsan/kmsan.c:314 kmsan_slab_alloc+0x11/0x20 mm/kmsan/kmsan.c:321 slab_post_alloc_hook mm/slab.h:445 [inline] slab_alloc_node mm/slub.c:2737 [inline] __kmalloc_node_track_caller+0xaed/0x11c0 mm/slub.c:4369 __kmalloc_reserve net/core/skbuff.c:138 [inline] __alloc_skb+0x2cf/0x9f0 net/core/skbuff.c:206 alloc_skb include/linux/skbuff.h:984 [inline] alloc_skb_with_frags+0x1d4/0xb20 net/core/skbuff.c:5234 sock_alloc_send_pskb+0xb56/0x1190 net/core/sock.c:2085 packet_alloc_skb net/packet/af_packet.c:2803 [inline] packet_snd net/packet/af_packet.c:2894 [inline] packet_sendmsg+0x6454/0x8a30 net/packet/af_packet.c:2969 sock_sendmsg_nosec net/socket.c:630 [inline] sock_sendmsg net/socket.c:640 [inline] ___sys_sendmsg+0xec0/0x1310 net/socket.c:2046 __sys_sendmmsg+0x42d/0x800 net/socket.c:2136 SYSC_sendmmsg+0xc4/0x110 net/socket.c:2167 SyS_sendmmsg+0x63/0x90 net/socket.c:2162 do_syscall_64+0x309/0x430 arch/x86/entry/common.c:287 entry_SYSCALL_64_after_hwframe+0x3d/0xa2 This change addresses the issue adding the needed check before accessing the inner header. The ipv4 side of the issue is apparently there since the ipv4 over ipv6 initial support, and the ipv6 side predates git history. 
Fixes: c4d3efafcc93 ("[IPV6] IP6TUNNEL: Add support to IPv4 over IPv6 tunnel.") Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Reported-by: syzbot+3fde91d4d394747d6db4@syzkaller.appspotmail.com Tested-by: Alexander Potapenko <glider@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
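A hedged sketch of the kind of guard the fix implies; the helper name and exact placement are assumptions, and the real patch may express the check differently:

    #include <linux/skbuff.h>
    #include <linux/ip.h>
    #include <linux/ipv6.h>

    /* Sketch only: before the tunnel xmit path reads the inner IPv4/IPv6
     * header, make sure the skb really carries that many bytes past the
     * network header; otherwise uninitialized memory would be read. */
    static bool inner_header_ok(struct sk_buff *skb, bool ipv6)
    {
            unsigned int hlen = ipv6 ? sizeof(struct ipv6hdr)
                                     : sizeof(struct iphdr);

            return pskb_may_pull(skb, skb_network_offset(skb) + hlen);
    }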
2018-09-20ARM: dts: sun8i: drop A64 HDMI PHY fallback compatible from R40 DTIcenowy Zheng
The R40 HDMI PHY seems to be different from the A64 one: the A64 one has no input mux, but the R40 one does. Drop the A64 fallback compatible from the HDMI PHY node in the R40 DT. Signed-off-by: Icenowy Zheng <icenowy@aosc.io> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com> [wens@csie.org: Fix subject prefix order] Signed-off-by: Chen-Yu Tsai <wens@csie.org>
2018-09-19drm/amdgpu: add new polaris pci idAlex Deucher
Add new pci id. Reviewed-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
2018-09-19Merge tag 'batadv-net-for-davem-20180919' of git://git.open-mesh.org/linux-mergeDavid S. Miller
Simon Wunderlich says: ==================== pull request for net: batman-adv 2018-09-19 here are some bugfixes which we would like to see integrated into net. We forgot to bump the version number in the last round for net-next, so the belated patch to do that is included - we hope you can adopt it. This will most likely create a merge conflict later when merging into net-next with this rounds net-next patchset, but net-next should keep the 2018.4 version[1]. [1] resolution: --- a/net/batman-adv/main.h +++ b/net/batman-adv/main.h @@ -25,11 +25,7 @@ #define BATADV_DRIVER_DEVICE "batman-adv" #ifndef BATADV_SOURCE_VERSION -<<<<<<< -#define BATADV_SOURCE_VERSION "2018.3" -======= #define BATADV_SOURCE_VERSION "2018.4" ->>>>>>> #endif /* B.A.T.M.A.N. parameters */ Please pull or let me know of any problem! Here are some batman-adv bugfixes: - Avoid ELP information leak, by Sven Eckelmann - Fix sysfs segfault issues, by Sven Eckelmann (2 patches) - Fix locking when adding entries in various lists, by Sven Eckelmann (5 patches) - Fix refcount if queue_work() fails, by Marek Lindner (2 patches) - Fixup forgotten version bump, by Sven Eckelmann ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-09-20Merge tag 'drm-intel-fixes-2018-09-19' of git://anongit.freedesktop.org/drm/drm-intel into drm-fixesDave Airlie
Only fixes coming from gvt containing "Two more BXT fixes from Colin, one srcu locking fix and one fix for GGTT clear when destroy vGPU." Signed-off-by: Dave Airlie <airlied@redhat.com> From: Rodrigo Vivi <rodrigo.vivi@intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/20180919151915.GA6309@intel.com
2018-09-20Merge tag 'drm-misc-fixes-2018-09-19' of git://anongit.freedesktop.org/drm/drm-misc into drm-fixesDave Airlie
drm-misc-fixes for v4.19-rc5: - Fix crash in vgem in drm_drv_uses_atomic_modeset. - Allow atomic drivers that don't set DRIVER_ATOMIC to create debugfs entries. - Fix compiler warning for unused connector_funcs. - Fix null pointer deref on UDL unplug. - Disable DRM support for sun4i's R40 for now. (Not all patches went in for v4.19, so it has to wait a cycle.) - NULL-terminate the of_device_id table in pl111. - Make sure vc4 NV12 planar format works when displaying an unscaled fb. Signed-off-by: Dave Airlie <airlied@redhat.com> From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Link: https://patchwork.freedesktop.org/patch/msgid/dda393bb-f13f-8d36-711b-cacfc578e5a3@linux.intel.com
2018-09-20kvm: selftests: Add platform_info_testDrew Schmitt
Test guest access to MSR_PLATFORM_INFO when the capability is enabled or disabled. Signed-off-by: Drew Schmitt <dasch@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20KVM: x86: Control guest reads of MSR_PLATFORM_INFODrew Schmitt
Add KVM_CAP_MSR_PLATFORM_INFO so that userspace can disable guest access to reads of MSR_PLATFORM_INFO. Disabling access to reads of this MSR gives userspace the control to "expose" this platform-dependent information to guests in a clear way. As it exists today, guests that read this MSR would get unpopulated information if userspace hadn't already set it (and prior to this patch series, only the CPUID faulting information could have been populated). This existing interface could be confusing if guests don't handle the potential for incorrect/incomplete information gracefully (e.g. zero reported for base frequency). Signed-off-by: Drew Schmitt <dasch@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
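A hedged userspace sketch of how a VMM might use the new capability through KVM_ENABLE_CAP; the assumption that args[0] acts as an enable/disable flag on the VM fd is illustrative, not confirmed by this log:

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: disable guest reads of MSR_PLATFORM_INFO on a VM fd.
     * Assumes the capability takes an enable/disable flag in args[0]. */
    static int disable_platform_info_reads(int vm_fd)
    {
            struct kvm_enable_cap cap;

            memset(&cap, 0, sizeof(cap));
            cap.cap = KVM_CAP_MSR_PLATFORM_INFO;
            cap.args[0] = 0;        /* 0 = guest reads disabled (assumed) */

            return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
    }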
2018-09-20KVM: x86: Turbo bits in MSR_PLATFORM_INFODrew Schmitt
Allow userspace to set turbo bits in MSR_PLATFORM_INFO. Previously, only the CPUID faulting bit was settable. Now any bit in MSR_PLATFORM_INFO is settable. This can be used, for example, to convey frequency information about the platform on which the guest is running. Signed-off-by: Drew Schmitt <dasch@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20nVMX x86: Check VPID value on vmentry of L2 guestsKrish Sadhukhan
According to section "Checks on VMX Controls" in Intel SDM vol 3C, the following check needs to be enforced on vmentry of L2 guests: If the 'enable VPID' VM-execution control is 1, the value of the VPID VM-execution control field must not be 0000H. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Mark Kanda <mark.kanda@oracle.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
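A hedged sketch of that consistency check; SECONDARY_EXEC_ENABLE_VPID is the existing control bit, while the helper name and calling convention are illustrative:

    #include <linux/types.h>
    #include <asm/vmx.h>

    /* Sketch: if the "enable VPID" secondary execution control is set,
     * the VPID field must be non-zero or the VM-entry must fail. */
    static bool nested_vpid_valid(u32 secondary_exec_ctls, u16 vpid)
    {
            if (!(secondary_exec_ctls & SECONDARY_EXEC_ENABLE_VPID))
                    return true;

            return vpid != 0;
    }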
2018-09-20nVMX x86: check posted-interrupt descriptor address on vmentry of L2Krish Sadhukhan
According to section "Checks on VMX Controls" in Intel SDM vol 3C, the following check needs to be enforced on vmentry of L2 guests: - Bits 5:0 of the posted-interrupt descriptor address are all 0. - The posted-interrupt descriptor address does not set any bits beyond the processor's physical-address width. Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Reviewed-by: Mark Kanda <mark.kanda@oracle.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Karl Heubaum <karl.heubaum@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
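A hedged sketch of the two conditions; the helper name and the way the physical-address width is obtained are assumptions for illustration:

    #include <linux/types.h>

    /* Sketch: bits 5:0 of the posted-interrupt descriptor address must
     * be zero (64-byte alignment) and no bits above the guest's
     * physical-address width may be set. */
    static bool pi_desc_addr_valid(u64 addr, unsigned int maxphyaddr)
    {
            if (addr & 0x3f)
                    return false;

            return (addr >> maxphyaddr) == 0;
    }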
2018-09-20KVM: nVMX: Wake blocked vCPU in guest-mode if pending interrupt in virtual APICvLiran Alon
In case L1 does not intercept L2 HLT or enters L2 in HLT activity-state, it is possible for a vCPU to be blocked while it is in guest-mode. According to Intel SDM 26.6.5 Interrupt-Window Exiting and Virtual-Interrupt Delivery: "These events wake the logical processor if it just entered the HLT state because of a VM entry". Therefore, if L1 enters L2 in HLT activity-state and L2 has a pending deliverable interrupt in vmcs12->guest_intr_status.RVI, then the vCPU should be woken from the HLT state and injected with the interrupt. In addition, if while the vCPU is blocked (while it is in guest-mode), it receives a nested posted-interrupt, then the vCPU should also be woken and injected with the posted interrupt. To handle these cases, this patch enhances kvm_vcpu_has_events() to also check if there is a pending interrupt in L2 virtual APICv provided by L1. That is, it evaluates if there is a pending virtual interrupt for L2 by checking RVI[7:4] > VPPR[7:4] as specified in Intel SDM 29.2.1 Evaluation of Pending Interrupts. Note that this also handles the case of nested posted-interrupt by the fact RVI is updated in vmx_complete_nested_posted_interrupt() which is called from kvm_vcpu_check_block() -> kvm_arch_vcpu_runnable() -> kvm_vcpu_running() -> vmx_check_nested_events() -> vmx_complete_nested_posted_interrupt(). Reviewed-by: Nikita Leshenko <nikita.leshchenko@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
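A hedged sketch of the RVI/VPPR comparison described above; the helper name is illustrative:

    #include <linux/types.h>

    /* Sketch of SDM 29.2.1 "Evaluation of Pending Virtual Interrupts":
     * a virtual interrupt is pending for L2 when the upper nibble of
     * RVI is greater than the upper nibble of VPPR. */
    static bool l2_has_pending_virt_intr(u8 rvi, u8 vppr)
    {
            return (rvi & 0xf0) > (vppr & 0xf0);
    }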
2018-09-20KVM: VMX: check nested state and CR4.VMXE against SMMPaolo Bonzini
VMX cannot be enabled under SMM, check it when CR4 is set and when nested virtualization state is restored. This should fix some WARNs reported by syzkaller, mostly around alloc_shadow_vmcs. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20kvm: x86: make kvm_{load|put}_guest_fpu() staticSebastian Andrzej Siewior
The functions kvm_load_guest_fpu() and kvm_put_guest_fpu() are only used locally, so make them static. This also requires moving both functions because they are used before their implementation. Those functions were exported (via EXPORT_SYMBOL) before commit e5bb40251a920 ("KVM: Drop kvm_{load,put}_guest_fpu() exports"). Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20x86/hyper-v: rename ipi_arg_{ex,non_ex} structuresVitaly Kuznetsov
These structures are going to be used from KVM code so let's make their names reflect their Hyper-V origin. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: Roman Kagan <rkagan@virtuozzo.com> Acked-by: K. Y. Srinivasan <kys@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20KVM: VMX: use preemption timer to force immediate VMExitSean Christopherson
A VMX preemption timer value of '0' is guaranteed to cause a VMExit prior to the CPU executing any instructions in the guest. Use the preemption timer (if it's supported) to trigger immediate VMExit in place of the current method of sending a self-IPI. This ensures that pending VMExit injection to L1 occurs prior to executing any instructions in the guest (regardless of nesting level). When deferring VMExit injection, KVM generates an immediate VMExit from the (possibly nested) guest by sending itself an IPI. Because hardware interrupts are blocked prior to VMEnter and are unblocked (in hardware) after VMEnter, this results in taking a VMExit(INTR) before any guest instruction is executed. But, as this approach relies on the IPI being received before VMEnter executes, it only works as intended when KVM is running as L0. Because there are no architectural guarantees regarding when IPIs are delivered, when running nested the INTR may "arrive" long after L2 is running e.g. L0 KVM doesn't force an immediate switch to L1 to deliver an INTR. For the most part, this unintended delay is not an issue since the events being injected to L1 also do not have architectural guarantees regarding their timing. The notable exception is the VMX preemption timer[1], which is architecturally guaranteed to cause a VMExit prior to executing any instructions in the guest if the timer value is '0' at VMEnter. Specifically, the delay in injecting the VMExit causes the preemption timer KVM unit test to fail when run in a nested guest. Note: this approach is viable even on CPUs with a broken preemption timer, as broken in this context only means the timer counts at the wrong rate. There are no known errata affecting timer value of '0'. [1] I/O SMIs also have guarantees on when they arrive, but I have no idea if/how those are emulated in KVM. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> [Use a hook for SVM instead of leaving the default in x86.c - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
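A hedged sketch of the core idea; vmcs_write32() and VMX_PREEMPTION_TIMER_VALUE exist in the VMX code, but the surrounding control-field plumbing is omitted and the function name is illustrative:

    #include <asm/vmx.h>

    /* Sketch: arming the VMX preemption timer with a value of 0 (with
     * the pin-based PIN_BASED_VMX_PREEMPTION_TIMER control enabled)
     * guarantees a VM-exit before the guest executes any instruction,
     * which is what "force an immediate VMExit" relies on. */
    static void force_immediate_vmexit(void)
    {
            vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, 0);
    }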
2018-09-20KVM: VMX: modify preemption timer bit only when arming timerSean Christopherson
Provide a singular location where the VMX preemption timer bit is set/cleared so that future usages of the preemption timer can ensure the VMCS bit is up-to-date without having to modify unrelated code paths. For example, the preemption timer can be used to force an immediate VMExit. Cache the status of the timer to avoid redundant VMREAD and VMWRITE, e.g. if the timer stays armed across multiple VMEnters/VMExits. Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20KVM: VMX: immediately mark preemption timer expired only for zero valueSean Christopherson
A VMX preemption timer value of '0' at the time of VMEnter is architecturally guaranteed to cause a VMExit prior to the CPU executing any instructions in the guest. This architectural definition is in place to ensure that a previously expired timer is correctly recognized by the CPU as it is possible for the timer to reach zero and not trigger a VMexit due to a higher priority VMExit being signalled instead, e.g. a pending #DB that morphs into a VMExit. Whether by design or coincidence, commit f4124500c2c1 ("KVM: nVMX: Fully emulate preemption timer") special cased timer values of '0' and '1' to ensure prompt delivery of the VMExit. Unlike '0', a timer value of '1' has no architectural guarantees regarding when it is delivered. Modify the timer emulation to trigger immediate VMExit if and only if the timer value is '0', and document precisely why '0' is special. Do this even if calibration of the virtual TSC failed, i.e. VMExit will occur immediately regardless of the frequency of the timer. Making only '0' a special case gives KVM leeway to be more aggressive in ensuring the VMExit is injected prior to executing instructions in the nested guest, and also eliminates any ambiguity as to why '1' is a special case, e.g. why wasn't the threshold for a "short timeout" set to 10, 100, 1000, etc... Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20KVM: SVM: Switch to bitmap_zalloc()Andy Shevchenko
Switch to bitmap_zalloc() to show clearly what we are allocating. Besides that it returns pointer of bitmap type instead of opaque void *. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
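A sketch of the conversion pattern in general (the function name and nbits are illustrative, not SVM's actual code):

    #include <linux/bitmap.h>
    #include <linux/slab.h>

    static unsigned long *alloc_tracking_bitmap(unsigned int nbits)
    {
            /* Old style: size computed by hand, intent hidden behind kzalloc(). */
            /* return kzalloc(BITS_TO_LONGS(nbits) * sizeof(unsigned long),
                              GFP_KERNEL); */

            /* New style: the bitmap type and zeroed allocation are explicit;
             * pair with bitmap_free() on teardown. */
            return bitmap_zalloc(nbits, GFP_KERNEL);
    }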
2018-09-20KVM/MMU: Fix comment in walk_shadow_page_lockless_end()Tianyu Lan
kvm_commit_zap_page() has been renamed to kvm_mmu_commit_zap_page(). This patch fixes the comment accordingly. Signed-off-by: Lan Tianyu <Tianyu.Lan@microsoft.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20kvm: selftests: use -pthread instead of -lpthreadLei Yang
I ran into the following error: testing/selftests/kvm/dirty_log_test.c:285: undefined reference to `pthread_create' testing/selftests/kvm/dirty_log_test.c:297: undefined reference to `pthread_join' collect2: error: ld returned 1 exit status My gcc version is 4.8.4. "-pthread" works everywhere. Signed-off-by: Lei Yang <Lei.Yang@windriver.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20KVM: x86: don't reset root in kvm_mmu_setup()Wei Yang
Here is the code path which shows kvm_mmu_setup() is invoked after kvm_mmu_create(). Since kvm_mmu_setup() is only invoked in this code path, the root_hpa and prev_roots are guaranteed to be invalid and it is not necessary to reset them again. kvm_vm_ioctl_create_vcpu() kvm_arch_vcpu_create() vmx_create_vcpu() kvm_vcpu_init() kvm_arch_vcpu_init() kvm_mmu_create() kvm_arch_vcpu_setup() kvm_mmu_setup() kvm_init_mmu() This patch sets reset_roots to false in kvm_mmu_setup(). Fixes: 50c28f21d045dde8c52548f8482d456b3f0956f5 Signed-off-by: Wei Yang <richard.weiyang@gmail.com> Reviewed-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2018-09-20kvm: mmu: Don't read PDPTEs when paging is not enabledJunaid Shahid
kvm should not attempt to read guest PDPTEs when CR0.PG = 0 and CR4.PAE = 1. Signed-off-by: Junaid Shahid <junaids@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
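A hedged sketch of the condition; the helper name and parameters are illustrative rather than KVM's actual code:

    #include <asm/processor-flags.h>

    /* Sketch: guest PDPTEs exist only under PAE paging, which requires
     * CR0.PG = 1 and CR4.PAE = 1 (and not 4-level long-mode tables);
     * with CR0.PG = 0 they must not be read from the guest. */
    static bool should_load_pdptrs(unsigned long cr0, unsigned long cr4,
                                   bool long_mode)
    {
            return (cr0 & X86_CR0_PG) && (cr4 & X86_CR4_PAE) && !long_mode;
    }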
2018-09-20x86/kvm/lapic: always disable MMIO interface in x2APIC modeVitaly Kuznetsov
When VMX is used with flexpriority disabled (because of missing support or because it is disabled with a module parameter) the MMIO interface to the lAPIC is still available in x2APIC mode while it shouldn't be (kvm-unit-tests): PASS: apic_disable: Local apic enabled in x2APIC mode PASS: apic_disable: CPUID.1H:EDX.APIC[bit 9] is set FAIL: apic_disable: *0xfee00030: 50014 The issue appears because we basically do nothing while switching to x2APIC mode when the APIC access page is not used. apic_mmio_{read,write} only check if the lAPIC is disabled before proceeding to the actual write. When APIC access is virtualized we correctly manipulate the VMX controls in vmx_set_virtual_apic_mode() and we don't get vmexits from memory writes in x2APIC mode, so there's no issue. Disabling the MMIO interface seems to be easy. The question is: what do we do with these reads and writes? If we add an apic_x2apic_mode() check to apic_mmio_in_range() and return -EOPNOTSUPP, these reads and writes will go to userspace. When the lAPIC is in the kernel, Qemu uses this interface to inject MSIs only (see kvm_apic_mem_write() in hw/i386/kvm/apic.c). This somehow works with a disabled lAPIC, but when we're in xAPIC mode we will get a real injected MSI from every write to the lAPIC. Not good. The simplest solution seems to be to just ignore writes to the region and return ~0 for all reads when we're in x2APIC mode. This is what this patch does. However, this approach is inconsistent with what currently happens when flexpriority is enabled: we allocate the APIC access page and create a KVM memory region, so in x2APIC mode all reads and writes go to this pre-allocated page which is, btw, the same for all vCPUs. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
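A hedged sketch of the behaviour the patch chooses for the MMIO window while in x2APIC mode; function names are illustrative:

    #include <linux/string.h>

    /* Sketch only: while in x2APIC mode, reads of the legacy MMIO window
     * return all-ones and writes are silently dropped. */
    static int apic_mmio_read_x2apic(void *val, unsigned int len)
    {
            memset(val, 0xff, len);  /* *0xfee00030 and friends read as ~0 */
            return 0;
    }

    static int apic_mmio_write_x2apic(void)
    {
            return 0;                /* ignore the write */
    }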
2018-09-19OPP: Use a single mechanism to free the OPP tableViresh Kumar
Currently there are two separate ways to free the OPP table based on how it is created in the first place. We call _dev_pm_opp_remove_table() to free the static and/or dynamic OPPs, OPP list devices, etc. This is done for the case where the OPP table is added while initializing the OPPs, like via the path dev_pm_opp_of_add_table(). We also call dev_pm_opp_put_opp_table() in some cases which eventually frees the OPP table structure once the reference count reaches 0. This is used by the first case as well as other cases like dev_pm_opp_set_regulators() where the OPPs aren't necessarily initialized at this point. This whole thing is a bit unclear and messy and obstructs any further cleanup/fixup of the OPP core. This patch tries to streamline this by keeping a single path for OPP table destruction, i.e. dev_pm_opp_put_opp_table(). All the cleanup happens in _opp_table_kref_release() now after the reference count reaches 0. _dev_pm_opp_remove_table() is removed as it isn't required anymore. We don't drop the reference to the OPP table after creating it from _of_add_opp_table_v{1|2}() anymore and the same is dropped only when we try to remove them. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Don't remove dynamic OPPs from _dev_pm_opp_remove_table()Viresh Kumar
Only one platform was depending on this feature and it is already updated now. Stop removing dynamic OPPs from _dev_pm_opp_remove_table(). This simplifies a lot of paths and removes unnecessary parameters. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19cpufreq: mvebu: Remove OPPs using dev_pm_opp_remove()Viresh Kumar
dev_pm_opp_cpumask_remove_table() is going to change in the next commit and will not remove dynamic OPPs automatically. They must be removed with a call to dev_pm_opp_remove(). Reviewed-by: Gregory CLEMENT <gregory.clement@bootlin.com> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
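A hedged sketch of the replacement pattern for a platform driver; the function name and frequency array are illustrative:

    #include <linux/pm_opp.h>

    /* Sketch: a driver that added OPPs dynamically with dev_pm_opp_add()
     * now removes them explicitly, one frequency at a time, instead of
     * relying on the cpumask helper to do it. */
    static void remove_dynamic_opps(struct device *cpu_dev,
                                    const unsigned long *freqs, int n)
    {
            int i;

            for (i = 0; i < n; i++)
                    dev_pm_opp_remove(cpu_dev, freqs[i]);
    }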
2018-09-19OPP: Create separate kref for static OPPs listViresh Kumar
The static OPPs don't always get freed with the OPP table, it can happen before that as well. For example, if the OPP table is first created using helpers like dev_pm_opp_set_supported_hw() and the OPPs are created at a later point. Now when the OPPs are removed, the OPP table stays until the time dev_pm_opp_put_supported_hw() is called. Later patches will streamline the freeing of OPP table and that requires the static OPPs to get freed with help of a separate kernel reference. This patch prepares for that by creating a separate kref for static OPPs list. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Don't take OPP table's kref for static OPPsViresh Kumar
The reference count is only required to be incremented for every call that may lead to adding the OPP table. For static OPPs the same should be done from the parent routine which adds all static OPPs together, and so only one refcount is taken for all static OPPs. Update the code to reflect that. The refcount is incremented every time a dynamic OPP is created (as that can lead to creating the OPP table) and the same is dropped when the OPP is removed. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Parse OPP table's DT properties from _of_init_opp_table()Viresh Kumar
Parse the DT properties present in the OPP table from _of_init_opp_table(), which is a dedicated routine for DT parsing. Minor relocation of helpers is required for this. It is now possible for _managed_opp() to return a partially initialized OPP table if the OPP table is created via helpers like dev_pm_opp_set_supported_hw(), and we need another flag to indicate whether the static OPPs are already parsed or not, to make sure we don't incorrectly skip initializing the static OPPs. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Pass index to _of_init_opp_table()Viresh Kumar
This is a preparatory patch required for the next commit which will start using OPP table's node pointer in _of_init_opp_table(), which requires the index in order to read the OPP table's phandle. This commit adds the index argument in the call chains in order to get it delivered to _of_init_opp_table(). Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Protect dev_list with opp_table lockViresh Kumar
The dev_list needs to be protected with a lock, else we may have simultaneous access (addition/removal) to it and that would be racy. Extend scope of the opp_table lock to protect dev_list as well. Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Don't try to remove all OPP tables on failureViresh Kumar
dev_pm_opp_of_cpumask_add_table() creates the OPP table for all CPUs present in the cpumask and on errors it should revert all changes it has done. It actually is doing a bit more than that. On errors, it tries to free all the OPP tables, even the one it hasn't created yet. This may also end up freeing the OPP tables which were created from separate path, like dev_pm_opp_set_supported_hw(). Reported-and-tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19OPP: Free OPP table properly on performance state irregularitiesViresh Kumar
The OPP table was freed, but not the individual OPPs which is done from _dev_pm_opp_remove_table(). Fix it by calling _dev_pm_opp_remove_table() as well. Cc: 4.18 <stable@vger.kernel.org> # v4.18 Fixes: 3ba98324e81a ("PM / OPP: Get performance state using genpd helper") Tested-by: Niklas Cassel <niklas.cassel@linaro.org> Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2018-09-19tools: bpf: fix license for a compat header fileJakub Kicinski
libc_compat.h is used by libbpf so make sure it's licensed under LGPL or BSD license. The license change should be OK, I'm the only author of the file. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2018-09-19IB/srp: Avoid that sg_reset -d ${srp_device} triggers an infinite loopBart Van Assche
Use different loop variables for the inner and outer loops. This avoids an infinite loop when there are more RDMA channels than target->req_ring_size. Fixes: d92c0da71a35 ("IB/srp: Add multichannel support") Cc: <stable@vger.kernel.org> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
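A generic illustration of the bug class being fixed, not the driver's actual code; reset_request() is a hypothetical per-request operation:

    /* Hypothetical per-request reset, just to make the sketch complete. */
    static void reset_request(int channel, int request) { }

    /* Reusing a single index for both loops would let the inner loop
     * clobber the outer loop's progress, so the outer loop could spin
     * forever when the inner bound is larger. Separate variables avoid
     * that. */
    static void reset_all(int num_channels, int ring_size)
    {
            int i, j;

            for (i = 0; i < num_channels; i++)
                    for (j = 0; j < ring_size; j++)
                            reset_request(i, j);
    }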
2018-09-19Merge tag 'hwmon-for-linus-v4.19-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-stagingGreg Kroah-Hartman
Guenter writes: "Various bug fixes for nct6775 driver"
2018-09-19Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsiGreg Kroah-Hartman
James writes: "SCSI fixes on 20180919 A couple of small but important fixes, one affecting big endian and the other fixing a BUG_ON in scatterlist processing. Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>"
2018-09-19tools lib traceevent: Add prefix tep_ to enum filter_trivial_typeTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to enum filter_trivial_type and all its members. Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185725.076387655@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Rename data2host*() APIsTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This renames data2host*() APIs Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.751088939@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Rename struct plugin_list to struct tep_plugin_listTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This renames struct plugin_list to struct tep_plugin_list Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.586889128@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix tep_ to structs filter_type and event_filterTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to structs filter_type and event_filter Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.309837130@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix tep_ to various structs filter_arg_*.Tzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to various structs filter_arg_*. Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185724.152948543@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix tep_ to struct filter_{arg,value_type}Tzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to struct filter_arg, enum filter_value_type and all enum's members. Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.972818215@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix tep_ to enums filter_{exp,arg}_typeTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to enums filter_exp_type, filter_arg_type and all enum's members Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.824559046@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix tep_ to enums filter_{boolean,op,cmp}_typeTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to enums filter_boolean_type, filter_op_type, filter_cmp_type and all enum's members Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.680572508@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent, perf tools: Rename enum print_arg_type to enum tep_print_arg_typeTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This renames enum print_arg_type to enum tep_print_arg_type and adds prefix TEP_ to all its members. Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.533960748@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent, perf tools: Add prefix tep_ to all print_* structuresTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix tep_ to all print_* structures Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.381753268@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2018-09-19tools lib traceevent: Add prefix TEP_ to all EVENT_FL_* flagsTzvetomir Stoyanov (VMware)
In order to make libtraceevent into a proper library, variables, data structures and functions require a unique prefix to prevent name space conflicts. That prefix will be "tep_". This adds prefix TEP_ to all members of nameless enum EVENT_FL_* Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com> Cc: linux-trace-devel@vger.kernel.org Link: http://lkml.kernel.org/r/20180919185723.116643250@goodmis.org Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>