path: root/include
2019-11-21  habanalabs: add opcode to INFO IOCTL to return clock rate  (Oded Gabbay)
Add a new opcode to the INFO IOCTL to allow the user application to retrieve the ASIC's current and maximum clock rate. The rate is returned in MHz. Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com> Reviewed-by: Tomer Tayar <ttayar@habana.ai>
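A minimal userspace sketch of querying the new opcode follows; the HL_INFO_CLK_RATE opcode, the hl_info_args/hl_info_clk_rate field names and the header path are assumptions to be checked against include/uapi/misc/habanalabs.h, not text taken from the patch.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <misc/habanalabs.h>	/* installed UAPI header; path assumed */

/* Query the ASIC clock rate through the INFO IOCTL; 'fd' is an open
 * habanalabs device file descriptor. */
static int query_clk_rate(int fd)
{
	struct hl_info_clk_rate clk;
	struct hl_info_args args;

	memset(&clk, 0, sizeof(clk));
	memset(&args, 0, sizeof(args));
	args.op = HL_INFO_CLK_RATE;
	args.return_pointer = (uint64_t)(uintptr_t)&clk;
	args.return_size = sizeof(clk);

	if (ioctl(fd, HL_IOCTL_INFO, &args))
		return -1;

	printf("current %u MHz, max %u MHz\n",
	       clk.cur_clk_rate_mhz, clk.max_clk_rate_mhz);
	return 0;
}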
2019-11-21  habanalabs: Fix typos  (Tomer Tayar)
s/paerser/parser/ s/requeusted/requested/ s/an JOB/a JOB/ Signed-off-by: Tomer Tayar <ttayar@habana.ai> Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2019-11-21  Merge tag 'kvmarm-5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD  (Paolo Bonzini)
KVM/arm updates for Linux 5.5:
- Allow non-ISV data aborts to be reported to userspace
- Allow injection of data aborts from userspace
- Expose stolen time to guests
- GICv4 performance improvements
- vgic ITS emulation fixes
- Simplify FWB handling
- Enable halt pool counters
- Make the emulated timer PREEMPT_RT compliant
Conflicts: include/uapi/linux/kvm.h
2019-11-21  sched/vtime: Bring up complete kcpustat accessor  (Frederic Weisbecker)
Many callsites want to fetch the values of system, user, user_nice, guest or guest_nice kcpustat fields altogether or at least a pair of these. In that case calling kcpustat_field() for each requested field brings unnecessary overhead when we could fetch all of them in a row. So provide kcpustat_cpu_fetch() that fetches the whole kcpustat array in a vtime safe way under the same RCU and seqcount block. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wanpeng Li <wanpengli@tencent.com> Cc: Yauheni Kaliuta <yauheni.kaliuta@redhat.com> Link: https://lkml.kernel.org/r/20191121024430.19938-3-frederic@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
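A short kernel-side sketch of the intended usage, assuming the kcpustat_cpu_fetch(dst, cpu) form described above; the wrapper function is illustrative only.

#include <linux/kernel_stat.h>

/* Fetch a vtime-safe snapshot of all kcpustat fields for one CPU in a
 * single call instead of issuing one kcpustat_field() read per field. */
static u64 example_user_plus_system(int cpu)
{
	struct kernel_cpustat snap;

	kcpustat_cpu_fetch(&snap, cpu);

	return snap.cpustat[CPUTIME_USER] + snap.cpustat[CPUTIME_SYSTEM];
}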
2019-11-20  net: sfp: soft status and control support  (Russell King)
Add support for the soft status and control register, which allows TX_FAULT and RX_LOS to be monitored and TX_DISABLE to be set. We make use of this when the board does not support GPIOs for these signals. Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says: ==================== pull-request: bpf-next 2019-11-20 The following pull-request contains BPF updates for your *net-next* tree. We've added 81 non-merge commits during the last 17 day(s) which contain a total of 120 files changed, 4958 insertions(+), 1081 deletions(-). There are 3 trivial conflicts, resolve it by always taking the chunk from 196e8ca74886c433: <<<<<<< HEAD ======= void *bpf_map_area_mmapable_alloc(u64 size, int numa_node); >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 <<<<<<< HEAD void *bpf_map_area_alloc(u64 size, int numa_node) ======= static void *__bpf_map_area_alloc(u64 size, int numa_node, bool mmapable) >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 <<<<<<< HEAD if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) { ======= /* kmalloc()'ed memory can't be mmap()'ed */ if (!mmapable && size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) { >>>>>>> 196e8ca74886c433dcfc64a809707074b936aaf5 The main changes are: 1) Addition of BPF trampoline which works as a bridge between kernel functions, BPF programs and other BPF programs along with two new use cases: i) fentry/fexit BPF programs for tracing with practically zero overhead to call into BPF (as opposed to k[ret]probes) and ii) attachment of the former to networking related programs to see input/output of networking programs (covering xdpdump use case), from Alexei Starovoitov. 2) BPF array map mmap support and use in libbpf for global data maps; also a big batch of libbpf improvements, among others, support for reading bitfields in a relocatable manner (via libbpf's CO-RE helper API), from Andrii Nakryiko. 3) Extend s390x JIT with usage of relative long jumps and loads in order to lift the current 64/512k size limits on JITed BPF programs there, from Ilya Leoshkevich. 4) Add BPF audit support and emit messages upon successful prog load and unload in order to have a timeline of events, from Daniel Borkmann and Jiri Olsa. 5) Extension to libbpf and xdpsock sample programs to demo the shared umem mode (XDP_SHARED_UMEM) as well as RX-only and TX-only sockets, from Magnus Karlsson. 6) Several follow-up bug fixes for libbpf's auto-pinning code and a new API call named bpf_get_link_xdp_info() for retrieving the full set of prog IDs attached to XDP, from Toke Høiland-Jørgensen. 7) Add BTF support for array of int, array of struct and multidimensional arrays and enable it for skb->cb[] access in kfree_skb test, from Martin KaFai Lau. 8) Fix AF_XDP by using the correct number of channels from ethtool, from Luigi Rizzo. 9) Two fixes for BPF selftest to get rid of a hang in test_tc_tunnel and to avoid xdping to be run as standalone, from Jiri Benc. 10) Various BPF selftest fixes when run with latest LLVM trunk, from Yonghong Song. 11) Fix a memory leak in BPF fentry test run data, from Colin Ian King. 12) Various smaller misc cleanups and improvements mostly all over BPF selftests and samples, from Daniel T. Lee, Andre Guedes, Anders Roxell, Mao Wenan, Yue Haibing. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  ftrace: Return ENOTSUPP when DYNAMIC_FTRACE_WITH_DIRECT_CALLS is not configured  (Alexei Starovoitov)
When CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS is not set, it's best to have the stub functions return ENOTSUPP instead of ENODEV; ENODEV is already a valid error when ip is incorrect, so returning it would be indistinguishable from ftrace not being compiled in. Link: http://lkml.kernel.org/r/CAADnVQ+OzTikM9EhrfsC7NFsVYhATW1SVHxK64w3xn9qpk81pg@mail.gmail.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
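The stub pattern being described, sketched for the direct-call helpers; treat this as an illustration of the idea rather than the exact hunk.

#ifndef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/* Without the feature compiled in, return -ENOTSUPP so callers can tell
 * "not supported at all" apart from -ENODEV ("ip is not valid"). */
static inline int register_ftrace_direct(unsigned long ip, unsigned long addr)
{
	return -ENOTSUPP;
}

static inline int unregister_ftrace_direct(unsigned long ip, unsigned long addr)
{
	return -ENOTSUPP;
}
#endif /* !CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */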
2019-11-20  PCI/PM: Avoid exporting __pci_complete_power_transition()  (Rafael J. Wysocki)
Notice that radeon_set_suspend(), which is the only caller of __pci_complete_power_transition() outside of pci.c, really only cares about the pci_platform_power_transition() invoked by it, so export the latter instead of it, update the radeon driver to call pci_platform_power_transition() directly and make __pci_complete_power_transition() static. Code rearrangement, no intentional functional impact. Link: https://lore.kernel.org/r/1731661.ykamz2Tiuf@kreacher Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
2019-11-20  PCI/PM: Remove unused pci_driver.suspend_late() hook  (Bjorn Helgaas)
The struct pci_driver.suspend_late() hook is one of the legacy PCI power management callbacks, and there are no remaining users of it. Remove it. Link: https://lore.kernel.org/r/20191101204558.210235-7-helgaas@kernel.org Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-11-20  PCI/PM: Remove unused pci_driver.resume_early() hook  (Bjorn Helgaas)
The struct pci_driver.resume_early() hook is one of the legacy PCI power management callbacks, and there are no remaining users of it. Remove it. Link: https://lore.kernel.org/r/20191101204558.210235-6-helgaas@kernel.org Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-11-20  PCI/PM: Use pci_WARN() to include device information  (Bjorn Helgaas)
Add and use pci_WARN() wrappers so warnings include device information. Link: https://lore.kernel.org/r/20191017212851.54237-3-helgaas@kernel.org Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-11-20  bpf: Switch bpf_map_{area_alloc,area_mmapable_alloc}() to u64 size  (Daniel Borkmann)
Given we recently extended the original bpf_map_area_alloc() helper in commit fc9702273e2e ("bpf: Add mmap() support for BPF_MAP_TYPE_ARRAY"), we need to apply the same logic as in ff1c08e1f74b ("bpf: Change size to u64 for bpf_map_{area_alloc, charge_init}()"). To avoid conflicts, extend it for bpf-next. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-11-20  bpf: Emit audit messages upon successful prog load and unload  (Daniel Borkmann)
Allow for audit messages to be emitted upon BPF program load and unload for having a timeline of events. The load itself is in syscall context, so additional info about the process initiating the BPF prog creation can be logged and later directly correlated to the unload event. The only info really needed from BPF side is the globally unique prog ID where then audit user space tooling can query / dump all info needed about the specific BPF program right upon load event and enrich the record, thus these changes needed here can be kept small and non-intrusive to the core. Raw example output: # auditctl -D # auditctl -a always,exit -F arch=x86_64 -S bpf # ausearch --start recent -m 1334 [...] ---- time->Wed Nov 20 12:45:51 2019 type=PROCTITLE msg=audit(1574271951.590:8974): proctitle="./test_verifier" type=SYSCALL msg=audit(1574271951.590:8974): arch=c000003e syscall=321 success=yes exit=14 a0=5 a1=7ffe2d923e80 a2=78 a3=0 items=0 ppid=742 pid=949 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="test_verifier" exe="/root/bpf-next/tools/testing/selftests/bpf/test_verifier" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null) type=UNKNOWN[1334] msg=audit(1574271951.590:8974): auid=0 uid=0 gid=0 ses=2 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 pid=949 comm="test_verifier" exe="/root/bpf-next/tools/testing/selftests/bpf/test_verifier" prog-id=3260 event=LOAD ---- time->Wed Nov 20 12:45:51 2019 type=UNKNOWN[1334] msg=audit(1574271951.590:8975): prog-id=3260 event=UNLOAD ---- [...] Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20191120213816.8186-1-jolsa@kernel.org
2019-11-20  net: page_pool: add the possibility to sync DMA memory for device  (Lorenzo Bianconi)
Introduce the following parameters in order to add the possibility to sync DMA memory for device before putting allocated pages in the page_pool caches:
- PP_FLAG_DMA_SYNC_DEV: if set in page_pool_params flags, all pages that the driver gets from page_pool will be DMA-synced-for-device according to the length provided by the device driver. Please note DMA-sync-for-CPU is still the device driver's responsibility
- offset: DMA address offset where the DMA engine starts copying rx data
- max_len: maximum DMA memory size page_pool is allowed to flush. This is currently used in the __page_pool_alloc_pages_slow routine when pages are allocated from the page allocator
These parameters are supposed to be set by device drivers. This optimization reduces the length of the DMA-sync-for-device. The optimization is valid because pages are initially DMA-synced-for-device as defined via max_len. At RX time, the driver will perform a DMA-sync-for-CPU on the memory for the packet length. What is important is the memory occupied by packet payload, because this is the area the CPU is allowed to read and modify. As we don't track cache-lines written into by the CPU, simply use the packet payload length as dma_sync_size at page_pool recycle time. This also takes into account any tail-extend. Tested-by: Matteo Croce <mcroce@redhat.com> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com> Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
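A hedged sketch of a driver opting in to the new sync-for-device behaviour; the ring size, headroom value and function name are placeholders, not values from the patch.

#include <net/page_pool.h>
#include <linux/dma-direction.h>
#include <linux/numa.h>

static struct page_pool *example_create_rx_pool(struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 256,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.offset		= 256,			/* where the DMA engine starts writing rx data */
		.max_len	= PAGE_SIZE - 256,	/* largest area page_pool may sync for device */
	};

	return page_pool_create(&pp_params);
}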
2019-11-20  net: sched: pie: enable timestamp based delay calculation  (Gautam Ramakrishnan)
RFC 8033 suggests an alternative approach to calculate the queue delay in PIE by using a timestamp on every enqueued packet. This patch adds an implementation of that approach and sets it as the default method to calculate queue delay. The previous method (based on Little's law) to calculate queue delay is set as optional. Signed-off-by: Gautam Ramakrishnan <gautamramk@gmail.com> Signed-off-by: Leslie Monis <lesliemonis@gmail.com> Signed-off-by: Mohit P. Tahiliani <tahiliani@nitk.edu.in> Acked-by: Dave Taht <dave.taht@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  page_pool: Add API to update numa node  (Saeed Mahameed)
Add page_pool_update_nid() to be called by page pool consumers when they detect numa node changes. It will update the page pool nid value to start allocating from the new effective numa node. This is to mitigate page pool allocating pages from a wrong numa node, where the pool was originally allocated, and holding on to pages that belong to a different numa node, which causes performance degradation. For pages that are already being consumed and could be returned to the pool by the consumer, in next patch we will add a check per page to avoid recycling them back to the pool and return them to the page allocator. Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Acked-by: Jonathan Lemon <jonathan.lemon@gmail.com> Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  dma-direct: exclude dma_direct_map_resource from the min_low_pfn check  (Christoph Hellwig)
The valid memory address check in dma_capable only makes sense when mapping normal memory, not when using dma_map_resource to map a device resource. Add a new boolean argument to dma_capable to exclude that check for the dma_map_resource case. Fixes: b12d66278dd6 ("dma-direct: check for overflows on 32 bit DMA addresses") Reported-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
2019-11-20  dma-direct: avoid a forward declaration for phys_to_dma  (Christoph Hellwig)
Move dma_capable down a bit so that we don't need a forward declaration for phys_to_dma. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
2019-11-20  dma-direct: unify the dma_capable definitions  (Christoph Hellwig)
Currently each architecture that wants to override dma_to_phys and phys_to_dma also has to provide dma_capable. But there isn't really any good reason for that. powerpc and mips just have copies of the generic one minus the latest fix, and the arm one was the inspiration for said fix, but misses the bus_dma_mask handling. Make all architectures use the generic version instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
2019-11-20  dma-mapping: drop the dev argument to arch_sync_dma_for_*  (Christoph Hellwig)
These are pure cache maintenance routines, so drop the unused struct device argument. Signed-off-by: Christoph Hellwig <hch@lst.de> Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
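For reference, the resulting prototypes after the change look like this (a sketch of the post-change shape, not a quoted hunk):

#include <linux/types.h>
#include <linux/dma-direction.h>

/* Pure cache maintenance on a physical address range; no struct device. */
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir);
void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
			   enum dma_data_direction dir);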
2019-11-20  netfilter: nft_payload: add VLAN offload support  (Pablo Neira Ayuso)
Match on ethertype and set up protocol dependency. Check for protocol dependency before accessing the tci field. Allow to match on the encapsulated ethertype too. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  netfilter: nf_tables_offload: allow ethernet interface type only  (Pablo Neira Ayuso)
Hardware offload support at this stage assumes an ethernet device in place. The flow dissector provides the intermediate representation to express this selector, so extend it to allow storing the interface type. Flower does not use this, so skb_flow_dissect_meta() is not extended to match on this new field. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  netfilter: nf_tables: constify nft_reg_load{8, 16, 64}()  (Pablo Neira Ayuso)
This patch constifies the pointer to source register data that is passed as an input parameter. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-20  ALSA: pcm: Add card sync_irq field  (Takashi Iwai)
Many PCI and other drivers simply call snd_pcm_period_elapsed() from their interrupt handlers, so for them the sync_stop operation is just a call to synchronize_irq(). Instead of open-coding this call in each driver, introduce the common card->sync_irq field. When this field is set, PCM core performs synchronize_irq() for the sync-stop operation. Each driver just needs to copy its local IRQ number to card->sync_irq, and that's all we need. Link: https://lore.kernel.org/r/20191117085308.23915-8-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
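A hedged driver-side sketch of the adoption described above; the chip and handler names are placeholders for a real driver's objects.

/* In the driver's probe/IRQ setup path: record the IRQ number so that
 * PCM core can call synchronize_irq() on sync-stop for this card. */
if (request_irq(pci->irq, my_interrupt_handler, IRQF_SHARED,
		KBUILD_MODNAME, chip))
	return -EBUSY;

chip->irq = pci->irq;
card->sync_irq = chip->irq;	/* this is all the driver needs to add */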
2019-11-20  ALSA: pcm: Add the support for sync-stop operation  (Takashi Iwai)
The standard programming model of a PCM sound driver is to process snd_pcm_period_elapsed() from an interrupt handler. When a running stream is stopped, PCM core calls the trigger-STOP PCM ops, sets the stream state to SETUP, and moves on to the next step. This is performed in an atomic manner -- this could be called from the interrupt context, after all. The problem is that, if the stream goes further and reaches the CLOSE state immediately, the stream might still be being processed in snd_pcm_period_elapsed() in the interrupt context, and hits a NULL dereference. Such a crash happens because of the atomic operation, and we can't wait until the stream-stop finishes. For addressing such a problem, this commit adds a new PCM ops, sync_stop. This gets called at the appropriate places that need a sync with the stream-stop, i.e. at hw_params, prepare and hw_free. Some drivers already have a similar mechanism implemented locally, and we'll refactor the code later. Link: https://lore.kernel.org/r/20191117085308.23915-7-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-11-20  ALSA: pcm: Move PCM_RUNTIME_CHECK() macro into local header  (Takashi Iwai)
It should be used only in the PCM core code locally. Link: https://lore.kernel.org/r/20191117085308.23915-6-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2019-11-20  ALSA: pcm: Introduce managed buffer allocation mode  (Takashi Iwai)
This patch adds support for automatically allocating and freeing PCM buffers, the so-called "managed buffer allocation" mode. It's set up via new PCM helpers, snd_pcm_set_managed_buffer() and snd_pcm_set_managed_buffer_all(), both of which correspond to the existing preallocator helpers, snd_pcm_lib_preallocate_pages() and snd_pcm_lib_preallocate_pages_for_all(). When the new helper is used, it not only performs the pre-allocation of buffers, but also calls snd_pcm_lib_malloc_pages() before the PCM hw_params ops and snd_lib_pcm_free() after the PCM hw_free ops inside PCM core. This allows drivers to drop the explicit calls of the memory allocation / release functions, which amounts to a good deal of code reduction by the end of this patch series. When the PCM substream is set to the managed buffer allocation mode, the managed_buffer_alloc flag is set in the substream object. Since some drivers want to know when a buffer is newly allocated or re-allocated at the hw_params callback (e.g. to set up additional stuff for the given buffer only at allocation time), PCM core now turns on the buffer_changed flag when the buffer has changed. The standard conversion to the new API is straightforward:
- Replace snd_pcm_lib_preallocate*() calls with the corresponding snd_pcm_set_managed_buffer*(); the arguments should be unchanged
- Drop superfluous snd_pcm_lib_malloc() and snd_pcm_lib_free() calls; the check of the snd_pcm_lib_malloc() return value should be replaced with a check of the runtime->buffer_changed flag
- If hw_params or hw_free becomes empty, drop them from the PCM ops
Link: https://lore.kernel.org/r/20191117085308.23915-2-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
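A hedged before/after sketch of the conversion; the sizes and the pcm/pci variables are placeholders.

/* Before: explicit preallocation, plus snd_pcm_lib_malloc_pages() /
 * snd_pcm_lib_free_pages() calls in the driver's hw_params / hw_free. */
snd_pcm_lib_preallocate_pages_for_all(pcm, SNDRV_DMA_TYPE_DEV,
				      &pci->dev, 64 * 1024, 512 * 1024);

/* After: same arguments, but PCM core now allocates before hw_params and
 * frees after hw_free on the driver's behalf; the driver checks
 * runtime->buffer_changed instead of the snd_pcm_lib_malloc_pages() return. */
snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
			       &pci->dev, 64 * 1024, 512 * 1024);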
2019-11-20  ASoC: core: add SND_SOC_BYTES_E  (Tzung-Bi Shih)
Add SND_SOC_BYTES_E to accept getter and putter. Signed-off-by: Tzung-Bi Shih <tzungbi@google.com> Link: https://lore.kernel.org/r/20191120060844.224607-2-tzungbi@google.com Signed-off-by: Mark Brown <broonie@kernel.org>
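A hypothetical control definition using the new macro, assuming a SND_SOC_BYTES_E(name, base, num_regs, get, put) argument order; the register and handler names are made up.

static const struct snd_kcontrol_new my_component_controls[] = {
	/* five registers worth of coefficient data with custom accessors */
	SND_SOC_BYTES_E("Filter Coefficients", MY_COEF_BASE_REG, 5,
			my_coef_get, my_coef_put),
};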
2019-11-20  ASoC: add control components management  (Jaroslav Kysela)
This ASCII string can carry additional information about soundcard components or configuration. Add the possibility to set this string via the ASoC card. Signed-off-by: Jaroslav Kysela <perex@perex.cz> Cc: Mark Brown <broonie@kernel.org> Link: https://lore.kernel.org/r/20191119174933.25526-1-perex@perex.cz Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-20  PCI: of: Add inbound resource parsing to helpers  (Rob Herring)
Extend devm_of_pci_get_host_bridge_resources() and pci_parse_request_of_pci_ranges() helpers to also parse the inbound addresses from DT 'dma-ranges' and populate a resource list with the translated addresses. This will help ensure 'dma-ranges' is always parsed in a consistent way. Tested-by: Srinath Mannam <srinath.mannam@broadcom.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@bootlin.com> # for AArdvark Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Srinath Mannam <srinath.mannam@broadcom.com> Reviewed-by: Andrew Murray <andrew.murray@arm.com> Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Jingoo Han <jingoohan1@gmail.com> Cc: Gustavo Pimentel <gustavo.pimentel@synopsys.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com> Cc: Will Deacon <will@kernel.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Toan Le <toan@os.amperecomputing.com> Cc: Ley Foon Tan <lftan@altera.com> Cc: Tom Joseph <tjoseph@cadence.com> Cc: Ray Jui <rjui@broadcom.com> Cc: Scott Branden <sbranden@broadcom.com> Cc: bcm-kernel-feedback-list@broadcom.com Cc: Ryder Lee <ryder.lee@mediatek.com> Cc: Karthikeyan Mitran <m.karthikeyan@mobiveil.co.in> Cc: Hou Zhiqiang <Zhiqiang.Hou@nxp.com> Cc: Simon Horman <horms@verge.net.au> Cc: Shawn Lin <shawn.lin@rock-chips.com> Cc: Heiko Stuebner <heiko@sntech.de> Cc: Michal Simek <michal.simek@xilinx.com> Cc: rfi@lists.rocketboards.org Cc: linux-mediatek@lists.infradead.org Cc: linux-renesas-soc@vger.kernel.org Cc: linux-rockchip@lists.infradead.org
2019-11-20  PCI: vmd: Add device id for VMD device 8086:9A0B  (Jon Derrick)
This patch adds support for this VMD device which supports the bus restriction mode. Signed-off-by: Jon Derrick <jonathan.derrick@intel.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
2019-11-20  Merge tag 'irqchip-5.5' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core  (Thomas Gleixner)
Pull irqchip updates from Marc Zyngier:
- Qualcomm PDC wakeup interrupt support
- Layerscape external IRQ support
- Broadcom bcm7038 PM and wakeup support
- Ingenic driver cleanup and modernization
- GICv3 ITS preparation for GICv4.1 updates
- GICv4 fixes
- Various cleanups
2019-11-20  firmware: xilinx: Add SDIO Tap Delay nodes  (Manish Narani)
Add tap delay nodes for setting SDIO Tap Delays on ZynqMP platform. Signed-off-by: Manish Narani <manish.narani@xilinx.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2019-11-20  cpuidle: Pass exit latency limit to cpuidle_use_deepest_state()  (Daniel Lezcano)
Modify cpuidle_use_deepest_state() to take an additional exit latency limit argument to be passed to find_deepest_idle_state() and make cpuidle_idle_call() pass dev->forced_idle_latency_limit_ns to it for forced idle. Suggested-by: Rafael J. Wysocki <rafael@kernel.org> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> [ rjw: Rebase and rearrange code, subject & changelog ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2019-11-20  cpuidle: Allow idle injection to apply exit latency limit  (Daniel Lezcano)
In some cases it may be useful to specify an exit latency limit for the idle state to be used during CPU idle time injection. Instead of duplicating the information in struct cpuidle_device or propagating the latency limit in the call stack, replace the use_deepest_state field with forced_latency_limit_ns to represent that limit, so that the deepest idle state with exit latency within that limit is forced (i.e. no governors) when it is set. A zero exit latency limit for forced idle means to use governors in the usual way (analogous to use_deepest_state equal to "false" before this change). Additionally, add play_idle_precise() taking two arguments, the duration of forced idle and the idle state exit latency limit, both in nanoseconds, and redefine play_idle() as a wrapper around that new function. This change is preparatory, no functional impact is expected. Suggested-by: Rafael J. Wysocki <rafael@kernel.org> Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org> [ rjw: Subject, changelog, cpuidle_use_deepest_state() kerneldoc, whitespace ] Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
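A sketch of the relationship between the old and new helpers described above; close to, but not necessarily identical with, the actual wrapper.

#include <linux/cpu.h>
#include <linux/ktime.h>
#include <linux/kernel.h>

/* play_idle() keeps its old behaviour: force idle for the given duration
 * with no exit latency constraint, so the deepest state may be used. */
static inline void play_idle(unsigned long duration_us)
{
	play_idle_precise(duration_us * NSEC_PER_USEC, U64_MAX);
}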
2019-11-20  futex: Add mutex around futex exit  (Thomas Gleixner)
The mutex will be used in subsequent changes to replace the busy looping of a waiter when the futex owner is currently executing the exit cleanup to prevent a potential live lock. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.845798895@linutronix.de
2019-11-20  futex: Mark the begin of futex exit explicitly  (Thomas Gleixner)
Instead of relying on PF_EXITING use an explicit state for the futex exit and set it in the futex exit function. This moves the smp barrier and the lock/unlock serialization into the futex code. As with the DEAD state this is restricted to the exit path as exec continues to use the same task struct. This allows to simplify that logic in a next step. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.539409004@linutronix.de
2019-11-20  futex: Split futex_mm_release() for exit/exec  (Thomas Gleixner)
To allow separate handling of the futex exit state in the futex exit code for exit and exec, split futex_mm_release() into two functions and invoke them from the corresponding exit/exec_mm_release() callsites. Preparatory only, no functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.332094221@linutronix.de
2019-11-20  exit/exec: Seperate mm_release()  (Thomas Gleixner)
mm_release() contains the futex exit handling. mm_release() is called from do_exit()->exit_mm() and from exec()->exec_mm(). In the exit_mm() case PF_EXITING and the futex state is updated. In the exec_mm() case these states are not touched. As the futex exit code needs further protections against exit races, this needs to be split into two functions. Preparatory only, no functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.240518241@linutronix.de
2019-11-20  futex: Replace PF_EXITPIDONE with a state  (Thomas Gleixner)
The futex exit handling relies on PF_ flags. That's suboptimal as it requires a smp_mb() and an ugly lock/unlock of the exiting tasks pi_lock in the middle of do_exit() to enforce the observability of PF_EXITING in the futex code. Add a futex_state member to task_struct and convert the PF_EXITPIDONE logic over to the new state. The PF_EXITING dependency will be cleaned up in a later step. This prepares for handling various futex exit issues later. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.149449274@linutronix.de
2019-11-20  futex: Move futex exit handling into futex code  (Thomas Gleixner)
The futex exit handling is #ifdeffed into mm_release() which is not pretty to begin with. But upcoming changes to address futex exit races need to add more functionality to this exit code. Split it out into a function, move it into futex code and make the various futex exit functions static. Preparatory only and no functional change. Folded build fix from Borislav. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20191106224556.049705556@linutronix.de
2019-11-19  scsi: target: iscsi: Wait for all commands to finish before freeing a session  (Bart Van Assche)
The iSCSI target driver is the only target driver that does not wait for ongoing commands to finish before freeing a session. Make the iSCSI target driver wait for ongoing commands to finish before freeing a session. This patch fixes the following KASAN complaint: BUG: KASAN: use-after-free in __lock_acquire+0xb1a/0x2710 Read of size 8 at addr ffff8881154eca70 by task kworker/0:2/247 CPU: 0 PID: 247 Comm: kworker/0:2 Not tainted 5.4.0-rc1-dbg+ #6 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014 Workqueue: target_completion target_complete_ok_work [target_core_mod] Call Trace: dump_stack+0x8a/0xd6 print_address_description.constprop.0+0x40/0x60 __kasan_report.cold+0x1b/0x33 kasan_report+0x16/0x20 __asan_load8+0x58/0x90 __lock_acquire+0xb1a/0x2710 lock_acquire+0xd3/0x200 _raw_spin_lock_irqsave+0x43/0x60 target_release_cmd_kref+0x162/0x7f0 [target_core_mod] target_put_sess_cmd+0x2e/0x40 [target_core_mod] lio_check_stop_free+0x12/0x20 [iscsi_target_mod] transport_cmd_check_stop_to_fabric+0xd8/0xe0 [target_core_mod] target_complete_ok_work+0x1b0/0x790 [target_core_mod] process_one_work+0x549/0xa40 worker_thread+0x7a/0x5d0 kthread+0x1bc/0x210 ret_from_fork+0x24/0x30 Allocated by task 889: save_stack+0x23/0x90 __kasan_kmalloc.constprop.0+0xcf/0xe0 kasan_slab_alloc+0x12/0x20 kmem_cache_alloc+0xf6/0x360 transport_alloc_session+0x29/0x80 [target_core_mod] iscsi_target_login_thread+0xcd6/0x18f0 [iscsi_target_mod] kthread+0x1bc/0x210 ret_from_fork+0x24/0x30 Freed by task 1025: save_stack+0x23/0x90 __kasan_slab_free+0x13a/0x190 kasan_slab_free+0x12/0x20 kmem_cache_free+0x146/0x400 transport_free_session+0x179/0x2f0 [target_core_mod] transport_deregister_session+0x130/0x180 [target_core_mod] iscsit_close_session+0x12c/0x350 [iscsi_target_mod] iscsit_logout_post_handler+0x136/0x380 [iscsi_target_mod] iscsit_response_queue+0x8de/0xbe0 [iscsi_target_mod] iscsi_target_tx_thread+0x27f/0x370 [iscsi_target_mod] kthread+0x1bc/0x210 ret_from_fork+0x24/0x30 The buggy address belongs to the object at ffff8881154ec9c0 which belongs to the cache se_sess_cache of size 352 The buggy address is located 176 bytes inside of 352-byte region [ffff8881154ec9c0, ffff8881154ecb20) The buggy address belongs to the page: page:ffffea0004553b00 refcount:1 mapcount:0 mapping:ffff888101755400 index:0x0 compound_mapcount: 0 flags: 0x2fff000000010200(slab|head) raw: 2fff000000010200 dead000000000100 dead000000000122 ffff888101755400 raw: 0000000000000000 0000000080130013 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff8881154ec900: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff8881154ec980: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb >ffff8881154eca00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff8881154eca80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff8881154ecb00: fb fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc Cc: Mike Christie <mchristi@redhat.com> Link: https://lore.kernel.org/r/20191113220508.198257-3-bvanassche@acm.org Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com> Signed-off-by: Bart Van Assche <bvanassche@acm.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2019-11-19  net/tls: enable sk_msg redirect to tls socket egress  (Willem de Bruijn)
Bring back tls_sw_sendpage_locked. sk_msg redirection into a socket with TLS_TX takes the following path: tcp_bpf_sendmsg_redir tcp_bpf_push_locked tcp_bpf_push kernel_sendpage_locked sock->ops->sendpage_locked Also update the flags test in tls_sw_sendpage_locked to allow flag MSG_NO_SHARED_FRAGS. bpf_tcp_sendmsg sets this. Link: https://lore.kernel.org/netdev/CA+FuTSdaAawmZ2N8nfDDKu3XLpXBbMtcCT0q4FntDD2gn8ASUw@mail.gmail.com/T/#t Link: https://github.com/wdebruij/kerneltools/commits/icept.2 Fixes: 0608c69c9a80 ("bpf: sk_msg, sock{map|hash} redirect through ULP") Fixes: f3de19af0f5b ("Revert \"net/tls: remove unused function tls_sw_sendpage_locked\"") Signed-off-by: Willem de Bruijn <willemb@google.com> Acked-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-11-19  ASoC: soc-component: tidyup snd_soc_pcm_component_new/free() parameter  (Kuninori Morimoto)
This patch uses rtd instead of pcm as the snd_soc_pcm_component_new/free() parameter. This prepares for a dai_link removal bug fix in topology. Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Link: https://lore.kernel.org/r/87pnhqx89j.wl-kuninori.morimoto.gx@renesas.com Signed-off-by: Mark Brown <broonie@kernel.org>
2019-11-19  libnvdimm: Move nvdimm_bus_attribute_group to device_type  (Dan Williams)
A 'struct device_type' instance can carry default attributes for the device. Use this facility to remove the export of nvdimm_bus_attribute_group and put the responsibility on the core rather than leaf implementations to define this attribute. Cc: Ira Weiny <ira.weiny@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Oliver O'Halloran" <oohall@gmail.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Link: https://lore.kernel.org/r/157309903815.1582359.6418211876315050283.stgit@dwillia2-desk3.amr.corp.intel.com
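The mechanism relied on here (and in the following four libnvdimm commits), sketched generically; the group array and release callback names are placeholders.

/* Attribute groups hung off a device_type apply to every device of that
 * type, so the core can provide them and leaf implementations no longer
 * need to export or wire up the group themselves. */
static const struct attribute_group *example_bus_attribute_groups[] = {
	&nvdimm_bus_attribute_group,
	NULL,
};

static const struct device_type example_nvdimm_bus_dev_type = {
	.groups  = example_bus_attribute_groups,
	.release = example_bus_release,	/* placeholder release callback */
};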
2019-11-19  libnvdimm: Move nvdimm_attribute_group to device_type  (Dan Williams)
A 'struct device_type' instance can carry default attributes for the device. Use this facility to remove the export of nvdimm_attribute_group and put the responsibility on the core rather than leaf implementations to define this attribute. Cc: Ira Weiny <ira.weiny@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Oliver O'Halloran" <oohall@gmail.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Link: https://lore.kernel.org/r/157309903201.1582359.10966209746585062329.stgit@dwillia2-desk3.amr.corp.intel.com
2019-11-19  libnvdimm: Move nd_mapping_attribute_group to device_type  (Dan Williams)
A 'struct device_type' instance can carry default attributes for the device. Use this facility to remove the export of nd_mapping_attribute_group and put the responsibility on the core rather than leaf implementations to define this attribute. Cc: Ira Weiny <ira.weiny@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Oliver O'Halloran" <oohall@gmail.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Link: https://lore.kernel.org/r/157309902686.1582359.6749533709859492704.stgit@dwillia2-desk3.amr.corp.intel.com
2019-11-19  libnvdimm: Move nd_region_attribute_group to device_type  (Dan Williams)
A 'struct device_type' instance can carry default attributes for the device. Use this facility to remove the export of nd_region_attribute_group and put the responsibility on the core rather than leaf implementations to define this attribute. Cc: Ira Weiny <ira.weiny@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Oliver O'Halloran" <oohall@gmail.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Link: https://lore.kernel.org/r/157309902169.1582359.16828508538444551337.stgit@dwillia2-desk3.amr.corp.intel.com
2019-11-19  libnvdimm: Move nd_numa_attribute_group to device_type  (Dan Williams)
A 'struct device_type' instance can carry default attributes for the device. Use this facility to remove the export of nd_numa_attribute_group and put the responsibility on the core rather than leaf implementations to define this attribute. Cc: Ira Weiny <ira.weiny@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Oliver O'Halloran" <oohall@gmail.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Link: https://lore.kernel.org/r/157401269537.43284.14411189404186877352.stgit@dwillia2-desk3.amr.corp.intel.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2019-11-19  platform/chrome: wilco_ec: Add keyboard backlight LED support  (Daniel Campello)
The EC is in charge of controlling the keyboard backlight on the Wilco platform. We expose a standard LED class device named platform::kbd_backlight. Since the EC will never change the backlight level of its own accord, we don't need to implement a brightness_get() method. Signed-off-by: Nick Crews <ncrews@chromium.org> Signed-off-by: Daniel Campello <campello@chromium.org> Reviewed-by: Daniel Campello <campello@chromium.org> Signed-off-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
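A hedged sketch of registering such an LED class device; values and helper names are illustrative, not the driver's actual code.

static int example_kbbl_register(struct device *dev)
{
	static struct led_classdev kbbl_led = {
		.name			 = "platform::kbd_backlight",
		.max_brightness		 = 100,
		/* writes the requested level to the EC */
		.brightness_set_blocking = example_kbbl_set,
		/* no brightness_get(): the EC never changes the level itself */
	};

	return devm_led_classdev_register(dev, &kbbl_led);
}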