path: root/include
2017-06-20  [media] cec: add CEC_CAP_NEEDS_HPD  (Hans Verkuil)
Add a new capability CEC_CAP_NEEDS_HPD. If this capability is set then the hardware can only use CEC if the HDMI Hotplug Detect pin is high. Such hardware cannot handle the corner case in the CEC specification where it is possible to transmit messages even if no hotplug signal is present (needed for some displays that turn off the HPD when in standby, but still have CEC enabled). Typically hardware that needs this capability has the HPD wired to the CEC block, often to a 'power' or 'active' pin. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
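For illustration, a minimal sketch (with hypothetical "mydrv" names and priv pointer) of how a host driver whose CEC block is gated by HPD might advertise the new capability when allocating its adapter:

    #include <media/cec.h>

    /* Sketch only: a CEC block gated by the HPD signal ORs in CEC_CAP_NEEDS_HPD. */
    adap = cec_allocate_adapter(&mydrv_cec_ops, priv, "mydrv",
                                CEC_CAP_LOG_ADDRS | CEC_CAP_TRANSMIT |
                                CEC_CAP_NEEDS_HPD, CEC_MAX_LOG_ADDRS);
    if (IS_ERR(adap))
        return PTR_ERR(adap);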
2017-06-20  [media] cec: add cec_transmit_attempt_done helper function  (Hans Verkuil)
A simpler variant of cec_transmit_done to be used where the HW does just a single attempt at a transmit. So if the status indicates an error, then the corresponding error count will always be 1 and this function figures that out based on the status argument. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
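A hedged sketch of how a driver's transmit-done interrupt handler might use the helper, assuming the single-attempt semantics described above; the MYHW_* register bits and mydrv names are hypothetical:

    #include <media/cec.h>

    static irqreturn_t mydrv_tx_irq(int irq, void *data)
    {
        struct mydrv *drv = data;
        u32 stat = readl(drv->base + MYHW_TX_STATUS);   /* hypothetical register */

        /* One call per hardware attempt; the core derives the error counts. */
        if (stat & MYHW_TX_OK)
            cec_transmit_attempt_done(drv->adap, CEC_TX_STATUS_OK);
        else if (stat & MYHW_TX_NACK)
            cec_transmit_attempt_done(drv->adap, CEC_TX_STATUS_NACK);
        else if (stat & MYHW_TX_ARB_LOST)
            cec_transmit_attempt_done(drv->adap, CEC_TX_STATUS_ARB_LOST);
        else
            cec_transmit_attempt_done(drv->adap, CEC_TX_STATUS_ERROR);

        return IRQ_HANDLED;
    }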
2017-06-20  [media] cec: add cec_phys_addr_invalidate() helper function  (Hans Verkuil)
Simplifies setting the physical address to CEC_PHYS_ADDR_INVALID. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Hans Verkuil <hansverk@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
2017-06-20  [media] cec: add cec_s_phys_addr_from_edid helper function  (Hans Verkuil)
This function simplifies the integration of CEC in DRM drivers. Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com> Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
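A hedged sketch of the intended call site in a DRM driver's hotplug/EDID path (names other than the helper itself are illustrative):

    /* After (re)reading the EDID; passing NULL on disconnect invalidates
     * the CEC physical address again. */
    struct edid *edid = drm_get_edid(connector, ddc_adapter);

    cec_s_phys_addr_from_edid(priv->cec_adap, edid);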
2017-06-20  xen-swiotlb: consolidate xen_swiotlb_dma_ops  (Christoph Hellwig)
ARM and x86 had duplicated versions of the dma_ops structure; the only difference is that x86 hasn't wired up the set_dma_mask, mmap, and get_sgtable ops yet. On x86 all of them are identical to the generic version, so they aren't needed but are harmless. All the symbols used only for xen_swiotlb_dma_ops can now be marked static as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
2017-06-20  time: Fix CLOCK_MONOTONIC_RAW sub-nanosecond accounting  (John Stultz)
Due to how the MONOTONIC_RAW accumulation logic was handled, there is the potential for a 1ns discontinuity when we do accumulations. This small discontinuity has for the most part gone unnoticed, but since ARM64 enabled CLOCK_MONOTONIC_RAW in their vDSO clock_gettime implementation, we've seen failures with the inconsistency-check test in kselftest. This patch addresses the issue by using the same sub-ns accumulation handling that CLOCK_MONOTONIC uses, which avoids the issue for in-kernel users. Since the ARM64 vDSO implementation has its own clock_gettime calculation logic, this patch reduces the frequency of errors, but failures are still seen. The ARM64 vDSO will need to be updated to include the sub-nanosecond xtime_nsec values in its calculation for this issue to be completely fixed. Signed-off-by: John Stultz <john.stultz@linaro.org> Tested-by: Daniel Mentz <danielmentz@google.com> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Kevin Brodsky <kevin.brodsky@arm.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Stephen Boyd <stephen.boyd@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Cc: stable <stable@vger.kernel.org> # 4.8+ Cc: Miroslav Lichvar <mlichvar@redhat.com> Link: http://lkml.kernel.org/r/1496965462-20003-3-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2017-06-20  time: Fix clock->read(clock) race around clocksource changes  (John Stultz)
In tests which exercise switching of clocksources, a NULL pointer dereference can be observed on ARM64 platforms in the clocksource read() function: u64 clocksource_mmio_readl_down(struct clocksource *c) { return ~(u64)readl_relaxed(to_mmio_clksrc(c)->reg) & c->mask; } This is called from the core timekeeping code via: cycle_now = tkr->read(tkr->clock); tkr->read is the cached tkr->clock->read() function pointer. When the clocksource is changed then tkr->clock and tkr->read are updated sequentially. The code above results in a sequential load operation of tkr->read and tkr->clock as well. If the store to tkr->clock hits between the loads of tkr->read and tkr->clock, then the old read() function is called with the new clock pointer. As a consequence the read() function dereferences a different data structure and the resulting 'reg' pointer can point anywhere, including NULL. This problem was introduced when the timekeeping code was switched over to use struct tk_read_base. Before that, it was theoretically possible as well when the compiler decided to reload clock in the code sequence: now = tk->clock->read(tk->clock); Add a helper function which avoids the issue by reading tk_read_base->clock once into a local variable clk and then issuing the read function via clk->read(clk). This guarantees that the read() function always gets the proper clocksource pointer handed in. Since there is now no use for the tkr.read pointer, this patch also removes it, and to address stopping the fast timekeeper during suspend/resume, it introduces a dummy clocksource to use rather than just a dummy read function. Signed-off-by: John Stultz <john.stultz@linaro.org> Acked-by: Ingo Molnar <mingo@kernel.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Richard Cochran <richardcochran@gmail.com> Cc: Stephen Boyd <stephen.boyd@linaro.org> Cc: stable <stable@vger.kernel.org> Cc: Miroslav Lichvar <mlichvar@redhat.com> Cc: Daniel Mentz <danielmentz@google.com> Link: http://lkml.kernel.org/r/1496965462-20003-2-git-send-email-john.stultz@linaro.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
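A hedged sketch of the kind of helper described above (the merged patch may differ in detail): load the clocksource pointer once and call through that same snapshot:

    static inline u64 tk_clock_read(const struct tk_read_base *tkr)
    {
        struct clocksource *clock = READ_ONCE(tkr->clock);

        /* Always dereference the same pointer we call through. */
        return clock->read(clock);
    }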
2017-06-20  mmc: tmio: improve checkpatch cleanness  (Simon Horman)
Trivial updates to improve checkpatch cleanness. Signed-off-by: Simon Horman <horms+renesas@verge.net.au> Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2017-06-20  mmc: slot-gpio: Add support to enable irq wake on cd_irq  (Adrian Hunter)
Add host capability MMC_CAP_CD_WAKE to enable irq wake on the card detect irq. Signed-off-by: Adrian Hunter <adrian.hunter@intel.com> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2017-06-20  mmc: core: Remove MMC_CAP2_HC_ERASE_SZ  (Ulf Hansson)
The MMC_CAP2_HC_ERASE_SZ is used only by a few mmc host drivers. Its intent is to enable eMMC's high-capacity erase size, as to improve the behaviour of the erase operations. We should strive to avoid software configuration options that aren't necessary, but instead deploy common behaviours. For these reasons, let's remove the capability bit for MMC_CAP2_HC_ERASE_SZ and make it the default behaviour. Note that this change doesn't affect eMMCs supporting trim/discard, because these commands operate on sectors and take precedence over erase commands. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Acked-by: Adrian Hunter <adrian.hunter@intel.com> Reviewed-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Shawn Lin <shawn.lin@rock-chips.com> Tested-by: Shawn Lin <shawn.lin@rock-chips.com>
2017-06-20  mmc: use proper name for the R-Car SoC  (Wolfram Sang)
It is 'R-Car', not 'RCar'. No code or binding changes, only descriptive text. Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Acked-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2017-06-20  mmc: core: Allocate per-request data using the block layer core  (Linus Walleij)
The mmc_queue_req is a per-request state container the MMC core uses to carry bounce buffers, pointers to asynchronous requests and so on. Currently it is allocated as a static array of objects; as a request comes in, a mmc_queue_req is assigned to it and used during the lifetime of the request. This is backwards compared to how other block layer drivers work: they usually let the block core provide a per-request struct that gets allocated right behind the struct request, and which can be obtained using the blk_mq_rq_to_pdu() helper. (The _mq_ infix in this function name is misleading: it is used by both the old and the MQ block layer.) The per-request struct gets allocated to the size stored in the queue variable .cmd_size, initialized using the .init_rq_fn() and cleaned up using .exit_rq_fn(). The block layer code makes the MMC core rely on this mechanism to allocate the per-request mmc_queue_req state container. Doing this makes a lot of complicated queue handling go away. We only need to keep the .qcnt that keeps count of how many requests are currently being processed by the MMC layer. The MQ block layer will also replace this once we transition to it. Doing this refactoring is necessary to move the ioctl() operations into custom block layer requests tagged with REQ_OP_DRV_[IN|OUT] instead of the custom code using the BigMMCHostLock that we have today: those require that per-request data be obtainable easily from a request after creating a custom request with e.g.: struct request *rq = blk_get_request(q, REQ_OP_DRV_IN, __GFP_RECLAIM); struct mmc_queue_req *mq_rq = req_to_mq_rq(rq); And this is not possible with the current construction, as the request is not immediately assigned the per-request state container; instead it gets assigned when the request finally enters the MMC queue, which is way too late for custom requests. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> [Ulf: Folded in the fix to drop a call to blk_cleanup_queue()] Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Heiner Kallweit <hkallweit1@gmail.com>
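A hedged sketch of the block-core mechanism the text relies on, with hypothetical driver names (this is not the MMC patch itself): the queue is told how many extra bytes to allocate behind each struct request, and the per-request data is then retrieved with blk_mq_rq_to_pdu():

    /* During queue setup on the legacy request path of that era
     * (queue_lock and error handling omitted for brevity): */
    q = blk_alloc_queue(GFP_KERNEL);
    q->request_fn = my_request_fn;
    q->init_rq_fn = my_init_request;            /* set up per-request data */
    q->exit_rq_fn = my_exit_request;            /* tear it down again      */
    q->cmd_size   = sizeof(struct my_req_data);
    ret = blk_init_allocated_queue(q);

    /* Later, for any request, including custom REQ_OP_DRV_IN/OUT ones: */
    struct my_req_data *d = blk_mq_rq_to_pdu(rq);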
2017-06-20  mmc: core: Delete bounce buffer Kconfig option  (Linus Walleij)
This option is activated by all multiplatform configs and whatnot, so we almost always have it turned on, and the memory it saves is negligible, even more so moving forward. The actual bounce buffer only gets allocated when used; the only thing the ifdefs are saving is a little bit of code. It is highly improper to have this as a Kconfig option that gets turned on by Kconfig; make this a pure runtime thing and let the host decide whether we use bounce buffers. We add a new property "disable_bounce" to the host struct. Notice that mmc_queue_calc_bouncesz() already disables the bounce buffers if host->max_segs != 1, so any arch that has a maximum number of segments higher than 1 will have bounce buffers disabled. The option CONFIG_MMC_BLOCK_BOUNCE is default y, so the majority of platforms in the kernel already have it on, and it then gets turned off at runtime since most of these have a host->max_segs > 1. The few exceptions that have host->max_segs == 1 and still turn off the bounce buffering are those that disable it in their defconfig. Those are the following:
arch/arm/configs/colibri_pxa300_defconfig
arch/arm/configs/zeus_defconfig
- Uses MMC_PXA, drivers/mmc/host/pxamci.c
- Sets host->max_segs = NR_SG, which is 1
- This needs its bounce buffer deactivated, so we set host->disable_bounce to true in the host driver
arch/arm/configs/davinci_all_defconfig
- Uses MMC_DAVINCI, drivers/mmc/host/davinci_mmc.c
- This driver sets host->max_segs to MAX_NR_SG, which is 16
- That means this driver disables bounce buffers anyway
- No special action needed for this platform
arch/arm/configs/lpc32xx_defconfig
arch/arm/configs/nhk8815_defconfig
arch/arm/configs/u300_defconfig
- Uses MMC_ARMMMCI, drivers/mmc/host/mmci.[c|h]
- This driver by default sets host->max_segs to NR_SG, which is 128, unless a DMA engine is used, and in that case the number of segments is also > 1
- That means this driver already disables bounce buffers
- No special action needed for these platforms
arch/arm/configs/sama5_defconfig
- Uses MMC_SDHCI, MMC_SDHCI_PLTFM, MMC_SDHCI_OF_AT91, MMC_ATMELMCI
- Uses drivers/mmc/host/sdhci.c
- Normally sets host->max_segs to SDHCI_MAX_SEGS, which is 128, and thus disables bounce buffers
- Sets host->max_segs to 1 if SDHCI_USE_SDMA is set
- SDHCI_USE_SDMA is only set by SDHCI on PCI adapters
- That means that for this platform bounce buffers are already disabled at runtime
- No special action needed for this platform
arch/blackfin/configs/CM-BF533_defconfig
arch/blackfin/configs/CM-BF537E_defconfig
- Uses MMC_SPI (a simple MMC card connected on SPI pins)
- Uses drivers/mmc/host/mmc_spi.c
- Sets host->max_segs to MMC_SPI_BLOCKSATONCE, which is 128
- That means this platform already disables bounce buffers at runtime
- No special action needed for these platforms
arch/mips/configs/cavium_octeon_defconfig
- Uses MMC_CAVIUM_OCTEON, drivers/mmc/host/cavium.c
- Sets host->max_segs to 16 or 1
- Setting host->disable_bounce to be sure for the 1 case
arch/mips/configs/qi_lb60_defconfig
- Uses MMC_JZ4740, drivers/mmc/host/jz4740_mmc.c
- This sets host->max_segs to 128, so bounce buffers are already runtime disabled
- No action needed for this platform
It would be interesting to come up with a list of the platforms that actually end up using bounce buffers. I have not been able to infer such a list, but it occurs when host->max_segs == 1 and the bounce buffering is not explicitly disabled. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2017-06-20  mmc: sdio: Add API to manage SDIO IRQs from a workqueue  (Ulf Hansson)
For hosts not supporting MMC_CAP2_SDIO_IRQ_NOTHREAD but MMC_CAP_SDIO_IRQ, the SDIO IRQs are processed from a dedicated kernel thread. For these cases, the host calls mmc_signal_sdio_irq() from its ISR to signal a new SDIO IRQ. Signaling an SDIO IRQ makes the host's ->enable_sdio_irq() callback be invoked to temporarily disable the IRQs, before the kernel thread is woken up to process it. When processing of the IRQs is completed, they are re-enabled by the kernel thread, again via invoking the host's ->enable_sdio_irq(). The observation from this is that the execution path is unnecessarily complex, as the host driver already knows that it needs to temporarily disable the IRQs before signaling a new one. Moreover, replacing the kernel thread with a work/workqueue would not only greatly simplify the code, but also make it more robust. To address the above problems, let's continue to build upon the support for MMC_CAP2_SDIO_IRQ_NOTHREAD, as it already implements SDIO IRQs to be processed without using the clumsy kernel thread and without the ping-pong calls of the host's ->enable_sdio_irq() callback for each processed IRQ. Therefore, let's add a new API sdio_signal_irq(), which enables hosts to signal/process SDIO IRQs by using a work/workqueue, rather than using the kernel thread. Add also a new host callback ->ack_sdio_irq(), which the work invokes when the SDIO IRQs have been processed. This informs the host about when it shall re-enable the SDIO IRQs. Potentially, we could re-use the existing ->enable_sdio_irq() callback instead of adding a new one, however it has turned out that it's more convenient for hosts to get this information via a separate callback. Hosts that want to use this new method to signal/process SDIO IRQs must enable MMC_CAP2_SDIO_IRQ_NOTHREAD and implement the ->ack_sdio_irq() callback. Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org> Tested-by: Douglas Anderson <dianders@chromium.org> Reviewed-by: Douglas Anderson <dianders@chromium.org>
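A hedged sketch of a host driver opting in to the workqueue-based handling described above; the mydrv_* functions and register bits are illustrative:

    static irqreturn_t mydrv_irq(int irq, void *dev_id)
    {
        struct mydrv_host *host = dev_id;

        if (mydrv_read_status(host) & MYDRV_SDIO_IRQ) {
            mydrv_mask_sdio_irq(host);        /* keep it masked ...            */
            sdio_signal_irq(host->mmc);       /* ... and let the work process it */
        }
        return IRQ_HANDLED;
    }

    static void mydrv_ack_sdio_irq(struct mmc_host *mmc)
    {
        mydrv_unmask_sdio_irq(mmc_priv(mmc)); /* processing done, unmask again */
    }

    static const struct mmc_host_ops mydrv_ops = {
        /* ... */
        .ack_sdio_irq = mydrv_ack_sdio_irq,
    };

    /* plus, at probe time:
     *   mmc->caps  |= MMC_CAP_SDIO_IRQ;
     *   mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD; */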
2017-06-20  uuid: Take const on input of uuid_is_null() and guid_is_null()  (Andy Shevchenko)
The null check functions do not and must not modify contents of the UUID or GUID supplied. Mark argument explicitly to reflect that. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
2017-06-20  Merge tag 'usb-for-v4.13' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-testing  (Greg Kroah-Hartman)
Felipe writes: usb: changes for v4.13 merge window. This time around we have a total of 57 non-merge commits. A list of the most important changes follows:
- Improvements to dwc3 tracing interface
- Initial dual-role support for dwc3
- Improvements to how we handle DMA resources in dwc3
- A new f_uac1 implementation which is much more flexible
- Removal of AVR32 bits
- Improvements to f_mass_storage driver
2017-06-20  Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux  (Linus Torvalds)
Pull clk fixes from Stephen Boyd: "One build fix for an Amlogic clk driver and a handful of Allwinner clk driver fixes for some DT bindings and a randconfig build error that all came in this merge window"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
  clk: sunxi-ng: a64: Export PLL_PERIPH0 clock for the PRCM
  clk: sunxi-ng: h3: Export PLL_PERIPH0 clock for the PRCM
  dt-bindings: clock: sunxi-ccu: Add pll-periph to PRCM's needed clocks
  clk: sunxi-ng: sun5i: Fix ahb_bist_clk definition
  clk: sunxi-ng: enable SUNXI_CCU_MP for PRCM
  clk: meson: gxbb: fix build error without RESET_CONTROLLER
  clk: sunxi-ng: v3s: Fix usb otg device reset bit
  clk: sunxi-ng: a31: Correct lcd1-ch1 clock register offset
2017-06-20  Merge 4.12-rc6 into staging-next  (Greg Kroah-Hartman)
We want the staging fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-20  Merge 4.12-rc6 into usb-next  (Greg Kroah-Hartman)
We want the USB fixes in here. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-06-19  dt-bindings: clk: Extend binding doc for Stingray SOC  (Sandeep Tripathy)
Update iproc clock dt-binding documentation with Stingray pll and clock details. Signed-off-by: Sandeep Tripathy <sandeep.tripathy@broadcom.com> Reviewed-by: Ray Jui <ray.jui@broadcom.com> Reviewed-by: Scott Branden <scott.branden@broadcom.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: mediatek: export cpu multiplexer clock for MT8173 SoCs  (Sean Wang)
The patch enables the CPU multiplexer clock on the MT8173 SoC, which fixes the cpufreq driver failing to acquire its intermediate clock source when the driver's probe is called. Signed-off-by: Pi-Cheng Chen <pi-cheng.chen@linaro.org> Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: mediatek: export cpu multiplexer clock for MT2701/MT7623 SoCs  (Sean Wang)
The patch enables the CPU multiplexer clock on MT2701/MT7623 SoCs, which fixes the cpufreq driver failing to acquire its intermediate clock source when the driver's probe is called. Signed-off-by: Pi-Cheng Chen <pi-cheng.chen@linaro.org> Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: hi6220: add acpu clock  (Zhangfei Gao)
Add acpu clock, including sft clock controlling hi6220 coresight module Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org> Signed-off-by: Li Pengcheng <lipengcheng8@huawei.com> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: zx296718: export I2S mux clocks  (Shawn Guo)
Export I2S mux clocks, so that device tree can refer to them for setting a better parent clock for I2S work clock. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: imx7d: create clocks behind rawnand clock gate  (Stefan Agner)
The rawnand clock gate gates two clocks, NAND_USDHC_BUS_CLK_ROOT and NAND_CLK_ROOT. However, the gate has been in the chain of the latter only. This does not allow using NAND_USDHC_BUS_CLK_ROOT on its own, e.g. as required by the APBH-Bridge-DMA. Add new clocks which represent the clock after the gate, and use a shared clock gate to correctly model the hardware. Signed-off-by: Stefan Agner <stefan@agner.ch> Tested-by: Fabio Estevam <fabio.estevam@nxp.com> Acked-by: Han Xu <han.xu@nxp.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-20  Merge branch 'drm-next-4.13' of git://people.freedesktop.org/~agd5f/linux into drm-next  (Dave Airlie)
A few more things for 4.13:
- Semaphore support using sync objects
- Drop fb location programming
- Optimize bo list ioctl
* 'drm-next-4.13' of git://people.freedesktop.org/~agd5f/linux:
  drm/amdgpu: Optimize mutex usage (v4)
  drm/amdgpu: Optimization of AMDGPU_BO_LIST_OP_CREATE (v2)
  amdgpu: use drm sync objects for shared semaphores (v6)
  amdgpu/cs: split out fence dependency checking (v2)
  drm/amdgpu: don't check the default value for vm size
2017-06-20  Merge tag 'drm/tegra/for-4.13-rc1' of git://anongit.freedesktop.org/tegra/linux into drm-next  (Dave Airlie)
drm/tegra: Changes for v4.13-rc1. This starts off with the addition of more documentation for the host1x and DRM drivers and finishes with a slew of fixes and enhancements for the staging IOCTLs as a result of the awesome work done by Dmitry and Erik on the grate reverse-engineering effort.
* tag 'drm/tegra/for-4.13-rc1' of git://anongit.freedesktop.org/tegra/linux:
  gpu: host1x: At first try a non-blocking allocation for the gather copy
  gpu: host1x: Refactor channel allocation code
  gpu: host1x: Remove unused host1x_cdma_stop() definition
  gpu: host1x: Remove unused 'struct host1x_cmdbuf'
  gpu: host1x: Check waits in the firewall
  gpu: host1x: Correct swapped arguments in the is_addr_reg() definition
  gpu: host1x: Forbid unrelated SETCLASS opcode in the firewall
  gpu: host1x: Forbid RESTART opcode in the firewall
  gpu: host1x: Forbid relocation address shifting in the firewall
  gpu: host1x: Do not leak BO's phys address to userspace
  gpu: host1x: Correct host1x_job_pin() error handling
  gpu: host1x: Initialize firewall class to the job's one
  drm/tegra: dc: Disable plane if it is invisible
  drm/tegra: dc: Apply clipping to the plane
  drm/tegra: dc: Avoid reset asserts on Tegra20
  drm/tegra: Check syncpoint ID in the 'submit' IOCTL
  drm/tegra: Correct copying of waitchecks and disable them in the 'submit' IOCTL
  drm/tegra: Check for malformed offsets and sizes in the 'submit' IOCTL
  drm/tegra: Add driver documentation
  gpu: host1x: Flesh out kerneldoc
2017-06-19  clk: hi3660: add clocks for video encoder, decoder and ISP  (Chen Jun)
This patch adds more clocks for hi3660, including: - video encoder and decoder - ISP (Image Signal Processing) Signed-off-by: Chen Jun <chenjun14@huawei.com> Signed-off-by: Zhong Kaihua <zhongkaihua@huawei.com> Signed-off-by: Guodong Xu <guodong.xu@linaro.org> Reviewed-by: Zhangfei Gao <zhangfei.gao@linaro.org> Acked-by: Zhangfei Gao <zhangfei.gao@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: qcom: Add DT bindings for ipq8074 gcc clock controller  (Abhishek Sahu)
Add the compatible strings and the include file for ipq8074 gcc clock controller. Acked-by: Rob Herring <robh@kernel.org> (bindings) Signed-off-by: Varadarajan Narayanan <varada@codeaurora.org> Signed-off-by: Abhishek Sahu <absahu@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  clk: add DT bindings header for Gemini clock controller  (Linus Walleij)
This adds the DT binding macros used by the clock controller. Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  reset: add DT bindings header for Gemini reset controller  (Linus Walleij)
This adds the DT binding macros used by the reset controller. Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2017-06-19  PCI: Add sysfs max_link_speed/width, current_link_speed/width, etc  (Wong Vee Khee)
Expose PCIe bridge attributes such as secondary bus number, subordinate bus number, max link speed and link width, and current link speed and link width via sysfs in /sys/bus/pci/devices/... This information is available via lspci, but that requires root privilege. Signed-off-by: Wong Vee Khee <vee.khee.wong@ni.com> Signed-off-by: Hui Chun Ong <hui.chun.ong@ni.com> [bhelgaas: changelog, return errors early to unindent usual case, return errors with same style throughout] Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
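For illustration, a hedged sketch of the general shape of such a sysfs attribute (not necessarily the exact implementation merged here), reporting the negotiated link width from the Link Status register:

    static ssize_t current_link_width_show(struct device *dev,
                                           struct device_attribute *attr,
                                           char *buf)
    {
        struct pci_dev *pdev = to_pci_dev(dev);
        u16 lnksta;
        int err;

        err = pcie_capability_read_word(pdev, PCI_EXP_LNKSTA, &lnksta);
        if (err)
            return -EINVAL;

        /* Negotiated link width lives in the NLW field of LNKSTA. */
        return sprintf(buf, "%u\n",
                       (lnksta & PCI_EXP_LNKSTA_NLW) >> PCI_EXP_LNKSTA_NLW_SHIFT);
    }
    static DEVICE_ATTR_RO(current_link_width);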
2017-06-19  tcp: md5: add TCP_MD5SIG_EXT socket option to set a key address prefix  (Ivan Delalande)
Replace first padding in the tcp_md5sig structure with a new flag field and address prefix length so it can be specified when configuring a new key for TCP MD5 signature. The tcpm_flags field will only be used if the socket option is TCP_MD5SIG_EXT to avoid breaking existing programs, and tcpm_prefixlen only when the TCP_MD5SIG_FLAG_PREFIX flag is set. Signed-off-by: Bob Gilligan <gilligan@arista.com> Signed-off-by: Eric Mowat <mowat@arista.com> Signed-off-by: Ivan Delalande <colona@arista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
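A hedged userspace sketch of the new option, installing one key that covers 10.0.0.0/8. The field and flag names follow the description above; the struct layout comes from the kernel's uapi headers, and older libc headers may need <linux/tcp.h> instead of <netinet/tcp.h> for the TCP_MD5SIG_* definitions:

    #include <string.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static int set_md5_prefix_key(int fd, const char *key)
    {
        struct tcp_md5sig md5;
        struct sockaddr_in *sin = (struct sockaddr_in *)&md5.tcpm_addr;

        memset(&md5, 0, sizeof(md5));
        sin->sin_family = AF_INET;
        sin->sin_addr.s_addr = htonl(0x0a000000);   /* 10.0.0.0 */
        md5.tcpm_flags = TCP_MD5SIG_FLAG_PREFIX;    /* honour tcpm_prefixlen */
        md5.tcpm_prefixlen = 8;                     /* the whole /8 */
        md5.tcpm_keylen = strlen(key);
        memcpy(md5.tcpm_key, key, md5.tcpm_keylen);

        return setsockopt(fd, IPPROTO_TCP, TCP_MD5SIG_EXT, &md5, sizeof(md5));
    }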
2017-06-19  tcp: md5: add an address prefix for key lookup  (Ivan Delalande)
This allows the keys used for TCP MD5 signatures to be used for a whole range of addresses, specified with a prefix length, instead of only one address as is currently the case. Signed-off-by: Bob Gilligan <gilligan@arista.com> Signed-off-by: Eric Mowat <mowat@arista.com> Signed-off-by: Ivan Delalande <colona@arista.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-06-19  m68k: Remove ptrace_signal_deliver  (Andreas Schwab)
This fixes debugger syscall restart interactions. A debugger that modifies the tracee's program counter is expected to set the orig_d0 pseudo register to -1, to disable a possible syscall restart. This removes the last user of the ptrace_signal_deliver hook in the ptrace signal handling, so remove that as well. Signed-off-by: Andreas Schwab <schwab@linux-m68k.org> Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
2017-06-19  netfilter: nfnetlink: extended ACK reporting  (Pablo Neira Ayuso)
Pass down struct netlink_ext_ack as parameter to all of our nfnetlink subsystem callbacks, so we can work on follow up patches to provide finer grain error reporting using the new infrastructure that 2d4bc93368f5 ("netlink: extended ACK reporting") provides. No functional change, just pass down this new object to callbacks. Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-06-19  netfilter: conntrack: use NFPROTO_MAX to size array  (Florian Westphal)
We don't support anything larger than NFPROTO_MAX, so we can shrink this a bit:
        text  data   dec   hex  filename
  old:  8259  1096  9355  248b  net/netfilter/nf_conntrack_proto.o
  new:  8259   624  8883  22b3  net/netfilter/nf_conntrack_proto.o
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-06-19  netfilter: ebt: Use new helper ebt_invalid_target to check target  (Gao Feng)
Use the new helper function ebt_invalid_target instead of the old macro INVALID_TARGET and other duplicated code to enhance readability. Signed-off-by: Gao Feng <gfree.wind@vip.163.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-06-19  netns: add and use net_ns_barrier  (Florian Westphal)
Quoting Joe Stringer: If a user loads nf_conntrack_ftp, sends FTP traffic through a network namespace, destroys that namespace then unloads the FTP helper module, then the kernel will crash. Events that lead to the crash: 1. conntrack is created with ftp helper in netns x 2. This netns is destroyed 3. netns destruction is scheduled 4. netns destruction wq starts, removes netns from global list 5. ftp helper is unloaded, which resets all helpers of the conntracks via for_each_net() but because netns is already gone from list the for_each_net() loop doesn't include it, therefore all of these conntracks are unaffected. 6. helper module unload finishes 7. netns wq invokes destructor for rmmod'ed helper CC: "Eric W. Biederman" <ebiederm@xmission.com> Reported-by: Joe Stringer <joe@ovn.org> Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: David S. Miller <davem@davemloft.net> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2017-06-19  Btrfs: btrfs_ioctl_search_key documentation  (Hans van Kranenburg)
A programmer who is trying to implement calling the btrfs SEARCH or SEARCH_V2 ioctl will probably soon end up reading this struct definition. Properly document the input fields to prevent common misconceptions: 1. The search space is linear, not 3-dimensional. The individual min/max values for objectid, type and offset cannot be used to filter the result; they only define the endpoints of an interval. 2. The transaction id (a.k.a. generation) filter applies only to the transaction id of the last COW operation on a whole metadata page, not to individual items. Ad 1. The first misunderstanding was helped along by the previously misleading comments on min/max type and offset: "keys returned will be >= min and <= max". Ad 2. For example, running btrfs balance will happily cause rewriting of metadata pages that contain a filesystem tree of a read-only subvolume, causing transids to be increased. Also, improve descriptions of tree_id and nr_items and add in/out annotations. Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com> Signed-off-by: David Sterba <dsterba@suse.com>
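A hedged userspace sketch of point 1 above: the min/max values only bound one linear (objectid, type, offset) interval, so a "search everything in one tree" call looks roughly like this (the tree id and item limit are illustrative; see linux/btrfs.h for the full struct):

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/btrfs.h>

    static int search_whole_tree(int fd, __u64 tree_id)
    {
        struct btrfs_ioctl_search_args args;

        memset(&args, 0, sizeof(args));
        args.key.tree_id      = tree_id;
        /* Endpoints of one linear key interval, not per-field filters: */
        args.key.min_objectid = 0;
        args.key.max_objectid = (__u64)-1;
        args.key.min_type     = 0;
        args.key.max_type     = (__u32)-1;
        args.key.min_offset   = 0;
        args.key.max_offset   = (__u64)-1;
        args.key.max_transid  = (__u64)-1;   /* page-level generation filter */
        args.key.nr_items     = 4096;        /* in: limit, out: items found  */

        return ioctl(fd, BTRFS_IOC_TREE_SEARCH, &args);
    }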
2017-06-19  btrfs: cleanup unused qgroup trace event  (Anand Jain)
Commit 81fb6f77a026 (btrfs: qgroup: Add new trace point for qgroup data reserve) added the following events which aren't used. btrfs__qgroup_data_map btrfs_qgroup_init_data_rsv_map btrfs_qgroup_free_data_rsv_map So remove them. CC: quwenruo@cn.fujitsu.com Signed-off-by: Anand Jain <anand.jain@oracle.com> Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
2017-06-19  ASoC: Intel: Skylake: Add deep buffer support  (Ramesh Babu)
With this patch, the DMA buffer size is fetched from the topology binary. This buffer size is applicable for gateway copier modules. Now that we can configure the DSP DMA buffer size, the device can support deep buffer playback. The DSP fetches a larger buffer, which can result in fewer wakes and helps in power reduction. Signed-off-by: Ramesh Babu <ramesh.babu@intel.com> Signed-off-by: Subhransu S. Prusty <subhransu.s.prusty@intel.com> Acked-By: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2017-06-19  dm kcopyd: add sequential write feature  (Damien Le Moal)
When copying blocks to host-managed zoned block devices, writes must be sequential. However, dm_kcopyd_copy() does not guarantee this, as writes are issued in the completion order of reads, and reads may complete out of order despite being issued sequentially. Fix this by introducing the DM_KCOPYD_WRITE_SEQ feature flag. This can be specified when calling dm_kcopyd_copy() and should be set automatically if one of the destinations is a host-managed zoned block device. For a split job, the master job maintains the write position at which writes must be issued. This is checked with the pop() function, which is modified to not return any write I/O sub job that is not at the correct write position. When DM_KCOPYD_WRITE_SEQ is specified for a job, errors cannot be ignored and the flag DM_KCOPYD_IGNORE_ERROR is ignored, even if specified by the user. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
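A hedged sketch of a caller setting the new flag; the region setup, the completion callback name, and the write-pointer bookkeeping are illustrative:

    struct dm_io_region src = {
        .bdev   = src_bdev,
        .sector = src_sector,
        .count  = nr_sectors,
    };
    struct dm_io_region dst = {
        .bdev   = zoned_bdev,
        .sector = zone_wp_sector,    /* must match the zone write pointer */
        .count  = nr_sectors,
    };
    unsigned long flags = 0;

    if (bdev_zoned_model(zoned_bdev) == BLK_ZONED_HM)
        set_bit(DM_KCOPYD_WRITE_SEQ, &flags);   /* issue writes strictly in order */

    dm_kcopyd_copy(kc, &src, 1, &dst, flags, copy_done_fn, context);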
2017-06-19  dm: introduce dm_remap_zone_report()  (Damien Le Moal)
A target driver supporting zoned block devices and exposing them as such may receive a REQ_OP_ZONE_REPORT request when the user wants to determine the mapped device zone configuration. To properly process such a request, the target driver may need to remap the zone descriptors provided in the report reply. The helper function dm_remap_zone_report() does this generically, using only the target start offset and length and the start offset within the target device. dm_remap_zone_report() will remap the start sector of all zones reported. If the report includes sequential zones, the write pointer position of these zones will also be remapped. Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
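A hedged fragment of the intended call site: in a linear-style target's bio completion path, remap the report back into the target's view (the lc->start field and the exact end_io hook signature are illustrative):

    /* On completion of a successful zone report bio: */
    if (!error && bio_op(bio) == REQ_OP_ZONE_REPORT)
        dm_remap_zone_report(ti, bio, lc->start);   /* lc->start: target's offset
                                                       on the underlying device */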
2017-06-19  dm table: add zoned block devices validation  (Damien Le Moal)
1) Introduce DM_TARGET_ZONED_HM feature flag: The target drivers currently available will not operate correctly if a table target maps onto a host-managed zoned block device. To avoid problems, introduce the new feature flag DM_TARGET_ZONED_HM to allow a target to explicitly state that it supports host-managed zoned block devices. This feature is checked for all targets in a table if any of the table's block devices are host-managed. Note that as host-aware zoned block devices are backward compatible with regular block devices, they can be used by any of the current target types. This new feature is thus restricted to host-managed zoned block devices. A sketch of a target declaring the flag follows below.
2) Check device area zone alignment: If a target maps to a zoned block device, check that the device area is aligned on zone boundaries to avoid problems with REQ_OP_ZONE_RESET operations (resetting a partially mapped sequential zone would not be possible). This also facilitates the processing of zone reports with REQ_OP_ZONE_REPORT bios.
3) Check block device zone model compatibility: When setting the DM device's queue limits, several possibilities exist for zoned block devices: 1) the DM target driver may want to expose a different zone model (e.g. host-managed device emulation or a regular block device on top of host-managed zoned block devices), or 2) expose the underlying zone model of the devices as-is. To allow both cases, the underlying block device zone model must be set in the target limits in dm_set_device_limits() and the compatibility of all devices checked similarly to the logical block size alignment. For this last check, introduce validate_hardware_zoned_model() to check that all targets of a table have the same zone model and that the zone sizes of the target devices are equal.
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com> [Mike Snitzer refactored Damien's original work to simplify the code] Signed-off-by: Mike Snitzer <snitzer@redhat.com>
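A hedged sketch of point 1): a target that is safe on host-managed zoned devices declares the new feature flag in its target_type (all names other than the flag itself are illustrative):

    static struct target_type my_zoned_target = {
        .name     = "my-zoned",
        .version  = {1, 0, 0},
        .features = DM_TARGET_ZONED_HM,   /* safe on host-managed zoned devices */
        .module   = THIS_MODULE,
        .ctr      = my_zoned_ctr,
        .dtr      = my_zoned_dtr,
        .map      = my_zoned_map,
    };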
2017-06-19  dm: convert DM printk macros to pr_<level> macros  (Joe Perches)
Using pr_<level> is the more common logging style. Standardize style and use new macro DM_FMT. Use no_printk in DMDEBUG macros when CONFIG_DM_DEBUG is not #defined. Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-06-19  dm ioctl: add a new DM_DEV_ARM_POLL ioctl  (Mikulas Patocka)
This ioctl records the current global event number in the structure dm_file, so that the next select or poll call will wait until new events have arrived since this ioctl. The DM_DEV_ARM_POLL ioctl has the same effect as closing and reopening the handle. Using the DM_DEV_ARM_POLL ioctl is optional - if the userspace is OK with closing and reopening the /dev/mapper/control handle after select or poll, there is no need to re-arm via ioctl. Usage:
1. open the /dev/mapper/control device
2. send the DM_DEV_ARM_POLL ioctl
3. scan the event numbers of all devices we are interested in and process them
4. call select, poll or epoll on the handle (it waits until some new event happens since the DM_DEV_ARM_POLL ioctl)
5. go to step 2
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Andy Grover <agrover@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
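A hedged userspace sketch of that loop using the raw ioctl; real code would typically go through libdevmapper, and the dm_ioctl version/data_size setup shown is the usual requirement for DM ioctls:

    #include <fcntl.h>
    #include <poll.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/dm-ioctl.h>

    static void wait_for_dm_events(void)
    {
        int fd = open("/dev/mapper/control", O_RDWR);   /* step 1 */
        struct dm_ioctl dmi;
        struct pollfd pfd = { .fd = fd, .events = POLLIN };

        for (;;) {
            memset(&dmi, 0, sizeof(dmi));
            dmi.version[0] = DM_VERSION_MAJOR;
            dmi.version[1] = DM_VERSION_MINOR;
            dmi.version[2] = DM_VERSION_PATCHLEVEL;
            dmi.data_size  = sizeof(dmi);

            ioctl(fd, DM_DEV_ARM_POLL, &dmi);    /* step 2: re-arm              */
            /* step 3: read per-device event numbers and handle any changes    */
            poll(&pfd, 1, -1);                   /* step 4: block for new events */
        }
    }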
2017-06-19  mfd: intel_soc_pmic_bxtwc: Use chained IRQs for second level IRQ chips  (Kuppuswamy Sathyanarayanan)
The Whiskey Cove PMIC has support to mask/unmask interrupts at two levels. At the first level we can mask/unmask interrupt domains like TMU, GPIO, ADC, CHGR, BCU, THERMAL and PWRBTN, and at the second level it provides the facility to mask/unmask individual interrupts belonging to each of these domains. For example, in the case of TMU, at the first level we have the TMU interrupt domain, and at the second level we have two interrupts, wake alarm and system alarm, that belong to the TMU interrupt domain. Currently, in this driver all first level IRQs are registered as part of an IRQ chip (bxtwc_regmap_irq_chip). By default, after you register the IRQ chip from your driver, all IRQs in that chip will be masked and can only be enabled if that IRQ is requested using a request_irq() call. This is the default Linux IRQ behavior model. And whenever a dependent device that belongs to the PMIC requests only the second level IRQ and does not explicitly unmask the first level IRQ, then in essence the second level IRQ will still be disabled. For example, if the TMU device driver requests the wake_alarm IRQ and does not explicitly unmask the TMU level 1 IRQ, then according to the default Linux IRQ model the wake_alarm IRQ will still be disabled. So the proper solution to fix this issue is to use the chained IRQ chip concept. We should chain all the second level chip IRQs to the corresponding first level IRQ. To do this, we need to create separate IRQ chips for every group of second level IRQs. In the case of TMU, when adding the second level IRQ chip, instead of using the PMIC IRQ we should use the corresponding first level IRQ. So the following code will change from
  ret = regmap_add_irq_chip(pmic->regmap, pmic->irq, ...)
to
  virq = regmap_irq_get_virq(&pmic->irq_chip_data, BXTWC_TMU_LVL1_IRQ);
  ret = regmap_add_irq_chip(pmic->regmap, virq, ...)
In the case of the Whiskey Cove Type-C driver, since the USBC IRQ is moved under the charger level 2 IRQ chip, we should use the charger IRQ chip (irq_chip_data_chgr) to get the USBC virtual IRQ number. Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Lee Jones <lee.jones@linaro.org>
2017-06-19  mm: larger stack guard gap, between vmas  (Hugh Dickins)
Stack guard page is a useful feature to reduce a risk of stack smashing into a different mapping. We have been using a single page gap which is sufficient to prevent having stack adjacent to a different mapping. But this seems to be insufficient in the light of the stack usage in userspace. E.g. glibc uses as large as 64kB alloca() in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX] which is 256kB or stack strings with MAX_ARG_STRLEN. This will become especially dangerous for suid binaries and the default no limit for the stack size limit because those applications can be tricked to consume a large portion of the stack and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately. Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in the PAGE_SIZE units) which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somehow inherent, but it should reduce attack space a lot. One could argue that the gap size should be configurable from userspace, but that can be done later when somebody finds that the new 1MB is wrong for some special case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units). Implementation wise, first delete all the old code for stack guard page: because although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode. Instead of keeping gap inside the stack vma, maintain the stack guard gap as a gap between vmas: using vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that. Original-patch-by: Oleg Nesterov <oleg@redhat.com> Original-patch-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Hugh Dickins <hughd@google.com> Acked-by: Michal Hocko <mhocko@suse.com> Tested-by: Helge Deller <deller@gmx.de> # parisc Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
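For reference, the vm_start_gap() helper mentioned above reads roughly as follows (a sketch of the VM_GROWSDOWN case; the actual include/linux/mm.h code may differ in detail):

    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
        unsigned long vm_start = vma->vm_start;

        if (vma->vm_flags & VM_GROWSDOWN) {
            vm_start -= stack_guard_gap;      /* leave the gap below the stack */
            if (vm_start > vma->vm_start)     /* guard against underflow */
                vm_start = 0;
        }
        return vm_start;
    }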
2017-06-19  crypto: engine - replace pr_xxx by dev_xxx  (Corentin LABBE)
By adding a struct device *dev to struct engine, we could store the device used at register time and so use all dev_xxx functions instead of pr_xxx. Signed-off-by: Corentin Labbe <clabbe.montjoie@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>