path: root/include/linux
2015-08-24  clk: Remove unused provider APIs  (Stephen Boyd)
Remove these APIs now that we've converted all users to the replacement struct clk_hw based versions. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2015-08-24  clk: Add clk_hw_*() APIs for use by clk providers  (Stephen Boyd)
clk providers shouldn't need to use the consumer APIs (clk.h). Add provider APIs that take a struct clk_hw as their first argument instead of a struct clk, replacing the __clk_*() APIs. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
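A rough illustration of the direction: a provider can now query its own topology through clk_hw based helpers such as clk_hw_get_parent(), clk_hw_get_name() and clk_hw_get_rate() instead of going through struct clk. The sketch below is hedged; foo_recalc_rate() and its fixed divide-by-2 are hypothetical, only the clk_hw_*() helpers come from the API described above.

    #include <linux/clk-provider.h>

    /* Hedged sketch: a clk provider callback using the new clk_hw_*()
     * helpers rather than the removed __clk_*() consumer-based calls. */
    static unsigned long foo_recalc_rate(struct clk_hw *hw,
                                         unsigned long parent_rate)
    {
            struct clk_hw *parent = clk_hw_get_parent(hw);

            if (parent)
                    pr_debug("%s: parent %s runs at %lu Hz\n",
                             clk_hw_get_name(hw), clk_hw_get_name(parent),
                             clk_hw_get_rate(parent));

            return parent_rate / 2;         /* hypothetical fixed divider */
    }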
2015-08-24  lib: scatterlist: add sg splitting function  (Robert Jarzmik)
Sometimes a scatter-gather list has to be split into several chunks, or sub scatter lists. This happens for example if a scatter list will be handled by multiple DMA channels, each one filling a part of it. A concrete example comes with the media V4L2 API, where the scatter list is allocated from userspace to hold an image, regardless of how many DMAs will fill it:
- in a simple RGB565 case, one DMA will pump data from the camera ISP to memory
- in the trickier YUV422 case, 3 DMAs will pump data from the camera ISP pipes, one for pipe Y, one for pipe U and one for pipe V
For these cases, it is necessary to split the original scatter list into multiple scatter lists, which is the purpose of this patch. The guarantees required for this patch are:
- the intersection of the spans of any two resulting scatter lists is empty.
- the union of the spans of all resulting scatter lists is a subrange of the span of the original scatter list.
- streaming DMA API operations (mapping, unmapping) should not happen on both the resulting and the original scatter lists; it is either the original or the resulting ones, never both.
- the caller is responsible for calling kfree() on the resulting scatterlists.
Signed-off-by: Robert Jarzmik <robert.jarzmik@free.fr> Signed-off-by: Jens Axboe <axboe@fb.com>
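A hedged usage sketch for the YUV422 case above, assuming the new helper is sg_split() with roughly the prototype used below (the exact signature, and names such as foo_split_yuv and the length parameters, are assumptions for illustration):

    #include <linux/scatterlist.h>
    #include <linux/slab.h>

    /* Split one image sglist into three disjoint Y/U/V sub-lists, one per
     * DMA channel; the caller later kfree()s out[0..2] as required above.
     * sg_split()'s signature here is assumed from the description. */
    static int foo_split_yuv(struct scatterlist *img_sgl, int img_nents,
                             size_t y_len, size_t u_len, size_t v_len,
                             struct scatterlist *out[3])
    {
            const size_t split_sizes[3] = { y_len, u_len, v_len };

            return sg_split(img_sgl, img_nents, 0 /* skip */, 3, split_sizes,
                            out, NULL, GFP_KERNEL);
    }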
2015-08-24  PCI: Set MPS to match upstream bridge  (Keith Busch)
Firmware typically configures the PCIe fabric with a consistent Max Payload Size setting based on the devices present at boot. A hot-added device typically has the power-on default MPS setting (128 bytes), which may not match the fabric. The previous Linux default, in the absence of any "pci=pcie_bus_*" options, was PCIE_BUS_TUNE_OFF, in which we never touch MPS, even for hot-added devices. Add a new default setting, PCIE_BUS_DEFAULT, in which we make sure every device's MPS setting matches the upstream bridge. This makes it more likely that a hot-added device will work in a system with optimized MPS configuration. Note that if we hot-add a device that only supports 128-byte MPS, it still likely won't work because we don't reconfigure the rest of the fabric. Booting with "pci=pcie_bus_peer2peer" is a workaround for this because it sets MPS to 128 for everything. [bhelgaas: changelog, new default, rework for pci_configure_device() path] Tested-by: Keith Busch <keith.busch@intel.com> Tested-by: Jordan Hargrave <jharg93@gmail.com> Signed-off-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Acked-by: Yinghai Lu <yinghai@kernel.org>
2015-08-24  i2c: core: Add support for best effort block read emulation  (Irina Tirdea)
There are devices that need to handle block transactions regardless of the capabilities exported by the adapter. For performance reasons, they need to use i2c read blocks if available, otherwise emulate the block transaction with word or byte transactions. Add support for a helper function that would read a data block using the best transfer available: I2C_FUNC_SMBUS_READ_I2C_BLOCK, I2C_FUNC_SMBUS_READ_WORD_DATA or I2C_FUNC_SMBUS_READ_BYTE_DATA. Signed-off-by: Irina Tirdea <irina.tirdea@intel.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
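A hedged sketch of how a sensor driver might use the new helper (the helper name i2c_smbus_read_i2c_block_data_or_emulated() is taken from this patch; FOO_REG_DATA, foo_read_sample() and the 6-byte length are hypothetical):

    #include <linux/i2c.h>

    #define FOO_REG_DATA    0x10    /* hypothetical device register */

    /* Read a 6-byte sample with the best transfer the adapter supports:
     * an I2C block read if available, otherwise emulated via word or
     * byte reads. */
    static int foo_read_sample(struct i2c_client *client, u8 *buf)
    {
            int ret;

            ret = i2c_smbus_read_i2c_block_data_or_emulated(client,
                                                            FOO_REG_DATA,
                                                            6, buf);
            return ret < 0 ? ret : 0;
    }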
2015-08-24  i2c: mux: Add register-based mux i2c-mux-reg  (York Sun)
Based on the i2c-mux-gpio driver, this register-based mux similarly switches from one bus to another by setting a single register. The register can be on a PCIe bus, a local bus, or any memory-mapped address. The endianness of the register can be specified in the device tree if used, or in platform data. Signed-off-by: York Sun <yorksun@freescale.com> Acked-by: Alexander Sverdlin <alexander.sverdlin@nokia.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2015-08-24  i2c: add a flag to mark clients as slaves  (Wolfram Sang)
And update indentation with one more tab, sigh... Tested-by: Andrey Danin <danindrey@mail.ru> Acked-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com> Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
2015-08-24  Merge tag 'v4.2-rc8' into drm-next  (Dave Airlie)
Linux 4.2-rc8 Backmerge required for Intel so they can fix their -next tree up properly.
2015-08-23  Merge tag 'nfc-next-4.3-1' of git://git.kernel.org/pub/scm/linux/kernel/git/sameo/nfc-next  (David S. Miller)
Samuel Ortiz says: ==================== NFC 4.3 pull request This is the NFC pull request for 4.3. With this one we have:
- A new driver for Samsung's S3FWRN5 NFC chipset. In order to properly support this driver, a few NCI core routines needed to be exported. Future drivers like Intel's Fields Peak will benefit from this.
- SPI support as a physical transport for STM st21nfcb.
- An additional netlink API for sending replies back to userspace from vendor commands.
- 2 small fixes for TI's trf7970a
- A few st-nci fixes.
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-23  gro: Fix remcsum offload to deal with frags in GRO  (Tom Herbert)
The remote checksum offload GRO did not consider the case that frag0 might be in use. This patch fixes that by accessing headers using the skb_gro functions and not saving offsets relative to skb->head. Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-22  Merge tag 'signed-kvm-ppc-next' of git://github.com/agraf/linux-2.6 into kvm-queue  (Paolo Bonzini)
Patch queue for ppc - 2015-08-22 Highlights for KVM PPC this time around:
- Book3S: A few bug fixes
- Book3S: Allow micro-threading on POWER8
2015-08-22  Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull irq fixes from Thomas Gleixner: "A series of small fixlets for a regression visible on OMAP devices caused by the conversion of the OMAP interrupt chips to hierarchical interrupt domains. Mostly one liners on the driver side plus a small helper function in the core to avoid open coded mess in the drivers"
* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/crossbar: Restore set_wake functionality
  irqchip/crossbar: Restore the mask on suspend behaviour
  ARM: OMAP: wakeupgen: Restore the irq_set_type() mechanism
  irqchip/crossbar: Restore the irq_set_type() mechanism
  genirq: Introduce irq_chip_set_type_parent() helper
  genirq: Don't return ENOSYS in irq_chip_retrigger_hierarchy
2015-08-22  x86/kasan, mm: Introduce generic kasan_populate_zero_shadow()  (Andrey Ryabinin)
Introduce a generic kasan_populate_zero_shadow(shadow_start, shadow_end). This function maps kasan_zero_page to the [shadow_start, shadow_end] addresses. It replaces the x86_64-specific populate_zero_shadow() and will be used for ARM64 in follow-on patches. The main changes from the original version are: * Use p?d_populate*() instead of set_p?d() * Use the memblock allocator directly instead of vmemmap_alloc_block() * __pa() instead of __pa_nodebug(). __pa() causes trouble only if we use it before kasan_early_init(); kasan_populate_zero_shadow() will be used later, so we are OK with __pa() here. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Alexander Potapenko <glider@google.com> Cc: Alexey Klimov <klimov.linux@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Keitel <dkeitel@codeaurora.org> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: Yury <yury.norov@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1439444244-26057-3-git-send-email-ryabinin.a.a@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-22  x86/kasan: Define KASAN_SHADOW_OFFSET per architecture  (Andrey Ryabinin)
The current definition of KASAN_SHADOW_OFFSET in include/linux/kasan.h will not work for the upcoming arm64 support, so move it to the arch header. Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Alexey Klimov <klimov.linux@gmail.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: David Keitel <dkeitel@codeaurora.org> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Linus Walleij <linus.walleij@linaro.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: Yury <yury.norov@gmail.com> Cc: linux-arm-kernel@lists.infradead.org Cc: linux-mm@kvack.org Link: http://lkml.kernel.org/r/1439444244-26057-2-git-send-email-ryabinin.a.a@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-21  f2fs: add annotation for space utilization of regular/inline dentry  (Chao Yu)
Add annotations to describe more clearly the space utilization of regular dentries and inline dentries. Signed-off-by: Chao Yu <chao2.yu@samsung.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2015-08-21  mm: make page pfmemalloc check more robust  (Michal Hocko)
Commit c48a11c7ad26 ("netvm: propagate page->pfmemalloc to skb") added checks for page->pfmemalloc to __skb_fill_page_desc(): if (page->pfmemalloc && !page->mapping) skb->pfmemalloc = true; It assumes page->mapping == NULL implies that page->pfmemalloc can be trusted. However, __delete_from_page_cache() can set page->mapping to NULL and leave page->index alone. Because the fields share a union, a non-zero page->index will be interpreted as a true page->pfmemalloc. So the assumption is invalid if the networking code can see such a page, and it seems it can. We have encountered this with an NFS-over-loopback setup when such a page is attached to a new skbuf. There is no copying going on in this case, so the page confuses __skb_fill_page_desc, which interprets the index as the pfmemalloc flag, and the network stack drops packets that have been allocated using the reserves unless they are to be queued on sockets handling the swapping, which is the case here, and that leads to hangs when the nfs client waits for a response from the server which has been dropped and thus never arrives. The struct page is already heavily packed, so rather than finding another hole to put it in, let's do a trick instead. We can reuse the index again but define it to an impossible value (-1UL). This is the page index, so it should never see a value that large. Replace all direct users of page->pfmemalloc by page_is_pfmemalloc(), which will hide this nastiness from unspoiled eyes. The information will get lost if somebody wants to use page->index, obviously, but that was the case before and the original code expected that the information should be persisted somewhere else if it is really needed (e.g. what SLAB and SLUB do). [akpm@linux-foundation.org: fix blooper in slub] Fixes: c48a11c7ad26 ("netvm: propagate page->pfmemalloc to skb") Signed-off-by: Michal Hocko <mhocko@suse.com> Debugged-by: Vlastimil Babka <vbabka@suse.com> Debugged-by: Jiri Bohac <jbohac@suse.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: David Miller <davem@davemloft.net> Acked-by: Mel Gorman <mgorman@suse.de> Cc: <stable@vger.kernel.org> [3.6+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
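A minimal sketch of the trick described above, following the commit text closely (treat exact file placement and naming of the setters as assumptions):

    #include <linux/mm_types.h>

    /* page->index shares a union with page->pfmemalloc; -1UL is an
     * impossible page index, so it can safely encode "pfmemalloc page". */
    static inline bool page_is_pfmemalloc(struct page *page)
    {
            return page->index == -1UL;
    }

    static inline void set_page_pfmemalloc(struct page *page)
    {
            page->index = -1UL;
    }

    static inline void clear_page_pfmemalloc(struct page *page)
    {
            page->index = 0;
    }

    /*
     * __skb_fill_page_desc() then checks:
     *
     *         if (page_is_pfmemalloc(page))
     *                 skb->pfmemalloc = true;
     */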
2015-08-21  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net  (David S. Miller)
Conflicts: drivers/net/usb/qmi_wwan.c Overlapping additions of new device IDs to qmi_wwan.c Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-21  regmap: Add missing comments about struct regmap_bus  (Markus Pargmann)
Some fields of this struct are undocumented or have outdated comments. This patch adds the missing comments. Signed-off-by: Markus Pargmann <mpa@pengutronix.de> Signed-off-by: Mark Brown <broonie@kernel.org>
2015-08-21  Merge branch 'superblock-scaling' of git://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-next into for-next  (Al Viro)
Conflicts: include/linux/fs.h
2015-08-21  Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next  (Pablo Neira Ayuso)
Resolve conflicts with conntrack template fixes. Conflicts: net/netfilter/nf_conntrack_core.c net/netfilter/nf_synproxy_core.c net/netfilter/xt_CT.c Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2015-08-20  Merge tag 'wireless-drivers-next-for-davem-2015-08-19' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next  (David S. Miller)
Kalle Valo says: ==================== Major changes:
ath10k:
* add support for qca99x0 family of devices
* improve performance of tx_lock
* add support for raw mode (802.11 frame format) and software crypto engine enabled via a module parameter
ath9k:
* add fast-xmit support
wil6210:
* implement TSO support
* support bootloader v1 and onwards
iwlwifi:
* Deprecate -10.ucode
* Clean ups towards multiple Rx queues
* Add support for longer CMD IDs. This will be required by new firmwares since we are getting close to the u8 limit.
* bugfixes for the D0i3 power state
* Add basic support for FTM
* polish the Miracast operation
* fix a few power consumption issues
* scan cleanup
* fixes for D0i3 system state
* add paging for devices that support it
* add again the new RBD allocation model
* add more options to the firmware debug system
* add support for frag SKBs in Tx
==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-20  average: remove out-of-line implementation  (Johannes Berg)
Since all users are now converted to the inline implementation, remove the out-of-line implementation entirely. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-20  Merge branch 'fortglx/4.3/time' of https://git.linaro.org/people/john.stultz/linux into timers/core  (Thomas Gleixner)
- A handful of y2038 related items
- A walltime to monotonic limit
- Small fixes for timespec_trunc() and timer_list output
2015-08-20  spi: mediatek: fix spi incorrect endian usage  (Leilk Liu)
The TX_ENDIAN/RX_ENDIAN bits define whether to reverse the endian order of the data DMAed from/to memory. The endian order should be kept the same as the CPU endianness. Signed-off-by: Leilk Liu <leilk.liu@mediatek.com> Signed-off-by: Mark Brown <broonie@kernel.org>
2015-08-20  pmem, dax: have direct_access use __pmem annotation  (Ross Zwisler)
Update the annotation for the kaddr pointer returned by direct_access() so that it is a __pmem pointer. This is consistent with the PMEM driver and with how this direct_access() pointer is used in the DAX code. Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
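As a hedged sketch, the annotated hook in struct block_device_operations ends up looking roughly like the pointer below; the surrounding struct is abbreviated and the exact parameter list is an assumption based on the description, the point being the __pmem annotation on kaddr:

    #include <linux/blkdev.h>

    /* abbreviated, illustrative copy of the op; not the real declaration */
    struct foo_block_device_operations {
            long (*direct_access)(struct block_device *bdev, sector_t sector,
                                  void __pmem **kaddr, unsigned long *pfn);
            /* ... other ops ... */
    };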
2015-08-20  pmem: add copy_from_iter_pmem() and clear_pmem()  (Ross Zwisler)
Add support for two new PMEM APIs, copy_from_iter_pmem() and clear_pmem(). copy_from_iter_pmem() is used to copy data from an iterator into a PMEM buffer. clear_pmem() zeros a PMEM memory range. Both of these new APIs must be explicitly ordered using a wmb_pmem() function call and are implemented in such a way that the wmb_pmem() will make the stores to PMEM durable. Because both APIs are unordered they can be called as needed without introducing any unwanted memory barriers. Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
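A hedged usage sketch of the ordering rule above: write through the new helpers, then make the stores durable with a single wmb_pmem(). The wrapper name foo_zero_and_copy() and its arguments are hypothetical; the helper signatures are assumed from the description.

    #include <linux/pmem.h>
    #include <linux/uio.h>

    /* Zero a tail region, copy user data into PMEM, then order both with
     * one wmb_pmem(), since the helpers themselves are unordered. */
    static size_t foo_zero_and_copy(void __pmem *addr, size_t len,
                                    size_t tail, struct iov_iter *iter)
    {
            size_t copied;

            clear_pmem(addr + len, tail);
            copied = copy_from_iter_pmem(addr, len, iter);
            wmb_pmem();             /* make the stores above durable */

            return copied;
    }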
2015-08-20  pmem: remove layer when calling arch_has_wmb_pmem()  (Ross Zwisler)
Prior to this change arch_has_wmb_pmem() was only called by arch_has_pmem_api(). Both arch_has_wmb_pmem() and arch_has_pmem_api() checked to make sure that CONFIG_ARCH_HAS_PMEM_API was enabled. Instead, remove the old arch_has_wmb_pmem() wrapper to be rid of one extra layer of indirection and the redundant CONFIG_ARCH_HAS_PMEM_API check. Rename __arch_has_wmb_pmem() to arch_has_wmb_pmem() since we no longer have a wrapper, and just have arch_has_pmem_api() call the architecture specific arch_has_wmb_pmem() directly. Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2015-08-20  pmem, x86: move x86 PMEM API to new pmem.h header  (Ross Zwisler)
Move the x86 PMEM API implementation out of asm/cacheflush.h and into its own header asm/pmem.h. This will allow members of the PMEM API to be more easily identified on this and other architectures. Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com> Suggested-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2015-08-20  PCI: Add pci_scan_root_bus_msi()  (Lorenzo Pieralisi)
Add a pci_scan_root_bus_msi() interface so an arch can specify the MSI controller up front. This removes the need for a pcibios callback to set the MSI controller later. This is not exported because I'd like to replace the variety of "scan root bus" interfaces with a single, more extensible interface that can handle the MSI controller, domain, pci_ops, resources, etc. I hope this interface is temporary. [bhelgaas: changelog, split into separate patch] Suggested-by: Russell King <linux@arm.linux.org.uk> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Jingoo Han <jingoohan1@gmail.com>
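The shape of the new interface, as a hedged sketch: it mirrors pci_scan_root_bus() with an extra msi_controller argument so the arch can hand over the MSI controller up front instead of patching it in via a pcibios callback. The exact prototype below is an assumption based on the description.

    #include <linux/pci.h>
    #include <linux/msi.h>

    /* assumed prototype: pci_scan_root_bus() plus an msi_controller */
    struct pci_bus *pci_scan_root_bus_msi(struct device *parent, int bus,
                                          struct pci_ops *ops, void *sysdata,
                                          struct list_head *resources,
                                          struct msi_controller *msi);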
2015-08-20  Merge branch 'perf/urgent' into perf/core, to pick up fixes before adding more changes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2015-08-20  fbdev: fix cea_modes array size  (Tomi Valkeinen)
CEA defines 64 modes, indexed from 1 to 64. modedb has a cea_modes array which contains 64 entries. However, the code uses the CEA indices directly, i.e. the first mode is at cea_modes[1]. This means the array is one entry too short. This does not cause references to uninitialized memory, as the code in fbmon only allows indexes up to 63 and cea_modes does not contain an entry for mode 64, so it could not be used in any case. However, the code contains a check 'if (idx > ARRAY_SIZE(cea_modes))', and while that check is a no-op, as at that point idx cannot be >= 63, it upsets static checkers. Fix this by increasing the cea_modes array size to 65 and changing the code to allow mode 64. Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com> Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
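A hedged sketch of the shape of the fix; the lookup helper below is illustrative rather than the actual fbmon code, the point being the 65-entry array (VICs 1..64, slot 0 unused) and a bound check that admits mode 64:

    #include <linux/fb.h>
    #include <linux/kernel.h>

    /* indexed directly by CEA VIC 1..64; slot 0 is unused, so 65 entries */
    static const struct fb_videomode cea_modes[65];

    static const struct fb_videomode *cea_mode_lookup(unsigned int vic)
    {
            if (vic == 0 || vic >= ARRAY_SIZE(cea_modes))
                    return NULL;            /* out-of-range CEA index */
            return &cea_modes[vic];
    }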
2015-08-20  dmaengine: Stricter legacy checking in dma_request_slave_channel_compat()  (Geert Uytterhoeven)
dma_request_slave_channel_compat() is meant for drivers that support both DT and legacy platform device based probing: if DT channel DMA setup fails, it will fall back to platform data based DMA channel setup, using hardcoded DMA channel IDs and a filter function. However, if the DTS doesn't provide a "dmas" property for the device, the fallback is also used. If the legacy filter function is not hardcoded in the DMA slave driver, but comes from platform data, it will be NULL. Then dma_request_slave_channel_compat() will succeed incorrectly, and return a DMA channel, as a NULL legacy filter function actually means "all channels are OK", not "do not match". Later, when trying to use that DMA channel, it will fail with: rcar-dmac e6700000.dma-controller: rcar_dmac_prep_slave_sg: bad parameter: len=1, id=-22 To fix this, ensure that both the filter function and the DMA channel ID are not NULL before using the legacy fallback. Note that some DMA slave drivers can handle this failure, and will fall back to PIO. See also commit 056f6c87028544de ("dmaengine: shdma: Make dummy shdma_chan_filter() always return false"), which fixed the same issue for the case where shdma_chan_filter() is hardcoded in a DMA slave driver. Suggested-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
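A hedged sketch of the stricter fallback described above; the real helper lives in include/linux/dmaengine.h and its exact layout may differ, so treat this as a sketch of the check rather than the actual code:

    #include <linux/dmaengine.h>

    static struct dma_chan *foo_request_channel_compat(dma_cap_mask_t mask,
                                                       dma_filter_fn fn,
                                                       void *fn_param,
                                                       struct device *dev,
                                                       const char *name)
    {
            struct dma_chan *chan;

            /* DT/ACPI path first */
            chan = dma_request_slave_channel(dev, name);
            if (chan)
                    return chan;

            /* Only fall back to the legacy path when both the filter
             * function and its parameter are present; a NULL filter means
             * "match anything" and would wrongly grab a channel. */
            if (!fn || !fn_param)
                    return NULL;

            return __dma_request_channel(&mask, fn, fn_param);
    }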
2015-08-19  NFSv4: Enable delegated opens even when reboot recovery is pending  (Trond Myklebust)
Unlike the previous attempt, this takes into account the fact that we may be calling it from the recovery thread itself. Detect this by looking at what kind of open we're doing, and checking the state of the NFS_DELEGATION_NEED_RECLAIM if it turns out we're doing a reboot reclaim-type open. Cc: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2015-08-20  genirq: Introduce irq_chip_set_type_parent() helper  (Grygorii Strashko)
This helper is required for irq chips which do not implement an irq_set_type callback and need to call down the irq domain hierarchy for the actual trigger type change. This helper is required to fix further wreckage caused by the conversion of TI OMAP to hierarchical irq domains and is therefore tagged for stable. [ tglx: Massaged changelog ] Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Cc: Sudeep Holla <sudeep.holla@arm.com> Cc: <linux@arm.linux.org.uk> Cc: <nsekhar@ti.com> Cc: <jason@lakedaemon.net> Cc: <balbi@ti.com> Cc: <linux-arm-kernel@lists.infradead.org> Cc: <tony@atomide.com> Cc: <marc.zyngier@arm.com> Cc: stable@vger.kernel.org # 4.1 Link: http://lkml.kernel.org/r/1439554830-19502-3-git-send-email-grygorii.strashko@ti.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
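The helper most likely looks along these lines (a hedged sketch based on the description; the actual implementation lives in kernel/irq/chip.c): it simply forwards the trigger-type request one level up the irq domain hierarchy.

    #include <linux/irq.h>
    #include <linux/errno.h>

    int irq_chip_set_type_parent(struct irq_data *data, unsigned int type)
    {
            data = data->parent_data;

            if (data->chip->irq_set_type)
                    return data->chip->irq_set_type(data, type);

            return -ENOSYS;
    }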
2015-08-19  block: Replace SG_GAPS with new queue limits mask  (Keith Busch)
The SG_GAPS queue flag caused checks for bio vector alignment against PAGE_SIZE, but the device may have different constraints. This patch adds a queue limit so that a driver with such constraints can set it to allow requests that would otherwise have been unnecessarily split. The new gaps check takes the request_queue as a parameter to simplify the logic around invoking this function. The new limit makes the queue flag redundant, so remove it and all its usage. Device-mappers will inherit the correct settings through blk_stack_limits(). Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
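Assuming the new limit is a virtual-boundary mask set by the driver, here is a hedged sketch of the gap check and of how a driver might set the limit; the accessor names queue_virt_boundary()/blk_queue_virt_boundary() are assumptions based on the description, and foo_bvec_gap_to_prev() is illustrative:

    #include <linux/blkdev.h>

    /* Two bvecs "gap" if the second does not start exactly where the first
     * ended within the device's virtual boundary. */
    static inline bool foo_bvec_gap_to_prev(struct request_queue *q,
                                            struct bio_vec *bprv,
                                            unsigned int offset)
    {
            if (!queue_virt_boundary(q))
                    return false;           /* device has no constraint */

            return offset ||
                   ((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));
    }

    /* A driver with a 4K boundary (e.g. an NVMe-style PRP list) would set
     * something like: blk_queue_virt_boundary(q, 4096 - 1); */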
2015-08-18  blkcg: use CGROUP_WEIGHT_* scale for io.weight on the unified hierarchy  (Tejun Heo)
cgroup is trying to make interface consistent across different controllers. For weight based resource control, the knob should have the range [1, 10000] and default to 100. This patch updates cfq-iosched so that the weight range conforms. The internal calculations have enough range and the widening of the weight range shouldn't cause any problem. * blkcg_policy->cpd_bind_fn() is added. If present, this is invoked when blkcg is attached to a hierarchy. * cfq_cpd_init() is updated to use the new default value on the unified hierarchy. * cfq_cpd_bind() callback is implemented to clear per-blkg configs and apply the default config matching the hierarchy type. * cfqd->root_group->[leaf_]weight initialization in cfq_init_queue() is moved into !CONFIG_CFQ_GROUP_IOSCHED block. cfq_cpd_bind() is now responsible for initializing the initial weights when blkcg is enabled. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: implement interface for the unified hierarchy  (Tejun Heo)
The blkcg interface grew to be the biggest of all controllers and unfortunately the most inconsistent too. The interface files are inconsistent, with a number of close duplicates. Some files have recursive variants while others don't. There's a distinction between normal and leaf weights which isn't intuitive, and there are a lot of stat knobs which don't make much sense outside of debugging and expose too much implementation detail to userland. In the unified hierarchy, everything is always hierarchical and internal nodes can't have tasks, rendering the two structural issues twisting the current interface moot. The interface has to be updated in a significant way anyway and this is a good chance to revamp it as a whole. This patch implements the blkcg interface for the unified hierarchy. * (from a previous patch) blkcg is identified by "io" instead of "blkio" on the unified hierarchy. Given that the whole interface is updated anyway, the rename shouldn't carry noticeable conversion overhead. * The original interface consisting of 27 files is replaced with the following three files:
blkio.stat : per-blkcg stats
blkio.weight : per-cgroup and per-cgroup-queue weight settings
blkio.max : per-cgroup-queue bps and iops max limits
Documentation/cgroups/unified-hierarchy.txt is updated accordingly. v2: blkcg_policy->dfl_cftypes wasn't removed on blkcg_policy_unregister(), corrupting the cftypes list. Fixed. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: misc preparations for unified hierarchy interface  (Tejun Heo)
* Export blkg_dev_name()
* Drop unnecessary @cft from __cfq_set_weight().
Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: move body parsing from blkg_conf_prep() to its callers  (Tejun Heo)
Currently, blkg_conf_prep() expects the input to be of the form MAJ:MIN NUM and reads the NUM part into blkg_conf_ctx->v. This is quite restrictive and gets in the way of implementing the blkcg interface for the unified hierarchy. This patch updates blkg_conf_prep() so that it expects MAJ:MIN BODY_STR, where BODY_STR is an arbitrary string. blkg_conf_ctx->v is replaced with ->body, which is a char pointer pointing to the start of BODY_STR. Parsing of the body is moved to blkg_conf_prep()'s callers. To allow using, for example, strsep() on blkg_conf_ctx->body, it is a non-const pointer, and to accommodate that, const is dropped from @input too. This doesn't cause any behavior changes. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
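A hedged sketch of what a caller looks like after this change: it lets blkg_conf_prep() split off MAJ:MIN and then parses ctx.body itself. The surrounding function foo_set_weight(), the single-number body format and the header location are hypothetical; only the ctx.body / blkg_conf_finish() flow follows the description above.

    #include <linux/blk-cgroup.h>   /* assumed header location */
    #include <linux/kernel.h>
    #include <linux/string.h>

    static int foo_set_weight(struct blkcg *blkcg,
                              const struct blkcg_policy *pol, char *buf)
    {
            struct blkg_conf_ctx ctx;
            u64 v;
            int ret;

            ret = blkg_conf_prep(blkcg, pol, buf, &ctx);
            if (ret)
                    return ret;

            /* ctx.body points at the policy-specific BODY_STR */
            ret = kstrtou64(strsep(&ctx.body, " "), 0, &v);
            if (!ret) {
                    /* apply v to ctx.blkg's per-policy data here */
            }

            blkg_conf_finish(&ctx);
            return ret;
    }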
2015-08-18  blkcg: mark existing cftypes as legacy  (Tejun Heo)
blkcg is about to grow interface for the unified hierarchy. Add legacy to existing cftypes.
* blkcg_policy->cftypes -> blkcg_policy->legacy_cftypes
* blk-cgroup.c:blkcg_files -> blkcg_legacy_files
* cfq-iosched.c:cfq_blkcg_files -> cfq_blkcg_legacy_files
* blk-throttle.c:throtl_files -> throtl_legacy_files
Pure renames. No functional change. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: rename subsystem name from blkio to io  (Tejun Heo)
blkio interface has become messy over time and is currently the largest. In addition to the inconsistent naming scheme, it has multiple stat files which report more or less the same thing, a number of debug stat files which expose internal details which shouldn't have been part of the public interface in the first place, recursive and non-recursive stats and leaf and non-leaf knobs. Both recursive vs. non-recursive and leaf vs. non-leaf distinctions don't make any sense on the unified hierarchy as only leaf cgroups can contain processes. cgroups is going through a major interface revision with the unified hierarchy involving significant fundamental usage changes and given that a significant portion of the interface doesn't make sense anymore, it's a good time to reorganize the interface. As the first step, this patch renames the external visible subsystem name from "blkio" to "io". This is more concise, matches the other two major subsystem names, "cpu" and "memory", and better suited as blkcg will be involved in anything writeback related too whether an actual block device is involved or not. As the subsystem legacy_name is set to "blkio", the only userland visible change outside the unified hierarchy is that blkcg is reported as "io" instead of "blkio" in the subsystem initialized message during boot. On the unified hierarchy, blkcg now appears as "io". Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Li Zefan <lizefan@huawei.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: cgroups@vger.kernel.org Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: move io_service_bytes and io_serviced stats into blkcg_gq  (Tejun Heo)
Currently, both cfq-iosched and blk-throttle keep track of io_service_bytes and io_serviced stats. While keeping track of them separately may be useful during development, it doesn't make much sense otherwise. Also, blk-throttle was counting bios as IOs while cfq-iosched counted requests, which is more confusing than informative. This patch adds ->stat_bytes and ->stat_ios to blkg (blkcg_gq), removes the counterparts from cfq-iosched and blk-throttle and lets them print from the common blkg counters. The common counters are incremented during bio issue in blkcg_bio_issue_check(). The outputs are still filtered by whether the policy has blkg_policy_data on a given blkg, so cfq's output won't show up if it has never been used for a given blkg. The only times the outputs would differ significantly are when policies are attached on the fly or elevators are switched back and forth. Those are quite exceptional operations and I don't think they warrant keeping separate counters. v3: Update blkio-controller.txt accordingly. v2: Account IOs during bio issues instead of request completions so that bio-based drivers can be handled the same way. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: make blkg_[rw]stat_recursive_sum() to be able to index into blkcg_gq  (Tejun Heo)
Currently, blkg_[rw]stat_recursive_sum() assume that the target counter is located in pd (blkg_policy_data); however, some counters are planned to be moved to blkg (blkcg_gq). This patch updates blkg_[rw]stat_recursive_sum() to take blkg and blkg_policy pointers instead of pd. If policy is NULL, it indexes into blkg. If non-NULL, into the blkg's pd of the policy. The existing usages are updated to maintain the current behaviors. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: make blkcg_[rw]stat per-cpu  (Tejun Heo)
blkcg_[rw]stat are used as stat counters for blkcg policies. They aren't per-cpu by themselves, and blk-throttle makes them per-cpu by wrapping around them. This patch makes blkcg_[rw]stat per-cpu and drops the ad-hoc per-cpu wrapping in blk-throttle.
* blkg_[rw]stat->cnt is replaced with cpu_cnt, which is a struct percpu_counter. This makes syncp unnecessary as remote accesses are handled by percpu_counter itself.
* blkg_[rw]stat_init() can now fail due to percpu allocation failure and is thus updated to return int.
* percpu_counters need explicit freeing. blkg_[rw]stat_exit() added.
* As blkg_rwstat->cpu_cnt[] can't be read directly anymore, reading and summing results are stored in ->aux_cnt[] instead.
* The custom per-cpu stat implementation in blk-throttle is removed.
This makes all blkcg stat counters per-cpu without complicating policy implementations. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
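A hedged sketch of the per-cpu counter pair described above (field names follow the text; the exact definition and the counter count are assumptions):

    #include <linux/percpu_counter.h>
    #include <linux/atomic.h>

    #define FOO_BLKG_RWSTAT_NR      4       /* read/write x sync/async, assumed */

    struct foo_blkg_rwstat {
            /* hot per-cpu counters, bumped on every bio */
            struct percpu_counter   cpu_cnt[FOO_BLKG_RWSTAT_NR];
            /* auxiliary counts, e.g. sums carried over from dead children */
            atomic64_t              aux_cnt[FOO_BLKG_RWSTAT_NR];
    };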
2015-08-18  blkcg: add blkg_[rw]stat->aux_cnt and replace cfq_group->dead_stats with it  (Tejun Heo)
cgroup stats are local to each cgroup and don't propagate to ancestors by default. When recursive stats are necessary, the sum is calculated over all the descendants. This initially was for backward compatibility to support both group-local and recursive stats, but this mode of operation makes general sense as stat update is much hotter than reporting those stats. This however ends up losing recursive stats when a child is removed. To work around this, cfq-iosched adds its stats to its parent cfq_group->dead_stats which is summed up together when calculating recursive stats. It's planned that the core stats will be moved to blkcg_gq, so we want to move the mechanism for keeping track of the stats of dead children from cfq to blkcg core. This patch adds blkg_[rw]stat->aux_cnt which are atomic64_t's keeping track of auxiliary counts which are excluded when reading local counts but included for recursive. blkg_[rw]stat_merge() which were used by cfq to implement dead_stats are replaced by blkg_[rw]stat_add_aux(), and cfq now forwards stats of a dead cgroup to the aux counts of parent->stats instead of separate ->dead_stats. This will also help making blkg_[rw]stats per-cpu. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: consolidate blkg creation in blkcg_bio_issue_check()  (Tejun Heo)
blkg (blkcg_gq) currently is created by blkcg policies invoking blkg_lookup_create() which ends up repeating about the same code in different policies. Theoretically, this can avoid the overhead of looking up and/or creating blkg's if blkcg is enabled but no policy is in use; however, the cost of blkg lookup / creation is very low, especially if only the root blkcg is in use, which is highly likely if no blkcg policy is in active use - it boils down to a single very predictable conditional and surrounding RCU protection. This patch consolidates blkg creation to a new function blkcg_bio_issue_check() which is called during bio issue from generic_make_request_checks(). blkcg_bio_issue_check() is now the only function which tries to create missing blkg's. The subsequent policy and request_list operations just perform blkg_lookup() and if missing fall back to the root.
* blk_get_rl() no longer tries to create blkg. It uses blkg_lookup() instead of blkg_lookup_create().
* blk_throtl_bio() is now called from blkcg_bio_issue_check() with rcu read locked and blkg already looked up. Both throtl_lookup_tg() and throtl_lookup_create_tg() are dropped.
* cfq is similarly updated. cfq_lookup_create_cfqg() is replaced with cfq_lookup_cfqg(), which uses blkg_lookup().
This consolidates blkg handling and avoids unnecessary blkg creation retries under memory pressure. In addition, this provides a common bio entry point into blkcg where things like common accounting can be performed. v2: Build fixes for !CONFIG_CFQ_GROUP_IOSCHED and !CONFIG_BLK_DEV_THROTTLING. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: move root blkg lookup optimization from throtl_lookup_tg() to __blkg_lookup()  (Tejun Heo)
Currently, both throttle and cfq policies implement their own root blkg (blkcg_gq) lookup fast path. This patch moves root blkg optimization from throtl_lookup_tg() to __blkg_lookup(). cfq-iosched currently doesn't use blkg_lookup() but will be converted and drop the optimization too. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: inline [__]blkg_lookup()  (Tejun Heo)
blkg_lookup() checks whether the target queue is bypassing and, if not, calls __blkg_lookup(), which first checks the lookup hint and then performs a radix tree walk. The operations up to hint checking are trivial and there are many users of this function. This patch inlines blkg_lookup() and the fast path part of __blkg_lookup(). The radix tree lookup and hint update are now in blkg_lookup_slowpath(). This will help consolidating blkg handling by easing moving the root blkcg short-circuit to the inlined lookup fast path. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
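A hedged sketch of the inlined fast path described above (the hint field name and the bypass check are taken from the text; the real code in the blkcg header may differ in detail):

    #include <linux/blkdev.h>
    #include <linux/rcupdate.h>
    #include <linux/blk-cgroup.h>   /* assumed header location */

    static inline struct blkcg_gq *foo_blkg_lookup(struct blkcg *blkcg,
                                                   struct request_queue *q)
    {
            struct blkcg_gq *blkg;

            WARN_ON_ONCE(!rcu_read_lock_held());

            if (unlikely(blk_queue_bypass(q)))
                    return NULL;

            /* fast path: per-blkcg lookup hint updated by slow lookups */
            blkg = rcu_dereference(blkcg->blkg_hint);
            if (blkg && blkg->q == q)
                    return blkg;

            /* radix tree walk and hint update now live out of line */
            return blkg_lookup_slowpath(blkcg, q, false);
    }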
2015-08-18  blkcg: replace blkcg_policy->cpd_size with ->cpd_alloc/free_fn() methods  (Tejun Heo)
Each active policy has a cpd (blkcg_policy_data) on each blkcg. The cpd's were allocated by blkcg core and each policy could request to allocate extra space at the end by setting blkcg_policy->cpd_size larger than the size of cpd. This is a bit unusual but blkg (blkcg_gq) policy data used to be handled this way too so it made sense to be consistent; however, blkg policy data switched to alloc/free callbacks. This patch makes similar changes to cpd handling. blkcg_policy->cpd_alloc/free_fn() are added to replace ->cpd_size. As cpd allocation is now done from policy side, it can simply allocate a larger area which embeds cpd at the beginning. As ->cpd_alloc_fn() may be able to perform all necessary initializations, this patch makes ->cpd_init_fn() optional. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2015-08-18  blkcg: minor updates around blkcg_policy_data  (Tejun Heo)
* Rename blkcg->pd[] to blkcg->cpd[] so that cpd is consistently used for blkcg_policy_data.
* Make blkcg_policy->cpd_init_fn() take blkcg_policy_data instead of blkcg. This makes it consistent with blkg_policy_data methods and to-be-added cpd alloc/free methods.
* blkcg_policy_data->blkcg and cpd_to_blkcg() added so that cpd_init_fn() can determine the associated blkcg from blkcg_policy_data.
v2: blkcg_policy_data->blkcg initializations were missing. Added.
Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>