path: root/include
Age    Commit message    Author
2020-07-27  btrfs: add filesystem generation to FS_INFO ioctl  (Johannes Thumshirn)
Add retrieval of the filesystem's generation to the fsinfo ioctl. This is driven by setting the BTRFS_FS_INFO_FLAG_GENERATION flag in btrfs_ioctl_fs_info_args::flags. Reviewed-by: Nikolay Borisov <nborisov@suse.com> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  btrfs: pass checksum type via BTRFS_IOC_FS_INFO ioctl  (Johannes Thumshirn)
With the recent addition of filesystem checksum types other than CRC32c, it is no longer hard-coded which checksum type a btrfs filesystem uses. Up to now there is no good way to read the filesystem checksum, apart from reading the filesystem UUID and then querying sysfs for the checksum type. Add new csum_type and csum_size fields to the BTRFS_IOC_FS_INFO ioctl command which usually is used to query filesystem features. Also add a flags member indicating that the kernel responded with set csum_type and csum_size fields. For compatibility reasons, only return the csum_type and csum_size if the BTRFS_FS_INFO_FLAG_CSUM_INFO flag was passed to the kernel. Also clear any unknown flags so we don't pass false positives to user space newer than the kernel. To simplify further additions to the ioctl, also switch the padding to a u8 array. The csum members are added before flags, which might look odd, but this is to keep the alignment requirements and not introduce holes in the structure. Pahole was used to verify the result of this switch:

  $ pahole -C btrfs_ioctl_fs_info_args fs/btrfs/btrfs.ko
  struct btrfs_ioctl_fs_info_args {
          __u64   max_id;             /*    0     8 */
          __u64   num_devices;        /*    8     8 */
          __u8    fsid[16];           /*   16    16 */
          __u32   nodesize;           /*   32     4 */
          __u32   sectorsize;         /*   36     4 */
          __u32   clone_alignment;    /*   40     4 */
          __u16   csum_type;          /*   44     2 */
          __u16   csum_size;          /*   46     2 */
          __u64   flags;              /*   48     8 */
          __u8    reserved[968];      /*   56   968 */

          /* size: 1024, cachelines: 16, members: 10 */
  };

Fixes: 3951e7f050ac ("btrfs: add xxhash64 to checksumming algorithms") Fixes: 3831bf0094ab ("btrfs: add sha256 to checksumming algorithm") CC: stable@vger.kernel.org # 5.5+ Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
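A minimal user-space sketch of how the extended ioctl could be consumed, assuming a uapi linux/btrfs.h that already carries the new fields; "/mnt/btrfs" is a placeholder mount point and error handling is kept to a minimum:

  /* Hedged sketch: query the filesystem checksum type via BTRFS_IOC_FS_INFO. */
  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/btrfs.h>

  int main(void)
  {
          struct btrfs_ioctl_fs_info_args args;
          int fd = open("/mnt/btrfs", O_RDONLY);          /* placeholder path */

          if (fd < 0)
                  return 1;

          memset(&args, 0, sizeof(args));
          args.flags = BTRFS_FS_INFO_FLAG_CSUM_INFO;      /* opt in to csum info */

          /* the kernel leaves the flag set only if it filled the csum fields */
          if (!ioctl(fd, BTRFS_IOC_FS_INFO, &args) &&
              (args.flags & BTRFS_FS_INFO_FLAG_CSUM_INFO))
                  printf("csum_type=%u csum_size=%u\n",
                         (unsigned int)args.csum_type,
                         (unsigned int)args.csum_size);

          close(fd);
          return 0;
  }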
2020-07-27  btrfs: use __u16 for the return value of btrfs_qgroup_level()  (Qu Wenruo)
The qgroup level is limited to u16, so no need to use u64 for it. Signed-off-by: Qu Wenruo <wqu@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
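For reference, a sketch of what the helper amounts to after this change; the shift constant comes from the btrfs uapi headers, and the exact body may differ slightly from the patch:

  /* sketch: qgroup IDs encode the level in the top 16 bits */
  static inline __u16 btrfs_qgroup_level(__u64 qgroupid)
  {
          return (__u16)(qgroupid >> BTRFS_QGROUP_LEVEL_SHIFT);   /* shift is 48 */
  }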
2020-07-27  btrfs: tracepoints: convert flush states to using EM macros  (Nikolay Borisov)
Only 6 out of all flush states were being printed correctly since only they were exported via the TRACE_DEFINE_ENUM macro. This patch converts all flush states to use the newly introduced EM macro so that they can all be printed correctly. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
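For illustration, a hedged sketch of the EM/EMe pattern referred to here; the FOO_* enum values are hypothetical stand-ins, not the actual btrfs flush states. Each value is first registered with TRACE_DEFINE_ENUM so tooling can resolve it, and the same list is then reused as the __print_symbolic() table:

  /* Hedged sketch of the EM pattern; the FOO_* values are made up. */
  #define FLUSH_STATES                                            \
          EM(FOO_FLUSH_DELAYED_ITEMS,  "FLUSH_DELAYED_ITEMS")     \
          EM(FOO_FLUSH_DELALLOC,       "FLUSH_DELALLOC")          \
          EMe(FOO_COMMIT_TRANS,        "COMMIT_TRANS")

  /* First expansion: export each enum value to user space. */
  #undef EM
  #undef EMe
  #define EM(a, b)        TRACE_DEFINE_ENUM(a);
  #define EMe(a, b)       TRACE_DEFINE_ENUM(a);
  FLUSH_STATES

  /* Second expansion: build the lookup table for __print_symbolic(). */
  #undef EM
  #undef EMe
  #define EM(a, b)        {a, b},
  #define EMe(a, b)       {a, b}

  #define show_flush_state(state) __print_symbolic(state, FLUSH_STATES)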
2020-07-27  btrfs: tracepoints: switch extent_io_tree_owner to using EM macro  (Nikolay Borisov)
This ensures a correct print out of the extent io tree owner in the btrfs_set_extent_bit/btrfs_clear_extent_bit/btrfs_convert_extent_bit tracepoints. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  btrfs: tracepoints: fix qgroup reservation type printing  (Nikolay Borisov)
Since qgroup's reservation types are defined in a macro they must be exported to user space in order for user space tools to convert raw binary data to symbolic names. Currently trace-cmd report produces the following output:

  kworker/u8:2-459 [003] 1208.543587: qgroup_update_reserve: 2b742cae-e0e5-4def-9ef7-28a9b34a951e: qgid=5 type=0x2 cur_reserved=54870016 diff=-32768

With this fix the output is:

  kworker/u8:2-459 [003] 1208.543587: qgroup_update_reserve: 2b742cae-e0e5-4def-9ef7-28a9b34a951e: qgid=5 type=BTRFS_QGROUP_RSV_META_PREALLOC cur_reserved=54870016 diff=-32768

Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  btrfs: tracepoints: move FLUSH_ACTIONS define  (Nikolay Borisov)
Since all enums used in btrfs' tracepoints are going to be redefined to allow proper parsing of their values by userspace tools, let's rearrange when they are defined. This allows using only a single #define EM/#undef EM sequence. No functional changes. Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  btrfs: tracepoints: fix extent type symbolic name print  (Nikolay Borisov)
An extent's type is an enum and this requires that the enum values be exported to user space so that user space tools can correctly map raw binary data to the symbolic name. Currently tracepoints using btrfs__file_extent_item_regular or btrfs__file_extent_item_inline result in the following output:

  fio-443 [002] 586.609450: btrfs_get_extent_show_fi_regular: f0c3bf8e-0174-4bcc-92aa-6c2d62430420:i root=5(FS_TREE) inode=258 size=2136457216 disk_isize=0 file extent range=[2126946304 2136457216] (num_bytes=9510912 ram_bytes=9510912 disk_bytenr=0 disk_num_bytes=0 extent_offset=0 type=0x1 compression=0

I.e. the type is printed as the raw value 0x1. With this patch applied the output is:

  <omitted for brevity> disk_bytenr=141348864 disk_num_bytes=4096 extent_offset=0 type=REG compression=0

Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  btrfs: tracepoints: fix btrfs_trigger_flush symbolic string for flags  (Nikolay Borisov)
When tracepoints use __print_symbolic to print a textual representation of a value that comes from an enum, each enum value needs to be exported to user space so that user space tools can convert the binary value data to the strings, as user space does not know what those enums are about. Doing a trace-cmd record && trace-cmd report currently results in:

  kworker/u8:1-61 [000] 66.299527: btrfs_flush_space: 5302ee13-c65e-45bb-98ef-8fe3835bd943: state=3(0x3) flags=4(METADATA) num_bytes=2621440 ret=0

I.e. state is not translated to its symbolic counterpart. With this patch applied the output is:

  fio-370 [002] 56.762402: btrfs_trigger_flush: d04cd7ac-38e2-452f-a7f5-8157529fd5f0: preempt: flush=3(BTRFS_RESERVE_FLUSH_ALL) flags=4(METADATA) bytes=655360

See also 190f0b76ca49 ("mm: tracing: Export enums in tracepoints to user space"). Signed-off-by: Nikolay Borisov <nborisov@suse.com> Reviewed-by: David Sterba <dsterba@suse.com> Signed-off-by: David Sterba <dsterba@suse.com>
2020-07-27  Merge 5.8-rc7 into staging-next  (Greg Kroah-Hartman)
We need the staging fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  Merge 5.8-rc7 into tty-next  (Greg Kroah-Hartman)
We need the tty/serial fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  Merge 5.8-rc7 into driver-core-next  (Greg Kroah-Hartman)
We want the driver core fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  Merge back cpufreq material for v5.9.  (Rafael J. Wysocki)
2020-07-27  ACPICA: Preserve memory opregion mappings  (Rafael J. Wysocki)
ACPICA's strategy with respect to the handling of memory mappings associated with memory operation regions is to avoid mapping the entire region at once, which may be problematic at least in principle (for example, it may lead to conflicts with overlapping mappings having different attributes created by drivers). It may also be wasteful, because memory opregions on some systems take up vast chunks of address space while the fields in those regions actually accessed by AML are sparsely distributed.

For this reason, a one-page "window" is mapped for a given opregion on the first memory access through it and if that "window" does not cover an address range accessed through that opregion subsequently, it is unmapped and a new "window" is mapped to replace it. Next, if the new "window" is not sufficient to access memory through the opregion in question in the future, it will be replaced with yet another "window" and so on. That may lead to a suboptimal sequence of memory mapping and unmapping operations, for example if two fields in one opregion separated from each other by a sufficiently wide chunk of unused address space are accessed in an alternating pattern.

The situation may still be suboptimal if the deferred unmapping introduced previously is supported by the OS layer. For instance, the alternating memory access pattern mentioned above may produce a relatively long list of mappings to release with substantial duplication among the entries in it, which could be avoided if acpi_ex_system_memory_space_handler() did not release the mapping used by it previously as soon as the current access was not covered by it.

In order to improve that, modify acpi_ex_system_memory_space_handler() to preserve all of the memory mappings created by it until the memory regions associated with them go away. Accordingly, update acpi_ev_system_memory_region_setup() to unmap all memory associated with memory opregions that go away.

Reported-by: Dan Williams <dan.j.williams@intel.com> Tested-by: Xiang Li <xiang.z.li@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2020-07-27Revert "test_firmware: Test platform fw loading on non-EFI systems"Greg Kroah-Hartman
This reverts commit 2d38dbf89a06d0f689daec9842c5d3295c49777f as it broke the build in linux-next Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Fixes: 2d38dbf89a06 ("test_firmware: Test platform fw loading on non-EFI systems") Cc: stable@vger.kernel.org Cc: Scott Branden <scott.branden@broadcom.com> Cc: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20200727165539.0e8797ab@canb.auug.org.au Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  Merge 5.8-rc7 into char-misc-next  (Greg Kroah-Hartman)
This should resolve the merge/build issues reported when trying to create linux-next. Reported-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  dmaengine: dw: Don't include unneeded header to platform data header  (Andy Shevchenko)
Including device.h is too much for the dma-dw.h platform data header. Replace it with the headers of which dma-dw.h is a direct user. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200721130844.64162-1-andriy.shevchenko@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27  dmaengine: idxd: add missing invalid flags field to completion  (Dave Jiang)
Add missing "invalid flags" field to completion record struct. Reported-by: Nikhil Rao <nikhil.rao@intel.com> Signed-off-by: Dave Jiang <dave.jiang@intel.com> Link: https://lore.kernel.org/r/159526025819.49266.13176787210106133664.stgit@djiang5-desk3.ch.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27  dmaengine: dw: Introduce max burst length hw config  (Serge Semin)
The IP core of the DW DMA controller may be synthesized with a different max burst length of the transfers per each channel. According to Synopsys, having a fixed maximum burst transaction length may provide some performance gain. At the same time, setting up source and destination multi-size values exceeding the max burst length limitation may cause serious problems; in our case the DMA transaction just hangs up. In order to fix this, let's introduce a max burst length platform config for the DW DMA controller device and don't let the DMA channel configuration code exceed the burst length hardware limitation. Note the maximum burst length parameter can be detected either at runtime from the DWC parameter registers or from a dedicated DT property. Depending on the IP core configuration the maximum value can vary from channel to channel, so by overriding the channel slave max_burst capability we make sure a DMA consumer will get the channel-specific max burst length. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200723005848.31907-10-Sergey.Semin@baikalelectronics.ru Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27  dmaengine: dw: Initialize min and max burst DMA device capability  (Serge Semin)
According to the DW APB DMAC data book the minimum burst transaction length is 1, and that is true for any version of the controller since it isn't parametrised in coreAssembler and so can't be changed at the IP-core synthesis stage. The maximum burst transaction length can vary from channel to channel and from controller to controller depending on an IP-core parameter the system engineer activated during the IP-core synthesis. Let's initialise both the min_burst and max_burst members of the DMA controller descriptor with these extreme values so the DMA clients can use them to properly optimize their DMA requests. The channel- and controller-specific max_burst length initialization will be introduced by the follow-up patches. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200723005848.31907-9-Sergey.Semin@baikalelectronics.ru Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27  dmaengine: Introduce DMA-device device_caps callback  (Serge Semin)
There are DMA devices (like our version of the Synopsys DW DMAC) which have DMA capabilities non-uniformly distributed between the device channels. In order to provide a way of exposing the channel-specific parameters to the DMA engine consumers, we introduce a new DMA-device callback. If provided, it gets called from the dma_get_slave_caps() method and is able to override the generic DMA-device capabilities. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200723005848.31907-6-Sergey.Semin@baikalelectronics.ru Signed-off-by: Vinod Koul <vkoul@kernel.org>
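A hedged sketch of how a controller driver might implement such a callback; only the struct dma_chan/struct dma_slave_caps plumbing is taken from the dmaengine API, while dwc_chan_max_burst() and the registration site are made-up placeholders:

  /* Sketch: override the generic caps with channel-specific values.
   * dwc_chan_max_burst() is a hypothetical per-channel lookup.
   */
  static void dw_dma_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
  {
          caps->max_burst = dwc_chan_max_burst(chan);
  }

  /* hooked up somewhere in the controller probe path, e.g.:
   *      dw->dma.device_caps = dw_dma_caps;
   */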
2020-07-27  dmaengine: Introduce max SG burst capability  (Serge Semin)
Some devices may lack support for the hardware-accelerated automatic walking through and execution of SG list entries. In this case the burden of SG list traversal and DMA engine re-initialization lies on the DMA engine driver (normally implemented by using a DMA transfer completion IRQ to recharge the DMA device with the next SG list entry). But such a solution may not be suitable for some DMA consumers. In particular, SPI devices need both Tx and Rx DMA channels to work synchronously in order to avoid an Rx FIFO overflow. If the Rx DMA channel is paused for some time while the Tx DMA channel keeps working, implicitly pulling data into the Rx FIFO, the latter will eventually overflow, which will cause data loss. So if SG list entries aren't automatically fetched by the DMA engine, but are one-by-one manually selected for execution in the ISRs/deferred work/etc., such a problem will eventually happen due to the non-deterministic latencies of the service execution.

In order to let the DMA consumer know about the DMA device capabilities regarding hardware-accelerated SG list traversal, we introduce the max_sg_burst capability. It is supposed to be initialized by the DMA engine driver with 0 if there is no limitation on the number of SG entries atomically executed, and with a non-zero value if there is such a constraint, in which case the upper limit is determined by that value. Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200723005848.31907-5-Sergey.Semin@baikalelectronics.ru Signed-off-by: Vinod Koul <vkoul@kernel.org>
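On the consumer side, the new capability can be read back through dma_get_slave_caps(); a minimal sketch, where the zero-means-unlimited convention follows the text above and the fallback policy is purely illustrative:

  /* Sketch: check whether a channel can walk the whole SG list in hardware. */
  static bool chan_handles_full_sg(struct dma_chan *chan, unsigned int nents)
  {
          struct dma_slave_caps caps;

          if (dma_get_slave_caps(chan, &caps))
                  return false;

          /* 0 means no limit on atomically executed SG entries */
          return !caps.max_sg_burst || nents <= caps.max_sg_burst;
  }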
2020-07-27  dmaengine: Introduce min burst length capability  (Serge Semin)
Aside from the default of 0/1, some hardware may have greater minimum burst transaction length constraints. Here we introduce the DMA device and slave capability, which, if required, can be initialized by the DMA engine driver with the device-specific value. Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20200723005848.31907-4-Sergey.Semin@baikalelectronics.ru Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-07-27  printk: Make linux/printk.h self-contained  (Herbert Xu)
As it stands if you include printk.h by itself it will fail to compile because it requires definitions from ratelimit.h. However, simply including ratelimit.h from printk.h does not work due to inclusion loops involving sched.h and kernel.h. This patch solves this by moving bits from ratelimit.h into a new header file which can then be included by printk.h without any worries about header loops. The build bot then revealed some intriguing failures arising out of this patch. On s390 there is an inclusion loop with asm/bug.h and linux/kernel.h that triggers a compile failure, because kernel.h will cause asm-generic/bug.h to be included before s390's own asm/bug.h has finished processing. This has been fixed by not including kernel.h in arch/s390/include/asm/bug.h. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Acked-by: Petr Mladek <pmladek@suse.com> Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Link: https://lore.kernel.org/r/20200721062248.GA18383@gondor.apana.org.au
2020-07-27  io: Fix return type of _inb and _inl  (Stafford Horne)
The return types of the functions _inb, _inw and _inl are all u16, which is wrong. This patch makes them u8, u16 and u32 respectively. The original commit text for these does not indicate that they should all be forced to u16. Fixes: f009c89df79a ("io: Provide _inX() and _outX()") Signed-off-by: Stafford Horne <shorne@gmail.com> Reviewed-by: John Garry <john.garry@huawei.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
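For clarity, a sketch of the corrected signatures (declarations only; the real definitions live in asm-generic/io.h and read from PCI_IOBASE via the __raw_read*() accessors):

  #include <linux/types.h>

  /* Returning u16 from _inl() would silently truncate a 32-bit port read. */
  static inline u8  _inb(unsigned long addr);     /* was: u16 */
  static inline u16 _inw(unsigned long addr);     /* unchanged */
  static inline u32 _inl(unsigned long addr);     /* was: u16 */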
2020-07-27  irqchip: Fix IRQCHIP_PLATFORM_DRIVER_* compilation by including module.h  (Marc Zyngier)
The newly introduced IRQCHIP_PLATFORM_DRIVER_* macros expand into module-related macros, but do so without including module.h. Depending on the driver and/or architecture, this happens to work, or not. Unconditionally include linux/module.h to sort it out. Fixes: f3b5e608ed6d ("irqchip: Add IRQCHIP_PLATFORM_DRIVER_BEGIN/END and IRQCHIP_MATCH helper macros") Reported-by: kernel test robot <lkp@intel.com> Cc: Saravana Kannan <saravanak@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org>
2020-07-27  irqchip: Add IRQCHIP_PLATFORM_DRIVER_BEGIN/END and IRQCHIP_MATCH helper macros  (Saravana Kannan)
Compiling an irqchip driver as a platform driver needs a bunch of things to be done right:

  - Making sure the parent domain is initialized first
  - Making sure the device can't be unbound from sysfs
  - Disallowing module unload if it's built as a module
  - Finding the parent node
  - Etc.

Instead of trying to make sure all future irqchip platform drivers get this right, provide boilerplate macros that take care of all of this. An example use would look something like this, where acme_foo_init and acme_bar_init are similar to what would be passed to IRQCHIP_DECLARE:

  IRQCHIP_PLATFORM_DRIVER_BEGIN(acme_irq)
  IRQCHIP_MATCH("acme,foo", acme_foo_init)
  IRQCHIP_MATCH("acme,bar", acme_bar_init)
  IRQCHIP_PLATFORM_DRIVER_END(acme_irq)

Signed-off-by: Saravana Kannan <saravanak@google.com> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: John Stultz <john.stultz@linaro.org> Link: https://lore.kernel.org/r/20200718000637.3632841-2-saravanak@google.com
2020-07-27  irqchip: irq-bcm2836.h: drop a duplicated word  (Randy Dunlap)
Drop the repeated word "the" in a comment. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Marc Zyngier <maz@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Jason Cooper <jason@lakedaemon.net> Cc: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200719002853.20419-1-rdunlap@infradead.org
2020-07-27  irqchip/gic-v3: Remove unused register definition  (Zenghui Yu)
[maz: The GICv3 spec has evolved quite a bit since the draft the Linux driver was written against, and some register definitions are simply gone] As per the GICv3 specification, GIC{D,R}_SEIR are not assigned and their locations (0x0068) are actually Reserved. GICR_MOV{LPI,ALL}R are two IMP DEF registers and might be defined by some specific micro-architecture. As they're not used anywhere in the kernel, just drop all of them. Signed-off-by: Zenghui Yu <yuzenghui@huawei.com> [maz: added context explanation] Signed-off-by: Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/20200630134126.880-1-yuzenghui@huawei.com
2020-07-27  Merge 5.8-rc7 into usb-next  (Greg Kroah-Hartman)
We want the USB fixes in here as well. Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-27  gpio: regmap: fix type clash  (Michael Walle)
GPIO_REGMAP_ADDR_ZERO() casts to unsigned long, but the actual config parameters are unsigned int. We use unsigned int here because that is the type used by the underlying regmap. Fixes: ebe363197e52 ("gpio: add a reusable generic gpio_chip using regmap") Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Michael Walle <michael@walle.cc> Link: https://lore.kernel.org/r/20200725232337.27581-1-michael@walle.cc Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2020-07-27  power: fix duplicated words in bq2415x_charger.h  (Randy Dunlap)
Drop the doubled word "for". Change "It it" to "If it". Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: linux-pm@vger.kernel.org Acked-by: Pali Rohár <pali@kernel.org> Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
2020-07-26  Merge branch 'x86/urgent' into x86/cleanups  (Ingo Molnar)
Refresh the branch for a dependent commit. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2020-07-26  entry: Correct __secure_computing() stub  (Thomas Gleixner)
The original version of the stub used secure_computing(), which has no arguments. Review requested a switch to __secure_computing(), which has one. The function name was corrected, but no argument was added, and of course compiling without SECCOMP was deemed overrated. Add the missing function argument. Fixes: 6823ecabf030 ("seccomp: Provide stub for __secure_computing()") Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
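A sketch of what the corrected stub amounts to, assuming the usual !CONFIG_SECCOMP layout of seccomp.h:

  /* sketch: the stub must mirror the real __secure_computing(sd) signature */
  #ifndef CONFIG_SECCOMP
  static inline int __secure_computing(const struct seccomp_data *sd)
  {
          return 0;
  }
  #endif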
2020-07-27  powerpc/pseries: Implement paravirt qspinlocks for SPLPAR  (Nicholas Piggin)
This implements the generic paravirt qspinlocks using H_PROD and H_CONFER to kick and wait. This uses an un-directed yield to any CPU rather than the directed yield to a pre-empted lock holder that paravirtualised simple spinlocks use, that requires no kick hcall. This is something that could be investigated and improved in future. Performance results can be found in the commit which added queued spinlocks. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131423.1362108-5-npiggin@gmail.com
2020-07-27  powerpc/64s: Implement queued spinlocks and rwlocks  (Nicholas Piggin)
These have shown significantly improved performance and fairness when spinlock contention is moderate to high on very large systems. With this series including subsequent patches, on a 16 socket 1536 thread POWER9, a stress test such as same-file open/close from all CPUs gets big speedups, 11620op/s aggregate with simple spinlocks vs 384158op/s (33x faster), where the difference in throughput between the fastest and slowest thread goes from 7x to 1.4x. Thanks to the fast path being identical in terms of atomics and barriers (after a subsequent optimisation patch), single threaded performance is not changed (no measurable difference).

On smaller systems, performance and fairness seem to be generally improved. Using dbench on tmpfs as a test (that starts to run into kernel spinlock contention), a 2-socket OpenPOWER POWER9 system was tested with bare metal and KVM guest configurations. Results can be found here: https://github.com/linuxppc/issues/issues/305#issuecomment-663487453

Observations are:
  - Queued spinlocks are equal when contention is insignificant, as expected and as measured with microbenchmarks.
  - When there is contention, on bare metal queued spinlocks have better throughput and max latency at all points.
  - When virtualised, queued spinlocks are slightly worse approaching peak throughput, but significantly better throughput and max latency at all points beyond peak, until queued spinlock maximum latency rises when clients are 2x vCPUs.

The regressions haven't been analysed very well yet, there are a lot of things that can be tuned, particularly the paravirtualised locking, but the numbers already look like a good net win even on relatively small systems. Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131423.1362108-4-npiggin@gmail.com
2020-07-25  bpf, xdp: Remove XDP_QUERY_PROG and XDP_QUERY_PROG_HW XDP commands  (Andrii Nakryiko)
Now that BPF program/link management is centralized in generic net_device code, kernel code never queries the program id from drivers, so the XDP_QUERY_PROG/XDP_QUERY_PROG_HW commands are unnecessary. This patch removes all the implementations of those commands in the kernel, along with xdp_attachment_query(). This patch was compile-tested on allyesconfig. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-10-andriin@fb.com
2020-07-25  bpf: Implement BPF XDP link-specific introspection APIs  (Andrii Nakryiko)
Implement XDP link-specific show_fdinfo and link_info to emit ifindex. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-7-andriin@fb.com
2020-07-25  bpf, xdp: Add bpf_link-based XDP attachment API  (Andrii Nakryiko)
Add a bpf_link-based API (bpf_xdp_link) to attach a BPF XDP program through the BPF_LINK_CREATE command. bpf_xdp_link is mutually exclusive with direct BPF program attachment; the previous BPF program should be detached prior to attempting to create a new bpf_xdp_link attachment (for a given XDP mode). Once a BPF link is attached, it can't be replaced by another BPF program attachment or link attachment. It will be detached only when the last BPF link FD is closed. bpf_xdp_link will be auto-detached when the net_device is shut down, similarly to how other BPF links behave (cgroup, flow_dissector). At that point the bpf_link will become defunct, but won't be destroyed until the last FD is closed. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-5-andriin@fb.com
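From user space, the new attachment mode can be exercised through libbpf's low-level bpf_link_create() wrapper; a hedged sketch assuming a libbpf and uapi recent enough to know BPF_XDP, where "eth0" is a placeholder and prog_fd comes from a prior program load:

  #include <net/if.h>
  #include <bpf/bpf.h>

  /* Sketch: create a bpf_link attaching an already-loaded XDP program. */
  static int attach_xdp_link(int prog_fd)
  {
          int ifindex = if_nametoindex("eth0");   /* placeholder interface */

          if (!ifindex)
                  return -1;

          /* target_fd carries the ifindex for XDP links; the returned link FD
           * keeps the attachment alive until the last FD is closed.
           */
          return bpf_link_create(prog_fd, ifindex, BPF_XDP, NULL);
  }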
2020-07-25  bpf, xdp: Maintain info on attached XDP BPF programs in net_device  (Andrii Nakryiko)
Instead of delegating to drivers, maintain information about which BPF programs are attached in which XDP modes (generic/skb, driver, or hardware) locally in net_device. This effectively obsoletes the XDP_QUERY_PROG command. Such re-organization simplifies existing code already, but it also allows further adding bpf_link-based XDP attachments without drivers having to know about any of this at all, which seems like a good setup. XDP_SETUP_PROG/XDP_SETUP_PROG_HW are just low-level commands to the driver to install/uninstall the active BPF program. All the higher-level concerns about prog/link interaction will be contained within generic driver-agnostic logic. All the XDP_QUERY_PROG calls to drivers in dev_xdp_uninstall() were removed. It's not clear to me why dev_xdp_uninstall() was passing the previous prog_flags when resetting installed programs. That seems unnecessary, plus most drivers don't populate prog_flags anyway. Having XDP_SETUP_PROG vs XDP_SETUP_PROG_HW should be enough of an indicator of what is required of the driver to correctly reset the active BPF program. dev_xdp_uninstall() is also generalized as an iteration over all three supported modes. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-3-andriin@fb.com
2020-07-25  bpf: Make bpf_link API available independently of CONFIG_BPF_SYSCALL  (Andrii Nakryiko)
Similarly to bpf_prog, make bpf_link and related generic API available unconditionally to make it easier to have bpf_link support in various parts of the kernel. Stub out init/prime/settle/cleanup and inc/put APIs. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200722064603.3350758-2-andriin@fb.com
2020-07-25  bpf: Make cgroup storages shared between programs on the same cgroup  (YiFei Zhu)
This change comes in several parts:

One, the restriction that the CGROUP_STORAGE map can only be used by one program is removed. This results in the removal of the field 'aux' in struct bpf_cgroup_storage_map, the removal of the relevant code associated with the field, and the removal of the now-noop functions bpf_free_cgroup_storage and bpf_cgroup_storage_release.

Second, we permit a key of type u64 as the key to the map. Providing such a key type indicates that the map should ignore the attach type when comparing map keys. However, for simplicity, newly linked storage will still have the attach type at link time in its key struct. cgroup_storage_check_btf is adapted to accept u64 as the type of the key.

Third, because the storages are now shared, the storages cannot be unconditionally freed on program detach. There could be two ways to solve this issue:
  * A. Reference count the usage of the storages, and free when the last program is detached.
  * B. Free only when the storage is impossible to be referred to again, i.e. when either the cgroup_bpf it is attached to, or the map itself, is freed.
Option A has the side effect that, when the user detaches and reattaches a program, whether the program gets a fresh storage depends on whether there is another program attached using that storage. This could trigger races if the user is multi-threaded, and since nondeterminism in data races is evil, go with option B. Both the map and the cgroup_bpf now track their associated storages, and the storage unlink and free are removed from cgroup_bpf_detach and added to cgroup_bpf_release and cgroup_storage_map_free. The latter now also holds the cgroup_mutex to prevent any races with the former.

Fourth, on attach, we reuse the old storage if the key already exists in the map, via cgroup_storage_lookup. If the storage does not exist yet, we create a new one, and publish it at the last step in the attach process. This does not create a race condition because the cgroup_mutex is held for the whole attach. We keep track of an array of new storages that were allocated, and if the process fails only the new storages get freed.

Signed-off-by: YiFei Zhu <zhuyifei@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/d5401c6106728a00890401190db40020a1f84ff1.1595565795.git.zhuyifei@google.com
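A hedged BPF-side sketch of the u64-keyed (shared) storage described above; the value struct and section name are illustrative:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  struct pkt_counter {
          __u64 packets;
  };

  /* A __u64 key means the attach type is ignored when comparing map keys,
   * so programs attached to the same cgroup share this storage.
   */
  struct {
          __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
          __type(key, __u64);
          __type(value, struct pkt_counter);
  } cg_storage SEC(".maps");

  SEC("cgroup_skb/egress")
  int count_egress(struct __sk_buff *skb)
  {
          struct pkt_counter *cnt = bpf_get_local_storage(&cg_storage, 0);

          __sync_fetch_and_add(&cnt->packets, 1);
          return 1;       /* allow the packet */
  }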
2020-07-25  bpf: Fail PERF_EVENT_IOC_SET_BPF when bpf_get_[stack|stackid] cannot work  (Song Liu)
bpf_get_[stack|stackid] on perf_events with precise_ip uses the callchain attached to perf_sample_data. If this callchain is not present, do not allow attaching a BPF program that calls bpf_get_[stack|stackid] to this event. In the error case, -EPROTO is returned so that libbpf can identify this error and print a proper hint message. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723180648.1429892-3-songliubraving@fb.com
2020-07-25  bpf: Separate bpf_get_[stack|stackid] for perf events BPF  (Song Liu)
Calling get_perf_callchain() on perf_events from PEBS entries may cause unwinder errors. To fix this issue, the callchain is fetched early. Such perf_events are marked with __PERF_SAMPLE_CALLCHAIN_EARLY. Similarly, calling bpf_get_[stack|stackid] on perf_events from PEBS may also cause unwinder errors. To fix this, add separate versions of these two helpers, bpf_get_[stack|stackid]_pe. These two helpers use the callchain in bpf_perf_event_data_kern->data->callchain. Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723180648.1429892-2-songliubraving@fb.com
2020-07-25  bpf: Implement bpf iterator for map elements  (Yonghong Song)
The bpf iterator for map elements is implemented. The bpf program will receive four parameters:
  bpf_iter_meta *meta: the meta data
  bpf_map *map: the bpf_map whose elements are traversed
  void *key: the key of one element
  void *value: the value of the same element

Here, the meta and map pointers are always valid, key has register type PTR_TO_RDONLY_BUF_OR_NULL and value has register type PTR_TO_RDWR_BUF_OR_NULL. The kernel will track the access range of key and value during verification time. Later, these values will be compared against the values in the actual map to ensure all accesses are within range.

A new field iter_seq_info is added to bpf_map_ops which is used to add map type specific information, i.e., seq_ops, init/fini seq_file functions and the seq_file private data size. Subsequent patches will have actual implementations for bpf_map_ops->iter_seq_info.

In user space, BPF_ITER_LINK_MAP_FD needs to be specified in prog attr->link_create.flags, which indicates that attr->link_create.target_fd is a map_fd. The reason for such an explicit flag is for possible future cases where one bpf iterator may allow more than one possible customization, e.g., pid and cgroup id for task_file. The current kernel internal implementation only allows the target to register at most one required bpf_iter_link_info. To support the above case, optional bpf_iter_link_info's are needed, the target can be extended to register such link infos, and user-provided link_info needs to match one of the target's supported ones.

Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184112.590360-1-yhs@fb.com
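A hedged sketch of what a map-element iterator program could look like once the per-map-type seq_info implementations from the subsequent patches are in place; it assumes a vmlinux.h carrying the bpf_iter__bpf_map_elem context type, a hash map with u32 keys and u64 values behind the iterator link, and BPF_SEQ_PRINTF being available from libbpf's tracing helpers:

  #include "vmlinux.h"            /* kernel types, incl. bpf_iter__bpf_map_elem */
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  char _license[] SEC("license") = "GPL";

  /* Sketch: dump every element of the map that the iterator link was created
   * against (user space passes BPF_ITER_LINK_MAP_FD and target_fd = map_fd).
   */
  SEC("iter/bpf_map_elem")
  int dump_elem(struct bpf_iter__bpf_map_elem *ctx)
  {
          struct seq_file *seq = ctx->meta->seq;
          __u32 *key = ctx->key;
          __u64 *val = ctx->value;

          /* key/value are PTR_TO_*_BUF_OR_NULL, so they must be NULL-checked */
          if (!key || !val)
                  return 0;

          BPF_SEQ_PRINTF(seq, "%u: %llu\n", *key, *val);
          return 0;
  }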
2020-07-25  bpf: Support readonly/readwrite buffers in verifier  (Yonghong Song)
Readonly and readwrite buffer register states are introduced. In total four states, PTR_TO_RDONLY_BUF[_OR_NULL] and PTR_TO_RDWR_BUF[_OR_NULL], are supported. As suggested by their respective names, PTR_TO_RDONLY_BUF[_OR_NULL] are for readonly buffers and PTR_TO_RDWR_BUF[_OR_NULL] for read/write buffers. These new register states will be used by the later bpf map element iterator. The new register states share some similarity to PTR_TO_TP_BUFFER as the verifier will calculate the accessed buffer size during verification time. The accessed buffer size will be later compared to other metrics during attach/link_create time. Similar to the reg_state PTR_TO_BTF_ID_OR_NULL in bpf iterator programs, the PTR_TO_RDONLY_BUF_OR_NULL or PTR_TO_RDWR_BUF_OR_NULL reg_types can be set at prog->aux->bpf_ctx_arg_aux, and the bpf verifier will retrieve the values during btf_ctx_access(). The later bpf map element iterator implementation will show how such information will be assigned during target registration time. The verifier is also enhanced such that PTR_TO_RDONLY_BUF can be passed to an ARG_PTR_TO_MEM[_OR_NULL] helper argument, and PTR_TO_RDWR_BUF can be passed to ARG_PTR_TO_MEM[_OR_NULL] or ARG_PTR_TO_UNINIT_MEM. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184111.590274-1-yhs@fb.com
2020-07-25  bpf: Refactor to provide aux info to bpf_iter_init_seq_priv_t  (Yonghong Song)
This patch refactors the target bpf_iter_init_seq_priv_t callback function to accept additional information. This will be needed in later patches for map element targets, since a particular map should be passed in to traverse elements for that particular map. In the future, other information may be passed to the target as well, e.g., pid, cgroup id, etc. to customize the iterator. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184110.590156-1-yhs@fb.com
2020-07-25  bpf: Refactor bpf_iter_reg to have separate seq_info member  (Yonghong Song)
There is no functionality change in this patch. Struct bpf_iter_reg is used to register a bpf_iter target, which includes information for prog_load, link_create and seq_file creation. This patch puts the fields related to seq_file creation into a different structure. This will be useful for the map elements iterator, where one iterator covers different map types and different map types may have different seq_ops, init/fini private_data functions and private_data sizes. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20200723184109.590030-1-yhs@fb.com
2020-07-25  bpf: Add bpf_prog iterator  (Alexei Starovoitov)
It's mostly a copy-paste of commit 6086d29def80 ("bpf: Add bpf_map iterator") that is used to implement bpf_seq_file operations to traverse all bpf programs. v1->v2: Tweak to use build time btf_id. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net>
2020-07-25  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net  (David S. Miller)
The UDP reuseport conflict was a little bit tricky. The net-next code, via bpf-next, extracted the reuseport handling into a helper so that the BPF sk lookup code could invoke it. At the same time, the logic for reuseport handling of unconnected sockets changed via commit efc6b6f6c3113e8b203b9debfb72d81e0f3dcace which changed the logic to carry on the reuseport result into the rest of the lookup loop if we do not return immediately. This requires moving the reuseport_has_conns() logic into the callers. While we are here, get rid of inline directives as they do not belong in foo.c files. The other changes were cases of more straightforward overlapping modifications. Signed-off-by: David S. Miller <davem@davemloft.net>