2019-06-15net: dsa: rtl8366: Fix up VLAN filteringLinus Walleij
We get this regression when using RTL8366RB as part of a bridge with OpenWrt:

  WARNING: CPU: 0 PID: 1347 at net/switchdev/switchdev.c:291 switchdev_port_attr_set_now+0x80/0xa4
  lan0: Commit of attribute (id=7) failed.
  (...)
  realtek-smi switch lan0: failed to initialize vlan filtering on this port

This is because it is trying to disable VLAN filtering on VLAN0, as we forgot to add 1 to the port number to get the right VLAN in rtl8366_vlan_filtering(): when we initialize the VLAN we associate VLAN1 with port 0, VLAN2 with port 1, etc., so we need to add 1 to the port offset.

Fixes: d8652956cf37 ("net: dsa: realtek-smi: Add Realtek SMI driver")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
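A minimal sketch of the mapping the fix restores (the helper name below is made up for illustration; the real change simply adds the offset at the existing call sites in rtl8366.c):

  /*
   * Hypothetical helper: the driver's VLAN init associates VLAN1 with
   * port 0, VLAN2 with port 1, and so on, so every per-port VLAN
   * operation must use port + 1. Passing the raw port number would
   * address the invalid VLAN0 for port 0, which is what triggered the
   * switchdev warning above.
   */
  static inline unsigned int rtl8366_port_to_vlan(unsigned int port)
  {
          return port + 1;
  }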
2019-06-15net: phylink: set the autoneg state in phylink_phy_changeIoana Ciornei
The phy_state field of phylink should carry only valid information, especially when this can be passed to the .mac_config callback. Update the an_enabled field with the autoneg state in the phylink_phy_change function. Fixes: 9525ae83959b ("phylink: add phylink infrastructure") Signed-off-by: Ioana Ciornei <ioana.ciornei@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-15Merge tag 'platform-drivers-x86-v5.2-3' of git://git.infradead.org/linux-platform-drivers-x86Linus Torvalds
Pull x86 platform driver fixes from Andy Shevchenko:

 - fix a couple of Mellanox driver enumeration issues

 - fix ASUS laptop regression with backlight

 - fix Dell computers that got a wrong mode (tablet versus laptop) after resume

* tag 'platform-drivers-x86-v5.2-3' of git://git.infradead.org/linux-platform-drivers-x86:
  platform/mellanox: mlxreg-hotplug: Add devm_free_irq call to remove flow
  platform/x86: mlx-platform: Fix parent device in i2c-mux-reg device registration
  platform/x86: intel-vbtn: Report switch events when event wakes device
  platform/x86: asus-wmi: Only Tell EC the OS will handle display hotkeys from asus_nb_wmi
2019-06-15Merge tag 'usb-5.2-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usbLinus Torvalds
Pull USB fixes from Greg KH:
 "Here are some small USB driver fixes for 5.2-rc5

  Nothing major, just some small gadget fixes, usb-serial new device
  ids, a few new quirks, and some small fixes for some regressions that
  have been found after the big 5.2-rc1 merge.

  All of these have been in linux-next for a while with no reported
  issues"

* tag 'usb-5.2-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb: typec: Make sure an alt mode exist before getting its partner
  usb: gadget: udc: lpc32xx: fix return value check in lpc32xx_udc_probe()
  usb: gadget: dwc2: fix zlp handling
  usb: dwc2: Set actual frame number for completed ISOC transfer for none DDMA
  usb: gadget: udc: lpc32xx: allocate descriptor with GFP_ATOMIC
  usb: gadget: fusb300_udc: Fix memory leak of fusb300->ep[i]
  usb: phy: mxs: Disable external charger detect in mxs_phy_hw_init()
  usb: dwc2: Fix DMA cache alignment issues
  usb: dwc2: host: Fix wMaxPacketSize handling (fix webcam regression)
  USB: Fix chipmunk-like voice when using Logitech C270 for recording audio.
  USB: usb-storage: Add new ID to ums-realtek
  usb: typec: ucsi: ccg: fix memory leak in do_flash
  USB: serial: option: add Telit 0x1260 and 0x1261 compositions
  USB: serial: pl2303: add Allied Telesis VT-Kit3
  USB: serial: option: add support for Simcom SIM7500/SIM7600 RNDIS mode
2019-06-15Merge tag 'powerpc-5.2-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linuxLinus Torvalds
Pull powerpc fixes from Michael Ellerman:
 "One fix for a regression introduced by our 32-bit KASAN support, which
  broke booting on machines with "bootx" early debugging enabled.

  A fix for a bug which broke kexec on 32-bit, introduced by changes to
  the 32-bit STRICT_KERNEL_RWX support in v5.1.

  Finally two fixes going to stable for our THP split/collapse handling,
  discovered by Nick. The first fixes random crashes and/or corruption
  in guests under sufficient load.

  Thanks to: Nicholas Piggin, Christophe Leroy, Aaro Koskinen, Mathieu Malaterre"

* tag 'powerpc-5.2-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/32s: fix booting with CONFIG_PPC_EARLY_DEBUG_BOOTX
  powerpc/64s: __find_linux_pte() synchronization vs pmdp_invalidate()
  powerpc/64s: Fix THP PMD collapse serialisation
  powerpc: Fix kexec failure on book3s/32
2019-06-15Merge tag 'trace-v5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-traceLinus Torvalds
Pull tracing fixes from Steven Rostedt:

 - Out of range read of stack trace output

 - Fix for NULL pointer dereference in trace_uprobe_create()

 - Fix to a livepatching / ftrace permission race in the module code

 - Fix for NULL pointer dereference in free_ftrace_func_mapper()

 - A couple of build warning clean ups

* tag 'trace-v5.2-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  ftrace: Fix NULL pointer dereference in free_ftrace_func_mapper()
  module: Fix livepatch/ftrace module text permissions race
  tracing/uprobe: Fix obsolete comment on trace_uprobe_create()
  tracing/uprobe: Fix NULL pointer dereference in trace_uprobe_create()
  tracing: Make two symbols static
  tracing: avoid build warning with HAVE_NOP_MCOUNT
  tracing: Fix out-of-range read in trace_stack_print()
2019-06-15blkcg, writeback: dead memcgs shouldn't contribute to writeback ownership arbitrationTejun Heo
wbc_account_io() collects information on cgroup ownership of writeback pages to determine which cgroup should own the inode. Pages can stay associated with dead memcgs, but we want to avoid attributing IOs to dead blkcgs as much as possible, as the association is likely to be stale. However, currently, pages associated with dead memcgs contribute to the accounting, delaying and/or confusing the arbitration. Fix it by ignoring pages associated with dead memcgs. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Jan Kara <jack@suse.cz> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15blkcg: blkcg_activate_policy() should initialize ancestors firstTejun Heo
When blkcg_activate_policy() is creating blkg_policy_data for existing blkgs, it did so in the wrong order - descendants first. Fix it so that ancestors are initialized first. None of the existing controllers seem affected by this. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15blkcg: percpu_ref init/exit should be done from blkg_alloc/free()Tejun Heo
blkg alloc is performed as a separate step from the rest of blkg creation so that GFP_KERNEL allocations can be used when creating blkgs from configuration file writes, because otherwise user actions may fail due to failures of opportunistic GFP_NOWAIT allocations. While making blkgs use percpu_ref, 7fcf2b033b84 ("blkcg: change blkg reference counting to use percpu_ref") incorrectly added an unconditional opportunistic percpu_ref_init() to blkg_create(), breaking this guarantee. This patch moves percpu_ref_init() to blkg_alloc() so that it uses the @gfp_mask that blkg_alloc() is called with. Also, percpu_ref_exit() is moved to blkg_free() for consistency. Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: 7fcf2b033b84 ("blkcg: change blkg reference counting to use percpu_ref") Cc: Dennis Zhou <dennis@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
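A rough sketch of the resulting shape (simplified; the real blkg_alloc()/blkg_free() also set up iostats and per-policy data):

  /* percpu_ref lifetime now matches the blkg allocation and honours the
   * caller's gfp_mask instead of an unconditional GFP_NOWAIT attempt.
   */
  static struct blkcg_gq *blkg_alloc_sketch(struct request_queue *q, gfp_t gfp_mask)
  {
          struct blkcg_gq *blkg;

          blkg = kzalloc_node(sizeof(*blkg), gfp_mask, q->node);
          if (!blkg)
                  return NULL;

          /* previously done in blkg_create() with GFP_NOWAIT */
          if (percpu_ref_init(&blkg->refcnt, blkg_release, 0, gfp_mask)) {
                  kfree(blkg);
                  return NULL;
          }
          return blkg;
  }

  static void blkg_free_sketch(struct blkcg_gq *blkg)
  {
          if (!blkg)
                  return;
          percpu_ref_exit(&blkg->refcnt);  /* moved here for consistency */
          kfree(blkg);
  }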
2019-06-15blkcg: update blkcg_print_stat() to handle larger outputsTejun Heo
Depending on the number of devices, blkcg stats can go over the default seqfile buf size. seqfile normally retries with a larger buffer, but since the ->pd_stat() addition, blkcg_print_stat() doesn't tell seqfile that an overflow has happened and the output gets printed truncated. Fix it by calling seq_commit() w/ -1 on possible overflows. Signed-off-by: Tejun Heo <tj@kernel.org> Fixes: 903d23f0a354 ("blk-cgroup: allow controllers to output their own stats") Cc: stable@vger.kernel.org # v4.19+ Cc: Josef Bacik <jbacik@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
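Roughly, the pattern the fix applies looks like this (a simplified sketch of per-blkg printing; the actual stat formatting is elided):

  /* Write into the raw seqfile buffer; on overflow report -1 so seqfile
   * retries with a larger buffer instead of silently truncating.
   */
  static void blkg_print_one_stat_sketch(struct seq_file *sf, const char *dname)
  {
          char *buf;
          size_t size = seq_get_buf(sf, &buf);
          size_t off = 0;

          off += scnprintf(buf + off, size - off, "%s ", dname);
          /* ... policy ->pd_stat() output would be appended here ... */

          if (off < size - 1)
                  seq_commit(sf, off);   /* output fits */
          else
                  seq_commit(sf, -1);    /* tell seqfile to retry with a bigger buf */
  }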
2019-06-15blk-iolatency: clear use_delay when io.latency is set to zeroTejun Heo
If use_delay was non-zero when the latency target of a cgroup was set to zero, it stays stuck until io.latency is enabled on the cgroup again. This keeps readahead disabled for the cgroup, impacting performance negatively. Signed-off-by: Tejun Heo <tj@kernel.org> Cc: Josef Bacik <jbacik@fb.com> Fixes: d70675121546 ("block: introduce blk-iolatency io controller") Cc: stable@vger.kernel.org # v4.19+ Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-16powerpc/32: fix build failure on book3e with KVMChristophe Leroy
Build failure was introduced by the commit identified below, due to a missed macro expansion leading to the wrong called function name:

  arch/powerpc/kernel/head_fsl_booke.o: In function `SystemCall':
  arch/powerpc/kernel/head_fsl_booke.S:416: undefined reference to `kvmppc_handler_BOOKE_INTERRUPT_SYSCALL_SPRN_SRR1'
  Makefile:1052: recipe for target 'vmlinux' failed

The called function should be kvmppc_handler_8_0x01B(). This patch fixes it.

Reported-by: Paul Mackerras <paulus@ozlabs.org>
Fixes: 1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-15powerpc/booke: fix fast syscall entry on SMPChristophe Leroy
Use r10 instead of r9 to calculate CPU offset as r9 contains the value from SRR1 which is used later. Fixes: 1a4b739bbb4f ("powerpc/32: implement fast entry for syscalls on BOOKE") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-15powerpc/32s: fix initial setup of segment registers on secondary CPUChristophe Leroy
The patch referenced below moved the loading of segment registers out of load_up_mmu() in order to do it earlier in the boot sequence. However, the secondary CPU still needs it to be done when loading up the MMU. Reported-by: Erhard F. <erhard_f@mailbox.org> Fixes: 215b823707ce ("powerpc/32s: set up an early static hash table for KASAN") Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2019-06-15Update default configurationMartin Schwidefsky
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15processor: get rid of cpu_relax_yieldHeiko Carstens
stop_machine is the only user left of cpu_relax_yield. Given that it now has special semantics which are tied to stop_machine, introduce a weak stop_machine_yield function which architectures can override, and get rid of the generic cpu_relax_yield implementation. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390: improve wait logic of stop_machineMartin Schwidefsky
The stop_machine loop to advance the state machine and to wait for all affected CPUs to check in calls cpu_relax_yield in a tight loop until the last missing CPUs have acknowledged the state transition. On a virtual system where not all logical CPUs are backed by real CPUs all the time, it can take a while for all CPUs to check in. With the current definition of cpu_relax_yield, a diagnose 0x44 is done which tells the hypervisor to schedule *some* other CPU. That can be any CPU and not necessarily one of the CPUs that need to run in order to advance the state machine. This can lead to a pretty bad diagnose 0x44 storm until the last missing CPU finally checks in. Replace the undirected cpu_relax_yield based on diagnose 0x44 with a directed yield. Each CPU in the wait loop will pick up the next CPU in the cpumask of stop_machine. The diagnose 0x9c is used to tell the hypervisor to run this next CPU instead of the current one. If there is only a limited number of real CPUs backing the virtual CPUs, we end up with the real CPUs passed around in a round-robin fashion. [heiko.carstens@de.ibm.com]: Use cpumask_next_wrap as suggested by Peter Zijlstra. Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
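A simplified sketch of the directed yield described above, as an override of the new weak stop_machine_yield() hook (the availability check for the diagnose instruction is omitted):

  /* Yield directly to the next CPU participating in stop_machine,
   * wrapping around the cpumask, instead of hinting "run some other
   * CPU" (diag 0x44). smp_yield_cpu() issues the directed diag 0x9c.
   */
  void notrace stop_machine_yield(const struct cpumask *cpumask)
  {
          int this_cpu = smp_processor_id();
          int cpu;

          cpu = cpumask_next_wrap(this_cpu, cpumask, this_cpu, false);
          if (cpu >= nr_cpu_ids)
                  return;
          smp_yield_cpu(cpu);
  }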
2019-06-15processor: remove spin_cpu_yieldHeiko Carstens
spin_cpu_yield is unused, therefore remove it. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/traps: simplify data exception handlerVasily Gorbik
Simplify conditions and remove unnecessary variable in data exception handler. Signed-off-by: Vasily Gorbik <gor@linux.ibm.com> Reviewed-by: Heiko Carstens <heiko.carstens@de.ibm.com> Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15virtio/s390: make airq summary indicators DMAHalil Pasic
The hypervisor needs to interact with the summary indicators, so these need to be DMA memory as well (at least for protected virtualization guests). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15virtio/s390: use DMA memory for ccw I/O and classic notifiersHalil Pasic
Before this change, virtio-ccw could get away with not using the DMA API for the pieces of memory it does ccw I/O with. With protected virtualization this has to change, since the hypervisor needs to read and sometimes also write these pieces of memory. The hypervisor is supposed to poke the classic notifiers, if these are used, out of band with regard to ccw I/O. So these need to be allocated as DMA memory (which is shared memory for protected virtualization guests). Let us factor out everything from struct virtio_ccw_device that needs to be DMA memory into a satellite structure that is allocated as such. Note: The control blocks of I/O instructions do not need to be shared. These are marshalled by the ultravisor. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15virtio/s390: add indirection to indicators accessHalil Pasic
This will come in handy soon when we pull out the indicators from virtio_ccw_device to a memory area that is shared with the hypervisor (in particular for protected virtualization guests). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15virtio/s390: use cacheline aligned airq bit vectorsHalil Pasic
The flag AIRQ_IV_CACHELINE was recently added to airq_iv_create(). Let us use it! We actually wanted the vector to span a cacheline all along. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/airq: use DMA memory for adapter interruptsHalil Pasic
Protected virtualization guests have to use shared pages for airq notifier bit vectors, because the hypervisor needs to write these bits. Let us make sure we allocate DMA memory for the notifier bit vectors by replacing the kmem_cache with a dma_cache and kalloc() with cio_dma_zalloc(). Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/cio: add basic protected virtualization supportHalil Pasic
As virtio-ccw devices are channel devices, we need to use the dma area within the common I/O layer for any communication with the hypervisor. Note that we do not need to use that area for control blocks directly referenced by instructions, e.g. the orb. It handles neither QDIO in the common code, nor any device type specific stuff (like channel programs constructed by the DASD driver). An interesting side effect is that virtio structures are now going to get allocated in 31 bit addressable storage. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/cio: introduce DMA pools to cioHalil Pasic
To support protected virtualization cio will need to make sure the memory used for communication with the hypervisor is DMA memory. Let us introduce one global pool for cio. Our DMA pools are implemented as a gen_pool backed with DMA pages. The idea is to avoid each allocation effectively wasting a page, as we typically allocate much less than PAGE_SIZE. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Sebastian Ott <sebott@linux.ibm.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
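The gen_pool-backed idea, as an illustrative sketch (hypothetical helper name; the real cio code adds locking and grows the pool on demand):

  /* Back a gen_pool with a DMA-coherent page so many small allocations
   * can share it instead of each one consuming a full page.
   */
  static struct gen_pool *cio_dma_pool_create_sketch(struct device *dma_dev)
  {
          struct gen_pool *pool;
          dma_addr_t dma_addr;
          void *cpu_addr;

          pool = gen_pool_create(3, -1);  /* 8-byte minimum allocation order */
          if (!pool)
                  return NULL;

          cpu_addr = dma_alloc_coherent(dma_dev, PAGE_SIZE, &dma_addr, GFP_KERNEL);
          if (!cpu_addr)
                  goto err_pool;

          if (gen_pool_add_virt(pool, (unsigned long)cpu_addr, dma_addr,
                                PAGE_SIZE, -1))
                  goto err_mem;
          return pool;

  err_mem:
          dma_free_coherent(dma_dev, PAGE_SIZE, cpu_addr, dma_addr);
  err_pool:
          gen_pool_destroy(pool);
          return NULL;
  }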
2019-06-15s390/mm: force swiotlb for protected virtualizationHalil Pasic
On s390, protected virtualization guests have to use bounced I/O buffers. That requires some plumbing. Let us make sure any device that correctly uses the DMA API with direct ops is spared the problems that a hypervisor attempting I/O to a non-shared page would bring. Signed-off-by: Halil Pasic <pasic@linux.ibm.com> Reviewed-by: Claudio Imbrenda <imbrenda@linux.ibm.com> Reviewed-by: Michael Mueller <mimu@linux.ibm.com> Tested-by: Michael Mueller <mimu@linux.ibm.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
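The gist of that plumbing, sketched (is_prot_virt_guest() is the guest check introduced by this series; the real code also makes the swiotlb buffer itself shared with the hypervisor):

  /* Force all streaming DMA through bounce buffers when running as a
   * protected virtualization guest, so only the swiotlb area needs to
   * be shared with the hypervisor.
   */
  static void __init pv_dma_init_sketch(void)
  {
          if (!is_prot_virt_guest())
                  return;

          swiotlb_force = SWIOTLB_FORCE;
          swiotlb_init(1);
  }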
2019-06-15s390/crypto: sha: Use -ENODEV instead of -EOPNOTSUPPDavid Hildenbrand
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/crypto: prng: Use -ENODEV instead of -EOPNOTSUPPDavid Hildenbrand
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/crypto: ghash: Use -ENODEV instead of -EOPNOTSUPPDavid Hildenbrand
Let's use the error value that is typically used if HW support is not available when trying to load a module - this is also what systemd's systemd-modules-load.service expects. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15s390/pkey: Use -ENODEV instead of -EOPNOTSUPPDavid Hildenbrand
systemd-modules-load.service automatically tries to load the pkey module on systems that have MSA. Pkey also requires the MSA3 facility and a bunch of subfunctions. Failing with -EOPNOTSUPP makes "systemd-modules-load.service" fail on any system that does not have all needed subfunctions. For example, when running under QEMU TCG (but also on systems where protected keys are disabled via the HMC). Let's use -ENODEV, so systemd-modules-load.service properly ignores failing to load the pkey module because of missing HW functionality. While at it, also convert the -EOPNOTSUPP in pkey_clr2protkey() to -ENODEV. Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Harald Freudenberger <freude@linux.ibm.com> Signed-off-by: David Hildenbrand <david@redhat.com> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
2019-06-15x86/microcode, cpuhotplug: Add a microcode loader CPU hotplug callbackBorislav Petkov
Adric Blake reported the following warning during suspend-resume:

  Enabling non-boot CPUs ...
  x86: Booting SMP configuration:
  smpboot: Booting Node 0 Processor 1 APIC 0x2
  unchecked MSR access error: WRMSR to 0x10f (tried to write 0x0000000000000000) \
    at rIP: 0xffffffff8d267924 (native_write_msr+0x4/0x20)
  Call Trace:
   intel_set_tfa
   intel_pmu_cpu_starting
   ? x86_pmu_dead_cpu
   x86_pmu_starting_cpu
   cpuhp_invoke_callback
   ? _raw_spin_lock_irqsave
   notify_cpu_starting
   start_secondary
   secondary_startup_64
  microcode: sig=0x806ea, pf=0x80, revision=0x96
  microcode: updated to revision 0xb4, date = 2019-04-01
  CPU1 is up

The MSR in question is MSR_TFA_RTM_FORCE_ABORT and that MSR is emulated by microcode. The log above shows that the microcode loader callback happens after the PMU restoration, leading to the conjecture that because the microcode hasn't been updated yet, that MSR is not present yet, leading to the #GP.

Add a microcode loader-specific hotplug vector which comes before the PERF vectors and thus executes earlier and makes sure the MSR is present.

Fixes: 400816f60c54 ("perf/x86/intel: Implement support for TSX Force Abort")
Reported-by: Adric Blake <promarbler14@gmail.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: x86@kernel.org
Link: https://bugzilla.kernel.org/show_bug.cgi?id=203637
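Conceptually the change looks like the sketch below; the hotplug state and callback names are paraphrased, the key point being that the microcode state is enumerated before CPUHP_AP_PERF_X86_STARTING so its startup callback runs first on the incoming CPU:

  /* A dedicated AP hotplug state for the microcode loader, ordered
   * before the PERF startup states, so the CPU has current microcode
   * (and thus the emulated MSR) before the PMU state is restored.
   */
  static int mc_cpu_starting_sketch(unsigned int cpu)
  {
          /* reload/verify microcode on the CPU coming online */
          return 0;
  }

  static void __init mc_hotplug_setup_sketch(void)
  {
          cpuhp_setup_state_nocalls(CPUHP_AP_MICROCODE_LOADER,
                                    "x86/microcode:starting",
                                    mc_cpu_starting_sketch, NULL);
  }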
2019-06-15block: bio: Use struct_size() in kmalloc()Gustavo A. R. Silva
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example:

  struct bio_map_data {
          ...
          struct iovec iov[];
  };

  instance = kmalloc(sizeof(struct bio_map_data) + sizeof(struct iovec) * count, GFP_KERNEL);

Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper:

  instance = kmalloc(struct_size(instance, iov, count), GFP_KERNEL);

This code was detected with the help of Coccinelle.

Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
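For context, struct_size() comes from <linux/overflow.h> and evaluates to sizeof(*instance) + count * sizeof(*instance->iov), saturating to SIZE_MAX on overflow so the allocation fails cleanly instead of under-allocating. A sketch of the resulting call site (function name is illustrative, not the exact one in block/bio.c):

  static struct bio_map_data *bio_map_data_alloc_sketch(unsigned int iov_count,
                                                        gfp_t gfp_mask)
  {
          struct bio_map_data *bmd;

          /* overflow-checked equivalent of
           * sizeof(*bmd) + iov_count * sizeof(struct iovec)
           */
          bmd = kmalloc(struct_size(bmd, iov, iov_count), gfp_mask);
          return bmd;
  }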
2019-06-15block: genhd: Use struct_size() helperGustavo A. R. Silva
Make use of the struct_size() helper instead of an open-coded version in order to avoid any potential type mistakes, in particular in the context in which this code is being used. So, replace the following form:

  sizeof(*new_ptbl) + target * sizeof(new_ptbl->part[0])

with:

  struct_size(new_ptbl, part, target)

Also, notice that the variable size is unnecessary, hence it is removed. This code was detected with the help of Coccinelle.

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15block: null_blk: fix race condition for null_del_devBob Liu
A duplicate call of null_del_dev() will trigger a null pointer error like below. The reason is a race condition between nullb_device_power_store() and nullb_group_drop_item().

  CPU#0                               CPU#1
  ----------------                    -----------------
  do_rmdir()
   >configfs_rmdir()
    >client_drop_item()
     >nullb_group_drop_item()
                                      nullb_device_power_store()
                                       >null_del_dev()
                                        >test_and_clear_bit(NULLB_DEV_FL_UP
      >null_del_dev()
       ^^^^^ duplicated null_del_dev() triggers the null pointer error
                                        >clear_bit(NULLB_DEV_FL_UP

The fix is to keep the sequence of clearing NULLB_DEV_FL_UP and calling null_del_dev().

  [  698.613600] BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
  [  698.613608] #PF error: [normal kernel read fault]
  [  698.613611] PGD 0 P4D 0
  [  698.613619] Oops: 0000 [#1] SMP PTI
  [  698.613627] CPU: 3 PID: 6382 Comm: rmdir Not tainted 5.0.0+ #35
  [  698.613631] Hardware name: LENOVO 20LJS2EV08/20LJS2EV08, BIOS R0SET33W (1.17 ) 07/18/2018
  [  698.613644] RIP: 0010:null_del_dev+0xc/0x110 [null_blk]
  [  698.613649] Code: 00 00 00 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 0b eb 97 e8 47 bb 2a e8 0f 1f 80 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 41 54 53 <8b> 77 18 48 89 fb 4c 8b 27 48 c7 c7 40 57 1e c1 e8 bf c7 cb e8 48
  [  698.613654] RSP: 0018:ffffb887888bfde0 EFLAGS: 00010286
  [  698.613659] RAX: 0000000000000000 RBX: ffff9d436d92bc00 RCX: ffff9d43a9184681
  [  698.613663] RDX: ffffffffc11e5c30 RSI: 0000000068be6540 RDI: 0000000000000000
  [  698.613667] RBP: ffffb887888bfdf0 R08: 0000000000000001 R09: 0000000000000000
  [  698.613671] R10: ffffb887888bfdd8 R11: 0000000000000f16 R12: ffff9d436d92bc08
  [  698.613675] R13: ffff9d436d94e630 R14: ffffffffc11e5088 R15: ffffffffc11e5000
  [  698.613680] FS:  00007faa68be6540(0000) GS:ffff9d43d14c0000(0000) knlGS:0000000000000000
  [  698.613685] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  [  698.613689] CR2: 0000000000000018 CR3: 000000042f70c002 CR4: 00000000003606e0
  [  698.613693] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  [  698.613697] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  [  698.613700] Call Trace:
  [  698.613712]  nullb_group_drop_item+0x50/0x70 [null_blk]
  [  698.613722]  client_drop_item+0x29/0x40
  [  698.613728]  configfs_rmdir+0x1ed/0x300
  [  698.613738]  vfs_rmdir+0xb2/0x130
  [  698.613743]  do_rmdir+0x1c7/0x1e0
  [  698.613750]  __x64_sys_rmdir+0x17/0x20
  [  698.613759]  do_syscall_64+0x5a/0x110
  [  698.613768]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

Signed-off-by: Bob Liu <bob.liu@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
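The shape of the fix, sketched (member names follow the null_blk driver; locking and error handling elided): make the flag clear and the teardown an atomic pair, so only the path that wins test_and_clear_bit() calls null_del_dev(), and the loser skips it entirely.

  static void null_power_off_sketch(struct nullb_device *dev)
  {
          /* Whoever clears NULLB_DEV_FL_UP first owns the teardown;
           * the racing path sees the bit already clear and returns,
           * avoiding the duplicate null_del_dev() shown above.
           */
          if (test_and_clear_bit(NULLB_DEV_FL_UP, &dev->flags))
                  null_del_dev(dev->nullb);
  }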
2019-06-15blk-mq/debugfs: Fix improper print qualifierPavel Begunkov
struct blk_rq_stat::mean is a u64 value, so use %llu Signed-off-by: Pavel Begunkov <asml.silence@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15md/raid10: read balance chooses idlest disk for SSDGuoqing Jiang
Andy reported that a raid10 array with SSD disks has poor read performance; compared with raid10, raid1 can sometimes be 3x faster [1]. The thing is that raid10 chooses the lowest-distance disk for a read request; however, that approach doesn't work well for SSD devices since they have no spindle like HDDs do. We should just read from the SSD which has the least pending IO, like commit 9dedf60313fa4 ("md/raid1: read balance chooses idlest disk for SSD"). So this commit selects the idlest SSD disk for reads if the array has no rotational disk; otherwise, read_balance uses the previous distance-priority algorithm. With the change, the performance of raid10 increases largely per Andy's test [2]. [1]. https://marc.info/?l=linux-raid&m=155915890004761&w=2 [2]. https://marc.info/?l=linux-raid&m=155990654223786&w=2 Tested-by: Andy Smith <andy@strugglers.net> Signed-off-by: Guoqing Jiang <gqjiang@suse.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
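In pseudocode terms, the selection for an all-SSD array amounts to picking the mirror with the fewest in-flight requests; a heavily simplified sketch (the real read_balance() only considers the copies that actually hold the requested data, and keeps the distance heuristic when any disk is rotational):

  static struct md_rdev *choose_idlest_rdev_sketch(struct r10conf *conf)
  {
          struct md_rdev *best = NULL;
          unsigned int min_pending = UINT_MAX;
          int d;

          for (d = 0; d < conf->geo.raid_disks; d++) {
                  struct md_rdev *rdev = conf->mirrors[d].rdev;
                  unsigned int pending;

                  if (!rdev)
                          continue;
                  pending = atomic_read(&rdev->nr_pending);
                  if (pending < min_pending) {    /* idlest wins */
                          min_pending = pending;
                          best = rdev;
                  }
          }
          return best;
  }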
2019-06-15md: raid1-10: Unify r{1,10}bio_pool_freeMarcos Paulo de Souza
Avoid duplicated code, since both functions just execute a kfree(). Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15md: raid10: Use struct_size() in kmalloc()Gustavo A. R. Silva
One of the more common cases of allocation size calculations is finding the size of a structure that has a zero-sized array at the end, along with memory for some number of elements for that array. For example: struct foo { int stuff; struct boo entry[]; }; instance = kmalloc(size, GFP_KERNEL); Instead of leaving these open-coded and prone to type mistakes, we can now use the new struct_size() helper: instance = kmalloc(struct_size(instance, entry, count), GFP_KERNEL); This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15md/raid1: get rid of extra blank line and spaceYufen Yu
This patch gets rid of an extra blank line and extra spaces, and adds necessary spaces to the code. Signed-off-by: Yufen Yu <yuyufen@huawei.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15md: fix spelling typo and add necessary spaceYufen Yu
This patch fixes a spelling typo and adds necessary spaces to the code. In addition, it gets rid of an unnecessary 'if'. Signed-off-by: Yufen Yu <yuyufen@huawei.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15md: md.c: Return -ENODEV when mddev is NULL in rdev_attr_showMarcos Paulo de Souza
Commit c42d3240990814eec1e4b2b93fa0487fc4873aed ("md: return -ENODEV if rdev has no mddev assigned") changed rdev_attr_store to return -ENODEV when rdev->mddev is NULL; now do the same for rdev_attr_show. Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15raid5-cache: Need to do start() part job after adding journal deviceXiao Ni
commit d5d885fd514f ("md: introduce new personality funciton start()") splits the init job into two parts. The first part, run(), does the jobs that do not require the md threads. The second part, start(), does the jobs that require the md threads. Currently, adding a new journal device only does run(); it needs to do the second part, start(), too. Fixes: d5d885fd514f ("md: introduce new personality funciton start()") Cc: stable@vger.kernel.org #v4.9+ Reported-by: Michal Soltys <soltys@ziu.info> Signed-off-by: Xiao Ni <xni@redhat.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-15drivers: md: Unify common definitions of raid1 and raid10Marcos Paulo de Souza
These definitions are being moved to raid1-10.c. Signed-off-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com> Signed-off-by: Song Liu <songliubraving@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2019-06-14Merge branch 'for-5.2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroupLinus Torvalds
Pull cgroup fixes from Tejun Heo:
 "This has an unusually high density of tricky fixes:

   - task_get_css() could deadlock when it races against a dying cgroup.

   - cgroup.procs didn't list thread group leaders with live threads.

     This could mislead readers to think that a cgroup is empty when
     it's not. Fixed by making PROCS iterator include dead tasks. I made
     a couple mistakes making this change and this pull request contains
     a couple follow-up patches.

   - When cpusets run out of online cpus, it updates cpumasks of member
     tasks in bizarre ways. Joel improved the behavior significantly"

* 'for-5.2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
  cpuset: restore sanity to cpuset_cpus_allowed_fallback()
  cgroup: Fix css_task_iter_advance_css_set() cset skip condition
  cgroup: css_task_iter_skip()'d iterators must be advanced before accessed
  cgroup: Include dying leaders with live threads in PROCS iterations
  cgroup: Implement css_task_iter_skip()
  cgroup: Call cgroup_release() before __exit_signal()
  docs cgroups: add another example size for hugetlb
  cgroup: Use css_tryget() instead of css_tryget_online() in task_get_css()
2019-06-14Merge tag 'drm-fixes-2019-06-14' of git://anongit.freedesktop.org/drm/drmLinus Torvalds
Pull drm fixes from Daniel Vetter:
 "Nothing unsettling here, also not aware of anything serious still
  pending. The edid override regression fix took a bit longer since this
  seems to be an area with an overabundance of bad options. But the fix
  we have now seems like a good path forward.

  Next week it should be back to Dave.

  Summary:

  - fix regression on amdgpu on SI

  - fix edid override regression

  - driver fixes: amdgpu, i915, mediatek, meson, panfrost

  - fix writecombine for vmap in gem-shmem helper (used by panfrost)

  - add more panel quirks"

* tag 'drm-fixes-2019-06-14' of git://anongit.freedesktop.org/drm/drm: (25 commits)
  drm/amdgpu: return 0 by default in amdgpu_pm_load_smu_firmware
  drm/amdgpu: Fix bounds checking in amdgpu_ras_is_supported()
  drm: add fallback override/firmware EDID modes workaround
  drm/edid: abstract override/firmware EDID retrieval
  drm/i915/perf: fix whitelist on Gen10+
  drm/i915/sdvo: Implement proper HDMI audio support for SDVO
  drm/i915: Fix per-pixel alpha with CCS
  drm/i915/dmc: protect against reading random memory
  drm/i915/dsi: Use a fuzzy check for burst mode clock check
  drm/amdgpu/{uvd,vcn}: fetch ring's read_ptr after alloc
  drm/panfrost: Require the simple_ondemand governor
  drm/panfrost: make devfreq optional again
  drm/gem_shmem: Use a writecombine mapping for ->vaddr
  drm: panel-orientation-quirks: Add quirk for GPD MicroPC
  drm: panel-orientation-quirks: Add quirk for GPD pocket2
  drm/meson: fix G12A primary plane disabling
  drm/meson: fix primary plane disabling
  drm/meson: fix G12A HDMI PLL settings for 4K60 1000/1001 variations
  drm/mediatek: call mtk_dsi_stop() after mtk_drm_crtc_atomic_disable()
  drm/mediatek: clear num_pipes when unbind driver
  ...
2019-06-14Merge tag 'gfs2-v5.2.fixes2' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2Linus Torvalds
Pull gfs2 fix from Andreas Gruenbacher:
 "Fix rounding error in gfs2_iomap_page_prepare"

* tag 'gfs2-v5.2.fixes2' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: Fix rounding error in gfs2_iomap_page_prepare
2019-06-14Merge branch 'tcp-add-three-static-keys'David S. Miller
Eric Dumazet says:

====================
tcp: add three static keys

Recent addition of per TCP socket rx/tx cache brought regressions
for some workloads, as reported by Feng Tang.

It seems better to make them opt-in, before we adopt better heuristics.

The last patch adds high_order_alloc_disable sysctl to ask TCP sendmsg()
to exclusively use order-0 allocations, as mm layer has specific
optimizations.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-06-14net: add high_order_alloc_disable sysctl/static keyEric Dumazet
From linux-3.7 (commit 5640f7685831 "net: use a per task frag allocator"), TCP sendmsg() has preferred using order-3 allocations. While it gives good results for most cases, we had reports that heavy uses of TCP over loopback were hitting a spinlock contention in page allocations/freeing. This commit adds a sysctl so that admins can opt-in for order-0 allocations. Hopefully the mm layer might optimize order-3 allocations in the future, since that could give us a nice boost (see the first 8 lines of the following benchmark). The following benchmark shows a win when more than 8 TCP_STREAM threads are running (56 x86 cores server in my tests):

  for thr in {1..30}
  do
      sysctl -wq net.core.high_order_alloc_disable=0
      T0=`./super_netperf $thr -H 127.0.0.1 -l 15`
      sysctl -wq net.core.high_order_alloc_disable=1
      T1=`./super_netperf $thr -H 127.0.0.1 -l 15`
      echo $thr:$T0:$T1
  done

   1:  49979:  37267
   2:  98745:  76286
   3: 141088: 110051
   4: 177414: 144772
   5: 197587: 173563
   6: 215377: 208448
   7: 241061: 234087
   8: 267155: 263373
   9: 295069: 297402
  10: 312393: 335213
  11: 340462: 368778
  12: 371366: 403954
  13: 412344: 443713
  14: 426617: 473580
  15: 474418: 507861
  16: 503261: 538539
  17: 522331: 563096
  18: 532409: 567084
  19: 550824: 605240
  20: 525493: 641988
  21: 564574: 665843
  22: 567349: 690868
  23: 583846: 710917
  24: 588715: 736306
  25: 603212: 763494
  26: 604083: 792654
  27: 602241: 796450
  28: 604291: 797993
  29: 611610: 833249
  30: 577356: 841062

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
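The mechanism behind the sysctl is a static branch consulted on the allocation fast path; simplified, the page-frag refill ends up doing roughly this (the static key name follows the sysctl and may not match the tree exactly):

  /* When the sysctl is enabled the order-3 attempt is skipped entirely
   * and we go straight to order-0 pages, which the mm layer handles
   * with per-cpu lists.
   */
  static struct page *frag_refill_alloc_sketch(gfp_t gfp)
  {
          struct page *page = NULL;

          if (!static_branch_unlikely(&net_high_order_alloc_disable_key))
                  page = alloc_pages((gfp | __GFP_COMP | __GFP_NOWARN) &
                                     ~__GFP_DIRECT_RECLAIM,
                                     SKB_FRAG_PAGE_ORDER);   /* order-3 try */
          if (!page)
                  page = alloc_page(gfp);                    /* order-0 fallback */
          return page;
  }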
2019-06-14tcp: add tcp_tx_skb_cache sysctlEric Dumazet
Feng Tang reported a performance regression after introduction of per TCP socket tx/rx caches, for TCP over loopback (netperf).

There is a high chance the regression is caused by a change in how well the 32 KB per-thread page (current->task_frag) can be recycled, and by the lack of pcp caches for order-3 pages.

I could not reproduce the regression myself, with all cpus spinning on the mm spinlocks for page allocs/freeing, regardless of enabling or disabling the per tcp socket caches.

It seems best to disable the feature by default, and let admins enable it.

The MM layer either needs to provide scalable order-3 page allocations, or could attempt a trylock on zone->lock if the caller only attempts to get a high-order page and is able to fall back to order-0 ones in case of pressure.

Tests run on a 56 cores host (112 hyper threads):

  - 35.49% netperf   [kernel.vmlinux]  [k] queued_spin_lock_slowpath
     - 35.49% queued_spin_lock_slowpath
        - 18.18% get_page_from_freelist
           - __alloc_pages_nodemask
              - 18.18% alloc_pages_current
                   skb_page_frag_refill
                   sk_page_frag_refill
                   tcp_sendmsg_locked
                   tcp_sendmsg
                   inet_sendmsg
                   sock_sendmsg
                   __sys_sendto
                   __x64_sys_sendto
                   do_syscall_64
                   entry_SYSCALL_64_after_hwframe
                   __libc_send
        + 17.31% __free_pages_ok
  + 31.43% swapper   [kernel.vmlinux]  [k] intel_idle
  +  9.12% netperf   [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
  +  6.53% netserver [kernel.vmlinux]  [k] copy_user_enhanced_fast_string
  +  0.69% netserver [kernel.vmlinux]  [k] queued_spin_lock_slowpath
  +  0.68% netperf   [kernel.vmlinux]  [k] skb_release_data
  +  0.52% netperf   [kernel.vmlinux]  [k] tcp_sendmsg_locked
     0.46% netperf   [kernel.vmlinux]  [k] _raw_spin_lock_irqsave

Fixes: 472c2e07eef0 ("tcp: add one skb cache for tx")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Feng Tang <feng.tang@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>