Age | Commit message | Author
2020-12-14Merge branch 'for-5.11' into for-linusPetr Mladek
2020-12-14Merge branch 'for-5.11-null-console' into for-linusPetr Mladek
2020-12-14can: m_can: use struct m_can_classdev as drvdataMarc Kleine-Budde
The m_can driver's suspend and resume functions (m_can_class_suspend() and m_can_class_resume()) make use of dev_get_drvdata() and assume that the drvdata is a pointer to the struct net_device. With the upcoming conversion of the tcan4x5x driver to pm_runtime this assumption is no longer valid. As the suspend and resume functions actually need a struct m_can_classdev pointer, change the m_can_platform and m_can_pci drivers to hold a pointer to struct m_can_classdev instead, as the tcan4x5x driver already does. Link: https://lore.kernel.org/r/20201212175518.139651-8-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
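A minimal sketch of the convention described above (variable names are illustrative, not copied from the driver):

    /* probe: store the classdev, not the net_device, as drvdata */
    platform_set_drvdata(pdev, mcan_class);

    /* m_can_class_suspend()/_resume() can then rely on: */
    struct m_can_classdev *cdev = dev_get_drvdata(dev);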
2020-12-14can: m_can: let m_can_class_allocate_dev() allocate driver specific private dataMarc Kleine-Budde
This patch enhances m_can_class_allocate_dev() to allocate driver specific private data. The driver's private data struct must contain struct m_can_classdev as its first member followed by the remaining private data. Link: https://lore.kernel.org/r/20201212175518.139651-7-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
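A sketch of the required layout; the struct and field names below are illustrative, not taken from an actual driver:

    struct my_can_priv {
            struct m_can_classdev cdev;     /* must be the first member */
            struct regmap *regmap;          /* driver-specific fields follow */
            int irq;
    };

    /* because cdev sits at offset 0, the private data can be recovered
     * from a classdev pointer with container_of() (or a plain cast): */
    struct my_can_priv *priv = container_of(cdev, struct my_can_priv, cdev);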
2020-12-14can: m_can: m_can_clk_start(): make use of pm_runtime_resume_and_get()Marc Kleine-Budde
With patch dd8088d5a896 ("PM: runtime: Add pm_runtime_resume_and_get to deal with usage counter") the usual pm_runtime_get_sync() and pm_runtime_put_noidle() in-case-of-error dance is no longer needed. Convert the m_can driver to use this function. Link: https://lore.kernel.org/r/20201212175518.139651-6-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
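The "dance" being replaced, roughly sketched below; the 'cdev->dev' expression is illustrative, not necessarily the exact driver code.

    /* before: the usage counter must be dropped by hand on error */
    err = pm_runtime_get_sync(cdev->dev);
    if (err < 0) {
            pm_runtime_put_noidle(cdev->dev);
            return err;
    }

    /* after: pm_runtime_resume_and_get() drops the counter itself on failure */
    err = pm_runtime_resume_and_get(cdev->dev);
    if (err < 0)
            return err;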
2020-12-14can: m_can: m_can_config_endisable(): mark as staticMarc Kleine-Budde
The function m_can_config_endisable() is not used outside of the m_can driver, so mark it as static. Link: https://lore.kernel.org/r/20201212175518.139651-5-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-12-14can: m_can: use cdev as name for struct m_can_classdev uniformlyMarc Kleine-Budde
This patch converts the m_can driver to use cdev as name for struct m_can_classdev uniformly throughout the whole driver. Link: https://lore.kernel.org/r/20201212175518.139651-4-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-12-14can: m_can: convert indentation to kernel coding styleMarc Kleine-Budde
This patch converts the indentation in the m_can driver to kernel coding style. Link: https://lore.kernel.org/r/20201212175518.139651-3-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-12-14can: m_can: update link to M_CAN user manualMarc Kleine-Budde
Old versions of the user manual are regularly taken offline, so change the link to the linux-can github page, which has a mirror of all published datasheets. Link: https://lore.kernel.org/r/20201212175518.139651-2-mkl@pengutronix.de Reviewed-by: Sean Nyekjaer <sean@geanix.com> Reviewed-by: Dan Murphy <dmurphy@ti.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
2020-12-14powerpc/64s: Mark the kuap/kuep functions non __initAneesh Kumar K.V
The kernel calls these functions on CPU online and hence they must not be marked __init. Otherwise if the memory they occupied has been reused the system can crash in various ways. Sachin reported it caused his LPAR to spontaneously restart with no other output. With xmon enabled it may drop into xmon with a dump like: cpu 0x1: Vector: 700 (Program Check) at [c000000003c5fcb0] pc: 00000000011e0a78 lr: 00000000011c51d4 sp: c000000003c5ff50 msr: 8000000000081001 current = 0xc000000002c12b00 paca = 0xc000000003cff280 irqmask: 0x03 irq_happened: 0x01 pid = 0, comm = swapper/1 ... [c000000003c5ff50] 0000000000087c38 (unreliable) [c000000003c5ff70] 000000000003870c [c000000003c5ff90] 000000000000d108 Fixes: 3b47b7549ead ("powerpc/book3s64/kuap: Move KUAP related function outside radix") Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> [mpe: Expand change log with details and xmon output] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20201214080121.358567-1-aneesh.kumar@linux.ibm.com
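The class of bug, sketched with an illustrative function name: an __init function is freed after boot, yet CPU online may still call it at runtime.

    /* before: the text of this function is discarded once boot finishes */
    void __init setup_kuap(bool disabled)
    {
            /* program the SPRs that enable kernel userspace access protection */
    }

    /* after: no __init annotation, so the CPU hotplug path can call it safely */
    void setup_kuap(bool disabled)
    {
            /* same body */
    }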
2020-12-14NFSv4.2/pnfs: Don't use READ_PLUS with pNFS yetTrond Myklebust
We have no way of tracking server READ_PLUS support in pNFS for now, so just disable it. Reported-by: "Mkrtchyan, Tigran" <tigran.mkrtchyan@desy.de> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: Deal with potential READ_PLUS data extent buffer overflowTrond Myklebust
If the server returns more data than we have buffer space for, then we need to truncate and exit early. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: Don't error when exiting early on a READ_PLUS buffer overflowTrond Myklebust
Expanding the READ_PLUS extents can cause the read buffer to overflow. If it does, then don't error, but just exit early. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: Handle hole lengths that exceed the READ_PLUS read bufferTrond Myklebust
If a hole extends beyond the READ_PLUS read buffer, then we want to fill just the remaining buffer with zeros. Also ignore eof... Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: decode_read_plus_hole() needs to check the extent offsetTrond Myklebust
The server is allowed to return a hole extent with an offset that starts before the offset supplied in the READ_PLUS argument. Ensure that we support that case too. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: decode_read_plus_data() must skip padding after data segmentTrond Myklebust
All XDR opaque object sizes are 32-bit aligned, and a data segment is no exception. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
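The alignment rule, spelled out as a sketch (the helper name is hypothetical):

    /* on-the-wire size of an XDR opaque carrying 'len' bytes of data */
    static inline u32 xdr_padded_len(u32 len)
    {
            return (len + 3) & ~3;          /* e.g. 1740 -> 1740, 5 -> 8 */
    }

    /* so after copying 'count' data bytes, the decoder must also skip
     * xdr_padded_len(count) - count pad bytes before the next item */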
2020-12-14NFSv4.2: Ensure we always reset the result->count in decode_read_plus()Trond Myklebust
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: When expanding the buffer, we may need to grow the sparse pagesTrond Myklebust
If we're shifting the page data to the right, and this happens to be a sparse page array, then we may need to allocate new pages in order to receive the data. Reported-by: "Mkrtchyan, Tigran" <tigran.mkrtchyan@desy.de> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: Cleanup - constify a number of xdr_buf helpersTrond Myklebust
There are a number of xdr helpers for struct xdr_buf that do not change the structure itself. Mark those as taking const pointers for documentation purposes. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
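An example of the shape of such a change, using a made-up helper name; only the prototype changes, documenting that the buffer is read-only to the helper:

    /* before */
    unsigned int xdr_buf_nbytes(struct xdr_buf *buf);

    /* after: the helper only inspects the buffer, so take a const pointer */
    unsigned int xdr_buf_nbytes(const struct xdr_buf *buf);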
2020-12-14SUNRPC: Clean up open coded setting of the xdr_stream 'nwords' fieldTrond Myklebust
Move the setting of the xdr_stream 'nwords' field into the helpers that reset the xdr_stream cursor. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: _copy_to/from_pages() now check for zero lengthTrond Myklebust
Clean up callers of _copy_to/from_pages() that still check for a zero length. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: Cleanup xdr_shrink_bufhead()Trond Myklebust
Clean up xdr_shrink_bufhead() to use the new helpers instead of doing its own thing. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: Fix xdr_expand_hole()Trond Myklebust
We do want to try to grow the buffer if possible, but if that attempt fails, we still want to move the data and truncate the XDR message. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: Fixes for xdr_align_data()Trond Myklebust
The main use case right now for xdr_align_data() is to shift the page data to the left, and in practice shrink the total XDR data buffer. This patch ensures that we fix up the accounting for the buffer length as we shift that data around. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14SUNRPC: _shift_data_left/right_pages should check the shift lengthTrond Myklebust
Exit early if the shift is zero. Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.1: use BITS_PER_LONG macro in nfs4session.hGeliang Tang
Use the existing BITS_PER_LONG macro instead of calculating the value. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
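The pattern, sketched with an illustrative array name:

    /* before: computes the bits-per-long value by hand */
    unsigned long used_slots[NFS4_MAX_SLOT_TABLE / (8 * sizeof(long))];

    /* after: the kernel already provides the constant */
    unsigned long used_slots[NFS4_MAX_SLOT_TABLE / BITS_PER_LONG];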
2020-12-14xprtrdma: Fix XDRBUF_SPARSE_PAGES supportChuck Lever
Olga K. observed that rpcrdma_marshal_req() allocates sparse pages only when it has determined that a Reply chunk is necessary. There are plenty of cases where no Reply chunk is needed, but the XDRBUF_SPARSE_PAGES flag is set. The result would be a crash in rpcrdma_inline_fixup() when it tries to copy parts of the received Reply into a missing page. To avoid crashing, handle sparse page allocation up front. Until XATTR support was added, this issue did not appear often because the only SPARSE_PAGES consumer always expected a reply large enough to require a Reply chunk. Reported-by: Olga Kornievskaia <kolga@netapp.com> Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Cc: <stable@vger.kernel.org> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14NFSv4.2: improve page handling for GETXATTRFrank van der Linden
XDRBUF_SPARSE_PAGES can cause problems for the RDMA transport, and it's easy enough to allocate enough pages for the request up front, so do that. Also, since we've allocated the pages anyway, use the full page aligned length for the receive buffer. This will allow caching of valid replies that are too large for the caller, but that still fit in the allocated pages. Signed-off-by: Frank van der Linden <fllinden@amazon.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2020-12-14sunrpc: fix xs_read_xdr_buf for partial pages receiveDan Aloni
When receiving pages data, the return value 'ret', when positive, includes `buf->page_base`, so we should subtract that before it is used for changing `offset` and comparing against `want`. This was discovered in the very rare cases where the server returned a chunk of bytes that, when added to the already received amount of bytes for the pages, happened to match the current `recv.len`, for example in this case: buf->page_base : 258356 actually received from socket: 1740 ret : 260096 want : 260096 In this case neither of the two 'if ... goto out' triggers, and we continue to tail parsing. It is worth mentioning that the ensuing EMSGSIZE from the continued execution of `xs_read_xdr_buf` may be observed by an application due to 4 superfluous bytes being added to the pages data. Fixes: 277e4ab7d530 ("SUNRPC: Simplify TCP receive code by switching to using iterators") Signed-off-by: Dan Aloni <dan@kernelim.com> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
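Plugging the numbers from the report into the fix (the placement of the subtraction is a sketch):

    /* ret for the pages section still includes buf->page_base:
     * ret = 260096, buf->page_base = 258356 -> actually received = 1740 */
    if (ret > 0)
            ret -= buf->page_base;
    offset += ret;
    if (ret != want)            /* 1740 != 260096, so we now bail out here */
            goto out;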
2020-12-14Merge branches 'edac-spr', 'edac-igen6' and 'edac-misc' into edac-updates-for-v5.11Borislav Petkov
Signed-off-by: Borislav Petkov <bp@suse.de>
2020-12-14xen-blkback: set ring->xenblkd to NULL after kthread_stop()Pawel Wieczorkiewicz
When xen_blkif_disconnect() is called, the kernel thread behind the block interface is stopped by calling kthread_stop(ring->xenblkd). The ring->xenblkd thread pointer being non-NULL determines if the thread has already been stopped. Normally, the thread's function xen_blkif_schedule() sets ring->xenblkd to NULL when the thread's main loop ends. However, when the thread has not been started yet (i.e. wake_up_process() has not been called on it), the xen_blkif_schedule() function would not be called yet. In such a case the kthread_stop() call returns -EINTR and ring->xenblkd remains dangling. When this happens, any subsequent call to xen_blkif_disconnect() (for example in the frontend_changed() callback) leads to a kernel crash in kthread_stop() (e.g. a NULL pointer dereference in exit_creds()). This is XSA-350. Cc: <stable@vger.kernel.org> # 4.12 Fixes: a24fa22ce22a ("xen/blkback: don't use xen_blkif_get() in xen-blkback kthread") Reported-by: Olivier Benjamin <oliben@amazon.com> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Signed-off-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Julien Grall <jgrall@amazon.com> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
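The shape of the fix as described (a sketch; field names follow the commit text):

    if (ring->xenblkd) {
            kthread_stop(ring->xenblkd);
            /* clear the pointer even when the thread never ran and
             * kthread_stop() returned -EINTR, so a later disconnect
             * cannot call kthread_stop() on a dangling pointer */
            ring->xenblkd = NULL;
    }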
2020-12-14xenbus/xenbus_backend: Disallow pending watch messagesSeongJae Park
'xenbus_backend' watches the 'state' of devices, which is writable by guests. Hence, if guests update it intensively, dom0 will have lots of pending events that exhaust dom0's memory. In other words, guests can trigger dom0 memory pressure. This is known as XSA-349. However, its watch callback, 'frontend_changed()', reads only 'state', so it doesn't need the pending events. To avoid the problem, this commit disallows pending watch messages for 'xenbus_backend' using the 'will_handle()' watch callback. This is part of XSA-349 Cc: stable@vger.kernel.org Signed-off-by: SeongJae Park <sjpark@amazon.de> Reported-by: Michael Kurth <mku@amazon.de> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2020-12-14xen/xenbus: Count pending messages for each watchSeongJae Park
This commit adds a counter of pending messages for each watch in the struct. It is used to skip unnecessary pending messages lookup in 'unregister_xenbus_watch()'. It could also be used in the 'will_handle' callback. This is part of XSA-349 Cc: stable@vger.kernel.org Signed-off-by: SeongJae Park <sjpark@amazon.de> Reported-by: Michael Kurth <mku@amazon.de> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2020-12-14xen/xenbus/xen_bus_type: Support will_handle watch callbackSeongJae Park
This commit adds support for the 'will_handle' watch callback for 'xen_bus_type' users. This is part of XSA-349 Cc: stable@vger.kernel.org Signed-off-by: SeongJae Park <sjpark@amazon.de> Reported-by: Michael Kurth <mku@amazon.de> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2020-12-14xen/xenbus: Add 'will_handle' callback support in xenbus_watch_path()SeongJae Park
Some code does not directly create a 'xenbus_watch' object and call 'register_xenbus_watch()', but uses 'xenbus_watch_path()' instead. This commit adds support for the 'will_handle' callback in 'xenbus_watch_path()' and its wrapper, 'xenbus_watch_pathfmt()'. This is part of XSA-349 Cc: stable@vger.kernel.org Signed-off-by: SeongJae Park <sjpark@amazon.de> Reported-by: Michael Kurth <mku@amazon.de> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
2020-12-14xen/xenbus: Allow watches to discard events before queueingSeongJae Park
If the handling of watch events is slower than the event enqueue logic and the events can be generated by guests, the guests could trigger memory pressure by intensively inducing the events, because doing so creates a huge number of pending events that exhaust memory. Fortunately, some watch events could be ignored, depending on their handler callback. For example, if the callback has interest in only one single path, the watch wouldn't want multiple pending events. Or, some watches could ignore events to the same path. To let such watches voluntarily help avoid the memory pressure situation, this commit introduces a new watch callback, 'will_handle'. If it is not NULL, it will be called for each new event just before enqueuing it. Then, if the callback returns false, the event will be discarded. No watch is using the callback for now, though. This is part of XSA-349 Cc: stable@vger.kernel.org Signed-off-by: SeongJae Park <sjpark@amazon.de> Reported-by: Michael Kurth <mku@amazon.de> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de> Reviewed-by: Juergen Gross <jgross@suse.com> Signed-off-by: Juergen Gross <jgross@suse.com>
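A sketch of the new callback and of how the enqueue path could consult it; the exact prototype is an assumption based on the commit text:

    struct xenbus_watch {
            /* ... existing fields ... */
            /* return false to drop an event instead of queueing it */
            bool (*will_handle)(struct xenbus_watch *watch,
                                const char *path, const char *token);
            void (*callback)(struct xenbus_watch *watch,
                             const char *path, const char *token);
    };

    /* in the event enqueue path (sketch): */
    if (watch->will_handle && !watch->will_handle(watch, path, token)) {
            kfree(event);           /* discard instead of queueing */
            return;
    }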
2020-12-14ALSA: pcm: oss: Fix potential out-of-bounds shiftTakashi Iwai
syzbot spotted a potential out-of-bounds shift in the PCM OSS layer where it calculates the buffer size with the arbitrary shift value given via an ioctl. Add a range check for avoiding the undefined behavior. As the value can be treated as a signed integer, the maximum shift should be 30. Reported-by: syzbot+df7dc146ebdd6435eea3@syzkaller.appspotmail.com Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20201209084552.17109-2-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
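The kind of range check described, in sketch form (variable names are illustrative): with a signed 32-bit size, 1 << 31 overflows, so 30 is the largest safe shift.

    if (shift < 0 || shift > 30)
            return -EINVAL;
    size = 1 << shift;              /* now guaranteed to fit in a signed int */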
2020-12-14ALSA: usb-audio: Fix potential out-of-bounds shiftTakashi Iwai
syzbot spotted a potential out-of-bounds shift in the USB-audio format parser that receives the arbitrary shift value from the USB descriptor. Add a range check for avoiding the undefined behavior. Reported-by: syzbot+df7dc146ebdd6435eea3@syzkaller.appspotmail.com Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20201209084552.17109-1-tiwai@suse.de Signed-off-by: Takashi Iwai <tiwai@suse.de>
2020-12-14Merge branch 'for-linus' into for-nextTakashi Iwai
Signed-off-by: Takashi Iwai <tiwai@suse.de>
2020-12-14soc: ti: k3-ringacc: Use correct error casting in k3_ringacc_dmarings_initPeter Ujfalusi
Use ERR_CAST() when devm_ioremap_resource() fails. Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Link: https://lore.kernel.org/r/20201214065421.5138-1-peter.ujfalusi@ti.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2020-12-13Linux 5.10v5.10Linus Torvalds
2020-12-13Merge tag 'x86-urgent-2020-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull x86 fixes from Thomas Gleixner: "A set of x86 and membarrier fixes: - Correct a few problems in the x86 and the generic membarrier implementation. Small corrections for assumptions about visibility which have turned out not to be true. - Make the PAT bits for memory encryption correct vs 4K and 2M/1G page table entries, as they are at a different location. - Fix a concurrency issue in the local bandwidth readout of resource control leading to incorrect values - Fix the ordering of allocating a vector for an interrupt. The order failed to respect the provided cpumask when the first attempt of allocating node local in the mask fails. It then tries the node instead of trying the full provided mask first. This leads to erroneous error messages and breaks the (user) supplied affinity request. Reorder it. - Make the INT3 padding detection in optprobe work correctly" * tag 'x86-urgent-2020-12-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86/kprobes: Fix optprobe to detect INT3 padding correctly x86/apic/vector: Fix ordering in vector assignment x86/resctrl: Fix incorrect local bandwidth when mba_sc is enabled x86/mm/mem_encrypt: Fix definition of PMD_FLAGS_DEC_WP membarrier: Execute SYNC_CORE on the calling thread membarrier: Explicitly sync remote cores when SYNC_CORE is requested membarrier: Add an actual barrier before rseq_preempt() x86/membarrier: Get rid of a dubious optimization
2020-12-13Merge tag 'block-5.10-2020-12-12' of git://git.kernel.dk/linux-blockLinus Torvalds
Pull block fixes from Jens Axboe: "This should be it for 5.10. Mike and Song looked into the warning case, and thankfully it appears the fix was pretty trivial - we can just change the md device chunk type to unsigned int to get rid of it. They cannot currently be < 0, and nobody is checking for that either. We're reverting the discard changes as the corruption reports came in very late, and there's just no time to attempt to deal with it at this point. Reverting the changes in question is the right call for 5.10" * tag 'block-5.10-2020-12-12' of git://git.kernel.dk/linux-block: md: change mddev 'chunk_sectors' from int to unsigned Revert "md: add md_submit_discard_bio() for submitting discard bio" Revert "md/raid10: extend r10bio devs to raid disks" Revert "md/raid10: pull codes that wait for blocked dev into one function" Revert "md/raid10: improve raid10 discard request" Revert "md/raid10: improve discard request for far layout" Revert "dm raid: remove unnecessary discard limits for raid10"
2020-12-13hv_balloon: do adjust_managed_page_count() when ballooning/un-ballooningVitaly Kuznetsov
Unlike virtio_balloon/virtio_mem/xen balloon drivers, Hyper-V balloon driver does not adjust managed pages count when ballooning/un-ballooning and this leads to incorrect stats being reported, e.g. unexpected 'free' output. Note, the calculation in post_status() seems to remain correct: ballooned out pages are never 'available' and we manually add dm->num_pages_ballooned to 'committed'. Suggested-by: David Hildenbrand <david@redhat.com> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20201202161245.2406143-3-vkuznets@redhat.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
2020-12-13hv_balloon: simplify math in alloc_balloon_pages()Vitaly Kuznetsov
'alloc_unit' in alloc_balloon_pages() is either '512' for 2M allocations or '1' for 4k allocations. So 1 << get_order(alloc_unit << PAGE_SHIFT) equals 'alloc_unit' and the for loop basically sets them all offline. Simplify the math to improve the readability. Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Link: https://lore.kernel.org/r/20201202161245.2406143-2-vkuznets@redhat.com Signed-off-by: Wei Liu <wei.liu@kernel.org>
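The equivalence the commit relies on, spelled out (the per-page operation in the loop body is an assumption):

    /* alloc_unit = 512: get_order(512 << PAGE_SHIFT) = get_order(2M) = 9,  1 << 9 == 512
     * alloc_unit = 1:   get_order(1 << PAGE_SHIFT)   = get_order(4K) = 0,  1 << 0 == 1
     * so the loop bound can simply be alloc_unit: */
    for (i = 0; i < alloc_unit; i++)
            __SetPageOffline(pg + i);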
2020-12-13driver core: platform: don't oops in platform_shutdown() on unbound devicesDmitry Baryshkov
On shutdown the driver core calls the bus' shutdown callback also for unbound devices. A driver's shutdown callback however is only called for devices bound to this driver. Commit 9c30921fe799 ("driver core: platform: use bus_type functions") changed the platform bus from driver callbacks to bus callbacks, so the shutdown function must be prepared to be called without a driver. Add the corresponding check in the shutdown function. Fixes: 9c30921fe799 ("driver core: platform: use bus_type functions") Tested-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Link: https://lore.kernel.org/r/20201212235533.247537-1-dmitry.baryshkov@linaro.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
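A sketch of the added check; close to, but not necessarily identical to, the actual fix:

    static void platform_shutdown(struct device *_dev)
    {
            struct platform_driver *drv;

            if (!_dev->driver)      /* device never bound: nothing to shut down */
                    return;

            drv = to_platform_driver(_dev->driver);
            if (drv->shutdown)
                    drv->shutdown(to_platform_device(_dev));
    }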
2020-12-13ntp: Fix prototype in the !CONFIG_GENERIC_CMOS_UPDATE caseIngo Molnar
In the !CONFIG_GENERIC_CMOS_UPDATE case the update_persistent_clock64() function gets defined as a stub in ntp.c - make the prototype in <linux/timekeeping.h> conditional on CONFIG_GENERIC_CMOS_UPDATE as well. Fixes: 76e87d96b30b5 ("ntp: Consolidate the RTC update implementation") Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de>
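The shape of the header change as described (a sketch):

    /* <linux/timekeeping.h> */
    #ifdef CONFIG_GENERIC_CMOS_UPDATE
    extern int update_persistent_clock64(struct timespec64 now);
    #endif
    /* the !CONFIG_GENERIC_CMOS_UPDATE stub stays local to kernel/time/ntp.c */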
2020-12-12net: x25: Remove unimplemented X.25-over-LLC code stubsXie He
According to the X.25 documentation, there was a plan to implement X.25-over-802.2-LLC. It was never finished but left various code stubs in the X.25 code. At this time it is unlikely that it will ever be finished, so it may be better to remove those code stubs. Also change the documentation to make it clear that this is not an ongoing plan anymore. Change words like "will" to "could", "would", etc. Cc: Martin Schiller <ms@dev.tdt.de> Signed-off-by: Xie He <xie.he.0141@gmail.com> Link: https://lore.kernel.org/r/20201209033346.83742-1-xie.he.0141@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-12inet: frags: batch fqdir destroy worksSeongJae Park
On a few of our systems, I found that frequent 'unshare(CLONE_NEWNET)' calls make the number of active slab objects, including the 'sock_inode_cache' type, increase rapidly and continuously. As a result, memory pressure occurs. In more detail, I made an artificial reproducer that resembles the workload in which we found the problem and reproduces it faster. It merely repeats 'unshare(CLONE_NEWNET)' 50,000 times in a loop, which takes about 2 minutes. On a 40 CPU cores / 70GB DRAM machine, the available memory continuously decreased at a fast rate (about 120MB per second, 15GB in total within the 2 minutes). Note that the issue doesn't reproduce on every machine. On my 6 CPU cores machine, the problem didn't reproduce. 'cleanup_net()' and 'fqdir_work_fn()' are functions that deallocate the relevant memory objects. They are asynchronously invoked by the work queues and internally use 'rcu_barrier()' to ensure safe destruction. 'cleanup_net()' works in a batched manner in a single thread worker, while 'fqdir_work_fn()' works for each 'fqdir_exit()' call in the 'system_wq'. Therefore, 'fqdir_work_fn()' was called frequently under the workload and made the contention for 'rcu_barrier()' high. In more detail, the global mutex 'rcu_state.barrier_mutex' became the bottleneck. This commit avoids such contention by doing the 'rcu_barrier()' and subsequent lightweight work in a batched manner, similar to 'cleanup_net()'. The fqdir hashtable destruction, which is done before the 'rcu_barrier()', is still allowed to run in parallel for fast processing, but this commit makes it use a dedicated work queue instead of the 'system_wq', to make sure that the number of threads is bounded. Signed-off-by: SeongJae Park <sjpark@amazon.de> Reviewed-by: Eric Dumazet <edumazet@google.com> Link: https://lore.kernel.org/r/20201211112405.31158-1-sjpark@amazon.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2020-12-12nfc: s3fwrn5: let core configure the interrupt triggerKrzysztof Kozlowski
If interrupt trigger is not set when requesting the interrupt, the core will take care of reading trigger type from Devicetree. There is no point to do it in the driver. Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org> Link: https://lore.kernel.org/r/20201210211824.214949-1-krzk@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>