2021-06-24  ext4: consolidate checks for resize of bigalloc into ext4_resize_begin  (Josh Triplett)
Two different places checked for attempts to resize a filesystem with the bigalloc feature. Move the check into ext4_resize_begin, which both places already call.

Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Link: https://lore.kernel.org/r/bee03303d999225ecb3bfa5be8576b2f4c6edbe6.1623093259.git.josh@joshtriplett.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
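A minimal sketch of the consolidated check, assuming the usual ext4 feature helpers; the warning text is illustrative, not the verbatim patch:

  static int ext4_resize_begin(struct super_block *sb)
  {
          /* Reject bigalloc once, on behalf of both resize entry points. */
          if (ext4_has_feature_bigalloc(sb)) {
                  ext4_warning(sb, "Online resizing not supported with bigalloc");
                  return -EOPNOTSUPP;
          }

          /* ... existing permission and state checks ... */
          return 0;
  }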
2021-06-24  hwmon: Support set_trips() of thermal device ops  (Dmitry Osipenko)
Support the set_trips() callback of thermal device ops. This allows a HWMON device to promptly notify the thermal core about temperature changes, which is very handy when the HWMON sensor is used by a CPU thermal zone that performs passive cooling and emergency shutdown on overheat. The thermal core will be able to react faster to temperature changes.

The set_trips() callback is entirely optional. If the HWMON sensor doesn't support setting thermal trips, the callback is a no-op, and the dummy callback has no effect on the thermal core. Temperature trips either complement the thermal core's temperature polling mechanism or replace polling entirely if the sensor can set trips and polling is disabled for a particular device in the device tree.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20210623042231.16008-3-digetx@gmail.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
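A sketch of the optional-callback shape, with hypothetical glue names (hwmon_thermal_data, can_set_trips, set_window) standing in for the real hwmon plumbing:

  static int hwmon_thermal_set_trips(void *data, int low, int high)
  {
          struct hwmon_thermal_data *tdata = data;    /* hypothetical glue type */

          /* Sensor cannot program trips: succeed as a harmless no-op. */
          if (!tdata->can_set_trips)
                  return 0;

          /* Program the [low, high] window (millidegrees Celsius) so the
           * chip interrupts the host when the temperature leaves it. */
          return tdata->set_window(tdata, low, high); /* hypothetical helper */
  }

  static const struct thermal_zone_of_device_ops hwmon_thermal_ops = {
          .get_temp  = hwmon_thermal_get_temp,
          .set_trips = hwmon_thermal_set_trips,
  };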
2021-06-24  hwmon: (lm90) Prevent integer underflows of temperature calculations  (Dmitry Osipenko)
The min/max/crit and all other temperature values that are passed to the driver are not range-limited, and a value close to INT_MIN results in integer underflow of the temperature calculations made by the driver for the LM99 sensor. Temperature hysteresis is among the values that need to be limited, but limiting of hysteresis is independent of the sensor version. Add the missing limits.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Link: https://lore.kernel.org/r/20210623042231.16008-2-digetx@gmail.com
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
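A sketch of the idea using the kernel's clamp_val(); the bounds and the temp_to_reg() conversion helper are illustrative, not the driver's exact limits:

  static int lm90_set_temp_sketch(long val /* millidegrees C from sysfs */)
  {
          /* Clamp before any arithmetic so a value near INT_MIN cannot
           * underflow the fixed-point conversion below. */
          val = clamp_val(val, -128000, 127875);
          return temp_to_reg(val);        /* hypothetical conversion helper */
  }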
2021-06-24  ext4: remove duplicate definition of ext4_xattr_ibody_inline_set()  (Ritesh Harjani)
ext4_xattr_ibody_inline_set() & ext4_xattr_ibody_set() have the exact same definition. Hence remove ext4_xattr_ibody_inline_set() and all its call references. Convert the callers of it to call ext4_xattr_ibody_set() instead.

[ Modified to preserve ext4_xattr_ibody_set() and remove ext4_xattr_ibody_inline_set() instead. -- TYT ]

Signed-off-by: Ritesh Harjani <riteshh@linux.ibm.com>
Link: https://lore.kernel.org/r/fd566b799bbbbe9b668eb5eecde5b5e319e3694f.1622685482.git.riteshh@linux.ibm.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
2021-06-25  powerpc/64s: Fix copy-paste data exposure into newly created tasks  (Nicholas Piggin)
copy-paste contains implicit "copy buffer" state that can contain arbitrary user data (if the user process executes a copy instruction). This could be snooped by another process if a context switch hits while the state is live. So cp_abort is executed on context switch to clear out possible sensitive data and prevent the leak.

cp_abort is done after the low level _switch(), which means it is never reached by newly created tasks, so they could snoop on this buffer between their first and second context switch.

Fix this by doing the cp_abort before calling _switch. Add some comments which should make the issue harder to miss.

Fixes: 07d2a628bc000 ("powerpc/64s: Avoid cpabort in context switch when possible")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210622053036.474678-1-npiggin@gmail.com
2021-06-25  powerpc/32: Avoid #ifdef nested with FTR_SECTION on booke syscall entry  (Christophe Leroy)
On booke, the SYSCALL_ENTRY macro nests an FTR_SECTION within a #ifdef CONFIG_KVM_BOOKE_HV. Duplicate the single-instruction alternative to avoid the nesting.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/33db61d5f85146262dbe26648f8f87eca3cae393.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/32: Reduce code duplication of system call entry  (Christophe Leroy)
booke and non-booke do pretty similar things in the SYSCALL_ENTRY macro just before jumping to transfer_to_syscall(). Do them in transfer_to_syscall() instead.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/552e27fa09394a6bc70585fcdfa237f99a5d1267.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/32: Interchange r1 and r11 in SYSCALL_ENTRY on booke  (Christophe Leroy)
To better match the non-booke version of the SYSCALL_ENTRY macro, interchange r1 and r11 in the booke version.

While at it, in both versions use r1 instead of r11 to save _NIP and _CCR. All other uses of r11 will go away in the next patch, so don't bother changing them for now.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/1684c39724a069b0ce1aa82eaee6ec194e354e4e.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/32: Interchange r10 and r12 in SYSCALL_ENTRY on non booke  (Christophe Leroy)
To better match the booke version of the SYSCALL_ENTRY macro, interchange r10 and r12 in the non-booke version.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/5ab3a517bc883a2fc905fb2cb5ee9344f37b2cfa.1622818435.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc: Remove klimit  (Christophe Leroy)
klimit is a global variable initialised at build time with the value of _end. This variable is never modified, so the _end symbol can be used directly. Remove klimit.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/9fa9ba6807c17f93f35a582c199c646c4a8bfd9c.1622800638.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/mm: Properly coalesce pages in ptdump  (Christophe Leroy)
Commit aaa229529244 ("powerpc/mm: Add physical address to Linux page table dump") changed range coalescing to only combine ranges that are both virtually and physically contiguous, in order to avoid erroneous combination of unrelated mappings in IOREMAP space.

But in the VMALLOC space, mappings almost never have contiguous physical pages, so the commit mentioned above leads to dumping one line per page for vmalloc mappings.

Taking into account that vmalloc always leaves a gap between two areas, we never have two mappings dumped as a single combination even if they have the exact same flags. The only space that may have encountered such an issue was the early IOREMAP, which does not use the vmalloc engine. But previous commits added gaps between early IO mappings, so it is not an issue anymore.

That commit created some difficulties with KASAN mappings, see commit cabe8138b23c ("powerpc: dump as a single line areas mapping a single physical page.") and with huge pages, see commit b00ff6d8c1c3 ("powerpc/ptdump: Properly handle non standard page size").

So, almost revert commit aaa229529244 to properly coalesce pages mapped with the same flags as before; only keep the display of the first physical address of the range, as it can be useful, especially for IO mappings. This brings powerpc back to the same level as other architectures and simplifies the conversion to GENERIC PTDUMP.

With the patch:

  ---[ kasan shadow mem start ]---
  0xf8000000-0xf8ffffff 0x07000000  16M    huge rw present dirty accessed
  0xf9000000-0xf91fffff 0x01434000  2M     r    present accessed
  0xf9200000-0xf95affff 0x02104000  3776K  rw   present dirty accessed
  0xfef5c000-0xfeffffff 0x01434000  656K   r    present accessed
  ---[ kasan shadow mem end ]---

Before:

  ---[ kasan shadow mem start ]---
  0xf8000000-0xf8ffffff 0x07000000  16M    huge rw present dirty accessed
  0xf9000000-0xf91fffff 0x01434000  16K    r    present accessed
  0xf9200000-0xf9203fff 0x02104000  16K    rw   present dirty accessed
  0xf9204000-0xf9207fff 0x0213c000  16K    rw   present dirty accessed
  0xf9208000-0xf920bfff 0x02174000  16K    rw   present dirty accessed
  0xf920c000-0xf920ffff 0x02188000  16K    rw   present dirty accessed
  0xf9210000-0xf9213fff 0x021dc000  16K    rw   present dirty accessed
  0xf9214000-0xf9217fff 0x02220000  16K    rw   present dirty accessed
  0xf9218000-0xf921bfff 0x023c0000  16K    rw   present dirty accessed
  0xf921c000-0xf921ffff 0x023d4000  16K    rw   present dirty accessed
  0xf9220000-0xf9227fff 0x023ec000  32K    rw   present dirty accessed
  ...
  0xf93b8000-0xf93e3fff 0x02614000  176K   rw   present dirty accessed
  0xf93e4000-0xf94c3fff 0x027c0000  896K   rw   present dirty accessed
  0xf94c4000-0xf94c7fff 0x0236c000  16K    rw   present dirty accessed
  0xf94c8000-0xf94cbfff 0x041f0000  16K    rw   present dirty accessed
  0xf94cc000-0xf94cffff 0x029c0000  16K    rw   present dirty accessed
  0xf94d0000-0xf94d3fff 0x041ec000  16K    rw   present dirty accessed
  0xf94d4000-0xf94d7fff 0x0407c000  16K    rw   present dirty accessed
  0xf94d8000-0xf94f7fff 0x041c0000  128K   rw   present dirty accessed
  ...
  0xf95ac000-0xf95affff 0x042b0000  16K    rw   present dirty accessed
  0xfef5c000-0xfeffffff 0x01434000  16K    r    present accessed
  ---[ kasan shadow mem end ]---

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c56ce1f5c3c75adc9811b1a5f9c410fa74183a8d.1618828806.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/mm: Leave a gap between early allocated IO areas  (Christophe Leroy)
The vmalloc system leaves a gap between allocated areas, which helps catch overflows. Do the same for IO areas which are allocated with early_ioremap_range() until slab_is_available().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/c433e358190fb5d47650463ea1ab755fc7b73e6e.1618828806.git.christophe.leroy@csgroup.eu
2021-06-25  powerpc/papr_scm: Properly handle UUID types and API  (Andy Shevchenko)
Parse into, and export from, the kernel's own UUID type before dereferencing. This also fixes a wrong comment (a little-endian UUID is something else) and should eliminate the direct strict-type assignments.

Fixes: 43001c52b603 ("powerpc/papr_scm: Use ibm,unit-guid as the iset cookie")
Fixes: 259a948c4ba1 ("powerpc/pseries/scm: Use a specific endian format for storing uuid from the device tree")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210616134303.58185-1-andriy.shevchenko@linux.intel.com
2021-06-25  powerpc/pseries: fail quicker in dlpar_memory_add_by_ic()  (Daniel Henrique Barboza)
The validation done at the start of dlpar_memory_add_by_ic() is an all-or-nothing scenario: if any LMB in the range is marked as RESERVED we can fail right away.

We can then remove the 'lmbs_available' variable and its check against 'lmbs_to_add', since the whole LMB range was already validated in the previous step.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210622133923.295373-4-danielhb413@gmail.com
2021-06-25  powerpc/pseries: break early in dlpar_memory_add_by_count() loops  (Daniel Henrique Barboza)
After a successful dlpar_add_lmb() call the LMB is marked as reserved. Later on, depending on whether we added enough LMBs or not, we rely on the marked LMBs to see which ones might need to be removed, and we remove the reservation of all of them.

These are done in for_each_drmem_lmb() loops without any break condition. This means that we're going to check all LMBs of the partition even after going through all the reserved ones. This patch adds break conditions in both loops to avoid this. The 'lmbs_added' variable was renamed to 'lmbs_reserved', and it is now decremented each time an LMB reservation is removed, indicating whether there are still marked LMBs to be processed.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210622133923.295373-3-danielhb413@gmail.com
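A simplified sketch of the early-exit pattern using the drmem iteration helpers (the real loops do more per LMB):

  struct drmem_lmb *lmb;
  int lmbs_reserved = /* number of LMBs reserved earlier */;

  for_each_drmem_lmb(lmb) {
          if (!drmem_lmb_reserved(lmb))
                  continue;

          /* ... handle this LMB ... */
          drmem_remove_lmb_reservation(lmb);

          /* Every reserved LMB has been handled: stop scanning the
           * remaining LMBs of the partition. */
          if (--lmbs_reserved == 0)
                  break;
  }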
2021-06-25  powerpc/pseries: skip reserved LMBs in dlpar_memory_add_by_count()  (Daniel Henrique Barboza)
The function is counting reserved LMBs as available to be added, but they aren't. This will cause the function to miscalculate the available LMBs and can trigger errors later on when executing dlpar_add_lmb().

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210622133923.295373-2-danielhb413@gmail.com
2021-06-25  powerpc: Offline CPU in stop_this_cpu()  (Nicholas Piggin)
printk_safe_flush_on_panic() has special lock-breaking code for the case where we panic()ed with the console lock held. It relies on the panic IPI causing other CPUs to mark themselves offline. Do as most other architectures do.

This effectively reverts commit de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs"). Unfortunately it may result in some false positive warnings, but the alternative is more situations where we can crash without getting messages out.

Fixes: de6e5d38417e ("powerpc: smp_send_stop do not offline stopped CPUs")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210623041245.865134-1-npiggin@gmail.com
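A simplified sketch of the change; the real stop_this_cpu() does more, but the key line is the set_cpu_online() call that lets the panic path break locks the stopped CPU may hold:

  void stop_this_cpu(void *dummy)
  {
          hard_irq_disable();

          /* Mark ourselves offline, as most architectures do, so
           * printk_safe_flush_on_panic() can break our locks. */
          set_cpu_online(smp_processor_id(), false);

          while (1)
                  cpu_relax();
  }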
2021-06-25  powerpc: Make PPC_IRQ_SOFT_MASK_DEBUG depend on PPC64  (Nicholas Piggin)
32-bit platforms don't have irq soft masking.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210623032909.826010-1-npiggin@gmail.com
2021-06-25  powerpc/64s: Remove irq mask workaround in accumulate_stolen_time()  (Nicholas Piggin)
The caller has been moved to C after irq soft-mask state has been reconciled, and Linux IRQs have been marked as disabled, so this no longer needs to play games with IRQ internals.

Fixes: 68b34588e202 ("powerpc/64/sycall: Implement syscall entry/exit logic in C")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210623022924.704645-1-npiggin@gmail.com
2021-06-25  powerpc/pseries: Enable hardlockup watchdog for PowerVM partitions  (Nicholas Piggin)
PowerVM will not arbitrarily oversubscribe or stop guests, page out the guest kernel text to an NFS volume connected by carrier pigeon to abacus-based storage, etc., as a KVM host might. So PowerVM guests are not likely to be killed by the hard lockup watchdog in normal operation, even with shared processor LPARs which still get a minimum allotment of CPU time.

Enable the hard lockup detector by default on !KVM guests, which we will assume is PowerVM. It has been useful in finding problems on bare metal kernels.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210623021528.702241-1-npiggin@gmail.com
2021-06-25  powerpc/64s/interrupt: Check and fix srr_valid without crashing  (Nicholas Piggin)
The PPC_RFI_SRR_DEBUG check added by patch "powerpc/64s: avoid reloading (H)SRR registers if they are still valid" has a few deficiencies. It does not fix the actual problem, it is not enabled by default, and it causes a program check interrupt which can cause more difficulties.

However, there are a lot of paths which may clobber the SRRs or change the return regs, and it is difficult to have high confidence that all paths are covered without wider testing.

Add a relatively low-overhead, always-enabled check that catches most such cases, reports once, and fixes things up so the kernel can continue.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Rebase, use switch & INT names, squash in race fix from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2021-06-25  powerpc/interrupt: Remove prep_irq_for_user_exit()  (Christophe Leroy)
prep_irq_for_user_exit() has only one caller, squash it inside that caller.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-18-npiggin@gmail.com
2021-06-25  powerpc/interrupt: Refactor prep_irq_for_{user/kernel_enabled}_exit()  (Christophe Leroy)
prep_irq_for_user_exit() is a superset of prep_irq_for_kernel_enabled_exit(). Rename prep_irq_for_kernel_enabled_exit() as prep_irq_for_enabled_exit() and have prep_irq_for_user_exit() use it.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-17-npiggin@gmail.com
2021-06-25  powerpc/interrupt: Interchange prep_irq_for_{kernel_enabled/user}_exit()  (Christophe Leroy)
prep_irq_for_user_exit() is a superset of prep_irq_for_kernel_enabled_exit(). In order to allow refactoring in the following patch, interchange the two. This will allow prep_irq_for_user_exit() to call a renamed version of prep_irq_for_kernel_enabled_exit().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-16-npiggin@gmail.com
2021-06-25  powerpc/interrupt: Refactor interrupt_exit_user_prepare()  (Christophe Leroy)
interrupt_exit_user_prepare() is a superset of interrupt_exit_user_prepare_main(). Refactor to avoid code duplication.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-15-npiggin@gmail.com
2021-06-25  powerpc/interrupt: Rename and lightly change syscall_exit_prepare_main()  (Christophe Leroy)
Rename syscall_exit_prepare_main() to interrupt_exit_prepare_main(). Pass it 'ret' so that it can OR it in directly, instead of ORing twice, once inside the function and once outside. And remove the 'r3' parameter, which is not used.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[np: split out some changes into other patches]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-14-npiggin@gmail.com
2021-06-25  powerpc/64: use interrupt restart table to speed up return from interrupt  (Nicholas Piggin)
Use the restart table facility to return from interrupt or system calls without disabling MSR[EE] or MSR[RI].

Interrupt return asm is put into the low soft-masked region, to prevent interrupts being processed here, although they are still taken as masked interrupts, which causes the SRRs to be clobbered and a pending soft-masked interrupt to require replaying.

If such an interrupt happens, the return code uses restart table regions to redirect to a fixup handler rather than continue with the exit. The fixup handler reloads r1 for the interrupt stack, reloads registers, and sets state up to replay the soft-masked interrupt and try the exit again.

Some types of security exit fallback flushes and barriers are currently unable to cope with reentrant interrupts, e.g., because they store some state in the scratch SPR which would be clobbered even by masked interrupts. For now the interrupts-enabled exits are disabled when these flushes are used.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Guard unused exit_must_hard_disable() as reported by lkp]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-13-npiggin@gmail.com
2021-06-25  powerpc/64: treat low kernel text as irqs soft-masked  (Nicholas Piggin)
Treat code below __end_soft_masked as soft-masked for the purpose of alternate return. 64s already mostly does this for scv entry. This will be used to exit from interrupts without disabling MSR[EE].

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-12-npiggin@gmail.com
2021-06-25  powerpc/64: interrupt soft-enable race fix  (Nicholas Piggin)
Prevent interrupt restore from allowing racing hard interrupts going ahead of previous soft-pending ones, by using the soft-masked restart handler to allow a store to clear the soft-mask while knowing nothing is soft-pending.

This probably doesn't matter much in practice, but it's a simple demonstrator / test case to exercise the restart table logic.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-11-npiggin@gmail.com
2021-06-25  powerpc/64: allow alternate return locations for soft-masked interrupts  (Nicholas Piggin)
The exception table fixup adjusts a failed page fault's interrupt return location, if it was taken at an address specified in the exception table, to a corresponding fixup handler address.

Introduce a variation of that idea which adds a fixup table for NMIs and soft-masked asynchronous interrupts. This will be used to protect certain critical sections that are sensitive to being clobbered by interrupts coming in (due to using the same SPRs and/or irq soft-mask state).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-10-npiggin@gmail.com
2021-06-25  powerpc/64s: save one more register in the masked interrupt handler  (Nicholas Piggin)
This frees up one more register (and takes advantage of that to clean things up a little bit). This register will be used in the following patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-9-npiggin@gmail.com
2021-06-25  powerpc/64s: system call avoid setting MSR[RI] until we set MSR[EE]  (Nicholas Piggin)
This extends the MSR[RI]=0 window a little further into the system call in order to pair RI and EE enabling with a single mtmsrd.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-8-npiggin@gmail.com
2021-06-25  powerpc/64: move interrupt return asm to interrupt_64.S  (Nicholas Piggin)
The next patch would like to move interrupt return assembly code to a low location before general text, so move it into its own file and include it via head_64.S.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-7-npiggin@gmail.com
2021-06-25  powerpc/64s: avoid reloading (H)SRR registers if they are still valid  (Nicholas Piggin)
When an interrupt is taken, the SRR registers are set to return to where it left off. Unless they are modified in the meantime, or the return address or MSR are modified, there is no need to reload these registers when returning from interrupt.

Introduce per-CPU flags that track the validity of SRR and HSRR registers. These are cleared when returning from interrupt, when using the registers for something else (e.g., OPAL calls), when adjusting the return address or MSR of a context, and when context switching (which changes the return address and MSR).

This improves the performance of interrupt returns.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fold in fixup patch from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-5-npiggin@gmail.com
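A sketch of the bookkeeping on the exit path, under an assumed paca field named srr_valid (the series tracks SRR and HSRR separately):

  /* Sketch: skip the expensive mtspr pair when nothing has clobbered
   * the SRRs since interrupt entry. 'srr_valid' is an assumed field. */
  if (!local_paca->srr_valid) {
          mtspr(SPRN_SRR0, regs->nip);
          mtspr(SPRN_SRR1, regs->msr);
          local_paca->srr_valid = 1;
  }
  /* Any code that modifies regs->nip/msr or reuses the SRRs (OPAL
   * calls, context switch, masked interrupts) must clear the flag. */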
2021-06-25  powerpc/64s: introduce different functions to return from SRR vs HSRR interrupts  (Nicholas Piggin)
This makes no real difference yet except that HSRR type interrupts will use hrfid to return. This is important for the next patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-4-npiggin@gmail.com
2021-06-25  powerpc: remove interrupt exit helpers unused argument  (Nicholas Piggin)
The msr argument is not used, remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-3-npiggin@gmail.com
2021-06-25  powerpc/interrupt: Fix CONFIG ifdef typo  (Christophe Leroy)
CONFIG_PPC_BOOK3S should be CONFIG_PPC_BOOK3S_64. restore_math is a no-op for other configurations.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
[np: split from another patch]
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210617155116.2167984-2-npiggin@gmail.com
2021-06-25  powerpc/prom_init: Pass linux_banner to firmware via option vector 7  (Michael Ellerman)
Pass the value of linux_banner to firmware via option vector 7.

Option vector 7 is described in "LoPAR" Linux on Power Architecture Reference v2.9, in table B.7 on page 824:

  An ASCII character formatted null terminated string that describes
  the client operating system. The string shall be human readable and
  may be displayed on the console.

The string can be up to 256 bytes total, including the nul terminator.

linux_banner contains lots of information, and should make it possible to identify the exact kernel version that is running:

  const char linux_banner[] =
      "Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@"
      LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n";

For example:

  Linux version 4.15.0-144-generic (buildd@bos02-ppc64el-018) (gcc
  version 7.5.0 (Ubuntu 7.5.0-3ubuntu1~18.04)) #148-Ubuntu SMP Sat May
  8 02:32:13 UTC 2021 (Ubuntu 4.15.0-144.148-generic 4.15.18)

It's also printed at boot to the console/dmesg, which should make it possible to correlate what firmware receives with the console/dmesg on the machine.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210621064938.2021419-2-mpe@ellerman.id.au
2021-06-25  powerpc/prom_init: Convert prom_strcpy() into prom_strscpy_pad()  (Michael Ellerman)
In a subsequent patch we'd like to have something like a strscpy_pad() implementation usable in prom_init.c. Currently we have a strcpy() implementation with only one caller, so convert it into strscpy_pad() and update the caller.

Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210621064938.2021419-1-mpe@ellerman.id.au
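A sketch of what a prom_init-safe strscpy_pad() can look like, written to match strscpy_pad() semantics (copy up to n - 1 bytes, NUL-pad the remainder, return bytes copied or -E2BIG on truncation); illustrative, not copied from the patch:

  static ssize_t __init prom_strscpy_pad(char *dest, const char *src, size_t n)
  {
          ssize_t rc;
          size_t i;

          if (n == 0 || n > INT_MAX)
                  return -E2BIG;

          /* Copy up to n bytes of src. */
          for (i = 0; i < n && src[i] != '\0'; i++)
                  dest[i] = src[i];

          rc = i;

          if (rc == n) {          /* no room left for the NUL */
                  rc = -E2BIG;
                  i--;            /* rewind so dest stays terminated */
          }

          for (; i < n; i++)      /* NUL-pad the remainder */
                  dest[i] = '\0';

          return rc;
  }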
2021-06-25  powerpc/64s: Fix boot failure with 4K Radix  (Michael Ellerman)
When using the Radix MMU our PGD is always 64K, and must be naturally aligned. For a 4K page size kernel that means page alignment of swapper_pg_dir is not sufficient, leading to failure to boot.

Use the existing MAX_PTRS_PER_PGD, which has the correct value, and avoids us hard-coding 64K here.

Fixes: e72421a085a8 ("powerpc: Define swapper_pg_dir[] in C")
Reported-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20210624123420.2784187-1-mpe@ellerman.id.au
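One plausible shape of the fix (a sketch, not the verbatim patch): derive both the array size and its alignment from MAX_PTRS_PER_PGD, so the 64K natural alignment holds even on a 4K-page Radix kernel:

  /* Sketch: a naturally aligned PGD regardless of kernel page size. */
  pgd_t swapper_pg_dir[MAX_PTRS_PER_PGD]
          __aligned(sizeof(pgd_t) * MAX_PTRS_PER_PGD);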
2021-06-24  bpf: Support all gso types in bpf_skb_change_proto()  (Maciej Żenczykowski)
Since we no longer modify gso_size, it is now theoretically safe to not set SKB_GSO_DODGY and reset gso_segs to zero. This also means the skb_is_gso_tcp() check should no longer be necessary.

Unfortunately we cannot remove the skb_{decrease,increase}_gso_size() helpers, as they are still used elsewhere:

  bpf_skb_net_grow() without BPF_F_ADJ_ROOM_FIXED_GSO
  bpf_skb_net_shrink() without BPF_F_ADJ_ROOM_FIXED_GSO
  net/core/lwt_bpf.c's handle_gso_type()

Signed-off-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Dongseok Yi <dseok.yi@samsung.com>
Cc: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/bpf/20210617000953.2787453-3-zenczykowski@gmail.com
2021-06-24  mcb: Use DEFINE_RES_MEM() helper macro and fix the end address  (Zhen Lei)
Use DEFINE_RES_MEM() to save a couple of lines of code, which makes the code a bit shorter and easier to read; the start address no longer needs to appear twice.

By the way, the value of '.end' should be "start + size - 1", so the previous open-coded version should have subtracted 1.

Fixes: acf5e051ac44 ("MCB: add support for SC31 to mcb-lpc")
Fixes: 73edc8f7ccef ("mcb: Added support for LPC or non PCI based MCB carrier")
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Link: https://lore.kernel.org/r/20210616073030.834-2-thunder.leizhen@huawei.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
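A before/after sketch with an illustrative base address and size (the real drivers' constants differ):

  /* Before (sketch): start repeated, and .end off by one. */
  static struct resource mcb_lpc_mem_old = {
          .start = 0xf000e000,            /* illustrative address */
          .end   = 0xf000e000 + 0x200,    /* should be + 0x200 - 1 */
          .flags = IORESOURCE_MEM,
  };

  /* After: DEFINE_RES_MEM(start, size) expands .end to start + size - 1. */
  static struct resource mcb_lpc_mem = DEFINE_RES_MEM(0xf000e000, 0x200);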
2021-06-24  PNP: moved EXPORT_SYMBOL so that it immediately followed its function/variable  (Jinchao Wang)
Change made to resolve the following checkpatch message:

  WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable

Signed-off-by: Jinchao Wang <wjc@cdjrlc.com>
Link: https://lore.kernel.org/r/20210624020929.49968-1-wjc@cdjrlc.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
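The pattern checkpatch asks for, shown on a made-up symbol (pnp_example_helper is illustrative only):

  int pnp_example_helper(void)
  {
          return 0;
  }
  EXPORT_SYMBOL(pnp_example_helper);      /* directly after its definition */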
2021-06-24  KVM: arm64: Set the MTE tag bit before releasing the page  (Marc Zyngier)
Setting a page flag without holding a reference to the page is living dangerously. In the tag-writing path, we drop the reference to the page by calling kvm_release_pfn_dirty(), and only then set the PG_mte_tagged bit. It would be safer to do it the other way round.

Fixes: f0376edb1ddca ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
Cc: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/87k0mjidwb.wl-maz@kernel.org
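A sketch of the reordering in the tag-writing path (surrounding context elided; page is the struct page for pfn):

  /* Before: flag set after the reference is dropped (racy). */
  kvm_release_pfn_dirty(pfn);
  set_bit(PG_mte_tagged, &page->flags);

  /* After: publish PG_mte_tagged while we still hold the reference. */
  set_bit(PG_mte_tagged, &page->flags);
  kvm_release_pfn_dirty(pfn);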
2021-06-24  bus: mhi: pci-generic: Add missing 'pci_disable_pcie_error_reporting()' calls  (Christophe JAILLET)
If an error occurs after a 'pci_enable_pcie_error_reporting()' call, it must be undone by a corresponding 'pci_disable_pcie_error_reporting()' call. Add the missing call in the error handling path of the probe and in the remove function.

Cc: <stable@vger.kernel.org>
Fixes: b012ee6bfe2a ("mhi: pci_generic: Add PCI error handlers")
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/f70c14701f4922d67e717633c91b6c481b59f298.1623445348.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20210621161616.77524-6-manivannan.sadhasivam@linaro.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
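A sketch of the balanced unwind, with a hypothetical label and a stand-in setup step (the real probe does much more):

  static int mhi_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
  {
          int err;

          pci_enable_pcie_error_reporting(pdev);

          err = register_controller(pdev);   /* stand-in for real setup */
          if (err)
                  goto err_disable_reporting;

          return 0;

  err_disable_reporting:
          /* Undo pci_enable_pcie_error_reporting() on failure; the
           * remove function needs the same call. */
          pci_disable_pcie_error_reporting(pdev);
          return err;
  }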
2021-06-24  bus: mhi: Wait for M2 state during system resume  (Baochen Qiang)
During system resume, the MHI host triggers the M3->M0 transition and then waits for the target device to enter the M0 state. Once done, the device queues a state change event into the ctrl event ring and notifies the MHI host by raising an interrupt, where a tasklet is scheduled to process this event. In most cases, the tasklet is served timely and the wait operation succeeds.

However, there are cases where the CPU is busy and cannot serve this tasklet for some time. Once the delay goes long enough, the device moves itself to the M1 state and also interrupts the MHI host after inserting a new state change event into the ctrl ring. Later, when the CPU finally has time to process the ring, there will be two events:

  1. The M3->M0 event, which was queued first and is processed first. The tasklet handler serves the event, updates the device state to M0 and wakes up the task.

  2. The M0->M1 event, which is processed later. The tasklet handler triggers the M1->M2 transition and updates the device state to M2 directly, then wakes up the MHI host (if it is still sleeping on this wait queue).

Note that although the MHI host has been woken up while processing the first event, it may still have no chance to run before the second event is processed. In other words, the MHI host has to keep waiting until timeout, causing the M0 state to be missed. Kernel log here:

  ...
  Apr 15 01:45:14 test-NUC8i7HVK kernel: [ 4247.911251] mhi 0000:06:00.0: Entered with PM state: M3, MHI state: M3
  Apr 15 01:45:14 test-NUC8i7HVK kernel: [ 4247.917762] mhi 0000:06:00.0: State change event to state: M0
  Apr 15 01:45:14 test-NUC8i7HVK kernel: [ 4247.917767] mhi 0000:06:00.0: State change event to state: M1
  Apr 15 01:45:14 test-NUC8i7HVK kernel: [ 4338.788231] mhi 0000:06:00.0: Did not enter M0 state, MHI state: M2, PM state: M2
  ...

Fix this issue by simply adding M2 as a valid state for resume.

Tested-on: WCN6855 hw2.0 PCI WLAN.HSP.1.1-01720.1-QCAHSPSWPL_V1_V2_SILICONZ_LITE-1

Cc: stable@vger.kernel.org
Fixes: 0c6b20a1d720 ("bus: mhi: core: Add support for MHI suspend and resume")
Signed-off-by: Baochen Qiang <bqiang@codeaurora.org>
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20210524040312.14409-1-bqiang@codeaurora.org
[mani: slightly massaged the commit message]
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20210621161616.77524-4-manivannan.sadhasivam@linaro.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
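A sketch of the widened wait condition in the resume path; treat the exact expression as illustrative of the fix rather than the verbatim patch:

  /* Sketch: accept M2 as well as M0 when waiting out the resume. */
  ret = wait_event_timeout(mhi_cntrl->state_event,
                           mhi_cntrl->dev_state == MHI_STATE_M0 ||
                           mhi_cntrl->dev_state == MHI_STATE_M2 ||
                           MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
                           msecs_to_jiffies(mhi_cntrl->timeout_ms));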
2021-06-24  bus: mhi: core: Fix power down latency  (Loic Poulain)
On a graceful power-down/disable transition, when an MHI reset is performed, the MHI device loses its context, including interrupt configuration. However, the current implementation waits for an event (IRQ) driven state change to confirm the reset has been completed, which never happens and causes a reset timeout, leading to unexpectedly high latency of the mhi_power_down procedure (up to 45 seconds).

Fix that by moving to the recently introduced poll_reg_field method, waiting for the reset bit to be cleared, in the same way as the power_on procedure.

Cc: stable@vger.kernel.org
Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Reviewed-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Link: https://lore.kernel.org/r/1620029090-8975-1-git-send-email-loic.poulain@linaro.org
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Link: https://lore.kernel.org/r/20210621161616.77524-3-manivannan.sadhasivam@linaro.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-06-24  intel_th: Wait until port is in reset before programming it  (Alexander Shishkin)
Some devices don't drain their pipelines if we don't make sure that the corresponding output port is in reset before programming it for a new trace capture, resulting in bits of old trace appearing in the new trace capture. Fix that by explicitly making sure the reset is asserted before programming new trace capture.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lore.kernel.org/r/20210621151246.31891-5-alexander.shishkin@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-06-24  intel_th: msu: Make contiguous buffers uncached  (Alexander Shishkin)
We already keep the multiblock mode buffers uncached, but forget the single mode. Address this.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lore.kernel.org/r/20210621151246.31891-4-alexander.shishkin@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-06-24  intel_th: Remove an unused exit point from intel_th_remove()  (Uwe Kleine-König)
As described in the added comment, device_for_each_child() never returns a non-zero value. So remove the corresponding error check. This simplifies the quest to make struct bus_type::remove() return void.

Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lore.kernel.org/r/20210621151246.31891-3-alexander.shishkin@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
As described in the added comment device_for_each_child() never returns a non-zero value. So remove the corresponding error check. This simplifies the quest to make struct bus_type::remove() return void. Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com> Link: https://lore.kernel.org/r/20210621151246.31891-3-alexander.shishkin@linux.intel.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>