Age    Commit message    Author
2020-07-29powerpc/powernv/sriov: Remove unused but set variable 'phb'Wei Yongjun
Gcc reports the following warning: arch/powerpc/platforms/powernv/pci-sriov.c:602:25: warning: variable 'phb' set but not used [-Wunused-but-set-variable] 602 | struct pnv_phb *phb; | ^~~ This variable is not used, so remove it. Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com> Acked-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200727171112.2781-1-weiyongjun1@huawei.com
2020-07-29powerpc: use for_each_child_of_node() macroQinglang Miao
Use the for_each_child_of_node() macro instead of open-coding it. Signed-off-by: Qinglang Miao <miaoqinglang@huawei.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200728022807.87815-1-miaoqinglang@huawei.com
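A minimal sketch of the conversion (the function and the trivial loop body are illustrative; the helper itself is the standard of.h macro):

  #include <linux/of.h>

  static int count_children(struct device_node *np)
  {
          struct device_node *child;
          int n = 0;

          /* open-coded form being replaced */
          for (child = of_get_next_child(np, NULL); child;
               child = of_get_next_child(np, child))
                  n++;

          /* equivalent form using the helper macro */
          n = 0;
          for_each_child_of_node(np, child)
                  n++;

          return n;
  }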
2020-07-29RDMA/efa: Add EFA 0xefa1 PCI IDGal Pressman
Add support for 0xefa1 devices. Link: https://lore.kernel.org/r/20200722140312.3651-5-galpress@amazon.com Reviewed-by: Shadi Ammouri <sammouri@amazon.com> Reviewed-by: Yossi Leybovich <sleybo@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
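The change is essentially one new entry in the driver's PCI ID table; a hedged sketch (the 0xefa0 entry and the table name are assumed for context):

  #include <linux/module.h>
  #include <linux/pci.h>

  static const struct pci_device_id efa_pci_tbl[] = {
          { PCI_VDEVICE(AMAZON, 0xefa0) },
          { PCI_VDEVICE(AMAZON, 0xefa1) },   /* the new device ID */
          { }
  };
  MODULE_DEVICE_TABLE(pci, efa_pci_tbl);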
2020-07-29RDMA/efa: User/kernel compatibility handshake mechanismGal Pressman
Introduce a mechanism that performs a handshake between the userspace provider and the kernel driver, verifying that the user supports all the features required to operate correctly. The handshake verifies the needed functionality by comparing the reported device caps and the provider caps. If the device reports a non-zero capability, the appropriate comp mask is required from the userspace provider in order to allocate the context. Link: https://lore.kernel.org/r/20200722140312.3651-4-galpress@amazon.com Reviewed-by: Shadi Ammouri <sammouri@amazon.com> Reviewed-by: Yossi Leybovich <sleybo@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
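A rough sketch of the idea, with made-up capability and comp-mask bit names (not the real EFA ABI):

  #include <linux/errno.h>
  #include <linux/types.h>

  /* Refuse to allocate a context if the device reports a capability that the
   * userspace provider has not acknowledged via its comp mask. */
  static int efa_check_user_comp(u64 dev_caps, u64 user_comp_mask,
                                 u64 cap_bit, u64 comp_bit)
  {
          if ((dev_caps & cap_bit) && !(user_comp_mask & comp_bit))
                  return -EOPNOTSUPP;
          return 0;
  }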
2020-07-29RDMA/efa: Expose minimum SQ sizeGal Pressman
The device reports the minimum SQ size required for creation. This patch queries the min SQ size and reports it back to the userspace library. Link: https://lore.kernel.org/r/20200722140312.3651-3-galpress@amazon.com Reviewed-by: Firas JahJah <firasj@amazon.com> Reviewed-by: Shadi Ammouri <sammouri@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-29RDMA/efa: Expose maximum TX doorbell batchGal Pressman
The device reports the maximum number of bytes to be written before ringing the doorbell (zero means unlimited). This patch queries the max batch size and reports it back to the userspace library. Link: https://lore.kernel.org/r/20200722140312.3651-2-galpress@amazon.com Reviewed-by: Daniel Kranzdorf <dkkranzd@amazon.com> Reviewed-by: Firas JahJah <firasj@amazon.com> Signed-off-by: Gal Pressman <galpress@amazon.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-29usb: typec: tcpm: Add WARN_ON to ensure we are not trying to send 2 VDM packets at the same timeHans de Goede
The tcpm.c code for sending VDMs assumes that there will only be one VDM in flight at a time. The "queue" used by tcpm_queue_vdm is only one entry deep. Since this relies on the higher layers (tcpm state-machine and alt-mode drivers) ensuring that a new VDM is not queued before the old one has been completely sent (or has timed out), add a WARN_ON to check for this. Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-6-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
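A hedged sketch of what the check amounts to (a tcpm.c-style fragment; field and enum names are quoted from memory, not verified against the final patch):

  static void tcpm_queue_vdm(struct tcpm_port *port, const u32 header,
                             const u32 *data, int cnt)
  {
          /* The single-entry "queue" must be free; higher layers guarantee
           * that the previous VDM has been sent or has timed out. */
          WARN_ON(port->vdm_state > VDM_STATE_DONE);

          port->vdo_count = cnt + 1;
          port->vdo_data[0] = header;
          memcpy(&port->vdo_data[1], data, sizeof(u32) * cnt);
          port->vdm_state = VDM_STATE_READY;

          mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
  }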
2020-07-29usb: typec: tcpm: Fix AB BA lock inversion between tcpm code and the alt-mode driversHans de Goede
When we receive a PD data packet which ends up being for the alt-mode driver we have the following lock order: 1. tcpm_pd_rx_handler takes the tcpm-port lock 2. We call into the alt-mode driver which takes the alt-mode's lock And when the alt-mode driver initiates communication we have the following lock order: 3. alt-mode driver takes the alt-mode's lock 4. alt-mode driver calls tcpm_altmode_enter which takes the tcpm-port lock This is a classic AB BA lock inversion issue. With the refactoring of tcpm_handle_vdm_request() done before this patch, we don't rely on, or need to make changes to, the tcpm-port data by the time we make call 2 from above. All data to be passed to the alt-mode driver sits on our stack at this point, and thus does not need locking. So after the refactoring we can simply fix this by releasing the tcpm-port lock before calling into the alt-mode driver. This fixes the following lockdep warning: [ 191.454238] ====================================================== [ 191.454240] WARNING: possible circular locking dependency detected [ 191.454244] 5.8.0-rc5+ #1 Not tainted [ 191.454246] ------------------------------------------------------ [ 191.454248] kworker/u8:5/794 is trying to acquire lock: [ 191.454251] ffff9bac8e30d4a8 (&dp->lock){+.+.}-{3:3}, at: dp_altmode_vdm+0x30/0xf0 [typec_displayport] [ 191.454263] but task is already holding lock: [ 191.454264] ffff9bac9dc240a0 (&port->lock#2){+.+.}-{3:3}, at: tcpm_pd_rx_handler+0x43/0x12c0 [tcpm] [ 191.454273] which lock already depends on the new lock. [ 191.454275] the existing dependency chain (in reverse order) is: [ 191.454277] -> #1 (&port->lock#2){+.+.}-{3:3}: [ 191.454286] __mutex_lock+0x7b/0x820 [ 191.454290] tcpm_altmode_enter+0x23/0x90 [tcpm] [ 191.454293] dp_altmode_work+0xca/0xe0 [typec_displayport] [ 191.454299] process_one_work+0x23f/0x570 [ 191.454302] worker_thread+0x55/0x3c0 [ 191.454305] kthread+0x138/0x160 [ 191.454309] ret_from_fork+0x22/0x30 [ 191.454311] -> #0 (&dp->lock){+.+.}-{3:3}: [ 191.454317] __lock_acquire+0x1241/0x2090 [ 191.454320] lock_acquire+0xa4/0x3d0 [ 191.454323] __mutex_lock+0x7b/0x820 [ 191.454326] dp_altmode_vdm+0x30/0xf0 [typec_displayport] [ 191.454330] tcpm_pd_rx_handler+0x11ae/0x12c0 [tcpm] [ 191.454333] process_one_work+0x23f/0x570 [ 191.454336] worker_thread+0x55/0x3c0 [ 191.454338] kthread+0x138/0x160 [ 191.454341] ret_from_fork+0x22/0x30 [ 191.454343] other info that might help us debug this: [ 191.454345] Possible unsafe locking scenario: [ 191.454347] CPU0 CPU1 [ 191.454348] ---- ---- [ 191.454350] lock(&port->lock#2); [ 191.454353] lock(&dp->lock); [ 191.454355] lock(&port->lock#2); [ 191.454357] lock(&dp->lock); [ 191.454360] *** DEADLOCK *** Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-5-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
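A simplified sketch of the resulting call shape (the real fix lives in tcpm_handle_vdm_request and dispatches on an action code computed under the lock; the function and local names here are illustrative):

  #include <linux/mutex.h>
  #include <linux/string.h>
  #include <linux/usb/pd_vdo.h>
  #include <linux/usb/typec_altmode.h>

  static void tcpm_vdm_dispatch_sketch(struct tcpm_port *port,
                                       struct typec_altmode *adev,
                                       const u32 *payload, int cnt)
  {
          u32 vdo[VDO_MAX_SIZE];

          mutex_lock(&port->lock);
          /* parse the message; copy everything the alt-mode driver needs
           * into locals on the stack */
          memcpy(vdo, payload, sizeof(u32) * cnt);
          mutex_unlock(&port->lock);

          /* Port lock dropped: the alt-mode driver may now take its own
           * lock and call tcpm_altmode_enter() (which re-takes the port
           * lock) without inverting the lock order. */
          typec_altmode_vdm(adev, vdo[0], &vdo[1], cnt);
  }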
2020-07-29usb: typec: tcpm: Refactor tcpm_handle_vdm_requestHans de Goede
Refactor tcpm_handle_vdm_request and its tcpm_pd_svdm helper function so that reporting the results of the vdm to the altmode-driver is separated out into a clear separate step inside tcpm_handle_vdm_request, instead of being scattered over various places inside the tcpm_pd_svdm helper. This is a preparation patch for fixing an AB BA lock inversion between the tcpm code and some altmode drivers. Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-4-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29usb: typec: tcpm: Refactor tcpm_handle_vdm_request payload handlingHans de Goede
Refactor the tcpm_handle_vdm_request payload handling by doing the endianness conversion only once directly inside tcpm_handle_vdm_request itself instead of doing it multiple times inside various helper functions called by tcpm_handle_vdm_request. This is a preparation patch for some further refactoring to fix an AB BA lock inversion between the tcpm code and some altmode drivers. Reviewed-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-3-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29usb: typec: tcpm: Add tcpm_queue_vdm_unlocked() helperHans de Goede
Various callers (all the typec_altmode_ops) take the port-lock just for the tcpm_queue_vdm() call. Add a new tcpm_queue_vdm_unlocked() helper which takes the lock, so that its callers don't have to do this themselves. This is a preparation patch for fixing an AB BA lock inversion between the tcpm code and some altmode drivers. Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Reviewed-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-2-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
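The helper is small; a sketch assuming the existing tcpm_queue_vdm():

  static void tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
                                      const u32 *data, int cnt)
  {
          mutex_lock(&port->lock);
          tcpm_queue_vdm(port, header, data, cnt);
          mutex_unlock(&port->lock);
  }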
2020-07-29usb: typec: tcpm: Move mod_delayed_work(&port->vdm_state_machine) call into tcpm_queue_vdm()Hans de Goede
All callers of tcpm_queue_vdm() immediately follow the call with: mod_delayed_work(port->wq, &port->vdm_state_machine, 0); so fold this into tcpm_queue_vdm() itself, as illustrated below. Reviewed-by: Guenter Roeck <linux@roeck-us.net> Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com> Signed-off-by: Hans de Goede <hdegoede@redhat.com> Link: https://lore.kernel.org/r/20200724174702.61754-1-hdegoede@redhat.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
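In caller terms the change looks roughly like this (illustrative fragment):

  /* before: every call site queued the VDM and then kicked the state machine */
  tcpm_queue_vdm(port, header, data, cnt);
  mod_delayed_work(port->wq, &port->vdm_state_machine, 0);

  /* after: tcpm_queue_vdm() schedules &port->vdm_state_machine itself */
  tcpm_queue_vdm(port, header, data, cnt);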
2020-07-29IB/srpt: use new shared CQ mechanismYamin Friedman
Have the driver use shared CQs provided by the rdma core driver. This provides the advantage of improved efficiency handling interrupts. Link: https://lore.kernel.org/r/20200722135629.49467-3-maxg@mellanox.com Signed-off-by: Yamin Friedman <yaminf@mellanox.com> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
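A hedged sketch of the core API these conversions move to (the exact call sites in srpt, isert and iser differ):

  #include <rdma/ib_verbs.h>

  static struct ib_cq *get_shared_cq(struct ib_device *dev, unsigned int nr_cqe)
  {
          /* Instead of ib_alloc_cq() per connection, borrow a CQ from the
           * core's per-device pool; a negative hint lets the core pick a
           * lightly used completion vector. */
          return ib_cq_pool_get(dev, nr_cqe, -1, IB_POLL_SOFTIRQ);
  }

  static void put_shared_cq(struct ib_cq *cq, unsigned int nr_cqe)
  {
          ib_cq_pool_put(cq, nr_cqe);     /* release our share of the CQ */
  }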
2020-07-29IB/isert: use new shared CQ mechanismYamin Friedman
Have the driver use shared CQs provided by the rdma core driver. Since this provides similar functionality to iser_comp it has been removed. Now there is no reason to allocate very large CQs when the driver is loaded, while still gaining the advantage of shared CQs. Previously, when a single connection was opened, a CQ was opened for every core with enough space for eight connections; this is a very large overhead that in most cases will not be utilized. Link: https://lore.kernel.org/r/20200722135629.49467-2-maxg@mellanox.com Signed-off-by: Yamin Friedman <yaminf@mellanox.com> Signed-off-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-29IB/iser: use new shared CQ mechanismYamin Friedman
Have the driver use shared CQs provided by the rdma core driver. Since this provides similar functionality to iser_comp it has been removed. Now there is no reason to allocate very large CQs when the driver is loaded while gaining the advantage of shared CQs. Link: https://lore.kernel.org/r/20200722135629.49467-1-maxg@mellanox.com Signed-off-by: Yamin Friedman <yaminf@mellanox.com> Acked-by: Max Gurtovoy <maxg@mellanox.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
2020-07-29staging/speakup: Move out of stagingSamuel Thibault
The nasty TODO items are done. Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org> Link: https://lore.kernel.org/r/20200729003531.907370-1-samuel.thibault@ens-lyon.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29staging: sm750fb: use generic power managementVaibhav Gupta
Drivers using the legacy power management .suspend()/.resume() callbacks have to manage PCI states and the device's PM states themselves. They also need to take care of the standard configuration registers. Switch to the generic power management framework, using a single "struct dev_pm_ops" variable, to take this unnecessary load off the driver. This also avoids the need for the driver to directly call most of the PCI helper functions and device power state control functions: through the generic framework the PCI core takes care of the necessary operations, and drivers are required to do only device-specific jobs. Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com> Link: https://lore.kernel.org/r/20200728123349.1331679-1-vaibhavgupta40@gmail.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
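A hedged sketch of what such a conversion looks like (callback names assumed; the device-specific bodies are elided):

  #include <linux/device.h>
  #include <linux/pm.h>

  static int __maybe_unused lynxfb_suspend(struct device *dev)
  {
          /* only device-specific work here; the PCI core saves config space
           * and puts the device into the right power state */
          return 0;
  }

  static int __maybe_unused lynxfb_resume(struct device *dev)
  {
          /* device-specific re-initialisation only */
          return 0;
  }

  static SIMPLE_DEV_PM_OPS(lynxfb_pm_ops, lynxfb_suspend, lynxfb_resume);
  /* hooked up from the pci_driver with:  .driver.pm = &lynxfb_pm_ops */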
2020-07-29Staging: rtl8712: Fixed a coding style issueAnkit Baluni
Removed braces from an 'if' condition as it contains only a single statement, and according to the coding style rules braces are not needed in such a case. Signed-off-by: Ankit Baluni <b18007@students.iitmandi.ac.in> Link: https://lore.kernel.org/r/20200729074541.1972-1-b18007@students.iitmandi.ac.in Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29staging: rtl8723bs: remove redundant assignment to variable retColin Ian King
The variable ret is being assigned an error return value that is never read; control passes to a return statement and ret is never referenced. Remove the redundant assignment. Also remove an empty line. Addresses-Coverity: ("Unused value") Signed-off-by: Colin Ian King <colin.king@canonical.com> Link: https://lore.kernel.org/r/20200729100525.573500-1-colin.king@canonical.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29staging: most: usb: remove NET dependencyChristian Gromm
This patch removes the dependency on NET as it is no longer needed. Signed-off-by: Christian Gromm <christian.gromm@microchip.com> Link: https://lore.kernel.org/r/1595927275-27462-1-git-send-email-christian.gromm@microchip.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2020-07-29Merge tag 'usb-ci-v5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb into usb-nextGreg Kroah-Hartman
Peter writes: ENDIAN issue fix and one query controller role API is introduced. * tag 'usb-ci-v5.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/peter.chen/usb: usb: chipidea: imx: get available runtime dr mode for wakeup setting usb: chipidea: add query_available_role interface Documentation: ABI: usb: chipidea: Update Li Jun's e-mail usb: chipidea: udc: fix the ENDIAN issue
2020-07-29Documentation/sysctl: Document uclamp sysctl knobsQais Yousef
Uclamp exposes 3 sysctl knobs: * sched_util_clamp_min * sched_util_clamp_max * sched_util_clamp_min_rt_default Document them in sysctl/kernel.rst. Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200716110347.19553-3-qais.yousef@arm.com
2020-07-29sched/uclamp: Add a new sysctl to control RT default boost valueQais Yousef
RT tasks by default run at the highest capacity/performance level. When uclamp is selected this default behavior is retained by enforcing the requested uclamp.min (p->uclamp_req[UCLAMP_MIN]) of the RT tasks to be uclamp_none(UCLAMP_MAX), which is SCHED_CAPACITY_SCALE; the maximum value. This is also referred to as 'the default boost value of RT tasks'. See commit 1a00d999971c ("sched/uclamp: Set default clamps for RT tasks"). On battery powered devices, it is desirable to control this default (currently hardcoded) behavior at runtime to reduce the energy consumed by RT tasks. For example, for a mobile device manufacturer where the big.LITTLE architecture is dominant, the performance of the little cores varies across SoCs, and on high end ones the big cores could be too power hungry. Given the diversity of SoCs, the new knob allows manufacturers to tune the best performance/power for RT tasks for the particular hardware they run on. They could opt to further tune the value when the user selects a different power saving mode or when the device is actively charging. The runtime aspect of it further helps in creating a single kernel image that can be run on multiple devices that require different tuning. Keep in mind that a lot of RT tasks in the system are created by the kernel. On Android for instance I can see over 50 RT tasks, only a handful of which are created by the Android framework. To let system admins and device integrators control this default behavior globally, introduce the new sysctl_sched_uclamp_util_min_rt_default to change the default boost value of the RT tasks. I anticipate this to be mostly in the form of modifying the init script of a particular device. To avoid polluting the fast path with unnecessary code, the approach taken is to synchronously do the update by traversing all the existing tasks in the system. This could race with a concurrent fork(), which is dealt with by introducing a sched_post_fork() function which will ensure the racy fork gets the right update applied. Tested on Juno-r2 in combination with the RT capacity awareness [1]. By default an RT task will go to the highest capacity CPU and run at the maximum frequency, which is particularly energy inefficient on high end mobile devices because the biggest core[s] are 'huge' and power hungry. With this patch the RT task can be controlled to run anywhere by default, and doesn't cause the frequency to be maximum all the time. Yet any task that really needs to be boosted can easily escape this default behavior by modifying its requested uclamp.min value (p->uclamp_req[UCLAMP_MIN]) via the sched_setattr() syscall. [1] 804d402fb6f6: ("sched/rt: Make RT capacity-aware") Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200716110347.19553-2-qais.yousef@arm.com
2020-07-29sched/uclamp: Fix a deadlock when enabling uclamp static keyQais Yousef
The following splat was caught when setting the uclamp value of a task: BUG: sleeping function called from invalid context at ./include/linux/percpu-rwsem.h:49 cpus_read_lock+0x68/0x130 static_key_enable+0x1c/0x38 __sched_setscheduler+0x900/0xad8 Fix it by ensuring we enable the key outside of the critical section in __sched_setscheduler(). Fixes: 46609ce22703 ("sched/uclamp: Protect uclamp fast path code with static key") Signed-off-by: Qais Yousef <qais.yousef@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200716110347.19553-4-qais.yousef@arm.com
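A simplified sketch of the shape of the fix (locking details reduced; sched_uclamp_used is the static key introduced by the commit in the Fixes tag, named here from memory):

  static int uclamp_setscheduler_sketch(struct task_struct *p, bool uclamp_requested)
  {
          struct rq_flags rf;
          struct rq *rq;

          /* static_branch_enable() takes cpus_read_lock() and may sleep, so
           * flip the key before the non-sleepable, lock-held section. */
          if (uclamp_requested)
                  static_branch_enable(&sched_uclamp_used);

          rq = task_rq_lock(p, &rf);
          /* validate and apply the new attributes under the rq lock */
          task_rq_unlock(rq, p, &rf);

          return 0;
  }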
2020-07-29powerpc/build: vdso linker warning for orphan sectionsNicholas Piggin
Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200303012748.4190929-1-npiggin@gmail.com
2020-07-29powerpc: Use fallthrough pseudo-keywordGustavo A. R. Silva
Replace the existing /* fall through */ comments and their variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings where that is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200727224201.GA10133@embeddedor
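The mechanical change, shown on a made-up switch (CMD_* and do_*() are illustrative):

  /* before */
  switch (cmd) {
  case CMD_SETUP:
          do_setup();
          /* fall through */              /* comment-only annotation */
  case CMD_RUN:
          do_run();
          break;
  }

  /* after */
  switch (cmd) {
  case CMD_SETUP:
          do_setup();
          fallthrough;                    /* expands to the compiler attribute */
  case CMD_RUN:
          do_run();
          break;
  }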
2020-07-29powerpc/book3s64/radix: Add kernel command line option to disable radix GTSEAneesh Kumar K.V
This adds a kernel command line option that can be used to disable GTSE support. Disabling GTSE implies the kernel will make hcalls to invalidate TLB entries. This was done so that we can do VM migration between configs that enable/disable GTSE support via the hypervisor. To migrate a VM from a system that supports GTSE to a system that doesn't, we can boot the guest with radix_hcall_invalidate=on, thereby forcing the guest to use hcalls for TLB invalidates. The check for hcall availability is done in pSeries_setup_arch so that the panic message appears on the console. This should only happen on a hypervisor that doesn't force the guest to hash translation even though it can't handle the radix GTSE=0 request via CAS. With radix_hcall_invalidate=on, if the hypervisor doesn't support the hcall_rpt_invalidate hcall it should force the LPAR to hash translation. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Tested-by: Bharata B Rao <bharata@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200727085908.420806-1-aneesh.kumar@linux.ibm.com
2020-07-29powerpc/kvm/cma: Improve kernel log during bootAneesh Kumar K.V
Current kernel gives: [ 0.000000] cma: Reserved 26224 MiB at 0x0000007959000000 [ 0.000000] hugetlb_cma: reserve 65536 MiB, up to 16384 MiB per node [ 0.000000] cma: Reserved 16384 MiB at 0x0000001800000000 With the fix [ 0.000000] kvm_cma_reserve: reserving 26214 MiB for global area [ 0.000000] cma: Reserved 26224 MiB at 0x0000007959000000 [ 0.000000] hugetlb_cma: reserve 65536 MiB, up to 16384 MiB per node [ 0.000000] cma: Reserved 16384 MiB at 0x0000001800000000 Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200713150749.25245-2-aneesh.kumar@linux.ibm.com
2020-07-29powerpc/hugetlb/cma: Allocate gigantic hugetlb pages using CMAAneesh Kumar K.V
Commit cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma") added support for allocating gigantic hugepages using CMA. This patch enables the same for powerpc. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200713150749.25245-1-aneesh.kumar@linux.ibm.com
2020-07-29powerpc/xmon: Use `dcbf` in place of `dcbi` instruction for 64bit Book3SBalamuruhan S
The Data Cache Block Invalidate (dcbi) instruction was implemented back in PowerPC architecture version 2.03. But as per the Power Processor User's Manual it is obsolete and not supported by the POWER8/POWER9 cores. An attempt to use this illegal instruction results in a hypervisor emulation assistance interrupt. So ifdef out the option `i` in xmon for 64-bit Book3S. 0:mon> fi cpu 0x0: Vector: 700 (Program Check) at [c000000003be74a0] pc: c000000000102030: cacheflush+0x180/0x1a0 lr: c000000000101f3c: cacheflush+0x8c/0x1a0 sp: c000000003be7730 msr: 8000000000081033 current = 0xc0000000035e5c00 paca = 0xc000000001910000 irqmask: 0x03 irq_happened: 0x01 pid = 1025, comm = bash Linux version 5.6.0-rc5-g5aa19adac (root@ltc-wspoon6) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #1 SMP Tue Mar 10 04:38:41 CDT 2020 cpu 0x0: Exception 700 (Program Check) in xmon, returning to main loop [c000000003be7c50] c00000000084abb0 __handle_sysrq+0xf0/0x2a0 [c000000003be7d00] c00000000084b3c0 write_sysrq_trigger+0xb0/0xe0 [c000000003be7d30] c0000000004d1edc proc_reg_write+0x8c/0x130 [c000000003be7d60] c00000000040dc7c __vfs_write+0x3c/0x70 [c000000003be7d80] c000000000410e70 vfs_write+0xd0/0x210 [c000000003be7dd0] c00000000041126c ksys_write+0xdc/0x130 [c000000003be7e20] c00000000000b9d0 system_call+0x5c/0x68 --- Exception: c01 (System Call) at 00007fffa345e420 SP (7ffff0b08ab0) is in userspace Signed-off-by: Balamuruhan S <bala24@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200330075954.538773-1-bala24@linux.ibm.com
2020-07-29powerpc: Drop old comment about CONFIG_POWERMichael Ellerman
There's a comment in time.h referring to CONFIG_POWER, which doesn't exist. That confuses scripts/checkkconfigsymbols.py. Presumably the comment was referring to a CONFIG_POWER vs CONFIG_PPC, in which case for CONFIG_POWER we would #define __USE_RTC to 1. But instead we have CONFIG_PPC_BOOK3S_601, and these days we have IS_ENABLED(). So the comment is no longer relevant, drop it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-9-mpe@ellerman.id.au
2020-07-29powerpc/kvm: Use correct CONFIG symbol in commentMichael Ellerman
This comment refers to the non-existent CONFIG_PPC_BOOK3S_XX, which confuses scripts/checkkconfigsymbols.py. Change it to use the correct symbol. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-8-mpe@ellerman.id.au
2020-07-29powerpc/boot: Fix CONFIG_PPC_MPC52XX referencesMichael Ellerman
Commit 866bfc75f40e ("powerpc: conditionally compile platform-specific serial drivers") made some code depend on CONFIG_PPC_MPC52XX, which doesn't exist. Fix it to use CONFIG_PPC_MPC52xx. Fixes: 866bfc75f40e ("powerpc: conditionally compile platform-specific serial drivers") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-7-mpe@ellerman.id.au
2020-07-29powerpc/32s: Remove TAUException wart in traps.cMichael Ellerman
All 32 and 64-bit builds that don't have CONFIG_TAU_INT enabled (all of them), get a definition of TAUException() in traps.c. On 64-bit it's completely useless, and just wastes ~120 bytes of text. On 32-bit it allows the kernel to link because head_32.S calls it unconditionally. Instead follow the example of altivec_assist_exception(), and if CONFIG_TAU_INT is not enabled just point it at unknown_exception using the preprocessor. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-6-mpe@ellerman.id.au
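A sketch of the preprocessor approach described above (placement in the headers is assumed):

  #ifdef CONFIG_TAU_INT
  void TAUException(struct pt_regs *regs);
  #else
  /* head_32.S can still reference the symbol; it simply resolves to the
   * generic handler when thermal interrupt support is not built in. */
  #define TAUException    unknown_exception
  #endif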
2020-07-29powerpc/32s: Fix CONFIG_BOOK3S_601 usesMichael Ellerman
We have two uses of CONFIG_BOOK3S_601, which doesn't exist. Fix them to use CONFIG_PPC_BOOK3S_601 which is the correct symbol. Fixes: 12c3f1fd87bf ("powerpc/32s: get rid of CPU_FTR_601 feature") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-5-mpe@ellerman.id.au
2020-07-29powerpc/64e: Drop dead BOOK3E_MMU_TLB_STATS codeMichael Ellerman
This code was merged 11 years ago in commit 13363ab9b9d0 ("powerpc: Add definitions used by exception handling on 64-bit Book3E") but was never able to be built because CONFIG_BOOK3E_MMU_TLB_STATS never existed. Remove it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-4-mpe@ellerman.id.au
2020-07-29powerpc/52xx: Fix comment about CONFIG_BDI*Michael Ellerman
There's a comment in lite5200_sleep.S that refers to "CONFIG_BDI*". This confuses scripts/checkkconfigsymbols.py, which thinks it should be able to find CONFIG_BDI. Change the comment to refer to CONFIG_BDI_SWITCH which is presumably roughly what it was referring to. AFAICS there never has been a CONFIG_BDI. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-3-mpe@ellerman.id.au
2020-07-29powerpc/configs: Remove dead symbolsMichael Ellerman
Remove references to symbols that no longer exist as reported by scripts/checkkconfigsymbols.py. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-2-mpe@ellerman.id.au
2020-07-29powerpc/configs: Drop old symbols from ppc6xx_defconfigMichael Ellerman
ppc6xx_defconfig refers to quite a few symbols that no longer exist, as reported by scripts/checkkconfigsymbols.py; remove them. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724131728.1643966-1-mpe@ellerman.id.au
2020-07-29powerpc/mm: Limit resize_hpt_for_hotplug() call to hash guests onlyBharata B Rao
During memory hotplug and unplug, resize_hpt_for_hotplug() gets called for both hash and radix guests but it should be called only for hash guests. Though the call does nothing in the radix guest case, it is cleaner to push this call into hash specific memory hotplug routines. Reported-by: Nathan Lynch <nathanl@linux.ibm.com> Signed-off-by: Bharata B Rao <bharata@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200727095704.1432916-1-bharata@linux.ibm.com
2020-07-29selftests/powerpc: Remove powerpc special cases from stack expansion testMichael Ellerman
Now that the powerpc code behaves the same as other architectures we can drop the special cases we had. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724092528.1578671-5-mpe@ellerman.id.au
2020-07-29powerpc/mm: Remove custom stack expansion checkingMichael Ellerman
We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA. The logic aims to prevent userspace from doing bad accesses below the stack pointer. However as long as the stack is < 1MB in size, we allow all accesses without further checks. Adding some debug I see that I can do a full kernel build and LTP run, and not a single process has used more than 1MB of stack. So for the majority of processes the logic never even fires. We also recently found a nasty bug in this code which could cause userspace programs to be killed during signal delivery. It went unnoticed presumably because most processes use < 1MB of stack. The generic mm code has also grown support for stack guard pages since this code was originally written, so the most heinous case of the stack expanding into other mappings is now handled for us. Finally, although some other arches have special logic in this path, from what I can tell none of x86, arm64, arm and s390 impose any extra checks other than those in expand_stack(). So drop our complicated logic and, like other architectures, just let the stack expand as long as it's within the rlimit. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Daniel Axtens <dja@axtens.net> Link: https://lore.kernel.org/r/20200724092528.1578671-4-mpe@ellerman.id.au
2020-07-29selftests/powerpc: Update the stack expansion testMichael Ellerman
Update the stack expansion load/store test to take into account the new allowance of 4224 bytes below the stack pointer. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724092528.1578671-3-mpe@ellerman.id.au
2020-07-29powerpc: Allow 4224 bytes of stack expansion for the signal frameMichael Ellerman
We have powerpc specific logic in our page fault handling to decide if an access to an unmapped address below the stack pointer should expand the stack VMA. The code was originally added in 2004 "ported from 2.4". The rough logic is that the stack is allowed to grow to 1MB with no extra checking. Over 1MB the access must be within 2048 bytes of the stack pointer, or be from a user instruction that updates the stack pointer. The 2048 byte allowance below the stack pointer is there to cover the 288 byte "red zone" as well as the "about 1.5kB" needed by the signal delivery code. Unfortunately since then the signal frame has expanded, and is now 4224 bytes on 64-bit kernels with transactional memory enabled. This means if a process has consumed more than 1MB of stack, and its stack pointer lies less than 4224 bytes from the next page boundary, signal delivery will fault when trying to expand the stack and the process will see a SEGV. The total size of the signal frame is the size of struct rt_sigframe (which includes the red zone) plus __SIGNAL_FRAMESIZE (128 bytes on 64-bit). The 2048 byte allowance was correct until 2008 as the signal frame was: struct rt_sigframe { struct ucontext uc; /* 0 1440 */ /* --- cacheline 11 boundary (1408 bytes) was 32 bytes ago --- */ long unsigned int _unused[2]; /* 1440 16 */ unsigned int tramp[6]; /* 1456 24 */ struct siginfo * pinfo; /* 1480 8 */ void * puc; /* 1488 8 */ struct siginfo info; /* 1496 128 */ /* --- cacheline 12 boundary (1536 bytes) was 88 bytes ago --- */ char abigap[288]; /* 1624 288 */ /* size: 1920, cachelines: 15, members: 7 */ /* padding: 8 */ }; 1920 + 128 = 2048 Then in commit ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support") (Jul 2008) the signal frame expanded to 2304 bytes: struct rt_sigframe { struct ucontext uc; /* 0 1696 */ <-- /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ long unsigned int _unused[2]; /* 1696 16 */ unsigned int tramp[6]; /* 1712 24 */ struct siginfo * pinfo; /* 1736 8 */ void * puc; /* 1744 8 */ struct siginfo info; /* 1752 128 */ /* --- cacheline 14 boundary (1792 bytes) was 88 bytes ago --- */ char abigap[288]; /* 1880 288 */ /* size: 2176, cachelines: 17, members: 7 */ /* padding: 8 */ }; 2176 + 128 = 2304 At this point we should have been exposed to the bug, though as far as I know it was never reported. I no longer have a system old enough to easily test on. Then in 2010 commit 320b2b8de126 ("mm: keep a guard page below a grow-down stack segment") caused our stack expansion code to never trigger, as there was always a VMA found for a write up to PAGE_SIZE below r1. 
That meant the bug was hidden as we continued to expand the signal frame in commit 2b0a576d15e0 ("powerpc: Add new transactional memory state to the signal context") (Feb 2013): struct rt_sigframe { struct ucontext uc; /* 0 1696 */ /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ struct ucontext uc_transact; /* 1696 1696 */ <-- /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */ long unsigned int _unused[2]; /* 3392 16 */ unsigned int tramp[6]; /* 3408 24 */ struct siginfo * pinfo; /* 3432 8 */ void * puc; /* 3440 8 */ struct siginfo info; /* 3448 128 */ /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */ char abigap[288]; /* 3576 288 */ /* size: 3872, cachelines: 31, members: 8 */ /* padding: 8 */ /* last cacheline: 32 bytes */ }; 3872 + 128 = 4000 And commit 573ebfa6601f ("powerpc: Increase stack redzone for 64-bit userspace to 512 bytes") (Feb 2014): struct rt_sigframe { struct ucontext uc; /* 0 1696 */ /* --- cacheline 13 boundary (1664 bytes) was 32 bytes ago --- */ struct ucontext uc_transact; /* 1696 1696 */ /* --- cacheline 26 boundary (3328 bytes) was 64 bytes ago --- */ long unsigned int _unused[2]; /* 3392 16 */ unsigned int tramp[6]; /* 3408 24 */ struct siginfo * pinfo; /* 3432 8 */ void * puc; /* 3440 8 */ struct siginfo info; /* 3448 128 */ /* --- cacheline 27 boundary (3456 bytes) was 120 bytes ago --- */ char abigap[512]; /* 3576 512 */ <-- /* size: 4096, cachelines: 32, members: 8 */ /* padding: 8 */ }; 4096 + 128 = 4224 Then finally in 2017, commit 1be7107fbe18 ("mm: larger stack guard gap, between vmas") exposed us to the existing bug, because it changed the stack VMA to be the correct/real size, meaning our stack expansion code is now triggered. Fix it by increasing the allowance to 4224 bytes. Hard-coding 4224 is obviously unsafe against future expansions of the signal frame in the same way as the existing code. We can't easily use sizeof() because the signal frame structure is not in a header. We will either fix that, or rip out all the custom stack expansion checking logic entirely. Fixes: ce48b2100785 ("powerpc: Add VSX context save/restore, ptrace and signal support") Cc: stable@vger.kernel.org # v2.6.27+ Reported-by: Tom Lane <tgl@sss.pgh.pa.us> Tested-by: Daniel Axtens <dja@axtens.net> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724092528.1578671-2-mpe@ellerman.id.au
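In the fault path the change boils down to widening the allowance below the stack pointer; a hedged sketch (the macro and helper names are illustrative, not necessarily those used in fault.c):

  /* 4096 bytes for the largest rt_sigframe (including the 512 byte red zone)
   * plus __SIGNAL_FRAMESIZE (128 bytes on 64-bit) */
  #define SIGFRAME_MAX_SIZE       (4096 + 128)

  static bool stack_access_ok(unsigned long address, unsigned long sp)
  {
          /* allow accesses within the signal-frame allowance below the
           * stack pointer (previously hard-coded to 2048 bytes) */
          return address + SIGFRAME_MAX_SIZE >= sp;
  }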
2020-07-29selftests/powerpc: Add test of stack expansion logicMichael Ellerman
We have custom stack expansion checks that it turns out are extremely badly tested and contain bugs, surprise. So add some tests that exercise the code and capture the current boundary conditions. The signal test currently fails on 64-bit kernels because the 2048 byte allowance for the signal frame is too small, we will fix that in a subsequent patch. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200724092528.1578671-1-mpe@ellerman.id.au
2020-07-29selftests/powerpc: Squash spurious errors due to device removalOliver O'Halloran
For drivers that don't have the error handling callbacks we implement recovery by removing the device and re-probing it. This causes the sysfs directory for the PCI device to be removed which causes the following spurious error to be printed when checking the PE state: Breaking 0005:03:00.0... ./eeh-basic.sh: line 13: can't open /sys/bus/pci/devices/0005:03:00.0/eeh_pe_state: no such file 0005:03:00.0, waited 0/60 0005:03:00.0, waited 1/60 0005:03:00.0, waited 2/60 0005:03:00.0, waited 3/60 0005:03:00.0, waited 4/60 0005:03:00.0, waited 5/60 0005:03:00.0, waited 6/60 0005:03:00.0, waited 7/60 0005:03:00.0, Recovered after 8 seconds We currently try to avoid this by checking if the PE state file exists before reading from it. This is however inherently racy so re-work the state checking so that we only read from the file once, and we squash any errors that occur while reading. Fixes: 85d86c8aa52e ("selftests/powerpc: Add basic EEH selftest") Signed-off-by: Oliver O'Halloran <oohall@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20200727010127.23698-1-oohall@gmail.com
2020-07-29selftests/powerpc: Add test for pkey siginfo verificationSandipan Das
Commit c46241a370a61 ("powerpc/pkeys: Check vma before returning key fault error to the user") fixes a bug which causes the kernel to set the wrong pkey in siginfo when a pkey fault occurs after two competing threads that have allocated different pkeys, one fully permissive and the other restrictive, attempt to protect a common page at the same time. This adds a test to detect the bug. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/ce40b6ee270bda52e8f4088578ed2faf7d1d509a.1595821792.git.sandipan@linux.ibm.com
2020-07-29selftests/powerpc: Add wrapper for gettidSandipan Das
The gettid() syscall wrapper was first introduced in glibc 2.30. This adds a wrapper for use in distros running older versions. Suggested-by: Christophe Leroy <christophe.leroy@csgroup.eu> Suggested-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/8ca3b0eeda989707815d1cf337cc33f090408965.1595821792.git.sandipan@linux.ibm.com
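A minimal sketch of such a wrapper (the selftest's actual guard against newer glibc is omitted, and the local name is illustrative):

  #include <sys/syscall.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Only needed when glibc is older than 2.30 and does not declare gettid() */
  static pid_t local_gettid(void)
  {
          return (pid_t)syscall(SYS_gettid);
  }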
2020-07-29selftests/powerpc: Add helper to exit on failureSandipan Das
This adds a helper similar to FAIL_IF() which lets a program exit with code 1 (to indicate failure) when the given condition is true. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/dac282d5c2e96e7816dc522e4e20d56d7c79c898.1595821792.git.sandipan@linux.ibm.com
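A hedged sketch of such a helper, modelled on the existing FAIL_IF() (the macro name is assumed):

  #include <stdio.h>
  #include <unistd.h>

  #define FAIL_IF_EXIT(x)                                                  \
  do {                                                                     \
          if ((x)) {                                                       \
                  fprintf(stderr,                                          \
                          "[FAIL] Test FAILED on line %d\n", __LINE__);    \
                  _exit(1);                                                \
          }                                                                \
  } while (0)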
2020-07-29selftests/powerpc: Harden test for execute-disabled pkeysSandipan Das
Commit 192b6a7805989 ("powerpc/book3s64/pkeys: Fix pkey_access_permitted() for execute disable pkey") fixed a bug that caused repetitive faults for pkeys with no execute rights alongside some combination of read and write rights. This removes the last two cases of the test, which check the behaviour of pkeys with read, write but no execute rights and all the rights, in favour of checking all the possible combinations of read, write and execute rights to be able to detect bugs like the one mentioned above. Signed-off-by: Sandipan Das <sandipan@linux.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/db467500f8af47727bba6b35796e8974a78b71e5.1595821792.git.sandipan@linux.ibm.com