path: root/drivers/base
Age  Commit message  Author
2017-02-20  Merge tag 'regmap-v4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap  (Linus Torvalds)
Pull regmap updates from Mark Brown:
 "For v4.11 activity on the regmap API has literally doubled, there are two
  patches this release:
  - fixes from Charles Keepax to make the kerneldoc generate correctly
  - a cleanup from Geliang Tang using rb_entry() rather than open coding it
    with container_of()"
* tag 'regmap-v4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: Fixup the kernel-doc comments on functions/structures
  regmap: use rb_entry()
2017-02-20  Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds)
Pull irq updates from Thomas Gleixner:
 "This update provides:
  - Yet another two irq controller chip drivers
  - A few updates and fixes for GICV3
  - A resource managed function for interrupt allocation
  - Fixes, updates and enhancements all over the place"
* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/qcom: Fix error handling
  genirq: Clarify logic calculating bogus irqreturn_t values
  genirq/msi: Add stubs for get_cached_msi_msg/pci_write_msi_msg
  genirq/devres: Use dev_name(dev) as default for devname
  genirq: Fix /proc/interrupts output alignment
  irqdesc: Add a resource managed version of irq_alloc_descs()
  irqchip/gic-v3-its: Zero command on allocation
  irqchip/gic-v3-its: Fix command buffer allocation
  irqchip/mips-gic: Fix local interrupts
  irqchip: Add a driver for Cortina Gemini
  irqchip: DT bindings for Cortina Gemini irqchip
  irqchip/gic-v3: Remove duplicate definition of GICD_TYPER_LPIS
  irqchip/gic-v3-its: Rename MAPVI to MAPTI
  irqchip/gic-v3-its: Drop deprecated GITS_BASER_TYPE_CPU
  irqchip/gic-v3-its: Refactor command encoding
  irqchip/gic-v3-its: Enable cacheable attribute Read-allocate hints
  irqchip/qcom: Add IRQ combiner driver
  ACPI: Add support for ResourceSource/IRQ domain mapping
  ACPI: Generic GSI: Do not attempt to map non-GSI IRQs during bus scan
  irq/platform-msi: Fix comment about maximal MSIs
2017-02-20  Merge branches 'pm-core', 'pm-qos' and 'pm-domains'  (Rafael J. Wysocki)
* pm-core:
  PM / wakeirq: report a wakeup_event on dedicated wekup irq
  PM / wakeirq: Fix spurious wake-up events for dedicated wakeirqs
  PM / wakeirq: Enable dedicated wakeirq for suspend
* pm-qos:
  PM / QoS: Fix memory leak on resume_latency.notifiers
  PM / QoS: Remove unneeded linux/miscdevice.h include
* pm-domains:
  PM / Domains: Provide dummy governors if CONFIG_PM_GENERIC_DOMAINS=n
  PM / Domains: Fix asynchronous execution of *noirq() callbacks
  PM / Domains: Correct comment in irq_safe_dev_in_no_sleep_domain()
  PM / Domains: Rename functions in genpd for power on/off
2017-02-20  Merge branch 'pm-cpuidle'  (Rafael J. Wysocki)
* pm-cpuidle:
  CPU / PM: expose pm_qos_resume_latency for CPUs
  cpuidle/menu: add per CPU PM QoS resume latency consideration
  cpuidle/menu: stop seeking deeper idle if current state is deep enough
  ACPI / idle: small formatting fixes
2017-02-20  Merge branch 'pm-opp'  (Rafael J. Wysocki)
* pm-opp: (24 commits)
  PM / OPP: Expose _of_get_opp_desc_node as dev_pm_opp API
  PM / OPP: Make _find_opp_table_unlocked() static
  PM / OPP: Update Documentation to remove RCU specific bits
  PM / OPP: Simplify dev_pm_opp_get_max_volt_latency()
  PM / OPP: Simplify _opp_set_availability()
  PM / OPP: Move away from RCU locking
  PM / OPP: Take kref from _find_opp_table()
  PM / OPP: Update OPP users to put reference
  PM / OPP: Add 'struct kref' to struct dev_pm_opp
  PM / OPP: Use dev_pm_opp_get_opp_table() instead of _add_opp_table()
  PM / OPP: Take reference of the OPP table while adding/removing OPPs
  PM / OPP: Return opp_table from dev_pm_opp_set_*() routines
  PM / OPP: Add 'struct kref' to OPP table
  PM / OPP: Add per OPP table mutex
  PM / OPP: Split out part of _add_opp_table() and _remove_opp_table()
  PM / OPP: Don't expose srcu_head to register notifiers
  PM / OPP: Rename dev_pm_opp_get_suspend_opp() and return OPP rate
  PM / OPP: Don't allocate OPP table from _opp_allocate()
  PM / OPP: Rename and split _dev_pm_opp_remove_table()
  PM / OPP: Add light weight _opp_free() routine
  ...
2017-02-18  PM / QoS: Fix memory leak on resume_latency.notifiers  (John Keeping)
Since commit 2d984ad132a8 (PM / QoS: Introcuce latency tolerance device PM QoS
type) we reassign "c" to point at qos->latency_tolerance before freeing
c->notifiers, but the notifiers field of latency_tolerance is never used.

Restore the original behaviour of freeing the notifiers pointer on
qos->resume_latency, which is used, and fix the following kmemleak warning.

  unreferenced object 0xed9dba00 (size 64):
    comm "kworker/0:1", pid 36, jiffies 4294670128 (age 15202.983s)
    hex dump (first 32 bytes):
      00 00 00 00 04 ba 9d ed 04 ba 9d ed 00 00 00 00  ................
      00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<c06f6084>] kmemleak_alloc+0x74/0xb8
      [<c011c964>] kmem_cache_alloc_trace+0x170/0x25c
      [<c035f448>] dev_pm_qos_constraints_allocate+0x3c/0xe4
      [<c035f574>] __dev_pm_qos_add_request+0x84/0x1a0
      [<c035f6cc>] dev_pm_qos_add_request+0x3c/0x54
      [<c03c3fc4>] usb_hub_create_port_device+0x110/0x2b8
      [<c03b2a60>] hub_probe+0xadc/0xc80
      [<c03bb050>] usb_probe_interface+0x1b4/0x260
      [<c035773c>] driver_probe_device+0x198/0x40c
      [<c0357b14>] __device_attach_driver+0x8c/0x98
      [<c0355bbc>] bus_for_each_drv+0x8c/0x9c
      [<c0357494>] __device_attach+0x98/0x138
      [<c0357c64>] device_initial_probe+0x14/0x18
      [<c03569dc>] bus_probe_device+0x30/0x88
      [<c0354c54>] device_add+0x430/0x554
      [<c03b92d8>] usb_set_configuration+0x660/0x6fc

Fixes: 2d984ad132a8 (PM / QoS: Introcuce latency tolerance device PM QoS type)
Signed-off-by: John Keeping <john@metanate.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
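A rough sketch of the fix described above; the function name and surrounding
cleanup are assumed from the backtrace and commit text, not quoted from the
patch. The point is simply that the free must target the constraint set that
actually owns the notifier list:

    /* Sketch only: free the notifier head allocated for resume_latency,
     * rather than following "c" after it has been re-pointed at
     * latency_tolerance, whose notifiers field is never populated. */
    static void dev_pm_qos_constraints_destroy_sketch(struct device *dev)
    {
            struct dev_pm_qos *qos = dev->power.qos;

            /* ... flush outstanding requests, reset the constraints ... */

            kfree(qos->resume_latency.notifiers);   /* was: kfree(c->notifiers) */
            kfree(qos);
            dev->power.qos = NULL;
    }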
2017-02-13  PM / wakeirq: report a wakeup_event on dedicated wekup irq  (Grygorii Strashko)
There are two reasons for reporting a wakeup event when the dedicated wakeup
IRQ is triggered:

- wakeup events accounting, so proper statistical data will be displayed in
  sysfs and debugfs;

- there is a small window while the system is entering suspend during which
  the dedicated wakeup IRQ can be lost:

  dpm_suspend_noirq()
    |- device_wakeup_arm_wake_irqs()
       |- dev_pm_arm_wake_irq(X)
          |- IRQ is enabled and marked as wakeup source [1]...
    |- suspend_device_irqs()
       |- suspend_device_irq(X)
          |- irqd_set(X, IRQD_WAKEUP_ARMED);
             |- wakeup IRQ armed

  The wakeup IRQ can be lost if it's triggered at point [1] and not armed yet.

Hence, fix the above cases by adding a simple pm_wakeup_event() call in
handle_threaded_wake_irq().

Fixes: 4990d4fe327b (PM / Wakeirq: Add automated device wake IRQ handling)
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
[ tony@atomide.com: added missing return to avoid warnings ]
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
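A sketch of the change, assuming the usual shape of handle_threaded_wake_irq()
in drivers/base/power/wakeirq.c; the body is paraphrased, not the exact
upstream hunk:

    static irqreturn_t handle_threaded_wake_irq(int irq, void *_wirq)
    {
            struct wake_irq *wirq = _wirq;
            int res;

            /* Account the wakeup and let it abort a suspend in progress,
             * covering the not-yet-armed window described above. */
            pm_wakeup_event(wirq->dev, 0);

            res = pm_runtime_resume(wirq->dev);
            if (res < 0)
                    dev_warn(wirq->dev, "wake IRQ with no resume: %i\n", res);

            return IRQ_HANDLED;
    }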
2017-02-13  PM / wakeirq: Fix spurious wake-up events for dedicated wakeirqs  (Grygorii Strashko)
A dedicated wakeirq is a one-time event to wake the system up from a low-power
state and then call pm_runtime_resume() on the device wired with the dedicated
wakeirq.

Sometimes dedicated wakeirqs can get deferred if they trigger after we call
disable_irq_nosync() in dev_pm_disable_wake_irq(). This can happen if
pm_runtime_get() is called around the same time a wakeirq fires. If an
interrupt fires after disable_irq_nosync(), by default it will get tagged with
IRQS_PENDING and will run later on when the interrupt is enabled again.

Deferred wakeirqs usually just produce pointless wake-up events. But they can
also cause suspend to fail if the deferred wakeirq fires during
dpm_suspend_noirq(), for example. So we really don't want to see the deferred
wakeirqs triggering after the device has resumed.

Let's fix the issue by setting the IRQ_DISABLE_UNLAZY flag for the dedicated
wakeirqs. The other option would be to implement irq_disable() in the
dedicated wakeirq controller, but that's not a generic solution.

For reference, below is what happens with an IRQ_TYPE_EDGE_BOTH wakeirq:

  - resume by dedicated IRQ (EDGE_FALLING)
  - suspend_enter()
    ....
  - arch_suspend_enable_irqs()
    |- dedicated IRQ armed and fired
    |- irq_pm_check_wakeup()
       |- disarm, disable IRQ and mark as IRQS_PENDING
    ....
  - dpm_resume_noirq()
    |- resume_device_irqs()
       |- __enable_irq()
          |- check_irq_resend()
             |- handle_threaded_wake_irq()
                |- dedicated IRQ processed
    |- device_wakeup_disarm_wake_irqs()
       |- disable_irq_wake()
    ....
  !-> dedicated IRQ (EDGE_RISING)
      -| handle_edge_irq()
         |- IRQ disabled: mask_ack_irq and mark as IRQS_PENDING
    ....
  - subsequent suspend
    ....
    |- dpm_suspend_noirq()
       |- device_wakeup_arm_wake_irqs()
          |- __enable_irq()
             |- check_irq_resend()
  (a)          |- handle_threaded_wake_irq()
                  |- pm_wakeup_event() --> abort suspend
    ....
    |- suspend_device_irqs()
       |- suspend_device_irq()
          |- dedicated IRQ armed
    ....
  (b) |- resend_irqs
         |- irq_pm_check_wakeup()
            |- IRQ armed -> abort suspend because of pending IRQ

System suspend can be aborted at points (a)-not armed or (b)-armed.

Fixes: 4990d4fe327b (PM / Wakeirq: Add automated device wake IRQ handling)
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
[ tony@atomide.com: added a comment, updated the description ]
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
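A sketch of where the flag lands, assuming the dedicated-wakeirq setup path in
drivers/base/power/wakeirq.c; the wrapper name here is made up:

    /* Sketch: force an immediate hardware disable for the dedicated wakeirq.
     * A lazily-disabled wakeirq that fires late is only marked IRQS_PENDING
     * and replays after resume, producing the spurious events above. */
    static int wakeirq_setup_sketch(struct device *dev, int irq)
    {
            /* ... allocate struct wake_irq and request the threaded handler ... */

            irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY);

            return 0;
    }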
2017-02-13  PM / wakeirq: Enable dedicated wakeirq for suspend  (Grygorii Strashko)
We currently rely on runtime PM to enable the dedicated wakeirq for suspend.
This assumption fails in the following two cases:

1. If the consumer driver does not have runtime PM implemented, the dedicated
   wakeirq never gets enabled for suspend.

2. If the consumer driver has runtime PM implemented, but does not idle in
   suspend.

Let's fix the issue by always enabling the dedicated wakeirq during suspend.

Depends-on: bed570307ed7 (PM / wakeirq: Fix dedicated wakeirq for drivers not using autosuspend)
Fixes: 4990d4fe327b (PM / Wakeirq: Add automated device wake IRQ handling)
Reported-by: Keerthy <j-keerthy@ti.com>
Tested-by: Keerthy <j-keerthy@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
[ tony@atomide.com: updated based on bed570307ed7, added description ]
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-09  Merge OPP material for v4.11 to satisfy dependencies.  (Rafael J. Wysocki)
2017-02-09  PM / OPP: Expose _of_get_opp_desc_node as dev_pm_opp API  (Dave Gerlach)
Rename _of_get_opp_desc_node to dev_pm_opp_of_get_opp_desc_node and add it to include/linux/pm_opp.h to allow other drivers, such as platform OPP and cpufreq drivers, to make use of it. Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Dave Gerlach <d-gerlach@ti.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-09  Merge tag 'irqchip-for-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into irq/core  (Thomas Gleixner)
Pull irqchip updates for 4.11 from Marc Zyngier:

- A number of gic-v3-its cleanups and fixes
- A fix for the MIPS GIC
- One new interrupt controller for the Cortina Gemini platform
- Support for the Qualcomm interrupt combiner, together with its ACPI goodness
2017-02-09  PM / Domains: Fix asynchronous execution of *noirq() callbacks  (Ulf Hansson)
As the PM core may invoke the *noirq() callbacks asynchronously, the current
lock-less approach in genpd doesn't work. The consequence is that we may find
concurrent operations racing to power on/off the PM domain.

As of now, no immediate errors have been reported, but it's probably only a
matter of time. Therefore let's fix the problem now, before this becomes a
real issue, by deploying the locking scheme to the relevant functions.

Reported-by: Brian Norris <briannorris@chromium.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-08  PM / Domains: Correct comment in irq_safe_dev_in_no_sleep_domain()  (Ulf Hansson)
The earlier comment stated that the dev_warn_once() was going to be printed
once per device. Let's fix that, as dev_warn_once() is printed only once,
regardless of the device.

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Acked-by: Lina Iyer <lina.iyer@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-07  PM / OPP: Make _find_opp_table_unlocked() static  (Wei Yongjun)
Fixes the following sparse warning:

  drivers/base/power/opp/core.c:49:18: warning: symbol '_find_opp_table_unlocked'
  was not declared. Should it be static?

Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-07  device property: export code duplicating array of property entries  (Dmitry Torokhov)
When augmenting ACPI-enumerated devices with additional property data based on
DMI info, a module often has several potential property sets, with only one
being active on a given box. In order to save memory it should be possible to
mark everything as __initdata or __initconst, execute the DMI match early, and
duplicate the relevant properties. The kernel will then discard the rest of
them.

Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-07  device property: constify property arrays values  (Dmitry Torokhov)
Data that is fed into property arrays should not be modified, so let's mark
the relevant pointers as const. This will allow us to make the source arrays
const/__initconst.

Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-02-07  device property: allow to constify properties  (Dmitry Torokhov)
There is no reason why statically defined properties should be modifiable, so
let's make device_add_properties() and the rest of the pset_*() functions take
const pointers to properties. This will allow us to mark properties as
const/__initconst at their definition sites.

Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
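An illustrative use of the constified API from these three patches; the
property names and helper are made up, only device_add_properties() and the
PROPERTY_ENTRY_*() macros come from the subject matter of the patches:

    static const struct property_entry example_props[] = {
            PROPERTY_ENTRY_U32("example,rotation", 270),
            PROPERTY_ENTRY_STRING("example,label", "demo"),
            { }     /* sentinel */
    };

    static int example_add_props(struct device *dev)
    {
            /* device_add_properties() now accepts a const table, and the
             * duplication helper lets such tables be __initconst when they
             * are consumed during early DMI matching. */
            return device_add_properties(dev, example_props);
    }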
2017-02-06  Merge branches 'pm-core-fixes' and 'pm-cpufreq-fixes'  (Rafael J. Wysocki)
* pm-core-fixes:
  PM / runtime: Avoid false-positive warnings from might_sleep_if()
* pm-cpufreq-fixes:
  cpufreq: intel_pstate: Disable energy efficiency optimization
  cpufreq: brcmstb-avs-cpufreq: properly retrieve P-state upon suspend
  cpufreq: brcmstb-avs-cpufreq: extend sysfs entry brcm_avs_pmap
2017-02-04  Merge tag 'char-misc-4.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc  (Linus Torvalds)
Pull char/misc driver fixes from Greg KH:
 "Here are two bugfixes that resolve some reported issues. One in the firmware
  loader, that should fix the much-reported problem of crashes with it. The
  other is a hyperv fix for a reported regression.

  Both have been in linux-next for a week or so with no reported issues"

* tag 'char-misc-4.10-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc:
  Drivers: hv: vmbus: finally fix hv_need_to_signal_on_read()
  firmware: fix NULL pointer dereference in __fw_load_abort()
2017-02-04  PM / runtime: Avoid false-positive warnings from might_sleep_if()  (Rafael J. Wysocki)
The might_sleep_if() assertions in __pm_runtime_idle(), __pm_runtime_suspend()
and __pm_runtime_resume() may generate false-positive warnings in some
situations. For example, that happens if a nested
pm_runtime_get_sync()/pm_runtime_put() pair is executed with disabled
interrupts within an outer pm_runtime_get_sync()/pm_runtime_put() section for
the same device.

[Generally, pm_runtime_get_sync() may sleep, so it should not be called with
disabled interrupts, but in this particular case the previous
pm_runtime_get_sync() guarantees that the device will not be suspended, so the
inner pm_runtime_get_sync() will return immediately after incrementing the
device's usage counter.]

That started to happen in the i915 driver in 4.10-rc, leading to the following
splat:

  BUG: sleeping function called from invalid context at drivers/base/power/runtime.c:1032
  in_atomic(): 1, irqs_disabled(): 0, pid: 1500, name: Xorg
  1 lock held by Xorg/1500:
   #0: (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa0680c13>] i915_mutex_lock_interruptible+0x43/0x140 [i915]
  CPU: 0 PID: 1500 Comm: Xorg Not tainted
  Call Trace:
   dump_stack+0x85/0xc2
   ___might_sleep+0x196/0x260
   __might_sleep+0x53/0xb0
   __pm_runtime_resume+0x7a/0x90
   intel_runtime_pm_get+0x25/0x90 [i915]
   aliasing_gtt_bind_vma+0xaa/0xf0 [i915]
   i915_vma_bind+0xaf/0x1e0 [i915]
   i915_gem_execbuffer_relocate_entry+0x513/0x6f0 [i915]
   i915_gem_execbuffer_relocate_vma.isra.34+0x188/0x250 [i915]
   ? trace_hardirqs_on+0xd/0x10
   ? i915_gem_execbuffer_reserve_vma.isra.31+0x152/0x1f0 [i915]
   ? i915_gem_execbuffer_reserve.isra.32+0x372/0x3a0 [i915]
   i915_gem_do_execbuffer.isra.38+0xa70/0x1a40 [i915]
   ? __might_fault+0x4e/0xb0
   i915_gem_execbuffer2+0xc5/0x260 [i915]
   ? __might_fault+0x4e/0xb0
   drm_ioctl+0x206/0x450 [drm]
   ? i915_gem_execbuffer+0x340/0x340 [i915]
   ? __fget+0x5/0x200
   do_vfs_ioctl+0x91/0x6f0
   ? __fget+0x111/0x200
   ? __fget+0x5/0x200
   SyS_ioctl+0x79/0x90
   entry_SYSCALL_64_fastpath+0x23/0xc6

even though the code triggering it is correct.

Unfortunately, the might_sleep_if() assertions in question are too
coarse-grained to cover such cases correctly, so make them a bit less
sensitive in order to avoid the false-positives.

Reported-and-tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
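A sketch of the (legitimate) nesting pattern described above that used to trip
the assertion; the names are illustrative and not taken from i915:

    static void example_touch_hw(struct device *dev, spinlock_t *lock)
    {
            unsigned long flags;

            pm_runtime_get_sync(dev);       /* outer ref: device stays active */

            spin_lock_irqsave(lock, flags);
            /* Inner pair with interrupts off: it cannot actually sleep here
             * because the outer reference keeps the device resumed, yet the
             * coarse might_sleep_if() check used to warn about it. */
            pm_runtime_get_sync(dev);
            /* ... program registers ... */
            pm_runtime_put(dev);
            spin_unlock_irqrestore(lock, flags);

            pm_runtime_put(dev);
    }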
2017-02-03  base/memory, hotplug: fix a kernel oops in show_valid_zones()  (Toshi Kani)
Reading a sysfs "memoryN/valid_zones" file leads to the following oops when
the first page of a range is not backed by struct page. show_valid_zones()
assumes that 'start_pfn' is always valid for page_zone().

  BUG: unable to handle kernel paging request at ffffea017a000000
  IP: show_valid_zones+0x6f/0x160

This issue may happen on x86-64 systems with 64GiB or more memory since their
memory block size is bumped up to 2GiB. [1] An example of such a system is
described below. 0x3240000000 is only aligned by 1GiB and this memory block
starts from 0x3200000000, which is not backed by struct page.

  BIOS-e820: [mem 0x0000003240000000-0x000000603fffffff] usable

Since test_pages_in_a_zone() already checks holes, fix this issue by extending
this function to return 'valid_start' and 'valid_end' for a given range.
show_valid_zones() then proceeds with the valid range.

[1] Commit bdee237c0343 ("x86: mm: Use 2GB memory block size on large-memory
    x86-64 systems")

Link: http://lkml.kernel.org/r/20170127222149.30893-3-toshi.kani@hpe.com
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Zhang Zhen <zhenzhang.zhang@huawei.com>
Cc: Reza Arbab <arbab@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <stable@vger.kernel.org> [4.4+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
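A sketch of how the extended helper can be used by show_valid_zones(); the
prototype and caller below are assumed from the commit text, not quoted from
the patch:

    /* Assumed extension: report the first and last valid pfn of the range. */
    extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
                                    unsigned long *valid_start, unsigned long *valid_end);

    static ssize_t show_valid_zones_sketch(unsigned long start_pfn,
                                           unsigned long nr_pages, char *buf)
    {
            unsigned long valid_start, valid_end;

            /* Skip holes instead of assuming start_pfn is backed by struct page. */
            if (!test_pages_in_a_zone(start_pfn, start_pfn + nr_pages,
                                      &valid_start, &valid_end))
                    return sprintf(buf, "none\n");

            return sprintf(buf, "%s\n", page_zone(pfn_to_page(valid_start))->name);
    }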
2017-02-03  ACPI: Add support for ResourceSource/IRQ domain mapping  (Agustin Vega-Frias)
ACPI extended IRQ resources may contain a ResourceSource to specify an
alternate interrupt controller. Introduce acpi_irq_get and use it to implement
ResourceSource/IRQ domain mapping. The new API is similar to of_irq_get and
allows re-initialization of a platform resource from the ACPI extended IRQ
resource, and provides proper probe-deferral behavior when the domain is not
yet present at the time of the call.

Acked-by: Rafael J. Wysocki <rafael@kernel.org>
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Agustin Vega-Frias <agustinv@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
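A sketch of the intended use, modeled on of_irq_get(); the call site and error
handling here are assumptions, only acpi_irq_get() itself comes from the
commit:

    static int example_get_irq(struct platform_device *pdev, unsigned int index)
    {
            struct resource *r = platform_get_resource(pdev, IORESOURCE_IRQ, index);
            int ret;

            if (!r)
                    return -ENXIO;

            /* Re-resolve the IRQ from the ACPI extended IRQ descriptor so a
             * ResourceSource-referenced irqdomain can provide the mapping. */
            ret = acpi_irq_get(ACPI_HANDLE(&pdev->dev), index, r);
            if (ret == -EPROBE_DEFER)
                    return ret;     /* domain not registered yet, retry later */

            return r->start;        /* Linux IRQ number */
    }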
2017-02-03  Merge remote-tracking branches 'regmap/topic/doc' and 'regmap/topic/rbtree' into regmap-next  (Mark Brown)
2017-02-03  Merge tag 'regmap-v4.10' into regmap-next  (Mark Brown)
regmap: Fix for v4.10

The only change for regmap this merge window is a single fix for an unused
variable.

# gpg: Signature made Mon 12 Dec 2016 17:03:19 CET
# gpg:                using RSA key ADE668AA675718B59FE29FEA24D68B725D5487D0
# gpg: issuer "broonie@kernel.org"
# gpg: key 0D9EACE2CD7BEEBC: no public key for trusted key - skipped
# gpg: key 0D9EACE2CD7BEEBC marked as ultimately trusted
# gpg: key CCB0A420AF88CD16: no public key for trusted key - skipped
# gpg: key CCB0A420AF88CD16 marked as ultimately trusted
# gpg: key 162614E316005C11: no public key for trusted key - skipped
# gpg: key 162614E316005C11 marked as ultimately trusted
# gpg: key A730C53A5621E907: no public key for trusted key - skipped
# gpg: key A730C53A5621E907 marked as ultimately trusted
# gpg: key 276568D75C6153AD: no public key for trusted key - skipped
# gpg: key 276568D75C6153AD marked as ultimately trusted
# gpg: Good signature from "Mark Brown <broonie@sirena.org.uk>" [ultimate]
# gpg:                 aka "Mark Brown <broonie@debian.org>" [ultimate]
# gpg:                 aka "Mark Brown <broonie@kernel.org>" [ultimate]
# gpg:                 aka "Mark Brown <broonie@tardis.ed.ac.uk>" [ultimate]
# gpg:                 aka "Mark Brown <broonie@linaro.org>" [ultimate]
# gpg:                 aka "Mark Brown <Mark.Brown@linaro.org>" [ultimate]
2017-01-30  CPU / PM: expose pm_qos_resume_latency for CPUs  (Alex Shi)
The cpu-dma PM QoS constraint impacts all the CPUs in the system; there is no
way for the user to choose a PM QoS constraint per CPU. This patch exposes a
per-CPU sysfs file to userspace so that the value of the PM QoS latency
constraint can be changed per CPU.

On its own this change is inoperative: the cpuidle governors have to take the
per-CPU latency constraint into account, in addition to the global cpu-dma
latency constraint, in order for it to take effect.

The pm_qos_resume_latency usage is defined in
Documentation/ABI/testing/sysfs-devices-power:

  The /sys/devices/.../power/pm_qos_resume_latency_us attribute contains the
  PM QoS resume latency limit for the given device, which is the maximum
  allowed time it can take to resume the device, after it has been suspended
  at run time, from a resume request to the moment the device will be ready to
  process I/O, in microseconds. If it is equal to 0, however, this means that
  the PM QoS resume latency may be arbitrary.

Signed-off-by: Alex Shi <alex.shi@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Simplify dev_pm_opp_get_max_volt_latency()  (Viresh Kumar)
dev_pm_opp_get_max_volt_latency() calls _find_opp_table() two times effectively. Merge _get_regulator_count() into dev_pm_opp_get_max_volt_latency() to avoid that. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Simplify _opp_set_availability()  (Viresh Kumar)
As we don't use RCU locking anymore, there is no need to replace an earlier OPP node with a new one. Just update the existing one. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Move away from RCU locking  (Viresh Kumar)
RCU locking isn't well suited to the OPP core. RCU fits reader-heavy usage,
while the OPP core has at most one or two readers at a time. Beyond that, the
way RCU locking was used with the OPP core had become very confusing. The
individual OPPs were mostly handled correctly, i.e. for an update a new
structure was created and then replaced the older one, but the OPP tables were
updated directly all the time from various parts of the core. Though they were
mostly used from within RCU-locked regions, they didn't have much to do with
RCU and were governed by the mutex instead. That, mixed with the
'opp_table_lock', has made the core even more confusing.

Now that we are already managing the OPPs and the OPP tables with the kernel
reference counting infrastructure, we can get rid of RCU locking completely
and simplify the code a lot.

Remove all RCU references from code and comments. Acquire opp_table->lock
while parsing the list of OPPs though.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Take kref from _find_opp_table()  (Viresh Kumar)
Take reference of the OPP table from within _find_opp_table(). Also update the callers of _find_opp_table() to call dev_pm_opp_put_opp_table() after they have used the OPP table. Note that _find_opp_table() increments the reference under the opp_table_lock. Now that the OPP table wouldn't get freed until the callers of _find_opp_table() call dev_pm_opp_put_opp_table(), there is no need to take the opp_table_lock or rcu_read_lock() around it. Drop them. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Update OPP users to put reference  (Viresh Kumar)
This patch updates the dev_pm_opp_find_freq_*() routines to take a reference
on the OPPs returned by them, and updates the users of
dev_pm_opp_find_freq_*() to call dev_pm_opp_put() after they are done using
the OPPs. As it is now guaranteed that the OPPs won't get freed while being
used, the RCU read side locking present in the users isn't required anymore.
Drop it as well.

This patch also updates all users of devfreq_recommended_opp(), which was
returning an OPP received from the OPP core.

Note that some of the OPP core routines have gained rcu_read_{lock|unlock}()
calls, as those still use RCU specific APIs within them.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com> [Devfreq]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
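The new reference pattern for OPP users, per the description above; the
surrounding clock handling is illustrative only:

    static int example_set_rate(struct device *dev, struct clk *clk,
                                unsigned long target_hz)
    {
            struct dev_pm_opp *opp;
            unsigned long freq = target_hz;

            /* The returned OPP now holds a reference; no rcu_read_lock() needed. */
            opp = dev_pm_opp_find_freq_ceil(dev, &freq);
            if (IS_ERR(opp))
                    return PTR_ERR(opp);

            /* ... read the OPP voltage, program the regulator ... */

            dev_pm_opp_put(opp);            /* drop the reference when done */

            return clk_set_rate(clk, freq);
    }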
2017-01-30  PM / OPP: Add 'struct kref' to struct dev_pm_opp  (Viresh Kumar)
Add kref to struct dev_pm_opp for easier accounting of the OPPs. Note that the OPPs are freed under the opp_table->lock mutex only. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Use dev_pm_opp_get_opp_table() instead of _add_opp_table()  (Viresh Kumar)
Migrate all users of _add_opp_table() to use dev_pm_opp_get_opp_table() to guarantee that the OPP table doesn't get freed while being used. Also update _managed_opp() to get the reference to the OPP table. Now that the OPP table wouldn't get freed while these routines are executing after dev_pm_opp_get_opp_table() is called, there is no need to take opp_table_lock. Drop them as well. Now that _add_opp_table(), _remove_opp_table() and the unlocked release routines aren't used anymore, remove them. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Take reference of the OPP table while adding/removing OPPs  (Viresh Kumar)
Take reference of the OPP table while adding and removing OPPs, that helps us remove special checks in _remove_opp_table(). Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Return opp_table from dev_pm_opp_set_*() routines  (Viresh Kumar)
Now that we have proper kernel reference infrastructure in place for OPP tables, use it to guarantee that the OPP table isn't freed while being used by the callers of dev_pm_opp_set_*() APIs. Make them all return the pointer to the OPP table after taking its reference and put the reference back with dev_pm_opp_put_*() APIs. Now that the OPP table wouldn't get freed while these routines are executing after dev_pm_opp_get_opp_table() is called, there is no need to take opp_table_lock. Drop them as well. Remove the rcu specific comments from these routines as they aren't relevant anymore. Note that prototypes of dev_pm_opp_{set|put}_regulators() were already updated by another patch. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-30  PM / OPP: Add 'struct kref' to OPP table  (Viresh Kumar)
Add a kref to struct opp_table for easier accounting of the OPP table. Note
that the new routine dev_pm_opp_get_opp_table() takes the reference from under
the opp_table_lock, which guarantees that the OPP table doesn't get freed
unless dev_pm_opp_put_opp_table() is called for the OPP table.

Two separate release mechanisms are added: locked and unlocked. In the
unlocked version the routines aren't required to take/drop opp_table_lock, as
the callers have already done that. This is required to avoid breaking git
bisect, otherwise we may get lockdep warnings between commits. Once all the
users of the OPP table are updated, the unlocked version shall be removed.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
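The generic shape of the reference counting being added here; the struct is
heavily simplified and the field names beyond 'kref' are illustrative:

    struct opp_table_sketch {
            struct list_head node;          /* on the global opp_tables list */
            struct kref kref;
            /* ... OPP list, per-table mutex, device list ... */
    };

    static void _opp_table_release(struct kref *kref)
    {
            struct opp_table_sketch *t =
                    container_of(kref, struct opp_table_sketch, kref);

            list_del(&t->node);
            kfree(t);
    }

    /* dev_pm_opp_put_opp_table() then boils down to
     * kref_put(&t->kref, _opp_table_release), with get/put serialized by
     * opp_table_lock as the commit message explains. */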
2017-01-30  PM / OPP: Add per OPP table mutex  (Viresh Kumar)
Add a per OPP table lock to protect opp_table->opp_list. Note that in a few
places opp_list is used under the rcu_read_lock() and so a mutex can't be
added there for now. This will be fixed by a later patch.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Split out part of _add_opp_table() and _remove_opp_table()  (Viresh Kumar)
Split out parts of _add_opp_table() and _remove_opp_table() into separate routines. This improves readability as well. Should result in no functional changes. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Don't expose srcu_head to register notifiers  (Viresh Kumar)
Let the OPP core provide helpers to register notifiers for any device, instead of exposing srcu_head outside of the core. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: MyungJoo Ham <myungjoo.ham@samsung.com> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
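Illustrative registration through the new helpers; the notifier body is a
stub, and only dev_pm_opp_register_notifier() is named by the commit itself:

    static int example_opp_notify(struct notifier_block *nb,
                                  unsigned long event, void *data)
    {
            /* 'data' is the affected struct dev_pm_opp */
            return NOTIFY_OK;
    }

    static struct notifier_block example_opp_nb = {
            .notifier_call = example_opp_notify,
    };

    static int example_register(struct device *dev)
    {
            /* No srcu_notifier_head exposure: the OPP core locates the
             * device's table and chains the notifier itself. */
            return dev_pm_opp_register_notifier(dev, &example_opp_nb);
    }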
2017-01-27  PM / OPP: Rename dev_pm_opp_get_suspend_opp() and return OPP rate  (Viresh Kumar)
There is only one user of dev_pm_opp_get_suspend_opp() and that uses it to get the OPP rate for the suspend_opp. Rename dev_pm_opp_get_suspend_opp() as dev_pm_opp_get_suspend_opp_freq() and return the rate directly from it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
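Typical use after the rename, sketched for a cpufreq-style suspend path; the
caller is illustrative:

    static int example_cpufreq_suspend(struct cpufreq_policy *policy)
    {
            unsigned long freq;

            /* Returns the suspend OPP rate in Hz directly, or 0 if none is defined. */
            freq = dev_pm_opp_get_suspend_opp_freq(get_cpu_device(policy->cpu));
            if (!freq)
                    return 0;

            return __cpufreq_driver_target(policy, freq / 1000, CPUFREQ_RELATION_H);
    }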
2017-01-27  PM / OPP: Don't allocate OPP table from _opp_allocate()  (Viresh Kumar)
There is no point in trying to find/allocate the table for every OPP that is added for a device. It would be far more efficient to allocate the table only once and pass its pointer to the routines that add the OPP entry. Locking is removed from _opp_add_static_v2() and _opp_add_v1() now as the callers call them with that lock already held. Call to _remove_opp_table() routine is also removed from _opp_free() now, as opp_table isn't allocated from within _opp_allocate(). This is handled by the routines which created the OPP table in the first place. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Rename and split _dev_pm_opp_remove_table()  (Viresh Kumar)
Later patches will want to remove an OPP table (and its OPPs) using the
opp_table pointer instead of 'dev'. In order to prepare for that, rename
_dev_pm_opp_remove_table() as _dev_pm_opp_find_and_remove_table() and split
out part of it.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Add light weight _opp_free() routine  (Viresh Kumar)
OPPs that were never successfully added using _opp_add() do not need to be
freed with the _opp_remove() routine; a simple kfree() is enough for them.
Introduce a new lightweight routine, _opp_free(), which does that. That also
lets us remove the 'notify' parameter of _opp_remove(), which isn't required
anymore.

Note that _opp_free() contains a call to _remove_opp_table(), as the OPP table
might have been added for this very OPP only. The _remove_opp_table() routine
returns quickly if there are more OPPs in the table. This will be simplified
in later patches though.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Error out on failing to add static OPPs for v1 bindings  (Viresh Kumar)
The code adding static OPPs for V2 bindings already does so. Make the V1 bindings specific code behave the same. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Rename _allocate_opp() to _opp_allocate()  (Viresh Kumar)
Make the naming consistent with how other routines are named. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Remove useless TODO  (Viresh Kumar)
This TODO doesn't make sense anymore as we have all the information in a single OPP table. Remove it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Reviewed-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  PM / OPP: Fix memory leak while adding duplicate OPPs  (Viresh Kumar)
There are two types of duplicate OPPs that get different behavior from the
core:

A) An earlier OPP is marked 'available' and has the same freq/voltages as the
   new one.

B) An earlier OPP has the same frequency, but is marked 'unavailable' OR
   doesn't have the same voltages as the new one.

The OPP core returns 0 for the first one, but -EEXIST for the second. While
the OPP core returns 0 for the first case, its callers don't free the newly
allocated OPP structure, which isn't used anymore. Fix that by returning
-EBUSY instead of 0, but make the callers return 0 eventually.

As this isn't a critical fix, it's not getting marked for the stable kernel.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-27  firmware: fix NULL pointer dereference in __fw_load_abort()  (Luis R. Rodriguez)
Since commit 5d47ec02c37ea6 ("firmware: Correct handling of fw_state_wait()
return value") fw_load_abort() could be called twice and lead us to a kernel
crash. This happens only when the firmware fallback mechanism (regular or
custom) is used. The fallback mechanism exposes a sysfs interface for
userspace to upload a file and notify the kernel when the file is loaded and
ready, or to cancel an upload by echo'ing -1 into the loading file:

  echo -n "-1" > /sys/$DEVPATH/loading

This will call fw_load_abort(). Some distributions actually have a udev rule
in place to *always* immediately cancel all firmware fallback mechanism
requests (Debian), they have:

  $ cat /lib/udev/rules.d/50-firmware.rules
  # stub for immediately telling the kernel that userspace firmware loading
  # failed; necessary to avoid long timeouts with CONFIG_FW_LOADER_USER_HELPER=y
  SUBSYSTEM=="firmware", ACTION=="add", ATTR{loading}="-1"

Distributions with this udev rule would run into this crash only if the
fallback mechanism is used. Since most distributions disable the fallback
mechanism by default (CONFIG_FW_LOADER_USER_HELPER_FALLBACK), this would
typically mean only the 2 drivers which *require* the fallback mechanism could
incur a crash: drivers/firmware/dell_rbu.c and the
drivers/leds/leds-lp55xx-common.c driver. Distributions enabling
CONFIG_FW_LOADER_USER_HELPER_FALLBACK by default are obviously more exposed to
this crash.

The crash happens because, after commit 5b029624948d ("firmware: do not use
fw_lock for fw_state protection") and the subsequent fix commit 5d47ec02c37ea6
("firmware: Correct handling of fw_state_wait() return value"), a race can
happen between this cancellation and fw_state_wait_timeout() being woken up
after a state change by fw_load_abort(), as that calls swake_up(). Upon error
fw_state_wait_timeout() will also again call fw_load_abort() and trigger a
NULL reference.

At first glance we could just fix this with a !buf check in fw_load_abort()
before accessing buf->fw_st, however there is a logical issue in having a
state machine used for the fallback mechanism and preventing access to it once
we abort, as it is inside the buf (buf->fw_st). The firmware_class.c code is
setting the buf to NULL to annotate that an abort has occurred. Replace this
mechanism by simply using the state check instead. All the other code in place
already uses similar checks for aborting as well, so no further changes are
needed.

An oops can be reproduced with the new fw_fallback.sh fallback mechanism
cancellation test. Either cancelling the fallback mechanism or the custom
fallback mechanism triggers a crash.

  mcgrof@piggy ~/linux-next/tools/testing/selftests/firmware (git::20170111-fw-fixes)$ sudo ./fw_fallback.sh
  ./fw_fallback.sh: timeout works
  ./fw_fallback.sh: firmware comparison works
  ./fw_fallback.sh: fallback mechanism works
  [ this then sits here when it is trying the cancellation test ]

Kernel log:

  test_firmware: loading 'nope-test-firmware.bin'
  misc test_firmware: Direct firmware load for nope-test-firmware.bin failed with error -2
  misc test_firmware: Falling back to user helper
  BUG: unable to handle kernel NULL pointer dereference at 0000000000000038
  IP: _request_firmware+0xa27/0xad0
  PGD 0
  Oops: 0000 [#1] SMP
  Modules linked in: test_firmware(E) ... etc ...
  CPU: 1 PID: 1396 Comm: fw_fallback.sh Tainted: G W E 4.10.0-rc3-next-20170111+ #30
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.10.1-0-g8891697-prebuilt.qemu-project.org 04/01/2014
  task: ffff9740b27f4340 task.stack: ffffbb15c0bc8000
  RIP: 0010:_request_firmware+0xa27/0xad0
  RSP: 0018:ffffbb15c0bcbd10 EFLAGS: 00010246
  RAX: 00000000fffffffe RBX: ffff9740afe5aa80 RCX: 0000000000000000
  RDX: ffff9740b27f4340 RSI: 0000000000000283 RDI: 0000000000000000
  RBP: ffffbb15c0bcbd90 R08: ffffbb15c0bcbcd8 R09: 0000000000000000
  R10: 0000000894a0d4b1 R11: 000000000000008c R12: ffffffffc0312480
  R13: 0000000000000005 R14: ffff9740b1c32400 R15: 00000000000003e8
  FS: 00007f8604422700(0000) GS:ffff9740bfc80000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 0000000000000038 CR3: 000000012164c000 CR4: 00000000000006e0
  Call Trace:
   request_firmware+0x37/0x50
   trigger_request_store+0x79/0xd0 [test_firmware]
   dev_attr_store+0x18/0x30
   sysfs_kf_write+0x37/0x40
   kernfs_fop_write+0x110/0x1a0
   __vfs_write+0x37/0x160
   ? _cond_resched+0x1a/0x50
   vfs_write+0xb5/0x1a0
   SyS_write+0x55/0xc0
   ? trace_do_page_fault+0x37/0xd0
   entry_SYSCALL_64_fastpath+0x1e/0xad
  RIP: 0033:0x7f8603f49620
  RSP: 002b:00007fff6287b788 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
  RAX: ffffffffffffffda RBX: 000055c307b110a0 RCX: 00007f8603f49620
  RDX: 0000000000000016 RSI: 000055c3084d8a90 RDI: 0000000000000001
  RBP: 0000000000000016 R08: 000000000000c0ff R09: 000055c3084d6336
  R10: 000055c307b108b0 R11: 0000000000000246 R12: 000055c307b13c80
  R13: 000055c3084d6320 R14: 0000000000000000 R15: 00007fff6287b950
  Code: 9f 64 84 e8 9c 61 fe ff b8 f4 ff ff ff e9 6b f9 ff ff 48 c7 c7 40 6b 8d 84 89 45 a8 e8 43 84 18 00 49 8b be 00 03 00 00 8b 45 a8 <83> 7f 38 02 74 08 e8 6e ec ff ff 8b 45 a8 49 c7 86 00 03 00 00
  RIP: _request_firmware+0xa27/0xad0 RSP: ffffbb15c0bcbd10
  CR2: 0000000000000038
  ---[ end trace 6d94ac339c133e6f ]---

Fixes: 5d47ec02c37e ("firmware: Correct handling of fw_state_wait() return value")
Reported-and-Tested-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reported-and-Tested-by: Patrick Bruenn <p.bruenn@beckhoff.com>
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
CC: <stable@vger.kernel.org> [3.10+]
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-01-27  PM / Domains: Rename functions in genpd for power on/off  (Ulf Hansson)
Currently the mix of genpd_poweron(), genpd_power_on(), genpd_sync_poweron()
and the ->power_on() callback makes it a bit difficult to follow the path of
execution. The same applies to the functions dealing with power off.

As a way to improve this understanding, let's do the following renaming:

  genpd_power_on()         -> _genpd_power_on()
  genpd_poweron()          -> genpd_power_on()
  genpd_sync_poweron()     -> genpd_sync_power_on()
  genpd_power_off()        -> _genpd_power_off()
  genpd_poweroff()         -> genpd_power_off()
  genpd_sync_poweroff()    -> genpd_sync_power_off()
  genpd_poweroff_unused()  -> genpd_power_off_unused()

Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-01-24  memory_hotplug: make zone_can_shift() return a boolean value  (Yasuaki Ishimatsu)
online_{kernel|movable} is used to change the memory zone to
ZONE_{NORMAL|MOVABLE} and online the memory. To check whether the memory zone
can be changed, zone_can_shift() is used. Currently the function returns a
negative integer value, a positive integer value, or 0. When the function
returns a negative or positive value, it means that the memory zone can be
changed to ZONE_{NORMAL|MOVABLE}.

But when the function returns 0, there are two meanings. One is that the
memory zone does not need to be changed. For example, when memory is in
ZONE_NORMAL and onlined by online_kernel, the memory zone does not need to be
changed. The other is that the memory zone cannot be changed. When memory is
in ZONE_NORMAL and onlined by online_movable, the memory zone may not be
changed to ZONE_MOVABLE due to memory online limitations (see
Documentation/memory-hotplug.txt). In this case, memory must not be onlined.

The patch changes the return type of zone_can_shift() so that the memory
online operation fails when the memory zone cannot be changed, as follows:

Before applying the patch:

  # grep -A 35 "Node 2" /proc/zoneinfo
  Node 2, zone   Normal
  <snip>
    node_scanned  0
    spanned       8388608
    present       7864320
    managed       7864320
  # echo online_movable > memory4097/state
  # grep -A 35 "Node 2" /proc/zoneinfo
  Node 2, zone   Normal
  <snip>
    node_scanned  0
    spanned       8388608
    present       8388608
    managed       8388608

  The online_movable operation succeeded, but the memory was onlined as
  ZONE_NORMAL, not ZONE_MOVABLE.

After applying the patch:

  # grep -A 35 "Node 2" /proc/zoneinfo
  Node 2, zone   Normal
  <snip>
    node_scanned  0
    spanned       8388608
    present       7864320
    managed       7864320
  # echo online_movable > memory4097/state
  bash: echo: write error: Invalid argument
  # grep -A 35 "Node 2" /proc/zoneinfo
  Node 2, zone   Normal
  <snip>
    node_scanned  0
    spanned       8388608
    present       7864320
    managed       7864320

  The online_movable operation failed because the memory zone could not be
  changed from ZONE_NORMAL to ZONE_MOVABLE.

Fixes: df429ac03936 ("memory-hotplug: more general validation of zone during online")
Link: http://lkml.kernel.org/r/2f9c3837-33d7-b6e5-59c0-6ca4372b2d84@gmail.com
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Reviewed-by: Reza Arbab <arbab@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
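A sketch of the resulting interface; the prototype and caller are assumed from
the commit title and text, not quoted from the patch:

    /* Assumed new prototype: success/failure is the return value, and the
     * shift itself comes back through an out parameter, so "cannot shift"
     * is no longer conflated with "shift by 0". */
    bool zone_can_shift(unsigned long pfn, unsigned long nr_pages,
                        enum zone_type target, int *zone_shift);

    static int online_pages_sketch(unsigned long pfn, unsigned long nr_pages)
    {
            int zone_shift = 0;

            if (!zone_can_shift(pfn, nr_pages, ZONE_MOVABLE, &zone_shift))
                    return -EINVAL; /* refuse instead of onlining as ZONE_NORMAL */

            /* ... shift the zone by zone_shift and online the pages ... */
            return 0;
    }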