2017-04-19  null_blk: Use blk_init_request_from_bio() instead of open-coding it  (Bart Van Assche)
This patch changes the behavior of the null_blk driver for the LightNVM mode as follows:
 * REQ_FAILFAST_MASK is set for read-ahead requests.
 * If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context.
 * The rq_disk member is initialized if bio->bi_bdev != NULL.
 * req->errors is initialized to zero.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: Export blk_init_request_from_bio()  (Bart Van Assche)
Export this function such that it becomes available to block drivers. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  nsfs: mark dentry with DCACHE_RCUACCESS  (Cong Wang)
Andrey reported a use-after-free in __ns_get_path():
  spin_lock include/linux/spinlock.h:299 [inline]
  lockref_get_not_dead+0x19/0x80 lib/lockref.c:179
  __ns_get_path+0x197/0x860 fs/nsfs.c:66
  open_related_ns+0xda/0x200 fs/nsfs.c:143
  sock_ioctl+0x39d/0x440 net/socket.c:1001
  vfs_ioctl fs/ioctl.c:45 [inline]
  do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:685
  SYSC_ioctl fs/ioctl.c:700 [inline]
  SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
We are under rcu read lock protection at that point:
  rcu_read_lock();
  d = atomic_long_read(&ns->stashed);
  if (!d)
          goto slow;
  dentry = (struct dentry *)d;
  if (!lockref_get_not_dead(&dentry->d_lockref))
          goto slow;
  rcu_read_unlock();
but we don't use a proper RCU API on the free path, so a parallel __d_free() could free it at the same time. We need to mark the stashed dentry with DCACHE_RCUACCESS so that __d_free() will be called only after all readers leave RCU. Fixes: e149ed2b805f ("take the targets of /proc/*/ns/* symlinks to separate fs") Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
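A minimal sketch of the idea behind the fix (not the verbatim patch; the helper name below is hypothetical): the stashed dentry is looked up by readers under rcu_read_lock(), so it must be flagged for RCU-delayed freeing before it becomes visible to them.

  #include <linux/dcache.h>

  static void stash_ns_dentry(struct dentry *dentry)   /* hypothetical helper */
  {
          /*
           * Tell __d_free() to defer the actual freeing past an RCU grace
           * period, so lockref_get_not_dead() in __ns_get_path() can never
           * touch already-freed memory.
           */
          dentry->d_flags |= DCACHE_RCUACCESS;
  }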
2017-04-19  mm: make mm_percpu_wq non freezable  (Michal Hocko)
Geert has reported a freeze during PM resume and some additional debugging has shown that the device_resume worker cannot make forward progress because it waits for an event which is stuck waiting in drain_all_pages:
  INFO: task kworker/u4:0:5 blocked for more than 120 seconds.
  Not tainted 4.11.0-rc7-koelsch-00029-g005882e53d62f25d-dirty #3476
  "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
  kworker/u4:0  D  0  5  2  0x00000000
  Workqueue: events_unbound async_run_entry_fn
    __schedule
    schedule
    schedule_timeout
    wait_for_common
    dpm_wait_for_superior
    device_resume
    async_resume
    async_run_entry_fn
    process_one_work
    worker_thread
    kthread
  [...]
  bash  D  0  1703  1694  0x00000000
    __schedule
    schedule
    schedule_timeout
    wait_for_common
    flush_work
    drain_all_pages
    start_isolate_page_range
    alloc_contig_range
    cma_alloc
    __alloc_from_contiguous
    cma_allocator_alloc
    __dma_alloc
    arm_dma_alloc
    sh_eth_ring_init
    sh_eth_open
    sh_eth_resume
    dpm_run_callback
    device_resume
    dpm_resume
    dpm_resume_end
    suspend_devices_and_enter
    pm_suspend
    state_store
    kernfs_fop_write
    __vfs_write
    vfs_write
    SyS_write
  [...]
  Showing busy workqueues and worker pools:
  [...]
  workqueue mm_percpu_wq: flags=0xc
    pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=0/0
      delayed: drain_local_pages_wq, vmstat_update
    pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=0/0
      delayed: drain_local_pages_wq BAR(1703), vmstat_update
Tetsuo has properly noted that mm_percpu_wq is created as WQ_FREEZABLE, so it is frozen this early during resume and we are effectively deadlocked. Fix this by dropping WQ_FREEZABLE when creating mm_percpu_wq. We really want to have it operational all the time. Fixes: ce612879ddc7 ("mm: move pcp and lru-pcp draining into single wq") Reported-and-tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Debugged-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
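A sketch of the resulting allocation, assuming the workqueue is set up in an init function like the hypothetical one below; the only substantive point is that WQ_FREEZABLE is no longer passed:

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *mm_percpu_wq;

  static int __init mm_percpu_wq_init(void)    /* hypothetical init wrapper */
  {
          /*
           * No WQ_FREEZABLE: drain_all_pages() and vmstat_update work must
           * be able to run while the system is still frozen during resume.
           */
          mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
          if (!mm_percpu_wq)
                  return -ENOMEM;
          return 0;
  }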
2017-04-19  Merge tag 'backlight-for-v4.11' of git://git.linaro.org/people/daniel.thompson/linux  (Linus Torvalds)
Pull backlight fix from Daniel Thompson: "Normally pull requests for backlight come from Lee Jones (and will continue to do so) but the bug fixed here is annoying for a few people so I'm providing a little holiday cover. Fix a single bug in the PWM backlight driver and make it play nice with a wider range of GPIO devices. This bug is a regression and was independently discovered by Geert Uytterhoeven and Paul Kocialkowski (and is tested by both)" * tag 'backlight-for-v4.11' of git://git.linaro.org/people/daniel.thompson/linux: backlight: pwm_bl: Fix GPIO out for unimplemented .get_direction()
2017-04-19  PM / runtime: Document autosuspend-helper side effects  (Johan Hovold)
Document the fact that the autosuspend delay and enable helpers may change the power.usage_count and resume or suspend a device depending on the values of power.autosuspend_delay and power.use_autosuspend. Note that this means that a driver must disable autosuspend before disabling runtime pm on probe errors and on driver unbind if the device is to be suspended upon return (as a negative delay may otherwise keep the device resumed). Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
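A minimal sketch of the probe-error ordering the documentation now spells out, using a hypothetical driver (foo_probe/foo_hw_init are illustrative names, not from the patch):

  #include <linux/device.h>
  #include <linux/pm_runtime.h>

  static int foo_hw_init(struct device *dev)
  {
          return 0;       /* stand-in for real hardware setup */
  }

  static int foo_probe(struct device *dev)
  {
          int ret;

          pm_runtime_set_autosuspend_delay(dev, 1000);
          pm_runtime_use_autosuspend(dev);
          pm_runtime_enable(dev);

          ret = foo_hw_init(dev);
          if (ret) {
                  /*
                   * Undo autosuspend before disabling runtime PM; otherwise
                   * a negative autosuspend delay could leave the device
                   * resumed (and usage counts unbalanced) after return.
                   */
                  pm_runtime_dont_use_autosuspend(dev);
                  pm_runtime_disable(dev);
                  return ret;
          }
          return 0;
  }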
2017-04-19  PM / runtime: Fix autosuspend documentation  (Johan Hovold)
Update the autosuspend documentation which claimed that the autosuspend delay is not taken into account when using the non-autosuspend helper functions, something which is no longer true since commit d66e6db28df3 ("PM / Runtime: Respect autosuspend when idle triggers suspend"). This specifically means that drivers must now disable autosuspend before disabling runtime pm in probe error paths and remove callbacks if pm_runtime_put_sync was being used to suspend the device before returning. (If an idle callback can prevent suspend, pm_runtime_put_sync_suspend must be used instead of pm_runtime_put_sync as before.) Also remove the claim that the autosuspend helpers behave "just like the non-autosuspend counterparts", something which has never really been true as some of the latter use idle notifications. Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  tools: power: pm-graph: Package makefile and man pages  (Todd E Brandt)
BootGraph and SleepGraph man pages:
 - includes full descriptions of tool arguments and commands
 - includes examples of common use cases
Makefile:
 - no build required, used only for install
 - installs man pages and tools as libraries with links
 - includes an uninstall
Signed-off-by: Todd Brandt <todd.e.brandt@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  tools: power: pm-graph: AnalyzeBoot v2.0  (Todd E Brandt)
First release into the kernel tools source:
 - pulls in analyze_suspend.py as a library, same html formatting
 - supplants scripts/bootgraph.pl, outputs HTML instead of SVG
 - enables automatic reboot and collection for easy timeline capture
 - enables ftrace callgraph collection from early boot
Signed-off-by: Todd Brandt <todd.e.brandt@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  tools: power: pm-graph: AnalyzeSuspend v4.6  (Todd E Brandt)
Moved from scripts into tools, and updated from 4.5 to 4.6:
 - changed the tool title to SleepGraph
 - reformatted the code so analyze_suspend can be used as a library
 - reorganized all html/js/css handling code to be used by other tools
 - upgraded the -summary feature to work faster with better readability
Signed-off-by: Todd Brandt <todd.e.brandt@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  cpufreq: Add Tegra186 cpufreq driver  (Mikko Perttunen)
Add a new cpufreq driver for Tegra186 (and likely later). The CPUs are organized into two clusters, Denver and A57, with two and four cores respectively. CPU frequency can be adjusted by writing the desired rate divisor and a voltage hint to a special per-core register. The frequency of each core can be set individually; however, this is just a hint as all CPUs in a cluster will run at the maximum rate of non-idle CPUs in the cluster. Signed-off-by: Mikko Perttunen <mperttunen@nvidia.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  cpufreq: imx6q: Fix error handling code  (Christophe Jaillet)
According to the previous error handling code, it is likely that 'goto out_free_opp' is expected here in order to avoid a memory leak in the error handling path. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  cpufreq: imx6q: Set max suspend_freq to avoid changes during suspend  (Leonard Crestez)
If the cpufreq driver tries to modify voltage/freq during suspend/resume it might need to control an external PMIC via I2C or SPI but those devices might be already suspended. This issue is likely to happen whenever the LDOs have their vin-supply set. To avoid this scenario we just increase cpufreq to the maximum before suspend. Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com> Reviewed-by: Lucas Stach <l.stach@pengutronix.de> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
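A sketch of the mechanism, assuming the generic cpufreq suspend helper is used and that "maximum" means policy->max (the exact frequency chosen and the init function shape are assumptions, not the verbatim patch):

  #include <linux/cpufreq.h>

  static struct cpufreq_frequency_table *freq_table;
  static unsigned int transition_latency;

  static int imx6q_cpufreq_init(struct cpufreq_policy *policy)
  {
          int ret = cpufreq_generic_init(policy, freq_table, transition_latency);

          if (!ret)
                  /* Pin the CPU at its highest OPP across suspend/resume so
                   * cpufreq never has to touch an already-suspended PMIC. */
                  policy->suspend_freq = policy->max;
          return ret;
  }

  static struct cpufreq_driver imx6q_cpufreq_driver = {
          /* .name, .target_index, ... set as in the driver */
          .init    = imx6q_cpufreq_init,
          .suspend = cpufreq_generic_suspend,  /* switches to suspend_freq */
  };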
2017-04-19  cpufreq: imx6q: Fix handling EPROBE_DEFER from regulator  (Irina Tirdea)
If there are any errors in getting the cpu0 regulators, the driver returns -ENOENT. In case the regulators are not yet available, the devm_regulator_get calls will return -EPROBE_DEFER, so that the driver can be probed later. If we return -ENOENT, the driver will fail its initialization and will not try to probe again (when the regulators become available). Return the actual error received from regulator_get in probe. Print a differentiated message in case we need to probe the device later and in case we actually failed. Also add a message to inform when the driver has been successfully registered. Signed-off-by: Irina Tirdea <irina.tirdea@nxp.com> Signed-off-by: Leonard Crestez <leonard.crestez@nxp.com> Reviewed-by: Lucas Stach <l.stach@pengutronix.de> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
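A sketch of the error propagation for one regulator (the wrapper function and message text are illustrative, not lifted from the patch):

  #include <linux/device.h>
  #include <linux/err.h>
  #include <linux/regulator/consumer.h>

  static int imx6q_get_arm_regulator(struct device *cpu_dev,
                                     struct regulator **arm_reg)
  {
          *arm_reg = devm_regulator_get(cpu_dev, "arm");
          if (IS_ERR(*arm_reg)) {
                  int ret = PTR_ERR(*arm_reg);

                  if (ret == -EPROBE_DEFER)
                          dev_dbg(cpu_dev, "regulators not ready, deferring probe\n");
                  else
                          dev_err(cpu_dev, "failed to get arm regulator: %d\n", ret);
                  /* Propagate the real error instead of -ENOENT, so
                   * -EPROBE_DEFER lets the driver be probed again later. */
                  return ret;
          }
          return 0;
  }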
2017-04-19  cpuidle: powernv: Avoid a branch in the core snooze_loop() loop  (Anton Blanchard)
When in the snooze_loop() we want to take up the least amount of resources. On my version of gcc (6.3), we end up with an extra branch because it predicts snooze_timeout_en to be false, whereas it is almost always true. Use likely() to avoid the branch and be a little nicer to the other non idle threads on the core. Signed-off-by: Anton Blanchard <anton@samba.org> Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
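A sketch of the resulting loop shape after this and the two snooze_loop() patches below (set thread priority once, annotate the common case with likely(), restore priority on exit); the wrapper function and parameter names are illustrative, not the driver's exact code:

  #include <linux/sched.h>        /* need_resched() */
  #include <linux/types.h>
  #include <asm/processor.h>      /* HMT_very_low(), HMT_medium() */
  #include <asm/time.h>           /* get_tb() */

  static void snooze_loop_sketch(bool snooze_timeout_en, u64 snooze_exit_time)
  {
          HMT_very_low();         /* drop thread priority once on entry */

          while (!need_resched()) {
                  /* snooze_timeout_en is almost always true, so tell gcc */
                  if (likely(snooze_timeout_en) &&
                      get_tb() > snooze_exit_time)
                          break;
          }

          HMT_medium();           /* restore priority once on exit */
  }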
2017-04-19  cpuidle: powernv: Don't continually set thread priority in snooze_loop()  (Anton Blanchard)
The powerpc64 kernel exception handlers have preserved thread priorities for a long time now, so there is no need to continually set it. Just set it once on entry and once on exit. Signed-off-by: Anton Blanchard <anton@samba.org> Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  cpuidle: powernv: Don't bounce between low and very low thread priority  (Anton Blanchard)
The core of snooze_loop() continually bounces between low and very low thread priority. Changing thread priorities is an expensive operation that can negatively impact other threads on a core. All CPUs that can run PowerNV support very low priority, so we can avoid the change completely. Signed-off-by: Anton Blanchard <anton@samba.org> Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  cpuidle: cpuidle-cps: remove unused variable  (Marcin Nowakowski)
'core' in cps_cpuidle_init has never been used and is unnecessary, so remove the dead code. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / scan: Avoid enumerating devices more than once  (Rafael J. Wysocki)
acpi_bus_attach() does not check the visited flag for devices that have been enumerated already, and some of them may be enumerated multiple times as a result, because some callers of acpi_bus_scan() don't check the visited flag either. For this reason, modify acpi_bus_attach() to check the visited flag and avoid enumerating devices that have already been enumerated. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Joey Lee <jlee@suse.com>
2017-04-19  ACPI / scan: Apply default enumeration to devices with ACPI drivers  (Rafael J. Wysocki)
The current code in acpi_bus_attach() is inconsistent with respect to device objects with ACPI drivers bound to them, as it allows ACPI drivers to bind to device objects with existing "physical" device companions, but it doesn't allow "physical" device objects to be created for ACPI device objects with ACPI drivers bound to them. Thus, in some cases, the outcome depends on the ordering of events which is confusing at best. For this reason, modify acpi_bus_attach() to call acpi_default_enumeration() for device objects with the pnp.type.platform_id flag set regardless of whether or not any ACPI drivers are bound to them. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Reviewed-by: Joey Lee <jlee@suse.com>
2017-04-19  power: supply: axp288_charger: Only wait for INT3496 device if present  (Hans de Goede)
On some devices with an axp288 pmic setting vbus path based on the id-pin is handled by an ACPI _AIE interrupt on the gpio and the INT3496 device is disabled. Instead of returning -EPROBE_DEFER on these devices waiting for the never to show up INT3496 device, check for its presence and only request and monitor the matching extcon if the device is there, otherwise let the firmware handle the vbus path control. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Acked-by: Sebastian Reichel <sre@kernel.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / AC: Add a blacklist with PMIC ACPI HIDs with a native charger driver  (Hans de Goede)
On some systems we have a native PMIC driver which provides Mains monitoring, while the ACPI ac driver is broken on these systems due to bad DSDTs or because we do not support the proprietary and undocumented ACPI opregions these ACPI devices rely on (e.g. BMOP opregion). This leads for example to an ADP1 power_supply which reports itself as always online even if no mains are connected. This commit adds a blacklist with PMIC ACPI HIDs for which we have a native charger or extcon driver and makes the ACPI ac driver not register itself when a PMIC on this list is present. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / battery: Add a blacklist with PMIC ACPI HIDs with a native battery driver  (Hans de Goede)
On some systems we have a native PMIC driver which provides battery monitoring, while the ACPI battery driver is broken on these systems due to bad DSDTs or because we do not support the proprietary and undocumented ACPI opregions these ACPI battery devices rely on (e.g. BMOP opregion). This leads to there being 2 battery power_supply-s registered like this:
  ~$ acpi
  Battery 0: Charging, 84%, 00:49:39 until charged
  Battery 1: Unknown, 0%, rate information unavailable
Even if the ACPI battery were to function fine (which on systems where we have a native PMIC driver it often doesn't) we still do not want to export the same battery to userspace twice. This commit adds a blacklist with PMIC ACPI HIDs for which we have a native battery driver and makes the ACPI battery driver not register itself when a PMIC on this list is present. Link: https://bugzilla.kernel.org/show_bug.cgi?id=194811 Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / battery: Fix acpi_battery_exit on acpi_battery_init_async errors  (Hans de Goede)
The acpi_lock_battery_dir() / acpi_bus_register_driver() calls in acpi_battery_init_async() may fail. Check that they succeeded before undoing them. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / utils: Add new acpi_dev_present helper  (Hans de Goede)
acpi_dev_found just iterates over all ACPI-ids and sees if one matches. This means that it will return true for devices which are in the DSDT but disabled (their _STA method returns 0). For some drivers it is useful to be able to check if a certain HID is not only present in the namespace, but also actually present as in acpi_device_is_present() will return true for the device. For example because if a certain device is present then the driver will want to use an extcon or IIO ADC channel provided by that device. This commit adds a new acpi_dev_present helper which drivers can use to this end. Like acpi_dev_found, acpi_dev_present takes a HID as argument, but it also has 2 extra optional arguments to only check for an ACPI device with a specific UID and/or HRV value. This makes it more generic and allows it to replace custom code doing similar checks in several places. Arguably acpi_dev_present is what acpi_dev_found should have been, but there are too many users to just change acpi_dev_found without the risk of breaking something. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Reviewed-by: Lukas Wunner <lukas@wunner.de> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
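A usage sketch of the new helper, of the kind the AC/battery blacklists and the axp288 INT3496 check above can build on (the wrapper function and the choice of "INT3496" as example HID are illustrative):

  #include <linux/acpi.h>

  static bool int3496_is_present(void)
  {
          /*
           * Unlike acpi_dev_found(), this only matches devices whose _STA
           * says they are actually present.  Pass NULL for the UID and -1
           * for the HRV to ignore those two optional filters.
           */
          return acpi_dev_present("INT3496", NULL, -1);
  }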
2017-04-19  ACPI / video: add comments about subtle cases  (Dmitry Frank)
The comment for acpi_video_bqc_quirk is by Felipe Contreras, taken from the git history. Signed-off-by: Dmitry Frank <mail@dmitryfrank.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  ACPI / power: Avoid maybe-uninitialized warning  (Arnd Bergmann)
gcc -O2 cannot always prove that the loop in acpi_power_get_inferred_state() is entered at least once, so it assumes that cur_state might not get initialized:
  drivers/acpi/power.c: In function 'acpi_power_get_inferred_state':
  drivers/acpi/power.c:222:9: error: 'cur_state' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This sets the variable to zero at the start of the loop, to ensure that there is well-defined behavior even for an empty list. This gets rid of the warning. The warning first showed up when the -Os flag got removed in a bug fix patch in linux-4.11-rc5. I would suggest merging this addon patch on top of that bug fix to avoid introducing a new warning in the stable kernels. Fixes: 61b79e16c68d (ACPI: Fix incompatibility with mcount-based function graph tracing) Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
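A generic illustration of the warning pattern and the fix (this is not the ACPI code itself, just the shape of the problem): if the loop body can run zero times, the result variable must be given a defined value up front.

  /* Inferred state: on only if every resource reports on. */
  static int get_inferred_state(const int *resource_states, int n)
  {
          int cur_state = 0;      /* defined even when n == 0 */
          int i;

          for (i = 0; i < n; i++) {
                  cur_state = resource_states[i];
                  if (!cur_state)
                          break;  /* one resource off => inferred state off */
          }
          return cur_state;
  }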
2017-04-19  mtip32xx: pass BLK_MQ_F_NO_SCHED  (Ming Lei)
The recently introduced MQ IO scheduler breaks mtip32xx in the following way. mtip32xx uses the 'request_index' passed to .init_request() as the hardware tag index for initializing the hardware queue, and it actually requires that rq->tag is always the same as the 'request_index' passed to .init_request(). The current blk-mq IO scheduler can't guarantee this, so this patch passes BLK_MQ_F_NO_SCHED and at least makes mtip32xx work. This patch fixes the following strange hardware failure. The issue can be triggered easily when doing I/O with mq-deadline enabled.
  [ 186.972578] {1}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 32993
  [ 186.972578] {1}[Hardware Error]: event severity: fatal
  [ 186.972579] {1}[Hardware Error]: Error 0, type: fatal
  [ 186.972580] {1}[Hardware Error]: section_type: PCIe error
  [ 186.972580] {1}[Hardware Error]: port_type: 0, PCIe end point
  [ 186.972581] {1}[Hardware Error]: version: 1.0
  [ 186.972581] {1}[Hardware Error]: command: 0x0407, status: 0x0010
  [ 186.972582] {1}[Hardware Error]: device_id: 0000:07:00.0
  [ 186.972582] {1}[Hardware Error]: slot: 4
  [ 186.972583] {1}[Hardware Error]: secondary_bus: 0x00
  [ 186.972583] {1}[Hardware Error]: vendor_id: 0x1344, device_id: 0x5150
  [ 186.972584] {1}[Hardware Error]: class_code: 008001
  [ 186.972585] Kernel panic - not syncing: Fatal hardware error!
Reported-by: Jozef Mikovic <jmikovic@redhat.com> Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
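A sketch of where the flag goes, assuming a simplified tag-set setup (queue depth and function name are illustrative, and the real driver also fills in .ops, .cmd_size and friends):

  #include <linux/blk-mq.h>

  static int mtip_init_tag_set_sketch(struct blk_mq_tag_set *tags)
  {
          tags->nr_hw_queues = 1;
          tags->queue_depth  = 256;       /* assumption: command-slot count */
          /*
           * BLK_MQ_F_NO_SCHED keeps blk-mq I/O schedulers away, so rq->tag
           * stays identical to the 'request_index' the driver used to size
           * and index its hardware command slots.
           */
          tags->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_NO_SCHED;

          return blk_mq_alloc_tag_set(tags);
  }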
2017-04-19  block: respect BLK_MQ_F_NO_SCHED  (Ming Lei)
If a driver claims that it doesn't support I/O schedulers via BLK_MQ_F_NO_SCHED, we should not allow the available I/O schedulers to be changed or shown for its queues. This patch adds a check to enforce this behaviour. Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  backlight: pwm_bl: Fix GPIO out for unimplemented .get_direction()  (Geert Uytterhoeven)
Commit 7613c922315e308a ("backlight: pwm_bl: Move the checks for initial power state to a separate function") did not just move some code, but also made slight changes in semantics. If a gpiochip doesn't implement the optional .get_direction() callback, gpiod_get_direction() always returns -EINVAL, which is never equal to GPIOF_DIR_IN, leading to the GPIO not being configured for output. To avoid this, invert the test and check for not GPIOF_DIR_OUT instead, like the original code did. This restores the display on r8a7740/armadillo. Fixes: 7613c922315e308a ("backlight: pwm_bl: Move the checks for initial power state to a separate function") Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Acked-by: Philipp Zabel <p.zabel@pengutronix.de> Acked-by: Daniel Thompson <daniel.thompson@linaro.org> Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
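A sketch of the restored logic (wrapper function and parameter names are illustrative): both "input" and "direction unknown" (-EINVAL from chips without .get_direction()) now cause the GPIO to be switched to output.

  #include <linux/gpio.h>              /* GPIOF_DIR_OUT */
  #include <linux/gpio/consumer.h>

  static void pwm_bl_setup_enable_gpio(struct gpio_desc *enable_gpio,
                                       int initial_value)
  {
          /* Only skip reconfiguration when the line is known to be output. */
          if (enable_gpio &&
              gpiod_get_direction(enable_gpio) != GPIOF_DIR_OUT)
                  gpiod_direction_output(enable_gpio, initial_value);
  }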
2017-04-19  leds: pca9532: Extend pca9532 device tree support  (Felix Brack)
This patch extends the device tree support for the pca9532 by adding the leds 'default-state' property. Signed-off-by: Felix Brack <fb@ltec.ch> Signed-off-by: Jacek Anaszewski <jacek.anaszewski@gmail.com>
2017-04-19  tracing: Allocate the snapshot buffer before enabling probe  (Steven Rostedt (VMware))
Currently the snapshot trigger enables the probe and then allocates the snapshot. If the probe triggers before the allocation, it could cause the snapshot to fail and turn tracing off. It's best to allocate the snapshot buffer first, and then enable the trigger. If something goes wrong in the enabling of the trigger, the snapshot buffer is still allocated, but it can also be freed by the user by writing zero into the snapshot buffer file. Also add a check of the return status of alloc_snapshot(). Cc: stable@vger.kernel.org Fixes: 77fd5c15e3 ("tracing: Add snapshot trigger to function probes") Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
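An outline of the reordering, assuming a simplified setup helper around the kernel/trace/trace.c internals (alloc_snapshot(), global_trace and the probe registration are names from that file's context; treat this as a sketch, not the verbatim patch):

  static int snapshot_trigger_setup(char *glob, struct ftrace_probe_ops *ops,
                                    void *count)
  {
          int ret;

          /* 1. Make sure the snapshot buffer exists before anything can fire,
           *    and actually check the return value this time. */
          ret = alloc_snapshot(&global_trace);
          if (ret < 0)
                  return ret;

          /* 2. Only then arm the probe; a premature trigger can no longer
           *    hit an unallocated snapshot buffer and disable tracing. */
          return register_ftrace_function_probe(glob, ops, count);
  }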
2017-04-19  lightnvm: assume 64-bit lba numbers  (Arnd Bergmann)
The driver uses both u64 and sector_t to refer to offsets, and assigns between the two. This causes one harmless warning when sector_t is 32-bit:
  drivers/lightnvm/pblk-rb.c: In function 'pblk_rb_write_entry_gc':
  include/linux/lightnvm.h:215:20: error: large integer implicitly truncated to unsigned type [-Werror=overflow]
  drivers/lightnvm/pblk-rb.c:324:22: note: in expansion of macro 'ADDR_EMPTY'
As the driver is already doing this inconsistently, changing the type won't make it worse and is an easy way to avoid the warning. Fixes: a4bd217b4326 ("lightnvm: physical block device (pblk) target") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: make __blk_end_bidi_request private  (Christoph Hellwig)
blk_insert_flush should be using __blk_end_request to start with. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: remove blk_end_request_cur  (Christoph Hellwig)
This function is not used anywhere in the kernel. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: remove blk_end_request_err and __blk_end_request_err  (Christoph Hellwig)
Both functions are entirely unused. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: remove the osdblk driver  (Christoph Hellwig)
This was just a proof of concept user for the SCSI OSD library, and never had any real users. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Boaz Harrosh <ooo@electrozaur.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: Make writeback throttling defaults consistent for SQ devices  (Jan Kara)
When CFQ is used as an elevator, it disables writeback throttling because they don't play well together. Later when a different elevator is chosen for the device, writeback throttling doesn't get enabled again as it should. Make sure CFQ enables writeback throttling (if it should be enabled by default) when we switch from it to another IO scheduler. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: split bfq-iosched.c into multiple source files  (Paolo Valente)
The BFQ I/O scheduler features an optimal fair-queuing (proportional-share) scheduling algorithm, enriched with several mechanisms to boost throughput and reduce latency for interactive and real-time applications. This makes BFQ a large and complex piece of code. This commit addresses this issue by splitting BFQ into three main, independent components, and by moving each component into a separate source file:
 1. Main algorithm: handles the interaction with the kernel, and decides which requests to dispatch; it uses the following two further components to achieve its goals.
 2. Scheduling engine (Hierarchical B-WF2Q+ scheduling algorithm): computes the schedule, using weights and budgets provided by the above component.
 3. cgroups support: handles group operations (creation, destruction, move, ...).
Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: remove all get and put of I/O contexts  (Paolo Valente)
When a bfq queue is set in service and when it is merged, a reference to the I/O context associated with the queue is taken. This reference is then released when the queue is deselected from service or split. More precisely, the release of the reference is postponed to when the scheduler lock is released, to avoid nesting between the scheduler and the I/O-context lock. In fact, such nesting would lead to deadlocks, because of other code paths that take the same locks in the opposite order. This postponing of I/O-context releases does complicate code. This commit addresses these issues by modifying the involved operations in such a way that they no longer need to get the above I/O-context references. Then it also removes any get and release of these references. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: handle bursts of queue activations  (Arianna Avanzini)
Many popular I/O-intensive services or applications spawn or reactivate many parallel threads/processes during short time intervals. Examples are systemd during boot or git grep. These services or applications benefit mostly from a high throughput: the quicker the I/O generated by their processes is cumulatively served, the sooner the target job of these services or applications gets completed. As a consequence, it is almost always counterproductive to weight-raise any of the queues associated to the processes of these services or applications: in most cases it would just lower the throughput, mainly because weight-raising also implies device idling. To address this issue, an I/O scheduler needs, first, to detect which queues are associated with these services or applications. In this respect, we have that, from the I/O-scheduler standpoint, these services or applications cause bursts of activations, i.e., activations of different queues occurring shortly after each other. However, a shorter burst of activations may be caused also by the start of an application that does not consist in a lot of parallel I/O-bound threads (see the comments on the function bfq_handle_burst for details). In view of these facts, this commit introduces: 1) an heuristic to detect (only) bursts of queue activations caused by services or applications consisting in many parallel I/O-bound threads; 2) the prevention of device idling and weight-raising for the queues belonging to these bursts. Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: boost the throughput with random I/O on NCQ-capable HDDs  (Paolo Valente)
This patch is basically the counterpart, for NCQ-capable rotational devices, of the previous patch. Exactly as the previous patch does on flash-based devices and for any workload, this patch disables device idling on rotational devices, but only for random I/O. In fact, only with these queues disabling idling boosts the throughput on NCQ-capable rotational devices. To not break service guarantees, idling is disabled for NCQ-enabled rotational devices only when the same symmetry conditions considered in the previous patches hold. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: boost the throughput on NCQ-capable flash-based devices  (Paolo Valente)
This patch boosts the throughput on NCQ-capable flash-based devices, while still preserving latency guarantees for interactive and soft real-time applications. The throughput is boosted by just not idling the device when the in-service queue remains empty, even if the queue is sync and has a non-null idle window. This helps to keep the drive's internal queue full, which is necessary to achieve maximum performance. This solution to boost the throughput is a port of commits a68bbdd and f7d7b7a for CFQ. As already highlighted in a previous patch, allowing the device to prefetch and internally reorder requests trivially causes loss of control on the request service order, and hence on service guarantees. Fortunately, as discussed in detail in the comments on the function bfq_bfqq_may_idle(), if every process has to receive the same fraction of the throughput, then the service order enforced by the internal scheduler of a flash-based device is relatively close to that enforced by BFQ. In particular, it is close enough to let service guarantees be substantially preserved. Things change in an asymmetric scenario, i.e., if not every process has to receive the same fraction of the throughput. In this case, to guarantee the desired throughput distribution, the device must be prevented from prefetching requests. This is exactly what this patch does in asymmetric scenarios. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce idling only in symmetric scenarios  (Arianna Avanzini)
A seeky queue (i.e., a queue containing random requests) is assigned a very small device-idling slice, for throughput issues. Unfortunately, given the process associated with a seeky queue, this behavior causes the following problem: if the process, say P, performs sync I/O and has a higher weight than some other processes doing I/O and associated with non-seeky queues, then BFQ may fail to guarantee to P its reserved share of the throughput. The reason is that idling is key for providing service guarantees to processes doing sync I/O [1]. This commit addresses this issue by allowing the device-idling slice to be reduced for a seeky queue only if the scenario happens to be symmetric, i.e., if all the queues are to receive the same share of the throughput. [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015. http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Riccardo Pizzetti <riccardo.pizzetti@gmail.com> Signed-off-by: Samuele Zecchini <samuele.zecchini92@gmail.com> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: add Early Queue Merge (EQM)  (Arianna Avanzini)
A set of processes may happen to perform interleaved reads, i.e., read requests whose union would give rise to a sequential read pattern. There are two typical cases: first, processes reading fixed-size chunks of data at a fixed distance from each other; second, processes reading variable-size chunks at variable distances. The latter case occurs for example with QEMU, which splits the I/O generated by a guest into multiple chunks, and lets these chunks be served by a pool of I/O threads, iteratively assigning the next chunk of I/O to the first available thread. CFQ denotes as 'cooperating' a set of processes that are doing interleaved I/O, and when it detects cooperating processes, it merges their queues to obtain a sequential I/O pattern from the union of their I/O requests, and hence boost the throughput. Unfortunately, in the following frequent case, the mechanism implemented in CFQ for detecting cooperating processes and merging their queues is not responsive enough to handle also the fluctuating I/O pattern of the second type of processes. Suppose that one process of the second type issues a request close to the next request to serve of another process of the same type. At that time the two processes would be considered as cooperating. But, if the request issued by the first process is to be merged with some other already-queued request, then, from the moment at which this request arrives, to the moment when CFQ controls whether the two processes are cooperating, the two processes are likely to be already doing I/O in distant zones of the disk surface or device memory. CFQ uses however preemption to get a sequential read pattern out of the read requests performed by the second type of processes too. As a consequence, CFQ uses two different mechanisms to achieve the same goal: boosting the throughput with interleaved I/O. This patch introduces Early Queue Merge (EQM), a unified mechanism to get a sequential read pattern with both types of processes. The main idea is to immediately check whether a newly-arrived request lets some pair of processes become cooperating, both in the case of actual request insertion and, to be responsive with the second type of processes, in the case of request merge. Both types of processes are then handled by just merging their queues. Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it> Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce latency during request-pool saturation  (Paolo Valente)
This patch introduces an heuristic that reduces latency when the I/O-request pool is saturated. This goal is achieved by disabling device idling, for non-weight-raised queues, when there are weight- raised queues with pending or in-flight requests. In fact, as explained in more detail in the comment on the function bfq_bfqq_may_idle(), this reduces the rate at which processes associated with non-weight-raised queues grab requests from the pool, thereby increasing the probability that processes associated with weight-raised queues get a request immediately (or at least soon) when they need one. Along the same line, if there are weight-raised queues, then this patch halves the service rate of async (write) requests for non-weight-raised queues. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: preserve a low latency also with NCQ-capable drives  (Paolo Valente)
I/O schedulers typically allow NCQ-capable drives to prefetch I/O requests, as NCQ boosts the throughput exactly by prefetching and internally reordering requests. Unfortunately, as discussed in detail and shown experimentally in [1], this may cause fairness and latency guarantees to be violated. The main problem is that the internal scheduler of an NCQ-capable drive may postpone the service of some unlucky (prefetched) requests as long as it deems serving other requests more appropriate to boost the throughput. This patch addresses this issue by not disabling device idling for weight-raised queues, even if the device supports NCQ. This allows BFQ to start serving a new queue, and therefore allows the drive to prefetch new requests, only after the idling timeout expires. At that time, all the outstanding requests of the expired queue have been most certainly served. [1] P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012. Slightly extended version: http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite- results.pdf Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: reduce I/O latency for soft real-time applications  (Paolo Valente)
To guarantee a low latency also to the I/O requests issued by soft real-time applications, this patch introduces a further heuristic, which weight-raises (in the sense explained in the previous patch) also the queues associated to applications deemed as soft real-time. To be deemed as soft real-time, an application must meet two requirements. First, the application must not require an average bandwidth higher than the approximate bandwidth required to playback or record a compressed high-definition video. Second, the request pattern of the application must be isochronous, i.e., after issuing a request or a batch of requests, the application must stop issuing new requests until all its pending requests have been completed. After that, the application may issue a new batch, and so on. As for the second requirement, it is critical to require also that, after all the pending requests of the application have been completed, an adequate minimum amount of time elapses before the application starts issuing new requests. This prevents also greedy (i.e., I/O-bound) applications from being incorrectly deemed, occasionally, as soft real-time. In fact, if *any amount of time* is fine, then even a greedy application may, paradoxically, meet both the above requirements, if: (1) the application performs random I/O and/or the device is slow, and (2) the CPU load is high. The reason is the following. First, if condition (1) is true, then, during the service of the application, the throughput may be low enough to let the application meet the bandwidth requirement. Second, if condition (2) is true as well, then the application may occasionally behave in an apparently isochronous way, because it may simply stop issuing requests while the CPUs are busy serving other processes. To address this issue, the heuristic leverages the simple fact that greedy applications issue *all* their requests as quickly as they can, whereas soft real-time applications spend some time processing data after each batch of requests is completed. In particular, the heuristic works as follows. First, according to the above isochrony requirement, the heuristic checks whether an application may be soft real-time, thereby giving to the application the opportunity to be deemed as such, only when both the following two conditions happen to hold: 1) the queue associated with the application has expired and is empty, 2) there is no outstanding request of the application. Suppose that both conditions hold at time, say, t_c and that the application issues its next request at time, say, t_i. At time t_c the heuristic computes the next time instant, called soft_rt_next_start in the code, such that, only if t_i >= soft_rt_next_start, then both the next conditions will hold when the application issues its next request: 1) the application will meet the above bandwidth requirement, 2) a given minimum time interval, say Delta, will have elapsed from time t_c (so as to filter out greedy application). The current value of Delta is a little bit higher than the value that we have found, experimentally, to be adequate on a real, general-purpose machine. In particular we had to increase Delta to make the filter quite precise also in slower, embedded systems, and in KVM/QEMU virtual machines (details in the comments on the code). If the application actually issues its next request after time soft_rt_next_start, then its associated queue will be weight-raised for a relatively short time interval. 
If, during this time interval, the application proves again to meet the bandwidth and isochrony requirements, then the end of the weight-raising period for the queue is moved forward, and so on. Note that an application whose associated queue never happens to be empty when it expires will never have the opportunity to be deemed as soft real-time. Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: improve responsiveness  (Paolo Valente)
This patch introduces a simple heuristic to load applications quickly, and to perform the I/O requested by interactive applications just as quickly. To this purpose, both a newly-created queue and a queue associated with an interactive application (we explain in a moment how BFQ decides whether the associated application is interactive), receive the following two special treatments: 1) The weight of the queue is raised. 2) The queue unconditionally enjoys device idling when it empties; in fact, if the requests of a queue are sync, then performing device idling for the queue is a necessary condition to guarantee that the queue receives a fraction of the throughput proportional to its weight (see [1] for details). For brevity, we call just weight-raising the combination of these two preferential treatments. For a newly-created queue, weight-raising starts immediately and lasts for a time interval that: 1) depends on the device speed and type (rotational or non-rotational), and 2) is equal to the time needed to load (start up) a large-size application on that device, with cold caches and with no additional workload. Finally, as for guaranteeing a fast execution to interactive, I/O-related tasks (such as opening a file), consider that any interactive application blocks and waits for user input both after starting up and after executing some task. After a while, the user may trigger new operations, after which the application stops again, and so on. Accordingly, the low-latency heuristic weight-raises again a queue in case it becomes backlogged after being idle for a sufficiently long (configurable) time. The weight-raising then lasts for the same time as for a just-created queue. According to our experiments, the combination of this low-latency heuristic and of the improvements described in the previous patch allows BFQ to guarantee a high application responsiveness. [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O Scheduler", Proceedings of the First Workshop on Mobile System Technologies (MST-2015), May 2015. http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block, bfq: add more fairness with writes and slow processes  (Paolo Valente)
This patch deals with two sources of unfairness, which can also cause high latencies and throughput loss. The first source is related to write requests. Write requests tend to starve read requests, basically because, on one side, writes are slower than reads, whereas, on the other side, storage devices confuse schedulers by deceptively signaling the completion of write requests immediately after receiving them. This patch addresses this issue by just throttling writes. In particular, after a write request is dispatched for a queue, the budget of the queue is decremented by the number of sectors to write, multiplied by an (over)charge coefficient. The value of the coefficient is the result of our tuning with different devices. The second source of unfairness has to do with slowness detection: when the in-service queue is expired, BFQ also controls whether the queue has been "too slow", i.e., has consumed its last-assigned budget at such a low rate that it would have been impossible to consume all of this budget within the maximum time slice T_max (Subsec. 3.5 in [1]). In this case, the queue is always (over)charged the whole budget, to reduce its utilization of the device. Both this overcharge and the slowness-detection criterion may cause unfairness. First, always charging a full budget to a slow queue is too coarse. It is much more accurate, and this patch lets BFQ do so, to charge an amount of service 'equivalent' to the amount of time during which the queue has been in service. As explained in more detail in the comments on the code, this enables BFQ to provide time fairness among slow queues. Secondly, because of ZBR, a queue may be deemed as slow when its associated process is performing I/O on the slowest zones of a disk. However, unless the process is truly too slow, not reducing the disk utilization of the queue is more profitable in terms of disk throughput than the opposite. A similar problem is caused by logical block mapping on non-rotational devices. For this reason, this patch lets a queue be charged time, and not budget, only if the queue has consumed less than 2/3 of its assigned budget. As an additional, important benefit, this tolerance allows BFQ to preserve enough elasticity to still perform bandwidth, and not time, distribution with little unlucky or quasi-sequential processes. Finally, for the same reasons as above, this patch makes slowness detection itself much less harsh: a queue is deemed slow only if it has consumed its budget at less than half of the peak rate. [1] P. Valente and M. Andreolini, "Improving Application Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of the 5th Annual International Systems and Storage Conference (SYSTOR '12), June 2012. Slightly extended version: http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite- results.pdf Signed-off-by: Paolo Valente <paolo.valente@linaro.org> Signed-off-by: Arianna Avanzini <avanzini.arianna@gmail.com> Signed-off-by: Jens Axboe <axboe@fb.com>