path: root/drivers
2017-11-10  lpfc: tie in to new dev_loss_tmo interface in nvme transport  (James Smart)

This patch calls the new nvme transport routine for dev_loss_tmo whenever the SCSI FC transport calls the lldd to make a dynamic change to a remote port's dev_loss_tmo.

Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme-fc: decouple ns references from lldd references  (James Smart)

In the lldd api, a lldd may unregister a remoteport (loss of connectivity or driver unload) or localport (driver unload). The lldd must wait for the remoteport_delete or localport_delete callback before completing its actions post unregister. These xxx_delete callbacks currently occur only when the xxxport structure is fully freed after all references are removed. Thus the lldd may be held hostage until an app or in-kernel entity that has a namespace open finally closes, so that the namespace can be removed, then the controller, then the transport objects, and finally the lldd.

This patch decouples the transport and os-facing objects from the lldd and the remoteport and localport. There is a point in all deletions where the transport will no longer interact with the lldd on behalf of a controller. That point centers around the association established with the target/subsystem. The transport accesses the lldd whenever it attempts to create an association and while the association is active; new associations may only be created if the remoteport is live (and thus the localport is live). It will not access the lldd after deleting the association.

Therefore, the patch tracks the count of active controllers - those with associations being created or active - on a remoteport. It also tracks the number of remoteports that have active controllers on a localport. When a remoteport is unregistered, as soon as there are no active controllers, the lldd's remoteport_delete may be called and the lldd may continue. Similarly, when a localport is unregistered, as soon as there are no remoteports with active controllers, the localport_delete callback may be made. This significantly speeds up unregistration with the lldd.

The transport objects continue in suspended status with reconnect timers running, and upon expiration, normal ref-counting occurs and the objects are freed. The transport object may still be held hostage by the application/kernel module, but that is acceptable. With this change, the lldd may be fully unloaded and reloaded, and if registrations occur prior to the timeouts, the nvme controller and namespaces resume normally, as after a link bounce.

Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
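A minimal sketch of the counting scheme described above - the struct layout, field names, and helpers here are illustrative stand-ins, not the actual nvme-fc definitions:

    #include <linux/spinlock.h>

    struct rport_sketch {
        spinlock_t lock;
        int nr_active_ctrls;    /* controllers creating or holding an association */
        bool unregistered;      /* lldd has unregistered this remoteport */
        void (*remoteport_delete)(struct rport_sketch *rport);
    };

    static void rport_ctrl_active(struct rport_sketch *rport)
    {
        spin_lock(&rport->lock);
        rport->nr_active_ctrls++;
        spin_unlock(&rport->lock);
    }

    static void rport_ctrl_inactive(struct rport_sketch *rport)
    {
        bool release;

        spin_lock(&rport->lock);
        release = (--rport->nr_active_ctrls == 0) && rport->unregistered;
        spin_unlock(&rport->lock);

        /* the lldd may continue as soon as no association can touch it,
         * well before the transport objects themselves are freed */
        if (release)
            rport->remoteport_delete(rport);
    }

The same pattern, one level up, lets a localport count remoteports with active controllers and fire localport_delete when that count drains to zero.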
2017-11-10  nvme-fc: fix localport resume using stale values  (James Smart)

The localport resume was not updating the lldd ops structure. If the lldd is unloaded and reloaded, the ops pointers will differ. Additionally, as there are device references taken by the localport, ensure that resume only occurs if the device matches as well.

Signed-off-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: check admin passthru command effects  (Keith Busch)

The NVMe standard provides a command effects log page so the host can be aware of special requirements it may need to meet for a particular command. For example, the command may need to run with IO quiesced to prevent timeouts or undefined behavior, or it may change the logical block formats that determine how the host needs to construct future commands.

This patch saves the nvme command effects log page if the controller supports it, and performs appropriate actions before and after an admin passthrough command is completed. If the controller does not support the command effects log page, the driver defines the effects for known opcodes. The nvme format and sanitize commands are the only ones in this patch with known effects.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
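A rough sketch of the gating this enables around an admin passthrough command. The effects bit positions follow the NVMe spec's commands-supported-and-effects entry, but the controller type and every helper below are hypothetical stand-ins, not the driver's API:

    #include <stdint.h>

    #define CMD_EFFECTS_LBCC (1u << 1)    /* logical block content change */
    #define CMD_EFFECTS_CSE  (7u << 16)   /* command submission/execution limits */

    struct ctrl_sketch;                   /* hypothetical controller handle */
    uint32_t lookup_effects(struct ctrl_sketch *c, uint8_t opcode);
    void freeze_io(struct ctrl_sketch *c);
    void unfreeze_io(struct ctrl_sketch *c);
    void issue_admin_cmd(struct ctrl_sketch *c, uint8_t opcode);
    void revalidate_namespaces(struct ctrl_sketch *c);

    static void admin_passthru(struct ctrl_sketch *c, uint8_t opcode)
    {
        /* effects come from the saved log page, or from a built-in
         * table of known opcodes when the controller lacks the page */
        uint32_t effects = lookup_effects(c, opcode);

        if (effects & CMD_EFFECTS_CSE)
            freeze_io(c);                 /* quiesce IO for the duration */

        issue_admin_cmd(c, opcode);

        if (effects & CMD_EFFECTS_LBCC)
            revalidate_namespaces(c);     /* block formats may have changed */

        if (effects & CMD_EFFECTS_CSE)
            unfreeze_io(c);
    }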
2017-11-10  nvme: factor get log into a helper  (Keith Busch)

And fix the warning on a successful firmware log.

Reviewed-by: Javier González <javier@cnexlabs.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: fix and clarify the check for missing metadata  (Christoph Hellwig)

Update the check in nvme_setup_rw for missing metadata so that it sits together with the other metadata handling, does not contain impossible-to-reach conditions, and warns if we get an impossible request for a (non-PI) metadata-enabled namespace when CONFIG_BLK_DEV_INTEGRITY is not set. Also add a little helper that checks if a given metadata configuration contains protection information.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reported-by: Javier González <jg@lightnvm.io>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
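The helper amounts to something like the following sketch - a namespace carries protection information when a PI type is set and the metadata region is exactly one T10 PI tuple. The struct names here are simplified stand-ins for the driver's types:

    #include <stdbool.h>
    #include <stdint.h>

    struct t10_pi_tuple {            /* 8-byte DIF tuple layout */
        uint16_t guard_tag;
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    struct ns_sketch {
        uint8_t  pi_type;            /* 0 = no protection information */
        uint16_t ms;                 /* metadata bytes per block */
    };

    static inline bool ns_has_pi(const struct ns_sketch *ns)
    {
        return ns->pi_type && ns->ms == sizeof(struct t10_pi_tuple);
    }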
2017-11-10  nvme: split __nvme_revalidate_disk  (Christoph Hellwig)

Split out the code that applies the calculated values to a given disk/queue into a new helper that can be reused by the multipath code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: set the chunk size before freezing the queue  (Christoph Hellwig)

We don't need a frozen queue to update the chunk_size, which is just a hint, and moving it a little earlier will allow for some better code reuse with the multipath code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: don't pass struct nvme_ns to nvme_config_discard  (Christoph Hellwig)

To allow reusing this function for the multipath node.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: don't pass struct nvme_ns to nvme_init_integrity  (Christoph Hellwig)

To allow reusing this function for the multipath node.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: always unregister the integrity profile in __nvme_revalidate_disk  (Christoph Hellwig)

This is safe because the queue is always frozen when we revalidate, and it simplifies both the existing code as well as the multipath implementation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2017-11-10  nvme: move the dying queue check from cancel to completion  (Christoph Hellwig)

With multipath we don't want a hard DNR bit on a request that is cancelled by a controller reset, but instead want to be able to retry it on another path. To achieve this, don't always set the DNR bit when the queue is dying in nvme_cancel_request, but defer that decision to nvme_req_needs_retry. Note that this applies to any command there, not just cancelled commands, but once the queue is dying that is the right thing to do anyway.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
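The deferred decision ends up looking roughly like this - a sketch of the completion-time retry check, not the verbatim driver code:

    /* completion-time retry decision; the dying-queue test moves here
     * from nvme_cancel_request so cancelled requests avoid a hard DNR */
    static bool req_needs_retry_sketch(struct request *req)
    {
        if (nvme_req(req)->status & NVME_SC_DNR)
            return false;                 /* controller said do-not-retry */
        if (nvme_req(req)->retries >= nvme_max_retries)
            return false;
        if (blk_queue_dying(req->q))      /* was: DNR forced at cancel time */
            return false;
        return true;
    }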
2017-11-10  Merge tag 'ceph-for-4.14-rc9' of git://github.com/ceph/ceph-client  (Linus Torvalds)

Pull ceph fix from Ilya Dryomov:
 "Memory allocation flags fix, marked for stable"

* tag 'ceph-for-4.14-rc9' of git://github.com/ceph/ceph-client:
  rbd: use GFP_NOIO for parent stat and data requests
2017-11-10  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input  (Linus Torvalds)

Pull input layer updates from Dmitry Torokhov:

 - a new ACPI ID for the Elan touchpad found in yet another Ideapad model

 - Synaptics RMI4 will allow binding to controllers reporting SMB version 3 (note that we are not adding any new ACPI IDs to the Synaptics PS/2 driver, so unless the user explicitly enables intertouch support there is no user-visible change)

 - a fixup to the TSC 2004/5 touchscreen driver to mark input devices as "direct" to help userspace identify the type of device they are dealing with

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
  Input: synaptics-rmi4 - RMI4 can also use SMBUS version 3
  Input: tsc200x-core - set INPUT_PROP_DIRECT
  Input: elan_i2c - add ELAN060C to the ACPI table
2017-11-10  Merge remote-tracking branches 'spi/topic/sh-msiof', 'spi/topic/slave', 'spi/topic/spreadtrum' and 'spi/topic/tegra114' into spi-next  (Mark Brown)
2017-11-10  Merge remote-tracking branches 'spi/topic/imx', 'spi/topic/mxs', 'spi/topic/orion', 'spi/topic/rspi' and 'spi/topic/s3c64xx' into spi-next  (Mark Brown)
2017-11-10  Merge remote-tracking branches 'spi/topic/armada', 'spi/topic/axi', 'spi/topic/davinci' and 'spi/topic/fsl-dspi' into spi-next  (Mark Brown)
2017-11-10  Merge remote-tracking branch 'spi/topic/core' into spi-next  (Mark Brown)
2017-11-10  Merge remote-tracking branches 'spi/fix/idr' and 'spi/fix/sh-msiof' into spi-linus  (Mark Brown)
2017-11-10  Merge remote-tracking branches 'regulator/topic/da9211', 'regulator/topic/pfuze100' and 'regulator/topic/tps65218' into regulator-next  (Mark Brown)
2017-11-10  Merge remote-tracking branch 'regulator/topic/qcom-spmi' into regulator-next  (Mark Brown)
2017-11-10  Merge remote-tracking branch 'regulator/topic/axp20x' into regulator-next  (Mark Brown)
2017-11-10  Merge remote-tracking branch 'regulator/fix/qcom-spmi' into regulator-linus  (Mark Brown)
2017-11-10  spi: imx: Don't require platform data chipselect array  (Trent Piepho)

If the array is not present, assume all chip selects are native. This is the standard behavior for SPI masters configured via the device tree, and the behavior of this driver as well when it is configured via the device tree. This reduces platform data vs DT differences and allows most of the platform data based boards to remove their chip select arrays.

CC: Shawn Guo <shawnguo@kernel.org>
CC: Sascha Hauer <kernel@pengutronix.de>
CC: Fabio Estevam <fabio.estevam@nxp.com>
CC: Mark Brown <broonie@kernel.org>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
2017-11-10  spi: imx: Fix failure path leak on GPIO request error  (Trent Piepho)

If the code that requests any chip select GPIOs fails, the cleanup of spi_bitbang_start() by calling spi_bitbang_stop() is not done. Add this to the failure path. Note that spi_bitbang_start() has to be called before requesting GPIOs because the GPIO data in the spi master is populated when the master is registered, and that doesn't happen until spi_bitbang_start() is called.

CC: Shawn Guo <shawnguo@kernel.org>
CC: Sascha Hauer <kernel@pengutronix.de>
CC: Fabio Estevam <fabio.estevam@nxp.com>
CC: Mark Brown <broonie@kernel.org>
CC: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
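A sketch of the corrected probe error path - request_cs_gpios() and the labels are illustrative, not the driver's actual names; the point is that spi_bitbang_stop() now pairs with the earlier spi_bitbang_start():

    ret = spi_bitbang_start(&spi_imx->bitbang);
    if (ret)
        goto out_clk_put;

    /* chip select GPIO data only exists once the master is registered,
     * so the GPIOs must be requested after spi_bitbang_start() */
    ret = request_cs_gpios(spi_imx);
    if (ret)
        goto out_bitbang_stop;      /* previously this cleanup was skipped */

    return 0;

    out_bitbang_stop:
        spi_bitbang_stop(&spi_imx->bitbang);
    out_clk_put:
        /* remaining unwind of earlier probe steps */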
2017-11-10  spi: imx: GPIO based chip selects should not be required  (Trent Piepho)

The driver will fail to load if no GPIO chip selects are specified; this patch changes that so it no longer fails. It's possible to use all native chip selects, in which case there is no reason to have a GPIO chip select array. This is what happens if the *optional* device tree property "cs-gpios" is omitted. The spi core already checks for the absence of GPIO chip selects in the master and assigns any slaves a gpio_cs value of -ENOENT.

Also have the driver respect the standard SPI device tree property "num-cs" to allow setting the number of chip selects without using cs-gpios.

CC: Mark Brown <broonie@kernel.org>
CC: Shawn Guo <shawnguo@kernel.org>
CC: Sascha Hauer <kernel@pengutronix.de>
CC: Fabio Estevam <fabio.estevam@nxp.com>
CC: Oleksij Rempel <o.rempel@pengutronix.de>
Signed-off-by: Trent Piepho <tpiepho@impinj.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
2017-11-10  dm cache: lift common migration preparation code to alloc_migration()  (Mike Snitzer)
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache: remove unused deferred_cells member from struct cache  (Joe Thornber)

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache policy smq: allocate cache blocks in order  (Joe Thornber)

Previously, cache blocks were being allocated in reverse order. Fix this by pulling the block off the head of the free list. This shouldn't have any impact on performance or latency, but it is more correct to have the cache blocks allocated/mapped in ascending order. The fix slightly increases the chances of two adjacent oblocks being in adjacent cblocks.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
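In list terms the change is just which end of the free list the allocator pops - a generic sketch using the kernel list API, not the smq code itself:

    #include <linux/list.h>

    struct cblock_entry {                 /* illustrative free-list node */
        struct list_head list;
    };

    static struct cblock_entry *alloc_cblock(struct list_head *free_list)
    {
        struct cblock_entry *e;

        if (list_empty(free_list))
            return NULL;

        /* take the lowest-numbered block from the head of the list;
         * taking from the tail handed blocks out in reverse order */
        e = list_first_entry(free_list, struct cblock_entry, list);
        list_del_init(&e->list);
        return e;
    }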
2017-11-10  dm cache policy smq: change max background work from 10240 to 4096 blocks  (Joe Thornber)

10240 blocks was too much; lowering this reduces the latency of copying and consumes less memory.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache background tracker: limit amount of background work that may be issued at once  (Joe Thornber)

On large systems the cache policy can be over-enthusiastic and queue far too much dirty data to be written back. This consumes memory.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache policy smq: take origin idle status into account when queuing writebacks  (Joe Thornber)

If the origin device is idle, try to write back more data.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache policy smq: handle races with queuing background_work  (Joe Thornber)

The background_tracker holds a set of promotions/demotions that the cache policy wishes the core target to implement. When adding a new operation to the tracker it's possible that an operation on the same block is already present (though in practice this doesn't appear to be happening). Catch these situations and do the appropriate cleanup.

Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm raid: fix panic when attempting to force a raid to sync  (Heinz Mauelshagen)

Requesting a sync on an active raid device via a table reload (see the 'sync' parameter in Documentation/device-mapper/dm-raid.txt) skips the super_load() call that defines the superblock size (rdev->sb_size) -- resulting in an oops if/when super_sync()->memset() is called.

Fix by moving the initialization of the superblock start and size out of super_load() to the caller (analyse_superblocks).

Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm integrity: allow unaligned bv_offset  (Mikulas Patocka)

When slub_debug is enabled, kmalloc returns unaligned memory. XFS uses this unaligned memory for its buffers (if an unaligned buffer crosses a page, XFS frees it and allocates a full page instead - see the function xfs_buf_allocate_memory). dm-integrity checks if bv_offset is aligned on page size, and this check fails with slub_debug and XFS.

Fix this bug by removing the bv_offset check, leaving only the check for bv_len.

Fixes: 7eada909bfd7 ("dm: add integrity target")
Cc: stable@vger.kernel.org # v4.12+
Reported-by: Bruno Prémont <bonbons@sysophe.eu>
Reviewed-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm crypt: allow unaligned bv_offset  (Mikulas Patocka)

When slub_debug is enabled, kmalloc returns unaligned memory. XFS uses this unaligned memory for its buffers (if an unaligned buffer crosses a page, XFS frees it and allocates a full page instead - see the function xfs_buf_allocate_memory). dm-crypt checks if bv_offset is aligned on page size, and these checks fail with slub_debug and XFS.

Fix this bug by removing the bv_offset checks. Switch to checking if bv_len is aligned instead (this check should be sufficient to prevent overruns if a bio with too small a bv_len is received).

Fixes: 8f0009a22517 ("dm crypt: optionally support larger encryption sector size")
Cc: stable@vger.kernel.org # v4.12+
Reported-by: Bruno Prémont <bonbons@sysophe.eu>
Tested-by: Bruno Prémont <bonbons@sysophe.eu>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Reviewed-by: Milan Broz <gmazyland@gmail.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
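The relaxed validation in both targets boils down to the following shape - a sketch, with sector_size standing in for the target's configured sector size, not the actual hunks:

    /* before: any buffer that merely started mid-page/mid-sector was
     * rejected, which slub_debug's unaligned kmalloc memory triggers */
    if (bv.bv_offset & (sector_size - 1))
        return -EIO;

    /* after: only the segment length must stay sector-aligned; that is
     * what actually prevents processing past the end of the segment */
    if (bv.bv_len & (sector_size - 1))
        return -EIO;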
2017-11-10  dm: small cleanup in dm_get_md()  (Mike Snitzer)

Makes dm_get_md() and dm_get_from_kobject() have similar code.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm: fix race between dm_get_from_kobject() and __dm_destroy()  (Hou Tao)

The following BUG_ON was hit when testing repeated creation and removal of DM devices:

  kernel BUG at drivers/md/dm.c:2919!
  CPU: 7 PID: 750 Comm: systemd-udevd Not tainted 4.1.44
  Call Trace:
   [<ffffffff81649e8b>] dm_get_from_kobject+0x34/0x3a
   [<ffffffff81650ef1>] dm_attr_show+0x2b/0x5e
   [<ffffffff817b46d1>] ? mutex_lock+0x26/0x44
   [<ffffffff811df7f5>] sysfs_kf_seq_show+0x83/0xcf
   [<ffffffff811de257>] kernfs_seq_show+0x23/0x25
   [<ffffffff81199118>] seq_read+0x16f/0x325
   [<ffffffff811de994>] kernfs_fop_read+0x3a/0x13f
   [<ffffffff8117b625>] __vfs_read+0x26/0x9d
   [<ffffffff8130eb59>] ? security_file_permission+0x3c/0x44
   [<ffffffff8117bdb8>] ? rw_verify_area+0x83/0xd9
   [<ffffffff8117be9d>] vfs_read+0x8f/0xcf
   [<ffffffff81193e34>] ? __fdget_pos+0x12/0x41
   [<ffffffff8117c686>] SyS_read+0x4b/0x76
   [<ffffffff817b606e>] system_call_fastpath+0x12/0x71

The bug can be easily triggered if an extra delay (e.g. 10ms) is added between the test of DMF_FREEING & DMF_DELETING and dm_get() in dm_get_from_kobject().

To fix it, we need to ensure the test of DMF_FREEING & DMF_DELETING and dm_get() are done in an atomic way, so _minor_lock is used. The other callers of dm_get() have also been checked to be OK: some callers invoke dm_get() under _minor_lock, some invoke it under _hash_lock, and dm_start_request() invokes it after increasing md->open_count.

Cc: stable@vger.kernel.org
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
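The fixed lookup takes roughly this shape - a sketch of the pattern rather than the verbatim dm.c hunk:

    struct mapped_device *dm_get_from_kobject_sketch(struct kobject *kobj)
    {
        struct mapped_device *md;

        md = container_of(kobj, struct mapped_device, kobj_holder.kobj);

        spin_lock(&_minor_lock);
        if (test_bit(DMF_FREEING, &md->flags) || dm_deleting_md(md)) {
            md = NULL;              /* device is on its way out */
            goto out;
        }
        dm_get(md);                 /* now atomic with the flag test */
    out:
        spin_unlock(&_minor_lock);
        return md;
    }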
2017-11-10  dm: allocate struct mapped_device with kvzalloc  (Mikulas Patocka)

The structure srcu_struct can be very big; its size is proportional to the value of CONFIG_NR_CPUS. The Fedora kernel has CONFIG_NR_CPUS 8192, and the field io_barrier in struct mapped_device is 84kB in the debugging kernel and 50kB in the non-debugging kernel. The large size may result in failure of the function kzalloc_node.

In order to avoid the allocation failure, we use the function kvzalloc_node, which falls back to vmalloc if a large contiguous chunk of memory is not available. This patch also moves the field io_barrier to the last position of struct mapped_device - the reason is that on many processor architectures short memory offsets result in smaller code than long memory offsets; on x86-64 it reduces code size by 320 bytes.

Note to stable kernel maintainers - kernels 4.11 and older don't have the function kvzalloc_node; you can use vzalloc_node instead.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
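The allocation change itself is small - a sketch of the alloc/free pair, with nid as an illustrative NUMA node variable and the surrounding code omitted:

    /* kvzalloc_node tries kmalloc first and transparently falls back
     * to vmalloc when a contiguous physical chunk is unavailable */
    md = kvzalloc_node(sizeof(*md), GFP_KERNEL, nid);
    if (!md)
        return NULL;

    /* ... teardown path ... */

    kvfree(md);   /* correct for both kmalloc- and vmalloc-backed memory */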
2017-11-10  dm zoned: ignore last smaller runt zone  (Damien Le Moal)

The SCSI layer allows ZBC drives to have a smaller last runt zone. For such a device, specifying the entire capacity for a dm-zoned target table entry fails because the specified capacity is not aligned on the device zone size indicated in the request queue structure of the device.

Fix this problem by ignoring the last runt zone in the entry length when setting up the dm-zoned target (ctr method) and when iterating table entries of the target (iterate_devices method). This allows dm-zoned users to still easily set up a target using the entire device capacity (as mandated by dm-zoned) or the aligned capacity excluding the last runt zone.

While at it, replace direct references to the device queue chunk_sectors limit with calls to the accessor blk_queue_zone_sectors().

Reported-by: Peter Desnoyers <pjd@ccs.neu.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm space map metadata: use ARRAY_SIZE  (Jérémy Lefaure)

Using the ARRAY_SIZE macro improves the readability of the code.

Found with Coccinelle with the following semantic patch:

    @r depends on (org || report)@
    type T;
    T[] E;
    position p;
    @@
    (
      (sizeof(E)@p /sizeof(*E))
    |
      (sizeof(E)@p /sizeof(E[...]))
    |
      (sizeof(E)@p /sizeof(T))
    )

Signed-off-by: Jérémy Lefaure <jeremy.lefaure@lse.epita.fr>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
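For reference, the transformation looks like this - a generic before/after in plain C, not the actual dm-space-map hunk (in the kernel, ARRAY_SIZE comes from <linux/kernel.h>):

    #include <stddef.h>

    #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

    static const unsigned int shifts[] = { 0, 8, 16, 24 };  /* illustrative table */

    /* before: nr = sizeof(shifts) / sizeof(*shifts); */
    static const size_t nr = ARRAY_SIZE(shifts);             /* after */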
2017-11-10  dm log writes: add support for DAX  (Ross Zwisler)

Now that we have the ability to log filesystem writes using a flat buffer, add support for DAX. The motivation for this support is the need for an xfstest that can test the new MAP_SYNC DAX flag. By logging the filesystem activity with dm-log-writes we can show that the MAP_SYNC page faults are writing out their metadata as they happen, instead of requiring an explicit msync/fsync.

Unfortunately we can't easily track data that has been written via mmap() now that the dax_flush() abstraction was removed by commit c3ca015fab6d ("dax: remove the pmem_dax_ops->flush abstraction"). Otherwise we could just treat each flush as a big write, and store the data that is being synced to media. It may be worthwhile to add the dax_flush() entry point back, just as a notifier so we can do this logging.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm log writes: add support for inline data buffers  (Ross Zwisler)

Currently dm-log-writes supports writing filesystem data via BIOs, and writing internal metadata from a flat buffer via write_metadata(). For DAX writes, though, we won't have a BIO, but will instead have an iterator that we'll want to use to fill a flat data buffer. So, create write_inline_data(), which allows us to write filesystem data using a flat buffer as a source, and wire it up in log_one_block().

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache: simplify get_per_bio_data() by removing data_size argument  (Mike Snitzer)

There is only one per_bio_data size now that writethrough-specific data has been removed from the per_bio_data structure.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache: remove all obsolete writethrough-specific code  (Mike Snitzer)

Now that the writethrough code is much simpler there is no need to track so much state or cascade bio submission (as was done, via writethrough_endio(), to issue origin then cache IO in series). As such, the obsolete writethrough list and workqueue are also removed.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache: submit writethrough writes in parallel to origin and cache  (Mike Snitzer)

Discontinue issuing writethrough write IO in series to the origin and then the cache. Use bio_clone_fast() to create a new origin clone bio that will be mapped to the origin device, and then bio_chain() it to the bio that gets remapped to the cache device. The origin clone bio does _not_ have a copy of the per_bio_data -- as such check_if_tick_bio_needed() will not be called.

The cache bio (parent bio) will not complete until the origin bio has completed -- this fulfills bio_clone_fast()'s requirements as well as the requirement to not complete the original IO until the write IO has completed to both the origin and cache device.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
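The chaining described above reduces to a few lines - a sketch, where the remap helpers are illustrative stand-ins for the target's remapping code:

    struct bio *origin_bio;

    /* the clone shares the data pages of the original bio but
     * carries no per_bio_data of its own */
    origin_bio = bio_clone_fast(bio, GFP_NOIO, &cache->bs);

    /* the parent (cache) bio completes only after the clone does */
    bio_chain(origin_bio, bio);

    remap_to_origin(cache, origin_bio);     /* illustrative helper */
    submit_bio(origin_bio);

    remap_to_cache(cache, bio, cblock);     /* illustrative helper */
    /* bio then proceeds down the normal path to the cache device */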
2017-11-10  dm cache: pass cache structure to mode functions  (Mike Snitzer)

No functional changes, just a bit cleaner than passing the cache_features structure.

Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  dm cache: fix race condition in the writeback mode overwrite_bio optimisation  (Joe Thornber)

When a DM cache in writeback mode moves data between the slow and fast device it can often avoid a copy if the triggering bio either:

i) covers the whole block (no point copying if we're about to overwrite it)
ii) the migration is a promotion and the origin block is currently discarded

Prior to this fix there was a race with case (ii). The discard status was checked with a shared lock held (rather than exclusive). This meant another bio could run in parallel and write data to the origin, removing the discard state. After the promotion the parallel write would have been lost.

With this fix the discard status is re-checked once the exclusive lock has been acquired. If the block is no longer discarded it falls back to the slower full copy path.

Fixes: b29d4986d ("dm cache: significant rework to leverage dm-bio-prison-v2")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-11-10  md: free unused memory after bitmap resize  (Zdenek Kabelac)

When the bitmap is resized, the old kmalloc'd chunks are not released once the resized bitmap starts to use the new space.

This fixes in particular kmemleak reports like this one:

  unreferenced object 0xffff8f4311e9c000 (size 4096):
    comm "lvm", pid 19333, jiffies 4295263268 (age 528.265s)
    hex dump (first 32 bytes):
      02 80 02 80 02 80 02 80 02 80 02 80 02 80 02 80  ................
      02 80 02 80 02 80 02 80 02 80 02 80 02 80 02 80  ................
    backtrace:
      [<ffffffffa69471ca>] kmemleak_alloc+0x4a/0xa0
      [<ffffffffa628c10e>] kmem_cache_alloc_trace+0x14e/0x2e0
      [<ffffffffa676cfec>] bitmap_checkpage+0x7c/0x110
      [<ffffffffa676d0c5>] bitmap_get_counter+0x45/0xd0
      [<ffffffffa676d6b3>] bitmap_set_memory_bits+0x43/0xe0
      [<ffffffffa676e41c>] bitmap_init_from_disk+0x23c/0x530
      [<ffffffffa676f1ae>] bitmap_load+0xbe/0x160
      [<ffffffffc04c47d3>] raid_preresume+0x203/0x2f0 [dm_raid]
      [<ffffffffa677762f>] dm_table_resume_targets+0x4f/0xe0
      [<ffffffffa6774b52>] dm_resume+0x122/0x140
      [<ffffffffa6779b9f>] dev_suspend+0x18f/0x290
      [<ffffffffa677a3a7>] ctl_ioctl+0x287/0x560
      [<ffffffffa677a693>] dm_ctl_ioctl+0x13/0x20
      [<ffffffffa62d6b46>] do_vfs_ioctl+0xa6/0x750
      [<ffffffffa62d7269>] SyS_ioctl+0x79/0x90
      [<ffffffffa6956d41>] entry_SYSCALL_64_fastpath+0x1f/0xc2

Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com>
Signed-off-by: Shaohua Li <shli@fb.com>
2017-11-10  md: release allocated bitset sync_set  (Zdenek Kabelac)

This patch fixes a kmemleak on the md_stop() path, likely used only by the dm-raid wrapper. md releases both bio sets in mddev_put(), but that freeing is not shared with md_stop(). Also set the bio_set and sync_set pointers to NULL, just like mddev_put() does.

Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com>
Signed-off-by: Shaohua Li <shli@fb.com>