2013-12-12Btrfs: make sure we cleanup all reloc roots if error happensWang Shilong
I hit an oops when merging reloc roots fails. The reason is that new reloc roots may be added, so we should make sure we clean up all reloc roots. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12Btrfs: skip building backref tree for uuid and quota tree when doing balance relocationWang Shilong
The quota tree and UUID tree are only COWed; they cannot be snapshotted. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12Btrfs: fix an oops when doing balance relocationWang Shilong
I hit an oops when inserting a reloc root into @reloc_root_tree (it can be easily triggered by forcing COW for the relocation root): [ 866.494539] [<ffffffffa0499579>] btrfs_init_reloc_root+0x79/0xb0 [btrfs] [ 866.495321] [<ffffffffa044c240>] record_root_in_trans+0xb0/0x110 [btrfs] [ 866.496109] [<ffffffffa044d758>] btrfs_record_root_in_trans+0x48/0x80 [btrfs] [ 866.496908] [<ffffffffa0494da8>] select_reloc_root+0xa8/0x210 [btrfs] [ 866.497703] [<ffffffffa0495c8a>] do_relocation+0x16a/0x540 [btrfs] This happens because the reloc root inserted into @reloc_root_tree does not stay valid within one transaction: the reloc root may be COWed and its root block bytenr reused, and then the oops happens. We should update the reloc root in @reloc_root_tree when we COW the reloc root node; fix it. Signed-off-by: Wang Shilong <wangsl.fnst@cn.fujitsu.com> Reviewed-by: Miao Xie <miaox@cn.fujitsu.com> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12Btrfs: don't miss skinny extent items on delayed ref head contentionFilipe David Borba Manana
Currently extent-tree.c:btrfs_lookup_extent_info() can miss the lookup of skinny extent items. This can happen when the execution flow is the following: * We do an extent tree lookup and fail to find a skinny extent item; * As a result, we attempt to see if a non-skinny extent item exists, either by looking at the previous item in the leaf or by doing another full extent tree search; * We have a transaction and then we check for a matching delayed ref head in the transaction's delayed refs rbtree; * We find such a delayed ref head and then we try to lock it with a call to mutex_trylock(); * The lock was contended so we jump to the label "again", which repeats the extent tree search but for a non-skinny extent item, because we previously set the metadata variable to 0 and the search key to look for a non-skinny extent item; * After the jump (and after releasing the transaction's delayed refs lock), a skinny extent item might have been added to the extent tree but we will miss it because metadata is set to 0 and the search key is set for a non-skinny extent item. The fix here is to not reset metadata to 0 and to jump to the initial search key setup if the delayed ref head is contended, instead of jumping directly to the extent tree search label ("again"). This issue was found while investigating the issue reported at Bugzilla 64961. David Sterba suspected this function was missing extent items, and that this could be caused by the last change to this function, which was made in the following patch: [PATCH] Btrfs: optimize btrfs_lookup_extent_info() (commit 74be9510876a66ad9826613ac8a526d26f9e7f01) But in fact this issue already existed before, because after failing to find a skinny extent item, the code set the search key for a non-skinny extent item, and on contention of a matching delayed ref head it would not search the extent tree for a skinny extent item anymore. Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12btrfs: call mnt_drop_write after interrupted subvol deletionDavid Sterba
If btrfs_ioctl_snap_destroy blocks on the mutex and the process is killed, the mnt_write count is left unbalanced, which leads to an unmountable filesystem. CC: stable@vger.kernel.org Signed-off-by: David Sterba <dsterba@suse.cz> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12Btrfs: don't clear the default compression typeMiao Xie
We met an oops caused by the wrong compression type: [ 556.512356] BUG: unable to handle kernel NULL pointer dereference at (null) [ 556.512370] IP: [<ffffffff811dbaa0>] __list_del_entry+0x1/0x98 [SNIP] [ 556.512490] [<ffffffff811dbb44>] ? list_del+0xd/0x2b [ 556.512539] [<ffffffffa05dd5ce>] find_workspace+0x97/0x175 [btrfs] [ 556.512546] [<ffffffff813c14b5>] ? _raw_spin_lock+0xe/0x10 [ 556.512576] [<ffffffffa05de276>] btrfs_compress_pages+0x2d/0xa2 [btrfs] [ 556.512601] [<ffffffffa05af060>] compress_file_range.constprop.54+0x1f2/0x4e8 [btrfs] [ 556.512627] [<ffffffffa05af388>] async_cow_start+0x32/0x4d [btrfs] [ 556.512655] [<ffffffffa05cc7a1>] worker_loop+0x144/0x4c3 [btrfs] [ 556.512661] [<ffffffff81059404>] ? finish_task_switch+0x80/0xb8 [ 556.512689] [<ffffffffa05cc65d>] ? btrfs_queue_worker+0x244/0x244 [btrfs] [ 556.512695] [<ffffffff8104fa4e>] kthread+0x8d/0x95 [ 556.512699] [<ffffffff81050000>] ? bit_waitqueue+0x34/0x7d [ 556.512704] [<ffffffff8104f9c1>] ? __kthread_parkme+0x65/0x65 [ 556.512709] [<ffffffff813c7eec>] ret_from_fork+0x7c/0xb0 [ 556.512713] [<ffffffff8104f9c1>] ? __kthread_parkme+0x65/0x65 Steps to reproduce: # mkfs.btrfs -f <dev> # mount -o nodatacow <dev> <mnt> # touch <mnt>/<file> # chattr =c <mnt>/<file> # dd if=/dev/zero of=<mnt>/<file> bs=1M count=10 It is because we cleared the default compression type when setting nodatacow. In fact, we needn't do it, because we already use the COMPRESS flag to indicate whether the file data should be compressed; we needn't use the variable -- compress_type -- in btrfs_info to do the same thing, and can just use it to hold the default compression type. Otherwise we would get a wrong compress type for a file whose own compress flag is set but whose filesystem's compress flag is not set. Reported-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com> Signed-off-by: Miao Xie <miaox@cn.fujitsu.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
2013-12-12hwmon: Prevent some divide by zeros in FAN_TO_REG()Dan Carpenter
The "rpm * div" operations can overflow here, so this patch adds an upper limit to rpm to prevent that. Jean Delvare helped me with this patch. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Roger Lucas <vt8231@hiddenengine.co.uk> Cc: stable@vger.kernel.org Signed-off-by: Jean Delvare <khali@linux-fr.org>
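To make the hazard concrete, here is a minimal sketch of the clamping idea (not the driver's actual FAN_TO_REG macro; the 1310720 constant, the 1..255 register range and the stopped-fan value are assumptions):

	#include <stdint.h>

	/* Illustrative only: clamp rpm before computing the register value so
	 * that rpm * div can neither overflow nor leave a zero divisor.
	 * Assumes div > 0. */
	static uint8_t fan_to_reg(long rpm, long div)
	{
		long reg;

		if (rpm <= 0)
			return 255;		/* treat as stopped fan (assumption) */
		if (rpm > 1310720 / div)	/* upper clamp keeps rpm * div bounded */
			rpm = 1310720 / div;
		reg = (1310720 + rpm * div / 2) / (rpm * div);	/* round to nearest */
		if (reg < 1)
			reg = 1;
		if (reg > 255)
			reg = 255;
		return (uint8_t)reg;
	}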
2013-12-12hwmon: (w83l786ng) Fix fan speed control rangeJean Delvare
The W83L786NG stores the fan speed on 4 bits while the sysfs interface uses a 0-255 range. Thus the driver should scale the user input down to map it to the device range, and scale up the value read from the device before presenting it to the user. The reserved register nibble should be left unchanged. Signed-off-by: Jean Delvare <khali@linux-fr.org> Cc: stable@vger.kernel.org Reviewed-by: Guenter Roeck <linux@roeck-us.net>
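A minimal sketch of the scaling described above (the 0x11 step and the assumption that the duty cycle lives in the upper nibble are illustrative, not the driver's exact register layout):

	#include <stdint.h>

	/* Write path: map the 0-255 sysfs value onto the 4-bit device range and
	 * keep the reserved nibble untouched. */
	static uint8_t pwm_to_reg(uint8_t old_reg, unsigned int val)	/* val: 0..255 */
	{
		uint8_t nibble = (uint8_t)((val + 8) / 17);	/* DIV_ROUND_CLOSEST(val, 0x11): 0..15 */
		return (uint8_t)((old_reg & 0x0f) | (nibble << 4));
	}

	/* Read path: expand the 4-bit device value back to the 0-255 range. */
	static unsigned int reg_to_pwm(uint8_t reg)
	{
		return (reg >> 4) * 0x11;
	}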
2013-12-12hwmon: (w83l786ng) Fix fan speed control mode setting and reportingBrian Carnes
The wrong mask is used, which causes some fan speed control modes (pwmX_enable) to be incorrectly reported, and some modes to be impossible to set. [JD: add subject and description.] Signed-off-by: Brian Carnes <bmcarnes@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Jean Delvare <khali@linux-fr.org>
2013-12-12hwmon: (lm90) Unregister hwmon device if interrupt setup failsGuenter Roeck
Commit 109b1283fb (hwmon: (lm90) Add support to handle IRQ) introduced interrupt support. Its error handling code fails to unregister the already registered hwmon device. Fixes: 109b1283fb532ac773a076748ffccf76a7067cab Signed-off-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Jean Delvare <khali@linux-fr.org>
2013-12-11ARM: sun6i: dt: Fix interrupt trigger typesMaxime Ripard
The Allwinner A31 uses the ARM GIC as its internal interrupt controller. The GIC can work with several interrupt triggers, and the A31 was setting it up to use a rising-edge trigger, while the interrupts are actually level-high triggered, so some interrupts would be completely ignored if the edge was missed. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Cc: stable@vger.kernel.org # 3.12+ Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11ARM: sun7i: dt: Fix interrupt trigger typesMaxime Ripard
The Allwinner A20 uses the ARM GIC as its internal interrupt controller. The GIC can work with several interrupt triggers, and the A20 was setting it up to use a rising-edge trigger, while the interrupts are actually level-high triggered, so some interrupts would be completely ignored if the edge was missed. Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com> Acked-by: Hans de Goede <hdegoede@redhat.com> Cc: stable@vger.kernel.org #3.12+ Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11MAINTAINERS: merge IMX6 entry into IMXShawn Guo
I have been co-maintaining IMX sub-architecture for a couple of years, and collecting IMX sub-architecture patches rather than IMX6 only ones for a few release cycles. It makes sense to officially add myself as the co-maintainer for IMX sub-architecture now. Consequently, IMX6 entry can just be merged into IMX. While at it, add a 'F:' entry for IMX DTS files. Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Acked-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11ARM: tegra: add missing break to fuse initialization codeStephen Warren
Add a missing break to the switch in tegra_init_fuse() which determines which SoC the code is running on. This prevents the Tegra30+ fuse handling code from running on Tegra20. Fixes: 3bd1ae57f7bb ("ARM: tegra: add fuses as device randomness") Signed-off-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: Olof Johansson <olof@lixom.net>
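The shape of the bug is simply a missing break in a chip-ID switch; a minimal sketch (names are placeholders, not the actual tegra_init_fuse() code):

	enum chip { CHIP_TEGRA20, CHIP_TEGRA30 };

	static void init_fuse(enum chip c)
	{
		switch (c) {
		case CHIP_TEGRA20:
			/* Tegra20-specific fuse setup */
			break;	/* without this break, execution falls through
				 * into the Tegra30+ handling below */
		case CHIP_TEGRA30:
		default:
			/* Tegra30 and later fuse setup */
			break;
		}
	}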
2013-12-11ARM: pxa: prevent PXA270 occasional reboot freezesSergei Ianovich
Erratum 71 of the PXA270M Processor Family Specification Update (April 19, 2010) explains that the watchdog reset time is just 8us instead of the 10ms in the EMTS. If SDRAM is not reset, it causes memory bus congestion and the device hangs. We put SDRAM into self-refresh mode before the watchdog reset, removing the potential freezes. Without this patch the PXA270-based ICP DAS LP-8x4x hangs after up to 40 reboots. With this patch it has successfully rebooted 500 times. Signed-off-by: Sergei Ianovich <ynvich@gmail.com> Tested-by: Marek Vasut <marex@denx.de> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11ARM: pxa: tosa: fix keys mappingDmitry Eremin-Solenikov
When converting from the tosa-keyboard driver to the matrix keypad driver, the tosa keys received an extra one-column shift. Replace that with the correct values to make the keyboard work again. Fixes: f69a6548c9d5 ('[ARM] pxa/tosa: make use of the matrix keypad driver') Signed-off-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com> Cc: stable@vger.kernel.org Signed-off-by: Olof Johansson <olof@lixom.net>
2013-12-11Merge remote-tracking branches 'regulator/fix/as3722' and 'regulator/fix/pfuze100' into regulator-linusMark Brown
2013-12-11hwmon: HIH-6130: Support I2C bus drivers without I2C_FUNC_SMBUS_QUICKJosé Miguel Gonçalves
Some I2C bus drivers do not allow zero-length data transfers which are required to start a measurement with the HIH6130/1 sensor. Nevertheless, we can overcome this limitation by writing a zero dummy byte. This byte is ignored by the sensor and was verified to be working with the OMAP I2C bus driver in a BeagleBone board. Signed-off-by: José Miguel Gonçalves <jose.goncalves@inov.pt> [Guenter Roeck: Simplified complexity of write_length initialization] Cc: stable@vger.kernel.org # v3.10+ Signed-off-by: Guenter Roeck <linux@roeck-us.net>
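A sketch of the workaround in driver context (the surrounding measurement-start code and error handling are assumed; only the i2c calls are standard kernel API):

	/* If the adapter cannot do a zero-length (SMBus Quick) transfer, send one
	 * dummy byte instead; the sensor ignores it but still starts a measurement. */
	const char dummy = 0;
	int write_length = 0;
	int ret;

	if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_QUICK))
		write_length = 1;

	ret = i2c_master_send(client, &dummy, write_length);
	if (ret < 0)
		return ret;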
2013-12-11ALSA: hda - Add static DAC/pin mapping for AD1986A codecTakashi Iwai
The AD1986A codec is a pretty old codec and has really many hidden restrictions. One such restriction is that each DAC is dedicated to a certain pin although other connections are possible. Currently, the generic parser tries to assign individual DACs as much as possible, and this leads to two bad situations: connections where the sound actually doesn't work, and connections conflicting with other channels. We may fix this by trying harder to find the best connections, but as of now it's easier to give some hints for paired DAC/pin connections and honor them if available, since such a hint is needed only for specific codecs (right now only AD1986A, and there are unlikely to be any others in the future). Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=64971 Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=66621 Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-12-11ALSA: hda - One more Dell headset detection quirkHui Wang
On the Dell machines with codec whose Subsystem Id is 0x10280624, no external microphone can be detected when plugging a 3-ring headset. If we add "model=dell-headset-multi" for the snd-hda-intel.ko, the problem will disappear. BugLink: https://bugs.launchpad.net/bugs/1259790 Cc: David Henningsson <david.henningsson@canonical.com> Cc: stable@vger.kernel.org Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-12-11ALSA: hda - hdmi: Fix IEC958 ctl indexes for some simple HDMI devicesAnssi Hannula
In case a single HDA card has both HDMI and S/PDIF outputs, the S/PDIF outputs will have their IEC958 controls created starting from index 16 and the HDMI controls will be created starting from index 0. However, HDMI simple_playback_build_controls() as used by old VIA and NVIDIA codecs incorrectly requests the IEC958 controls to be created with an S/PDIF type instead of HDMI. In case the card has other codecs that have HDMI outputs, the controls will be created with wrong index=16, causing them to e.g. be unreachable by the ALSA "hdmi" alias. Fix that by making simple_playback_build_controls() request controls with HDMI indexes. Not many cards have an affected configuration, but e.g. ASUS M3N78-VM contains an integrated NVIDIA HDA "card" with: - a VIA codec that has, among others, an S/PDIF pin incorrectly labelled as an HDMI pin, and - an NVIDIA MCP7x HDMI codec. Reported-by: MysterX on #openelec Tested-by: MysterX on #openelec Signed-off-by: Anssi Hannula <anssi.hannula@iki.fi> Cc: <stable@vger.kernel.org> # 3.8+ Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-12-10nfsd: when reusing an existing repcache entry, unhash it firstJeff Layton
The DRC code will attempt to reuse an existing, expired cache entry in preference to allocating a new one. It'll then search the cache, and if it gets a hit it'll then free the cache entry that it was going to reuse. The cache code doesn't unhash the entry that it's going to reuse however, so it's possible for it to end up designating an entry for reuse and then subsequently freeing the same entry after it finds it. This leads to a later use-after-free situation and usually some list corruption warnings or an oops. Fix this by simply unhashing the entry that we intend to reuse. That will mean that it's not findable via a search and should prevent this situation from occurring. Cc: stable@vger.kernel.org # v3.10+ Reported-by: Christoph Hellwig <hch@infradead.org> Reported-by: g. artim <gartim@gmail.com> Signed-off-by: Jeff Layton <jlayton@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
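A sketch of the ordering fix (the entry and field names are assumptions, not the exact nfsd code):

	/* Once an expired entry has been chosen for reuse, take it off the hash
	 * chain before searching the cache, so a subsequent lookup can never hand
	 * back the very entry we are about to recycle or free. */
	if (rp)
		hlist_del_init(&rp->c_hash);
	/* ... now search the cache; on a hit, freeing rp is safe ... */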
2013-12-10dm stats: initialize read-only module parameterMikulas Patocka
The module parameter stats_current_allocated_bytes in dm-mod is read-only. This parameter informs the user about memory consumption. It is not supposed to be changed by the user. However, despite being read-only, this parameter can be set on the modprobe or insmod command line: modprobe dm-mod stats_current_allocated_bytes=12345 The kernel doesn't expect this variable to be non-zero at module initialization, and if the user sets it, it results in a warning. This patch initializes the variable in the module init routine so that any user-supplied value is ignored. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org # 3.12+
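The general pattern looks roughly like this (a sketch, not the actual dm-stats code; the parameter name is taken from the message above, everything else is illustrative). The same pattern applies to the dm-bufio fix below.

	#include <linux/module.h>
	#include <linux/moduleparam.h>

	/* Exported read-only through sysfs, but modprobe/insmod can still seed it. */
	static unsigned long stats_current_allocated_bytes;
	module_param(stats_current_allocated_bytes, ulong, S_IRUGO);

	static int __init example_init(void)
	{
		/* Ignore any value supplied on the module command line. */
		stats_current_allocated_bytes = 0;
		return 0;
	}
	module_init(example_init);
	MODULE_LICENSE("GPL");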
2013-12-10dm bufio: initialize read-only module parametersMikulas Patocka
Some module parameters in dm-bufio are read-only. These parameters inform the user about memory consumption. They are not supposed to be changed by the user. However, despite being read-only, these parameters can be set on the modprobe or insmod command line, for example: modprobe dm-bufio current_allocated_bytes=12345 The kernel doesn't expect these variables to be non-zero at module initialization, and if the user sets them, it results in a BUG. This patch initializes the variables in the module init routine so that any user-supplied values are ignored. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org # 3.2+
2013-12-10dm cache: actually resize cacheVincent Pelletier
Commit f494a9c6b1b6dd9a9f21bbb75d9210d478eeb498 ("dm cache: cache shrinking support") broke cache resizing support. dm_cache_resize() is called with cache->cache_size before it gets updated to new_size, so it is a no-op. But the dm-cache superblock is updated with the new_size even though the backing dm-array is not resized. Fix this by passing the new_size to dm_cache_resize(). Signed-off-by: Vincent Pelletier <plr.vincent@gmail.com> Acked-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2013-12-10dm cache: update Documentation for invalidate_cblocks's range syntaxMike Snitzer
The cache target's invalidate_cblocks message allows cache block (cblock) ranges to be expressed with: <cblock start>-<cblock end> The range's <cblock end> value is "one past the end", so the range includes <cblock start> through <cblock end>-1. Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com>
2013-12-10dm cache policy mq: fix promotions to occur as expectedJoe Thornber
Micro benchmarks that repeatedly issued IO to a single block were failing to cause a promotion from the origin device to the cache. Fix this by not updating the stats during map() if -EWOULDBLOCK will be returned. The mq policy will only update stats, consider migration, etc, once per tick period (a unit of time established between dm-cache core and the policies). When the IO thread calls the policy's map method, if it would like to migrate the associated block it returns -EWOULDBLOCK, the IO then gets handed over to a worker thread which handles the migration. The worker thread calls map again, to check the migration is still needed (avoids a race among other things). *BUT*, before this fix, if we were still in the same tick period the stats were already updated by the previous map call -- so the migration would no longer be requested. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2013-12-10dm thin: allow pool in read-only mode to transition to read-write modeJoe Thornber
A thin-pool may be in read-only mode because the pool's data or metadata space was exhausted. To allow for recovery, by adding more space to the pool, we must allow a pool to transition from PM_READ_ONLY to PM_WRITE mode. Otherwise, running out of space will render the pool permanently read-only. Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-12-10dm thin: re-establish read-only state when switching to fail modeJoe Thornber
If the thin-pool transitioned to fail mode and the thin-pool's table were reloaded for some reason: the new table's default pool mode would be read-write, though it will transition to fail mode during resume. When the pool mode transitions directly from PM_WRITE to PM_FAIL we need to re-establish the intermediate read-only state in both the metadata and persistent-data block manager (as is usually done with the normal pool mode transition sequence: PM_WRITE -> PM_READ_ONLY -> PM_FAIL). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-12-10dm thin: always fallback the pool mode if commit failsJoe Thornber
Rename commit_or_fallback() to commit(). Now all previous calls to commit() will trigger the pool mode to fallback if the commit fails. Also, check the error returned from commit() in alloc_data_block(). Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-12-10dm thin: switch to read-only mode if metadata space is exhaustedMike Snitzer
Switch the thin pool to read-only mode in alloc_data_block() if dm_pool_alloc_data_block() fails because the pool's metadata space is exhausted. Differentiate between data and metadata space in messages about no free space available. This issue was noticed with the device-mapper-test-suite using: dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/ The quantity of errors logged in this case must be reduced. before patch: device-mapper: thin: 253:4: reached low water mark for metadata device: sending event. device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed <snip ... these repeat for a _very_ long while ... > device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: 253:4: commit failed: error = -28 device-mapper: thin: 253:4: switching pool to read-only mode after patch: device-mapper: thin: 253:4: reached low water mark for metadata device: sending event. device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: 253:4: no free metadata space available. device-mapper: thin: 253:4: switching pool to read-only mode Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Cc: stable@vger.kernel.org
2013-12-10dm thin: switch to read only mode if a mapping insert failsJoe Thornber
Switch the thin pool to read-only mode when dm_thin_insert_block() fails since there is little reason to expect the cause of the failure to be resolved without further action by user space. This issue was noticed with the device-mapper-test-suite using: dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/ The quantity of errors logged in this case must be reduced. before patch: device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: dm_thin_insert_block() failed device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map metadata: unable to allocate new metadata block <snip ... these repeat for a long while ... > device-mapper: space map metadata: unable to allocate new metadata block device-mapper: space map common: dm_tm_shadow_block() failed device-mapper: thin: 253:4: no free metadata space available. device-mapper: thin: 253:4: switching pool to read-only mode after patch: device-mapper: space map metadata: unable to allocate new metadata block device-mapper: thin: 253:4: dm_thin_insert_block() failed: error = -28 device-mapper: thin: 253:4: switching pool to read-only mode Signed-off-by: Joe Thornber <ejt@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-12-10dm space map metadata: return on failure in sm_metadata_new_blockMike Snitzer
Commit 2fc48021f4afdd109b9e52b6eef5db89ca80bac7 ("dm persistent metadata: add space map threshold callback") introduced a regression to the metadata block allocation path that resulted in errors being ignored. This regression was uncovered by running the following device-mapper-test-suite test: dmtest run --suite thin-provisioning -n /exhausting_metadata_space_causes_fail_mode/ The ignored error codes in sm_metadata_new_block() could crash the kernel through use of either the dm-thin or dm-cache targets, e.g.: device-mapper: thin: 253:4: reached low water mark for metadata device: sending event. device-mapper: space map metadata: unable to allocate new metadata block general protection fault: 0000 [#1] SMP ... Workqueue: dm-thin do_worker [dm_thin_pool] task: ffff880035ce2ab0 ti: ffff88021a054000 task.ti: ffff88021a054000 RIP: 0010:[<ffffffffa0331385>] [<ffffffffa0331385>] metadata_ll_load_ie+0x15/0x30 [dm_persistent_data] RSP: 0018:ffff88021a055a68 EFLAGS: 00010202 RAX: 003fc8243d212ba0 RBX: ffff88021a780070 RCX: ffff88021a055a78 RDX: ffff88021a055a78 RSI: 0040402222a92a80 RDI: ffff88021a780070 RBP: ffff88021a055a68 R08: ffff88021a055ba4 R09: 0000000000000010 R10: 0000000000000000 R11: 00000002a02e1000 R12: ffff88021a055ad4 R13: 0000000000000598 R14: ffffffffa0338470 R15: ffff88021a055ba4 FS: 0000000000000000(0000) GS:ffff88033fca0000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b CR2: 00007f467c0291b8 CR3: 0000000001a0b000 CR4: 00000000000007e0 Stack: ffff88021a055ab8 ffffffffa0332020 ffff88021a055b30 0000000000000001 ffff88021a055b30 0000000000000000 ffff88021a055b18 0000000000000000 ffff88021a055ba4 ffff88021a055b98 ffff88021a055ae8 ffffffffa033304c Call Trace: [<ffffffffa0332020>] sm_ll_lookup_bitmap+0x40/0xa0 [dm_persistent_data] [<ffffffffa033304c>] sm_metadata_count_is_more_than_one+0x8c/0xc0 [dm_persistent_data] [<ffffffffa0333825>] dm_tm_shadow_block+0x65/0x110 [dm_persistent_data] [<ffffffffa0331b00>] sm_ll_mutate+0x80/0x300 [dm_persistent_data] [<ffffffffa0330e60>] ? set_ref_count+0x10/0x10 [dm_persistent_data] [<ffffffffa0331dba>] sm_ll_inc+0x1a/0x20 [dm_persistent_data] [<ffffffffa0332270>] sm_disk_new_block+0x60/0x80 [dm_persistent_data] [<ffffffff81520036>] ? down_write+0x16/0x40 [<ffffffffa001e5c4>] dm_pool_alloc_data_block+0x54/0x80 [dm_thin_pool] [<ffffffffa001b23c>] alloc_data_block+0x9c/0x130 [dm_thin_pool] [<ffffffffa001c27e>] provision_block+0x4e/0x180 [dm_thin_pool] [<ffffffffa001fe9a>] ? dm_thin_find_block+0x6a/0x110 [dm_thin_pool] [<ffffffffa001c57a>] process_bio+0x1ca/0x1f0 [dm_thin_pool] [<ffffffff8111e2ed>] ? mempool_free+0x8d/0xa0 [<ffffffffa001d755>] process_deferred_bios+0xc5/0x230 [dm_thin_pool] [<ffffffffa001d911>] do_worker+0x51/0x60 [dm_thin_pool] [<ffffffff81067872>] process_one_work+0x182/0x3b0 [<ffffffff81068c90>] worker_thread+0x120/0x3a0 [<ffffffff81068b70>] ? manage_workers+0x160/0x160 [<ffffffff8106eb2e>] kthread+0xce/0xe0 [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70 [<ffffffff8152af6c>] ret_from_fork+0x7c/0xb0 [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70 [<ffffffff8152af6c>] ret_from_fork+0x7c/0xb0 [<ffffffff8106ea60>] ? kthread_freezable_should_stop+0x70/0x70 Signed-off-by: Mike Snitzer <snitzer@redhat.com> Acked-by: Joe Thornber <ejt@redhat.com> Cc: stable@vger.kernel.org # v3.10+
2013-12-10dm table: fail dm_table_create on dm_round_up overflowMikulas Patocka
The dm_round_up function may overflow to zero. In this case, dm_table_create() must fail rather than go on to allocate an empty array with alloc_targets(). This fixes a possible memory corruption that could be caused by passing too large a number in "param->target_count". Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
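A small user-space model of the failure mode (illustrative constants; dm_round_up itself works on sizes derived from param->target_count):

	#include <stdio.h>
	#include <stddef.h>

	/* Rounding up with unsigned arithmetic wraps to 0 for large enough n. */
	static size_t round_up_size(size_t n, size_t align)
	{
		return ((n + align - 1) / align) * align;
	}

	int main(void)
	{
		size_t n = (size_t)-8;		/* e.g. a huge target_count * entry size */
		size_t rounded = round_up_size(n, 16);

		if (rounded == 0 || rounded < n) {
			fprintf(stderr, "round-up overflowed: refuse to create the table\n");
			return 1;	/* the fix: error out instead of allocating an empty array */
		}
		return 0;
	}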
2013-12-10dm snapshot: avoid snapshot space leak on crashMikulas Patocka
There is a possible leak of snapshot space in case of crash. The reason for space leaking is that chunks in the snapshot device are allocated sequentially, but they are finished (and stored in the metadata) out of order, depending on the order in which copying finished. For example, suppose that the metadata contains the following records SUPERBLOCK METADATA (blocks 0 ... 250) DATA 0 DATA 1 DATA 2 ... DATA 250 Now suppose that you allocate 10 new data blocks 251-260. Suppose that copying of these blocks finishes out of order (block 260 finished first and block 251 finished last). Now, the snapshot device looks like this: SUPERBLOCK METADATA (blocks 0 ... 250, 260, 259, 258, 257, 256) DATA 0 DATA 1 DATA 2 ... DATA 250 DATA 251 DATA 252 DATA 253 DATA 254 DATA 255 METADATA (blocks 255, 254, 253, 252, 251) DATA 256 DATA 257 DATA 258 DATA 259 DATA 260 Now, if the machine crashes after writing the first metadata block but before writing the second metadata block, the space for areas DATA 250-255 is leaked; it contains no valid data and will never be used in the future. This patch makes dm-snapshot complete exceptions in the same order they were allocated, thus fixing this bug. Note: when backporting this patch to the stable kernel, change the version field in the following way: * if version in the stable kernel is {1, 11, 1}, change it to {1, 12, 0} * if version in the stable kernel is {1, 10, 0} or {1, 10, 1}, change it to {1, 10, 2} Userspace reads the version to determine if the bug was fixed, so the version change is needed. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Cc: stable@vger.kernel.org
2013-12-10ALSA: hda - Mute all aamix inputs as defaultTakashi Iwai
Not all channels have been initialized, so far, especially when aamix NID itself doesn't have amps but its leaves have. This patch fixes these holes. Otherwise you might get unexpected loopback inputs, e.g. from surround channels. Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-12-10Merge git://www.linux-watchdog.org/linux-watchdogLinus Torvalds
Pull watchdog fixes from Wim Van Sebroeck: "Drop the unnecessary miscdevice.h includes that we forgot in commit 487722cf2d66 ("watchdog: Get rid of MODULE_ALIAS_MISCDEV statements") and fix an oops for the sc1200_wdt driver" * git://www.linux-watchdog.org/linux-watchdog: sc1200_wdt: Fix oops watchdog: Drop unnecessary include of miscdevice.h
2013-12-10Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/egtvedt/linux-avr32Linus Torvalds
Pull AVR32 fixes from Hans-Christian Egtvedt. * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/egtvedt/linux-avr32: avr32: favr-32: clk_round_rate() can return a zero upon error avr32: remove deprecated IRQF_DISABLED cpufreq_ at32ap-cpufreq.c: Fix section mismatch avr32: pm: Fix section mismatch avr32: Kill CONFIG_MTD_PARTITIONS
2013-12-10Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linuxLinus Torvalds
Pull s390 fixes from Martin Schwidefsky: "One patch to increase the number of possible CPUs to 256, with the latest machine a single LPAR can have up to 101 CPUs. Plus a number of bug fixes, the clock_gettime patch fixes a regression added in the 3.13 merge window" * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: s390/time,vdso: fix clock_gettime for CLOCK_MONOTONIC s390/vdso: ectg gettime support for CLOCK_THREAD_CPUTIME_ID s390/vdso: fix access-list entry initialization s390: increase CONFIG_NR_CPUS limit s390/smp,sclp: fix size of sclp_cpu_info structure s390/sclp: replace uninitialized early_event_mask_sccb variable with sccb_early s390/dasd: fix memory leak caused by dangling references to request_queue
2013-12-10KEYS: correct alignment of system_certificate_list content in assembly fileHendrik Brueckner
Apart from data-type specific alignment constraints, there are also architecture-specific alignment requirements. For example, on s390 symbols must be on even addresses implying a 2-byte alignment. If the system_certificate_list_end symbol is on an odd address and if this address is loaded, the least-significant bit is ignored. As a result, the load_system_certificate_list() fails to load the certificates because of a wrong certificate length calculation. To be safe, align system_certificate_list on an 8-byte boundary. Also improve the length calculation of the system_certificate_list content. Introduce a system_certificate_list_size (8-byte aligned because of unsigned long) variable that stores the length. Let the linker calculate this size by introducing a start and end label for the certificate content. Signed-off-by: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by: David Howells <dhowells@redhat.com>
2013-12-10Ignore generated file kernel/x509_certificate_listRusty Russell
$ git status # On branch pending-rebases # Untracked files: # (use "git add <file>..." to include in what will be committed) # # kernel/x509_certificate_list nothing added to commit but untracked files present (use "git add" to track) $ Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: David Howells <dhowells@redhat.com>
2013-12-10ARM: OMAP2+: omap_device: add fail hook for runtime_pm when bad data is detectedNishanth Menon
Due to the cross dependencies between hwmod (for automanaged device information on OMAP) and dts node definitions, we can run into scenarios where the dts node is defined but its hwmod entry is yet to be added. In these cases: a) omap_device does not register a pm_domain (since it cannot find the hwmod entry). b) The driver does not know about (a) and does a pm_runtime_get_sync, which never fails. c) It then tries to do some operation on the device (such as reading the revision register as part of probe) without the clock or the adequate OMAP generic PM operations needed for enabling the module. This causes a crash such as that reported in: https://bugzilla.kernel.org/show_bug.cgi?id=66441 When 'ti,hwmod' is provided in the dt node, it is expected that the device will not function without OMAP's power automanagement. Hence, when we hit a fail condition (for example because the hwmod entry is not present), fail at the pm_domain level due to the lack of data and provide enough information for it to be fixed; this also allows the driver to take appropriate measures to prevent a crash. Reported-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Signed-off-by: Nishanth Menon <nm@ti.com> Acked-by: Kevin Hilman <khilman@linaro.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Kevin Hilman <khilman@linaro.org>
2013-12-10xfs: growfs overruns AGFL buffer on V4 filesystemsDave Chinner
This loop in xfs_growfs_data_private() is incorrect for V4 superblocks filesystems: for (bucket = 0; bucket < XFS_AGFL_SIZE(mp); bucket++) agfl->agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK); For V4 filesystems, we don't have a agfl header structure, and so XFS_AGFL_SIZE() returns an entire sector's worth of entries, which we then index from an offset into the sector. Hence: buffer overrun. This problem was introduced in 3.10 by commit 77c95bba ("xfs: add CRC checks to the AGFL") which changed the AGFL structure but failed to update the growfs code to handle the different structures. Fix it by using the correct offset into the buffer for both V4 and V5 filesystems. Cc: <stable@vger.kernel.org> Signed-off-by: Dave Chinner <dchinner@redhat.com> Reviewed-by: Jie Liu <jeff.liu@oracle.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit b7d961b35b3ab69609aeea93f870269cb6e7ba4d)
2013-12-10xfs: don't perform discard if the given range length is less than block sizeJie Liu
For the discard operation, we should return EINVAL if the given range length is less than a block size; otherwise it will go through the file system and discard data blocks, as the end of the range might be evaluated to -1, e.g.: # fstrim -v -o 0 -l 100 /xfs7 /xfs7: 9811378176 bytes were trimmed This issue can be triggered via xfstests/generic/288. Also, getting the request queue pointer via bdev_get_queue() instead of a hard-coded pointer dereference is not a bad thing. Signed-off-by: Jie Liu <jeff.liu@oracle.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit f9fd0135610084abef6867d984e9951c3099950d)
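The check itself is tiny; an illustrative fragment (the field names follow the xfs mount/superblock layout and the fstrim_range ioctl structure, so treat the exact identifiers as assumptions):

	/* Reject trim requests shorter than one filesystem block before any work
	 * is done, instead of letting the end of the range wrap around. */
	if (range.len < mp->m_sb.sb_blocksize)
		return -EINVAL;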
2013-12-10xfs: underflow bug in xfs_attrlist_by_handle()Dan Carpenter
If we allocate less than sizeof(struct attrlist) then we end up corrupting memory or doing a ZERO_PTR_SIZE dereference. This can only be triggered with CAP_SYS_ADMIN. Reported-by: Nico Golde <nico@ngolde.de> Reported-by: Fabian Yamaguchi <fabs@goesec.de> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Ben Myers <bpm@sgi.com> (cherry picked from commit 071c529eb672648ee8ca3f90944bcbcc730b4c06)
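An illustrative version of the missing lower-bound check (the exact identifiers in xfs_attrlist_by_handle() are assumed):

	/* The user-supplied buffer length must at least cover the attrlist header
	 * and must not exceed the maximum transfer size. */
	if (al_hreq.buflen < sizeof(struct attrlist) ||
	    al_hreq.buflen > XATTR_LIST_MAX)
		return -EINVAL;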
2013-12-10Merge tag 'for-v3.13-rc/hwmod-fixes-a' of git://git.kernel.org/pub/scm/linux/kernel/git/pjw/omap-pending into fixesKevin Hilman
From Paul Walmsley: ARM: OMAP2+: hwmod code/data: fixes for v3.13-rc Fix a few hwmod code problems involving recovery with bad data and bad IP block OCP reset handling. Also, fix the hwmod data to enable IP block OCP reset for the OMAP USBHOST devices on OMAP3+. Basic build, boot, and PM tests are available here: http://www.pwsan.com/omap/testlogs/prcm_fixes_a_v3.13-rc/20131209030611/ * tag 'for-v3.13-rc/hwmod-fixes-a' of git://git.kernel.org/pub/scm/linux/kernel/git/pjw/omap-pending: ARM: OMAP2+: hwmod: Fix usage of invalid iclk / oclk when clock node is not present ARM: OMAP3: hwmod data: Don't prevent RESET of USB Host module ARM: OMAP2+: hwmod: Fix SOFTRESET logic ARM: OMAP4+: hwmod data: Don't prevent RESET of USB Host module Signed-off-by: Kevin Hilman <khilman@linaro.org>
2013-12-10ALSA: compress: Fix 64bit ABI incompatibilityTakashi Iwai
snd_pcm_uframes_t is defined as unsigned long so it would take different sizes depending on 32 or 64bit architectures. As we don't want this ABI incompatibility, and there is no real 64bit user yet, let's make it the fixed size with __u32. Also bump the protocol version number to 0.1.2. Acked-by: Vinod Koul <vinod.koul@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
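A sketch of the ABI rule (the field names are assumptions, not necessarily the exact snd_compr_tstamp layout):

	#include <linux/types.h>

	/* Fixed-width fields keep the structure the same size for 32-bit and
	 * 64-bit userspace; unsigned long would differ between the two. */
	struct example_compr_tstamp {
		__u32 byte_offset;
		__u32 copied_total;
		__u32 pcm_frames;	/* was snd_pcm_uframes_t (unsigned long) */
		__u32 pcm_io_frames;	/* was snd_pcm_uframes_t (unsigned long) */
		__u32 sampling_rate;
	};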
2013-12-10ALSA: memalloc.h - fix wrong truncation of dma_addr_tStefano Panella
When running a 32bit kernel the hda_intel driver is still reporting a 64bit dma_mask if the HW supports it. From sound/pci/hda/hda_intel.c: /* allow 64bit DMA address if supported by H/W */ if ((gcap & ICH6_GCAP_64OK) && !pci_set_dma_mask(pci, DMA_BIT_MASK(64))) pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(64)); else { pci_set_dma_mask(pci, DMA_BIT_MASK(32)); pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(32)); } which means when there is a call to dma_alloc_coherent from snd_malloc_dev_pages a machine address bigger than 32bit can be returned. This can be true in particular if running the 32bit kernel as a pv dom0 under the Xen Hypervisor or PAE on bare metal. The problem is that when calling setup_bdle to program the BLE the dma_addr_t returned from the dma_alloc_coherent is wrongly truncated by snd_sgbuf_get_addr if running a 32bit kernel: static inline dma_addr_t snd_sgbuf_get_addr(struct snd_dma_buffer *dmab, size_t offset) { struct snd_sg_buf *sgbuf = dmab->private_data; dma_addr_t addr = sgbuf->table[offset >> PAGE_SHIFT].addr; addr &= PAGE_MASK; return addr + offset % PAGE_SIZE; } where PAGE_MASK in a 32bit kernel zeroes the upper 32 bits of addr. Without this patch the HW will fetch the 32bit truncated address, which is not the one obtained from dma_alloc_coherent; this will result in non-working audio and can corrupt host memory at a random location. The current patch applies to v3.13-rc3-74-g6c843f5 Signed-off-by: Stefano Panella <stefano.panella@citrix.com> Reviewed-by: Frediano Ziglio <frediano.ziglio@citrix.com> Cc: <stable@vger.kernel.org> Signed-off-by: Takashi Iwai <tiwai@suse.de>
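A small user-space model of the truncation and of a mask that keeps the full address width (4096 as the page size is just an example):

	#include <stdint.h>
	#include <stdio.h>

	typedef uint64_t dma_addr_t;

	int main(void)
	{
		uint32_t page_mask_32 = ~(uint32_t)(4096 - 1);	/* models PAGE_MASK on a 32-bit kernel */
		dma_addr_t addr = 0x123456789000ULL;		/* DMA address above 4 GiB */

		dma_addr_t truncated = addr & page_mask_32;		/* upper 32 bits lost */
		dma_addr_t kept = addr & ~(dma_addr_t)(4096 - 1);	/* mask as wide as the address */

		printf("truncated=%#llx kept=%#llx\n",
		       (unsigned long long)truncated, (unsigned long long)kept);
		return 0;
	}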
2013-12-10ALSA: hda - Another Dell headset detection quirkHui Wang
On the Dell Inspiron 3045 machine (codec Subsystem Id: 0x10280628), no external microphone can be detected when plugging a 3-ring headset. If we add "model=dell-headset-multi" for the snd-hda-intel.ko, the problem will disappear. BugLink: https://bugs.launchpad.net/hwe-somerville/+bug/1259437 CC: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>
2013-12-10ALSA: hda - A Dell headset detection quirkHui Wang
On the Dell Optiplex 3030 machine (codec Subsystem Id: 0x10280623), no external microphone can be detected when plugging a 3-ring headset. If we add "model=dell-headset-multi" for the snd-hda-intel.ko, the problem will disappear. BugLink: https://bugs.launchpad.net/hwe-somerville/+bug/1259435 CC: David Henningsson <david.henningsson@canonical.com> Signed-off-by: Hui Wang <hui.wang@canonical.com> Signed-off-by: Takashi Iwai <tiwai@suse.de>