Age | Commit message | Author
2017-05-08drivers/misc/vmw_vmci/vmci_queue_pair.c: fix a couple integer overflow testsDan Carpenter
The "DIV_ROUND_UP(size, PAGE_SIZE)" operation can overflow if "size" is more than ULLONG_MAX - PAGE_SIZE. Link: http://lkml.kernel.org/r/20170322111950.GA11279@mwanda Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Jorgen Hansen <jhansen@vmware.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08kernel/hung_task.c: defer showing held locksTetsuo Handa
When I was running my testcase which may block hundreds of threads on fs locks, I got a lockup due to output from debug_show_all_locks() added by commit b2d4c2edb2e4 ("locking/hung_task: Show all locks"). For example, if 1000 threads were blocked in TASK_UNINTERRUPTIBLE state and 500 out of 1000 threads hold some lock, debug_show_all_locks() from the for_each_process_thread() loop will report locks held by 500 threads 1000 times. This is too much noise. In order to make sure rcu_lock_break() is called frequently, we should avoid calling debug_show_all_locks() from the for_each_process_thread() loop, because debug_show_all_locks() effectively runs a for_each_process_thread() loop itself. Let's defer calling debug_show_all_locks() until just before panic() or after leaving the for_each_process_thread() loop. Link: http://lkml.kernel.org/r/1489296834-60436-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Reviewed-by: Vegard Nossum <vegard.nossum@oracle.com> Cc: Ingo Molnar <mingo@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
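A sketch of the deferral described above (flag name follows the commit's description; surrounding code abbreviated):

    static bool hung_task_show_lock;

    /* in check_hung_task(), inside the for_each_process_thread() scan: */
    hung_task_show_lock = true;      /* note it, don't dump locks here */

    /* after leaving the scan (and likewise just before panic()): */
    if (hung_task_show_lock)
            debug_show_all_locks();  /* one dump; rcu_lock_break() stays effective */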
2017-05-08make help: add tools help targetRandy Dunlap
Add a top-level Makefile help target for Userspace tools. Also make each help "heading" end with a colon ':'. Link: http://lkml.kernel.org/r/55c986ff-3966-3e47-2984-7349da2cce51@infradead.org Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08jiffies.h: declare jiffies and jiffies_64 with ____cacheline_aligned_in_smpMatthias Kaehlcke
jiffies_64 is defined in kernel/time/timer.c with ____cacheline_aligned_in_smp, however this macro is not part of the declaration of jiffies and jiffies_64 in jiffies.h. As a result clang generates the following warning:

kernel/time/timer.c:57:26: error: section does not match previous declaration [-Werror,-Wsection]
__visible u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES;
include/linux/cache.h:39:36: note: expanded from macro '__cacheline_aligned_in_smp'
include/linux/cache.h:34:4: note: expanded from macro '__cacheline_aligned'
   __section__(".data..cacheline_aligned")))
include/linux/jiffies.h:77:12: note: previous attribute is here
extern u64 __jiffy_data jiffies_64;
include/linux/jiffies.h:70:38: note: expanded from macro '__jiffy_data'

Link: http://lkml.kernel.org/r/20170403190200.70273-1-mka@chromium.org Signed-off-by: Matthias Kaehlcke <mka@chromium.org> Cc: "Jason A . Donenfeld" <Jason@zx2c4.com> Cc: Grant Grundler <grundler@chromium.org> Cc: Michael Davidson <md@google.com> Cc: Greg Hackmann <ghackmann@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
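The corresponding one-line change, per the patch title (a sketch; the exact header layout is an assumption):

    /* include/linux/jiffies.h: give the declaration the same alignment
     * attribute the definition uses, instead of __jiffy_data */
    extern u64 ____cacheline_aligned_in_smp jiffies_64;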
2017-05-08drivers/virt/fsl_hypervisor.c: use get_user_pages_unlocked()Lorenzo Stoakes
Moving from get_user_pages() to get_user_pages_unlocked() simplifies the code and takes advantage of VM_FAULT_RETRY functionality when faulting in pages. Link: http://lkml.kernel.org/r/20161101194332.23961-1-lstoakes@gmail.com Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Mihai Caraman <mihai.caraman@freescale.com> Cc: Greg KH <gregkh@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08proc/sysctl: fix the int overflow for jiffies conversionGao Feng
do_proc_dointvec_jiffies_conv() uses LONG_MAX/HZ as the max value to avoid overflow. But *valp is actually of int type, so it still overflows. For example,

echo 2147483647 > ./sys/net/ipv4/tcp_keepalive_time

Then,

cat ./sys/net/ipv4/tcp_keepalive_time

The output is "-1", which is not expected. Now use INT_MAX/HZ as the max value instead of LONG_MAX/HZ to fix it. Link: http://lkml.kernel.org/r/1490109532-9228-1-git-send-email-fgao@ikuai8.com Signed-off-by: Gao Feng <fgao@ikuai8.com> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
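A simplified sketch of the clamping (the real do_proc_dointvec_jiffies_conv() also handles sign and the read path):

    /* *valp is an int: anything above INT_MAX / HZ overflows once
     * multiplied by HZ, so reject it at write time. */
    if (write) {
            if (*lvalp > INT_MAX / HZ)
                    return 1;        /* out of range */
            *valp = *lvalp * HZ;
    }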
2017-05-08fs/proc/inode.c: remove cast from memory allocationTobin C. Harding
Coccinelle emits this warning: WARNING: casting value returned by memory allocation function to (struct proc_inode *) is useless. Remove unnecessary cast. Link: http://lkml.kernel.org/r/1487745720-16967-1-git-send-email-me@tobin.cc Signed-off-by: Tobin C. Harding <me@tobin.cc> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
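The pattern in question (illustrative):

    /* before: the cast is redundant, kmem_cache_alloc() returns void * */
    ei = (struct proc_inode *)kmem_cache_alloc(proc_inode_cachep, GFP_KERNEL);

    /* after */
    ei = kmem_cache_alloc(proc_inode_cachep, GFP_KERNEL);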
2017-05-08mm, compaction: finish whole pageblock to reduce fragmentationVlastimil Babka
The main goal of direct compaction is to form a high-order page for allocation, but it should also help against long-term fragmentation when possible. Most lower-than-pageblock-order compactions are for non-movable allocations, which means that if we compact in a movable pageblock and terminate as soon as we create the high-order page, it's unlikely that the fallback heuristics will claim the whole block. Instead there might be a single unmovable page in a pageblock full of movable pages, and the next unmovable allocation might pick another pageblock and increase long-term fragmentation. To help against such scenarios, this patch changes the termination criteria for compaction so that the current pageblock is finished even if the high-order page already exists. Note that it is possible that the high-order page formed elsewhere in the zone due to parallel activity, but this patch doesn't try to detect that. This is only done with sync compaction, because async compaction is limited to pageblocks of the same migratetype, where it cannot result in a migratetype fallback. (Async compaction also eagerly skips order-aligned blocks where isolation fails, which is against the goal of migrating away as much of the pageblock as possible.) As a result of this patch, long-term memory fragmentation should be reduced. In testing based on 4.9 kernel with stress-highalloc from mmtests configured for order-4 GFP_KERNEL allocations, this patch has reduced the number of unmovable allocations falling back to movable pageblocks by 20%. Link: http://lkml.kernel.org/r/20170307131545.28577-9-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
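A rough sketch of the changed termination test (simplified; field and constant names approximate the compaction internals):

    /* in __compact_finished(), once a suitable free page is detected: */
    if (cc->direct_compaction && cc->mode != MIGRATE_ASYNC &&
        !IS_ALIGNED(cc->migrate_pfn, pageblock_nr_pages))
            return COMPACT_CONTINUE;   /* finish the current pageblock first */

    return COMPACT_SUCCESS;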
2017-05-08mm, compaction: restrict async compaction to pageblocks of same migratetypeVlastimil Babka
The migrate scanner in async compaction is currently limited to MIGRATE_MOVABLE pageblocks. This is a heuristic intended to reduce latency, based on the assumption that non-MOVABLE pageblocks are unlikely to contain movable pages. However, with the exception of THPs, most high-order allocations are not movable. Should the async compaction succeed, this increases the chance that the non-MOVABLE allocations will fall back to a MOVABLE pageblock, making the long-term fragmentation worse. This patch attempts to help the situation by changing async direct compaction so that the migrate scanner only scans the pageblocks of the requested migratetype. If it's a non-MOVABLE type and there are such pageblocks that do contain movable pages, chances are that the allocation can succeed within one of such pageblocks, removing the need for a fallback. If that fails, the subsequent sync attempt will ignore this restriction. In testing based on 4.9 kernel with stress-highalloc from mmtests configured for order-4 GFP_KERNEL allocations, this patch has reduced the number of unmovable allocations falling back to movable pageblocks by 30%. The number of movable allocations falling back is reduced by 12%. Link: http://lkml.kernel.org/r/20170307131545.28577-8-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08mm, compaction: add migratetype to compact_controlVlastimil Babka
Preparation patch. We are going to need migratetype at lower layers than compact_zone() and compact_finished(). Link: http://lkml.kernel.org/r/20170307131545.28577-7-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08mm, compaction: change migrate_async_suitable() to suitable_migration_source()Vlastimil Babka
Preparation for making the decisions more complex and depending on compact_control flags. No functional change. Link: http://lkml.kernel.org/r/20170307131545.28577-6-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08mm, page_alloc: count movable pages when stealing from pageblockVlastimil Babka
When stealing pages from pageblock of a different migratetype, we count how many free pages were stolen, and change the pageblock's migratetype if more than half of the pageblock was free. This might be too conservative, as there might be other pages that are not free, but were allocated with the same migratetype as our allocation requested. While we cannot determine the migratetype of allocated pages precisely (at least without the page_owner functionality enabled), we can count pages that compaction would try to isolate for migration - those are either on LRU or __PageMovable(). The rest can be assumed to be MIGRATE_RECLAIMABLE or MIGRATE_UNMOVABLE, which we cannot easily distinguish. This counting can be done as part of free page stealing with little additional overhead. The page stealing code is changed so that it considers free pages plus pages of the "good" migratetype for the decision whether to change pageblock's migratetype. The result should be more accurate migratetype of pageblocks wrt the actual pages in the pageblocks, when stealing from semi-occupied pageblocks. This should help the efficiency of page grouping by mobility. In testing based on 4.9 kernel with stress-highalloc from mmtests configured for order-4 GFP_KERNEL allocations, this patch has reduced the number of unmovable allocations falling back to movable pageblocks by 47%. The number of movable allocations falling back to other pageblocks are increased by 55%, but these events don't cause permanent fragmentation, so the tradeoff should be positive. Later patches also offset the movable fallback increase to some extent. [akpm@linux-foundation.org: merge fix] Link: http://lkml.kernel.org/r/20170307131545.28577-5-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
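The idea, as a simplified sketch (the real accounting lives in the page-stealing path and differs in detail; first_page and start_migratetype are placeholders for the stolen block and the requesting migratetype):

    int free_pages = 0, alike_pages = 0;

    for (pfn = block_start_pfn; pfn < block_end_pfn; pfn++) {
            struct page *page = pfn_to_page(pfn);

            if (PageBuddy(page)) {
                    free_pages += 1 << page_order(page);
                    pfn += (1 << page_order(page)) - 1;  /* skip the buddy */
            } else if (PageLRU(page) || __PageMovable(page)) {
                    alike_pages++;  /* migratable, counts toward a MOVABLE request */
            }
    }

    /* decide on free + alike pages, not free pages alone */
    if (free_pages + alike_pages > pageblock_nr_pages / 2)
            set_pageblock_migratetype(first_page, start_migratetype);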
2017-05-08mm, page_alloc: split smallest stolen page in fallbackVlastimil Babka
The __rmqueue_fallback() function is called when there's no free page of the requested migratetype, and we need to steal from a different one. There are various heuristics to make this event infrequent and reduce permanent fragmentation. The main one is to try stealing from a pageblock that has the most free pages, and possibly steal them all at once and convert the whole pageblock. Precise searching for such a pageblock would be expensive, so instead the heuristic walks the free lists from MAX_ORDER down to the requested order and assumes that the block with the highest-order free page is likely to also have the most free pages in total. Chances are that together with the highest-order page, we also steal pages of lower orders from the same block. But then we still split the highest-order page. This is wasteful and can contribute to fragmentation instead of avoiding it. This patch thus changes __rmqueue_fallback() to just steal the page(s) and put them on the freelist of the requested migratetype, and only report whether it was successful. Then we pick (and eventually split) the smallest page with __rmqueue_smallest(). This all happens under the zone lock, so nobody can steal it from us in the process. This should reduce fragmentation due to fallbacks. At worst we are only stealing a single highest-order page and waste some cycles by moving it between lists and then removing it, but fallback is not exactly a hot path so that should not be a concern. As a side benefit, the patch removes some duplicate code by reusing __rmqueue_smallest(). [vbabka@suse.cz: fix endless loop in the modified __rmqueue()] Link: http://lkml.kernel.org/r/59d71b35-d556-4fc9-ee2e-1574259282fd@suse.cz Link: http://lkml.kernel.org/r/20170307131545.28577-4-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
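The resulting allocation flow, roughly (simplified from __rmqueue(); real signatures and the CMA path are omitted):

    static struct page *__rmqueue(struct zone *zone, unsigned int order,
                                  int migratetype)
    {
            struct page *page;

    retry:
            page = __rmqueue_smallest(zone, order, migratetype);
            if (unlikely(!page) && __rmqueue_fallback(zone, order, migratetype))
                    goto retry;  /* stolen pages now sit on our freelist */

            return page;
    }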
2017-05-08mm, compaction: remove redundant watermark check in compact_finished()Vlastimil Babka
When detecting whether compaction has succeeded in forming a high-order page, __compact_finished() employs a watermark check, followed by its own search for a suitable page in the freelists. This is not ideal for two reasons:

 - The watermark check also searches high-order freelists, but has less strict criteria wrt fallback. It's therefore redundant and a waste of cycles. This was different in the past, when the high-order watermark check attempted to apply reserves to high-order pages.
 - The watermark check might actually fail due to a lack of order-0 pages. Compaction can't help with that, so there's no point in continuing because of it. It's possible that a high-order page already exists, in which case compaction terminates anyway.

This patch therefore removes the watermark check. This should save some cycles and terminate compaction sooner in some cases. Link: http://lkml.kernel.org/r/20170307131545.28577-3-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08mm, compaction: reorder fields in struct compact_controlVlastimil Babka
Patch series "try to reduce fragmenting fallbacks", v3. Last year, Johannes Weiner has reported a regression in page mobility grouping [1] and while the exact cause was not found, I've come up with some ways to improve it by reducing the number of allocations falling back to different migratetype and causing permanent fragmentation. The series was tested with mmtests stress-highalloc modified to do GFP_KERNEL order-4 allocations, on 4.9 with "mm, vmscan: fix zone balance check in prepare_kswapd_sleep" (without that, kcompactd indeed wasn't woken up) on UMA machine with 4GB memory. There were 5 repeats of each run, as the extfrag stats are quite volatile (note the stats below are sums, not averages, as it was less perl hacking for me). Success rate are the same, already high due to the low allocation order used, so I'm not including them. Compaction stats: (the patches are stacked, and I haven't measured the non-functional-changes patches separately) patch 1 patch 2 patch 3 patch 4 patch 7 patch 8 Compaction stalls 22449 24680 24846 19765 22059 17480 Compaction success 12971 14836 14608 10475 11632 8757 Compaction failures 9477 9843 10238 9290 10426 8722 Page migrate success 3109022 3370438 3312164 1695105 1608435 2111379 Page migrate failure 911588 1149065 1028264 1112675 1077251 1026367 Compaction pages isolated 7242983 8015530 7782467 4629063 4402787 5377665 Compaction migrate scanned 980838938 987367943 957690188 917647238 947155598 1018922197 Compaction free scanned 557926893 598946443 602236894 594024490 541169699 763651731 Compaction cost 10243 10578 10304 8286 8398 9440 Compaction stats are mostly within noise until patch 4, which decreases the number of compactions, and migrations. Part of that could be due to more pageblocks marked as unmovable, and async compaction skipping those. This changes a bit with patch 7, but not so much. Patch 8 increases free scanner stats and migrations, which comes from the changed termination criteria. Interestingly number of compactions decreases - probably the fully compacted pageblock satisfies multiple subsequent allocations, so it amortizes. Next comes the extfrag tracepoint, where "fragmenting" means that an allocation had to fallback to a pageblock of another migratetype which wasn't fully free (which is almost all of the fallbacks). I have locally added another tracepoint for "Page steal" into steal_suitable_fallback() which triggers in situations where we are allowed to do move_freepages_block(). If we decide to also do set_pageblock_migratetype(), it's "Pages steal with pageblock" with break down for which allocation migratetype we are stealing and from which fallback migratetype. The last part "due to counting" comes from patch 4 and counts the events where the counting of movable pages allowed us to change pageblock's migratetype, while the number of free pages alone wouldn't be enough to cross the threshold. patch 1 patch 2 patch 3 patch 4 patch 7 patch 8 Page alloc extfrag event 10155066 8522968 10164959 15622080 13727068 13140319 Extfrag fragmenting 10149231 8517025 10159040 15616925 13721391 13134792 Extfrag fragmenting for unmovable 159504 168500 184177 97835 70625 56948 Extfrag fragmenting unmovable placed with movable 153613 163549 172693 91740 64099 50917 Extfrag fragmenting unmovable placed with reclaim. 5891 4951 11484 6095 6526 6031 Extfrag fragmenting for reclaimable 4738 4829 6345 4822 5640 5378 Extfrag fragmenting reclaimable placed with movable 1836 1902 1851 1579 1739 1760 Extfrag fragmenting reclaimable placed with unmov. 
2902 2927 4494 3243 3901 3618 Extfrag fragmenting for movable 9984989 8343696 9968518 15514268 13645126 13072466 Pages steal 179954 192291 210880 123254 94545 81486 Pages steal with pageblock 22153 18943 20154 33562 29969 33444 Pages steal with pageblock for unmovable 14350 12858 13256 20660 19003 20852 Pages steal with pageblock for unmovable from mov. 12812 11402 11683 19072 17467 19298 Pages steal with pageblock for unmovable from recl. 1538 1456 1573 1588 1536 1554 Pages steal with pageblock for movable 7114 5489 5965 11787 10012 11493 Pages steal with pageblock for movable from unmov. 6885 5291 5541 11179 9525 10885 Pages steal with pageblock for movable from recl. 229 198 424 608 487 608 Pages steal with pageblock for reclaimable 689 596 933 1115 954 1099 Pages steal with pageblock for reclaimable from unmov. 273 219 537 658 547 667 Pages steal with pageblock for reclaimable from mov. 416 377 396 457 407 432 Pages steal with pageblock due to counting 11834 10075 7530 ... for unmovable 8993 7381 4616 ... for movable 2792 2653 2851 ... for reclaimable 49 41 63 What we can see is that "Extfrag fragmenting for unmovable" and "... placed with movable" drops with almost each patch, which is good as we are polluting less movable pageblocks with unmovable pages. The most significant change is patch 4 with movable page counting. On the other hand it increases "Extfrag fragmenting for movable" by 50%. "Pages steal" drops though, so these movable allocation fallbacks find only small free pages and are not allowed to steal whole pageblocks back. "Pages steal with pageblock" raises, because the patch increases the chances of pageblock migratetype changes to happen. This affects all migratetypes. The summary is that patch 4 is not a clear win wrt these stats, but I believe that the tradeoff it makes is a good one. There's less pollution of movable pageblocks by unmovable allocations. There's less stealing between pageblock, and those that remain have higher chance of changing migratetype also the pageblock itself, so it should more faithfully reflect the migratetype of the pages within the pageblock. The increase of movable allocations falling back to unmovable pageblock might look dramatic, but those allocations can be migrated by compaction when needed, and other patches in the series (7-9) improve that aspect. Patches 7 and 8 continue the trend of reduced unmovable fallbacks and also reduce the impact on movable fallbacks from patch 4. [1] https://www.spinics.net/lists/linux-mm/msg114237.html This patch (of 8): While currently there are (mostly by accident) no holes in struct compact_control (on x86_64), but we are going to add more bool flags, so place them all together to the end of the structure. While at it, just order all fields from largest to smallest. Link: http://lkml.kernel.org/r/20170307131545.28577-2-vbabka@suse.cz Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Mel Gorman <mgorman@techsingularity.net> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-05-08NFS append COMMIT after synchronous COPYOlga Kornievskaia
Instead of messing with the commit path, which has been causing issues, add a COMMIT op after the COPY and ask for stable copies in the first place. It saves a round trip, since after the COPY the client sends a COMMIT anyway. Signed-off-by: Olga Kornievskaia <kolga@netapp.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-05-08lockd: fix lockd shutdown raceJ. Bruce Fields
As reported by David Jeffery: "a signal was sent to lockd while lockd was shutting down from a request to stop nfs. The signal causes lockd to call restart_grace() which puts the lockd_net structure on the grace list. If this signal is received at the wrong time, it will occur after lockd_down_net() has called locks_end_grace() but before lockd_down_net() stops the lockd thread. This leads to lockd putting the lockd_net structure back on the grace list, then exiting without anything removing it from the list." So, perform the final locks_end_grace() from the lockd thread; this ensures it's serialized with respect to restart_grace(). Reported-by: David Jeffery <djeffery@redhat.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
2017-05-08net: mdio-mux: bcm-iproc: call mdiobus_free() in error pathJon Mason
If an error is encountered in mdio_mux_init(), the error path will call mdiobus_free(). Since mdiobus_register() has been called prior to mdio_mux_init(), the bus->state will not be MDIOBUS_UNREGISTERED. This causes a BUG_ON() in mdiobus_free(). To correct this issue, add an error path for mdio_mux_init() which calls mdiobus_unregister() prior to mdiobus_free(). Signed-off-by: Jon Mason <jon.mason@broadcom.com> Fixes: 98bc865a1ec8 ("net: mdio-mux: Add MDIO mux driver for iProc SoCs") Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
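Shape of the added error path (a sketch; surrounding driver code and names are illustrative):

    rc = mdio_mux_init(md->dev, mdio_mux_iproc_switch_fn,
                       &md->mux_handle, md);
    if (rc == 0)
            return 0;

    mdiobus_unregister(md->mii_bus);  /* back to MDIOBUS_UNREGISTERED */
    mdiobus_free(md->mii_bus);        /* no longer trips the BUG_ON() */
    return rc;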
2017-05-08ide: don't call memcpy with the same source and destinationMikulas Patocka
The parisc architecture recently reimplemented the memcpy function and their reimplementation crashed when source and destination overlapped. The crash happened in the function ide_complete_cmd where memcpy is called with the same source and destination pointer. According to the C specification, memcpy behavior is undefined if the source and destination ranges overlap. This patch fixes the undefined behavior. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: David S. Miller <davem@davemloft.net>
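The minimal guard this implies (a sketch of the pattern in ide_complete_cmd()):

    /* self-copy is undefined behaviour, so skip it when the buffers alias */
    if (cmd != orig_cmd)
            memcpy(orig_cmd, cmd, sizeof(*orig_cmd));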
2017-05-08ide: use setup_timerGeliang Tang
Use setup_timer() instead of init_timer() to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
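The conversion pattern (generic example; not necessarily this driver's exact fields):

    /* before */
    init_timer(&hwif->timer);
    hwif->timer.function = ide_timer_expiry;
    hwif->timer.data = (unsigned long)hwif;

    /* after */
    setup_timer(&hwif->timer, ide_timer_expiry, (unsigned long)hwif);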
2017-05-08net: ethernet: ti: cpsw: adjust cpsw fifos depth for fullduplex flow controlGrygorii Strashko
When users set flow control using ethtool, the bits are set properly in the CPGMAC_SL MACCONTROL register, but the FIFO depth in the respective Port n Maximum FIFO Blocks (Pn_MAX_BLKS) registers remains set to the minimum-size reset value. When receive flow control is enabled on a port, the port's associated FIFO block allocation must be adjusted. The port RX allocation must increase to accommodate the flow control runout. The TRM recommends values of 5 or 6. Hence, apply the required Port FIFO configuration of Pn_MAX_BLKS.Pn_TX_MAX_BLKS=0xF and Pn_MAX_BLKS.Pn_RX_MAX_BLKS=0x5 during interface initialization. Cc: Schuyler Patton <spatton@ti.com> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08ipv6: reorder ip6_route_dev_notifier after ipv6_dev_notfWANG Cong
For each netns (except init_net), we initialize its null entry in 3 places:

 1) the template itself, as we use kmemdup()
 2) code around dst_init_metrics() in ip6_route_net_init()
 3) ip6_route_dev_notify(), which is supposed to initialize it after loopback registers

Unfortunately the last one still happens in the wrong order, because we expect to initialize net->ipv6.ip6_null_entry->rt6i_idev to net->loopback_dev's idev, and thus we have to do that after we add the idev to loopback. However, this notifier has priority == 0, the same as ipv6_dev_notf, and ipv6_dev_notf is registered after ip6_route_dev_notifier, so it is actually called after ip6_route_dev_notifier. This is similar to commit 2f460933f58e ("ipv6: initialize route null entry in addrconf_init()") which fixes init_net. Fix it by picking a smaller priority for ip6_route_dev_notifier. Also, we have to release the refcnt accordingly when unregistering loopback_dev, because device exit functions are called before subsys exit functions. Acked-by: David Ahern <dsahern@gmail.com> Tested-by: David Ahern <dsahern@gmail.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
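A sketch of the reordering fix (the exact priority value is an assumption):

    static struct notifier_block ip6_route_dev_notifier = {
            .notifier_call = ip6_route_dev_notify,
            /* lower than addrconf's notifier, so we run after it and
             * loopback's idev exists by the time we take a reference */
            .priority = ADDRCONF_NOTIFY_PRIORITY - 10,
    };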
2017-05-08Merge tag 'mac80211-for-davem-2017-05-08' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211 David S. Miller
Johannes Berg says:

====================
A couple more fixes:

 * don't try to authenticate during reconfiguration, which causes drivers to get confused
 * fix a kernel-doc warning for a recently merged change
 * fix MU-MIMO group configuration (relevant only for monitor mode)
 * more rate flags fix: remove stray RX_ENC_FLAG_40MHZ
 * fix IBSS probe response allocation size
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08net: cdc_ncm: Fix TX zero paddingJim Baxter
The zero padding that is added to NTBs does not zero the memory correctly. This is because the skb_put modifies the value of skb_out->len, which results in the memset command not setting any memory to zero, as (ctx->tx_max - skb_out->len) == 0. I have resolved this by storing the size of the memory to be zeroed before the skb_put and using this in the memset call. Signed-off-by: Jim Baxter <jim_baxter@mentor.com> Reviewed-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
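The difference, as a sketch (the variable name n is illustrative):

    /* broken: skb_put() bumps skb_out->len first, so the length
     * expression evaluates to 0 and nothing is zeroed */
    memset(skb_put(skb_out, ctx->tx_max - skb_out->len),
           0, ctx->tx_max - skb_out->len);

    /* fixed: compute the padding size exactly once */
    n = ctx->tx_max - skb_out->len;
    memset(skb_put(skb_out, n), 0, n);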
2017-05-08Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvmLinus Torvalds
Pull KVM updates from Paolo Bonzini:
 "ARM:
   - HYP mode stub supports kexec/kdump on 32-bit
   - improved PMU support
   - virtual interrupt controller performance improvements
   - support for userspace virtual interrupt controller (slower, but necessary for KVM on the weird Broadcom SoCs used by the Raspberry Pi 3)

  MIPS:
   - basic support for hardware virtualization (ImgTec P5600/P6600/I6400 and Cavium Octeon III)

  PPC:
   - in-kernel acceleration for VFIO

  s390:
   - support for guests without storage keys
   - adapter interruption suppression

  x86:
   - usual range of nVMX improvements, notably nested EPT support for accessed and dirty bits
   - emulation of CPL3 CPUID faulting

  generic:
   - first part of VCPU thread request API
   - kvm_stat improvements"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (227 commits)
  kvm: nVMX: Don't validate disabled secondary controls
  KVM: put back #ifndef CONFIG_S390 around kvm_vcpu_kick
  Revert "KVM: Support vCPU-based gfn->hva cache"
  tools/kvm: fix top level makefile
  KVM: x86: don't hold kvm->lock in KVM_SET_GSI_ROUTING
  KVM: Documentation: remove VM mmap documentation
  kvm: nVMX: Remove superfluous VMX instruction fault checks
  KVM: x86: fix emulation of RSM and IRET instructions
  KVM: mark requests that need synchronization
  KVM: return if kvm_vcpu_wake_up() did wake up the VCPU
  KVM: add explicit barrier to kvm_vcpu_kick
  KVM: perform a wake_up in kvm_make_all_cpus_request
  KVM: mark requests that do not need a wakeup
  KVM: remove #ifndef CONFIG_S390 around kvm_vcpu_wake_up
  KVM: x86: always use kvm_make_request instead of set_bit
  KVM: add kvm_{test,clear}_request to replace {test,clear}_bit
  s390: kvm: Cpu model support for msa6, msa7 and msa8
  KVM: x86: remove irq disablement around KVM_SET_CLOCK/KVM_GET_CLOCK
  kvm: better MWAIT emulation for guests
  KVM: x86: virtualize cpuid faulting
  ...
2017-05-08Merge branch 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-armLinus Torvalds
Pull ARM updates from Russell King:
 "Lots of little things this time:

   - allow modules to be autoloaded according to the HWCAP feature bits (used primarily for crypto modules)
   - split module core and init PLT sections, since the core code and init code could be placed far apart, and the PLT sections need to be local to the code block.
   - three patches from Chris Brandt to allow Cortex-A9 L2 cache optimisations to be disabled where a SoC didn't wire up the out of band signals.
   - NoMMU compliance fixes, avoiding corruption of vector table which is not being used at this point, and avoiding possible register state corruption when switching mode.
   - fixmap memory attribute compliance update.
   - remove unnecessary locking from update_sections_early()
   - ftrace fix for DEBUG_RODATA with !FRAME_POINTER"

* 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8672/1: mm: remove tasklist locking from update_sections_early()
  ARM: 8671/1: V7M: Preserve registers across switch from Thread to Handler mode
  ARM: 8670/1: V7M: Do not corrupt vector table around v7m_invalidate_l1 call
  ARM: 8668/1: ftrace: Fix dynamic ftrace with DEBUG_RODATA and !FRAME_POINTER
  ARM: 8667/3: Fix memory attribute inconsistencies when using fixmap
  ARM: 8663/1: wire up HWCAP/HWCAP2 feature bits to the CPU modalias
  ARM: 8666/1: mm: dump: Add domain to output
  ARM: 8662/1: module: split core and init PLT sections
  ARM: 8661/1: dts: r7s72100: add l2 cache
  ARM: 8660/1: shmobile: r7s72100: Enable L2 cache
  ARM: 8659/1: l2c: allow CA9 optimizations to be disabled
2017-05-08Merge tag 'xtensa-20170507' of git://github.com/jcmvbkbc/linux-xtensaLinus Torvalds
Pull Xtensa updates from Max Filippov:

 - clearly mark references to spilled register locations with SPILL_SLOT macros
 - clean up xtensa ptrace: use generic tracehooks, move internal kernel definitions from uapi/asm to asm, make locally-used functions static, fix code style and alignment
 - use command line parameters passed to ISS as kernel command line.

* tag 'xtensa-20170507' of git://github.com/jcmvbkbc/linux-xtensa:
  xtensa: clean up access to spilled registers locations
  xtensa: use generic tracehooks
  xtensa: move internal ptrace definitions from uapi/asm to asm
  xtensa: clean up xtensa/kernel/ptrace.c
  xtensa: drop unused fast_io_protect function
  xtensa: use ITLB_HIT_BIT instead of hardcoded number
  xtensa: ISS: update kernel command line in platform_setup
  xtensa: ISS: add argc/argv simcall definitions
  xtensa: ISS: cleanup setup.c
2017-05-08Merge tag 'for-f2fs-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs Linus Torvalds
Pull f2fs updates from Jaegeuk Kim:
 "In this round, we've focused on enhancing performance with regards to block allocation, GC, and discard/in-place-update IO controls. There are a bunch of clean-ups as well as minor bug fixes.

  Enhancements:
   - disable heap-based allocation by default
   - issue small-sized discard commands by default
   - change the policy of data hotness for logging
   - distinguish IOs in terms of size and wbc type
   - start SSR earlier to avoid foreground GC
   - enhance data structures managing discard commands
   - enhance in-place update flow
   - add some more fault injection routines
   - secure one more xattr entry

  Bug fixes:
   - calculate victim cost for GC correctly
   - remain correct victim segment number for GC
   - race condition in nid allocator and initializer
   - stale pointer produced by atomic_writes
   - fix missing REQ_SYNC for flush commands
   - handle missing errors in more corner cases"

* tag 'for-f2fs-4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (111 commits)
  f2fs: fix a mount fail for wrong next_scan_nid
  f2fs: enhance scalability of trace macro
  f2fs: relocate inode_{,un}lock in F2FS_IOC_SETFLAGS
  f2fs: Make flush bios explicitely sync
  f2fs: show available_nids in f2fs/status
  f2fs: flush dirty nats periodically
  f2fs: introduce CP_TRIMMED_FLAG to avoid unneeded discard
  f2fs: allow cpc->reason to indicate more than one reason
  f2fs: release cp and dnode lock before IPU
  f2fs: shrink size of struct discard_cmd
  f2fs: don't hold cmd_lock during waiting discard command
  f2fs: nullify fio->encrypted_page for each writes
  f2fs: sanity check segment count
  f2fs: introduce valid_ipu_blkaddr to clean up
  f2fs: lookup extent cache first under IPU scenario
  f2fs: reconstruct code to write a data page
  f2fs: introduce __wait_discard_cmd
  f2fs: introduce __issue_discard_cmd
  f2fs: enable small discard by default
  f2fs: delay awaking discard thread
  ...
2017-05-08Merge branch 'stmmac-pci-fix-crash-on-Intel-Galileo-Gen2'David S. Miller
Andy Shevchenko says:

====================
stmmac: pci: Fix crash on Intel Galileo Gen2

Due to a misconfiguration of the PCI driver for Intel Quark, the user will get a kernel crash:

udhcpc: started, v1.26.2
stmmaceth 0000:00:14.6 eth0: device MAC address 98:4f:ee:05:ac:47
Generic PHY stmmac-a6:01: attached PHY driver [Generic PHY] (mii_bus:phy_addr=stmmac-a6:01, irq=-1)
stmmaceth 0000:00:14.6 eth0: IEEE 1588-2008 Advanced Timestamp supported
stmmaceth 0000:00:14.6 eth0: registered PTP clock
IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
udhcpc: sending discover
stmmaceth 0000:00:14.6 eth0: Link is Up - 100Mbps/Full - flow control off
IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: stmmac_xmit+0xf1/0x1080

Fix this by adding the necessary settings.

P.S. I split the fix into three patches according to what each of them adds.
====================

Tested-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08stmmac: pci: split out common_default_data() helperAndy Shevchenko
A new helper is added in order to prevent the misconfiguration that happened for one of the platforms when the configuration data was expanded. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08stmmac: pci: RX queue routing configurationAndy Shevchenko
The commit abe80fdc6ee6 ("net: stmmac: RX queue routing configuration") missed Intel Quark configuration. Append it here. Fixes: abe80fdc6ee6 ("net: stmmac: RX queue routing configuration") Cc: Joao Pinto <Joao.Pinto@synopsys.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08stmmac: pci: TX and RX queue priority configurationAndy Shevchenko
The commit a8f5102af2a7 ("net: stmmac: TX and RX queue priority configuration") missed Intel Quark configuration. Append it here. Fixes: a8f5102af2a7 ("net: stmmac: TX and RX queue priority configuration") Cc: Joao Pinto <Joao.Pinto@synopsys.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08stmmac: pci: set default number of rx and tx queuesAndy Shevchenko
The commit 26d6851fd24e ("net: stmmac: set default number of rx and tx queues in stmmac_pci") missed Intel Quark configuration. Append it here. Fixes: 26d6851fd24e ("net: stmmac: set default number of rx and tx queues in stmmac_pci") Cc: Joao Pinto <Joao.Pinto@synopsys.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Joao Pinto <jpinto@synopsys.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08vti: check nla_put_* return valueHangbin Liu
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
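The subject implies the usual netlink pattern (a sketch; the attribute set is abbreviated):

    static int vti_fill_info(struct sk_buff *skb, const struct net_device *dev)
    {
            struct ip_tunnel *t = netdev_priv(dev);

            if (nla_put_be32(skb, IFLA_VTI_IKEY, t->parms.i_key) ||
                nla_put_be32(skb, IFLA_VTI_OKEY, t->parms.o_key))
                    return -EMSGSIZE;  /* was silently ignored before */

            return 0;
    }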
2017-05-08bpf: don't let ldimm64 leak map addresses on unprivilegedDaniel Borkmann
The patch fixes two things at once: 1) It checks the env->allow_ptr_leaks and only prints the map address to the log if we have the privileges to do so, otherwise it just dumps 0 as we would when kptr_restrict is enabled on %pK. Given the latter is off by default and not every distro sets it, I don't want to rely on this, hence the 0 by default for unprivileged. 2) Printing of ldimm64 in the verifier log is currently broken in that we don't print the full immediate, but only the 32 bit part of the first insn part for ldimm64. Thus, fix this up as well; it's okay to access, since we verified all ldimm64 earlier already (including just constants) through replace_map_fd_with_map_ptr(). Fixes: 1be7f75d1668 ("bpf: enable non-root eBPF programs") Fixes: cbd357008604 ("bpf: verifier (add ability to receive verification log)") Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
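A sketch of the printing logic after the fix (simplified from the verifier internals):

    if (insn->code == (BPF_LD | BPF_IMM | BPF_DW)) {
            /* the full 64-bit immediate spans two insns */
            u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;

            if (!env->allow_ptr_leaks)
                    imm = 0;  /* don't leak map addresses to unprivileged */

            verbose("(%02x) r%d = 0x%llx\n", insn->code, insn->dst_reg, imm);
    }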
2017-05-08yam: use memdup_userGeliang Tang
Use memdup_user() helper instead of open-coding to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
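The generic before/after for this helper:

    /* before: open-coded */
    buf = kmalloc(len, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;
    if (copy_from_user(buf, user_ptr, len)) {
            kfree(buf);
            return -EFAULT;
    }

    /* after: memdup_user() does both steps */
    buf = memdup_user(user_ptr, len);
    if (IS_ERR(buf))
            return PTR_ERR(buf);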
2017-05-08net/hippi/rrunner: use memdup_userGeliang Tang
Use memdup_user() helper instead of open-coding to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08cxgb4: avoid disabling FEC by defaultGanesh Goudar
Recent Chelsio firmware started using a few port capability bits to manage FEC, and as the driver was not aware of the FEC changes, those bits were zeroed, consequently disabling FEC. Avoid zeroing those bits and default to whatever the firmware tells us the Link is currently advertising. Signed-off-by: Ganesh Goudar <ganeshgr@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08net: dsa: loop: Check for memory allocation failureChristophe Jaillet
If 'devm_kzalloc' fails, a NULL pointer will be dereferenced. Return -ENOMEM instead, as done for some other memory allocation just a few lines above. Fixes: 98cd1552ea27 ("net: dsa: Mock-up driver") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
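The shape of the fix (a sketch based on the description):

    ps = devm_kzalloc(&mdiodev->dev, sizeof(*ps), GFP_KERNEL);
    if (!ps)
            return -ENOMEM;  /* previously dereferenced unchecked */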
2017-05-08bonding: check nla_put_be32 return valueHangbin Liu
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-05-08ubi: Add debugfs file for tracking PEB stateBen Shelton
Add a file under debugfs to allow easy access to the erase count for each physical erase block on a UBI device. This is useful when debugging data integrity issues with UBIFS on NAND flash devices. Signed-off-by: Ben Shelton <ben.shelton@ni.com> Signed-off-by: Zach Brown <zach.brown@ni.com>

v2:
 * If ubi_io_is_bad() fails, eraseblk_count_seq_show() just returns the err.
 * If ubi->lookuptbl returns NULL, it's no longer treated as an error; instead, info for that block is not printed.
 * Removed check for UBI_MAX_ERASECOUNTER since it is impossible to hit.
 * Removed block state from the print: if a block is printed then it is good, and if it is not printed, then it is bad.

v3:
 * Removed errant '!' symbol from the if statement checking whether the erase count is valid.

Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubifs: Fix a typo in comment of ioctl2ubifs & ubifs2ioctlRock Lee
Change 'convert' to 'converts'. Change 'UBIFS' to 'UBIFS inode flags'. Signed-off-by: Rock Lee <rockdotlee@gmail.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubifs: Remove unnecessary assignmentStefan Agner
Assigning a value of a variable to itself is not useful. Signed-off-by: Stefan Agner <stefan@agner.ch> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubifs: Fix cut and paste error on sb type comparisonsColin Ian King
The check for the bad node type of sb->type is checking sa->type and not sb->type. This looks like a cut and paste error. Fix this. Detected by PVS-Studio, warning: V581. Signed-off-by: Colin Ian King <colin.king@canonical.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubi: fastmap: Fix slab corruptionRabin Vincent
Booting with UBI fastmap and SLUB debugging enabled results in the following splats. The problem is that ubi_scan_fastmap() moves the fastmap blocks from the scan_ai (allocated in scan_fast()) to the ai allocated in ubi_attach(). This results in two problems:

 - When the scan_ai is freed, aebs which were allocated from its slab cache are still in use.
 - When the other ai is being destroyed in destroy_ai(), the arguments to the kmem_cache_free() call are incorrect since aebs on its ->fastmap list were allocated with a slab cache from a different ai.

Fix this by making a copy of the aebs in ubi_scan_fastmap() instead of moving them.

=============================================================================
BUG ubi_aeb_slab_cache (Not tainted): Objects remaining in ubi_aeb_slab_cache on __kmem_cache_shutdown()
-----------------------------------------------------------------------------
INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000080
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<8026c47c>] (slab_err+0x78/0x88)
[<8026c47c>] (slab_err) from [<802735bc>] (__kmem_cache_shutdown+0x180/0x3e0)
[<802735bc>] (__kmem_cache_shutdown) from [<8024e13c>] (shutdown_cache+0x1c/0x60)
[<8024e13c>] (shutdown_cache) from [<8024ed64>] (kmem_cache_destroy+0x19c/0x20c)
[<8024ed64>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
[<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
[<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
INFO: Object 0xb33d7e88 @offset=3720
INFO: Allocated in scan_peb+0x608/0x81c age=72 cpu=1 pid=118
	kmem_cache_alloc+0x3b0/0x43c
	scan_peb+0x608/0x81c
	ubi_attach+0x124/0x450
	ubi_attach_mtd_dev+0x60c/0xff8
	ctrl_cdev_ioctl+0x110/0x2b8
	do_vfs_ioctl+0xac/0xa00
	SyS_ioctl+0x3c/0x64
	ret_fast_syscall+0x0/0x1c
kmem_cache_destroy ubi_aeb_slab_cache: Slab cache still has objects
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<8024ed80>] (kmem_cache_destroy+0x1b8/0x20c)
[<8024ed80>] (kmem_cache_destroy) from [<8057cc14>] (destroy_ai+0x1dc/0x1e8)
[<8057cc14>] (destroy_ai) from [<8057f04c>] (ubi_attach+0x3f4/0x450)
[<8057f04c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
cache_from_obj: Wrong slab cache. ubi_aeb_slab_cache but object is from ubi_aeb_slab_cache
------------[ cut here ]------------
WARNING: CPU: 1 PID: 118 at mm/slab.h:354 kmem_cache_free+0x39c/0x450
Modules linked in:
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<80120e40>] (__warn+0xf4/0x10c)
[<80120e40>] (__warn) from [<80120f20>] (warn_slowpath_null+0x28/0x30)
[<80120f20>] (warn_slowpath_null) from [<80271fe0>] (kmem_cache_free+0x39c/0x450)
[<80271fe0>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
[<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
[<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
---[ end trace 2bd8396277fd0a0b ]---
=============================================================================
BUG ubi_aeb_slab_cache (Tainted: G B W ): page slab pointer corrupt.
-----------------------------------------------------------------------------
INFO: Allocated in scan_peb+0x608/0x81c age=104 cpu=1 pid=118
	kmem_cache_alloc+0x3b0/0x43c
	scan_peb+0x608/0x81c
	ubi_attach+0x124/0x450
	ubi_attach_mtd_dev+0x60c/0xff8
	ctrl_cdev_ioctl+0x110/0x2b8
	do_vfs_ioctl+0xac/0xa00
	SyS_ioctl+0x3c/0x64
	ret_fast_syscall+0x0/0x1c
INFO: Slab 0xbfd2da3c objects=17 used=1 fp=0xb33d7748 flags=0x40000081
INFO: Object 0xb33d7e88 @offset=3720 fp=0xb33d7da0
Redzone b33d7e80: cc cc cc cc cc cc cc cc                          ........
Object  b33d7e88: 02 00 00 00 01 00 00 00 00 f0 ff 7f ff ff ff ff  ................
Object  b33d7e98: 00 00 00 00 00 00 00 00 bd 16 00 00 00 00 00 00  ................
Object  b33d7ea8: 00 01 00 00 00 02 00 00 00 00 00 00 00 00 00 00  ................
Redzone b33d7eb8: cc cc cc cc                                      ....
Padding b33d7f60: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
CPU: 1 PID: 118 Comm: ubiattach Tainted: G B W 4.9.15 #3
[<80111910>] (unwind_backtrace) from [<8010d498>] (show_stack+0x18/0x1c)
[<8010d498>] (show_stack) from [<804a3274>] (dump_stack+0xb4/0xe0)
[<804a3274>] (dump_stack) from [<80271770>] (free_debug_processing+0x320/0x3c4)
[<80271770>] (free_debug_processing) from [<80271ad0>] (__slab_free+0x2bc/0x430)
[<80271ad0>] (__slab_free) from [<80272024>] (kmem_cache_free+0x3e0/0x450)
[<80272024>] (kmem_cache_free) from [<8057cb88>] (destroy_ai+0x150/0x1e8)
[<8057cb88>] (destroy_ai) from [<8057ef1c>] (ubi_attach+0x2c4/0x450)
[<8057ef1c>] (ubi_attach) from [<8056fe70>] (ubi_attach_mtd_dev+0x60c/0xff8)
[<8056fe70>] (ubi_attach_mtd_dev) from [<80571d78>] (ctrl_cdev_ioctl+0x110/0x2b8)
[<80571d78>] (ctrl_cdev_ioctl) from [<8029c77c>] (do_vfs_ioctl+0xac/0xa00)
[<8029c77c>] (do_vfs_ioctl) from [<8029d10c>] (SyS_ioctl+0x3c/0x64)
[<8029d10c>] (SyS_ioctl) from [<80108860>] (ret_fast_syscall+0x0/0x1c)
FIX ubi_aeb_slab_cache: Object at 0xb33d7e88 not freed

Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubifs: Add CONFIG_UBIFS_FS_SECURITY to disable/enable security labelsHyunchul Lee
When the write syscall is called, the security label is searched every time to determine whether the file's privileges should be changed. If an LSM (Linux Security Module) is not used, this is useless. So introduce CONFIG_UBIFS_FS_SECURITY to disable security labels. Its default value is "y". Signed-off-by: Hyunchul Lee <cheol.lee@lge.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubi: Make mtd parameter readableAndy Shevchenko
Fix the permissions to allow reading the mtd parameter back (only for the owner). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
2017-05-08ubi: Fix section mismatchAndy Shevchenko
WARNING: vmlinux.o(.text+0x1f2a80): Section mismatch in reference from the variable __param_ops_mtd to the function .init.text:ubi_mtd_param_parse() The function __param_ops_mtd() references the function __init ubi_mtd_param_parse(). This is often because __param_ops_mtd lacks a __init annotation or the annotation of ubi_mtd_param_parse is wrong. Cc: Richard Weinberger <richard@nod.at> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Richard Weinberger <richard@nod.at>
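The usual fix for this class of warning, and presumably the one applied here (an assumption): drop the __init annotation, since the param ops can invoke the parser after init memory is discarded:

    /* before: referenced from __param_ops_mtd, which lives past init */
    static int __init ubi_mtd_param_parse(const char *val,
                                          struct kernel_param *kp);

    /* after: no __init, so the reference is valid at any time */
    static int ubi_mtd_param_parse(const char *val,
                                   struct kernel_param *kp);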
2017-05-08bnxt_en: allocate enough space for ->ntp_fltr_bmapDan Carpenter
We have the number of longs, but we need to calculate the number of bytes required. Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
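The distinction, as a sketch (BITS_TO_LONGS() is the standard macro; the constant name follows the driver):

    /* BITS_TO_LONGS() yields a count of longs, not bytes */
    bp->ntp_fltr_bmap = kzalloc(BITS_TO_LONGS(BNXT_NTP_FLTR_MAX_FLTR) *
                                sizeof(long), GFP_KERNEL);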
2017-05-08qlge: Avoid reading past end of bufferKees Cook
Using memcpy() from a string that is shorter than the length copied means the destination buffer is being filled with arbitrary data from the kernel rodata segment. Instead, use strncpy() which will fill the trailing bytes with zeros. This was found with the future CONFIG_FORTIFY_SOURCE feature. Cc: Daniel Micay <danielmicay@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
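The generic shape of such a fix (field and string are illustrative):

    /* before: reads sizeof(...) bytes from a shorter string literal,
     * copying trailing rodata into the buffer */
    memcpy(seg_hdr->description, "Test Regs", sizeof(seg_hdr->description));

    /* after: stops at the NUL and zero-fills the rest */
    strncpy(seg_hdr->description, "Test Regs",
            sizeof(seg_hdr->description) - 1);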