2018-10-16  netfilter: xt_nat: fix DNAT target for shifted portmap ranges  (Paolo Abeni)
The commit 2eb0f624b709 ("netfilter: add NAT support for shifted portmap ranges") did not set the checkentry/destroy callbacks for the newly added DNAT target. As a result, rulesets using only such NAT targets are not effective, as the relevant conntrack hooks are not enabled. The above also affects nft_compat rulesets. Fix the issue by adding the missing initializers. Fixes: 2eb0f624b709 ("netfilter: add NAT support for shifted portmap ranges") Signed-off-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
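A minimal sketch of the shape of the fix, reusing the conntrack setup/teardown helpers the other xt_nat targets already use (the entry layout and the v2 target handler name are assumptions for illustration, not the exact upstream hunk):

	static struct xt_target xt_nat_target_reg[] __read_mostly = {
		/* ... */
		{
			.name		= "DNAT",
			.revision	= 2,
			.checkentry	= xt_nat_checkentry,	/* enables the conntrack hooks */
			.destroy	= xt_nat_destroy,	/* releases them on rule removal */
			.target		= xt_dnat_target_v2,	/* assumed name of the shifted-portmap DNAT handler */
			.targetsize	= sizeof(struct nf_nat_range2),
			.table		= "nat",
			.hooks		= (1 << NF_INET_PRE_ROUTING) | (1 << NF_INET_LOCAL_OUT),
			.me		= THIS_MODULE,
		},
		/* ... */
	};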
2018-10-16  Merge branch 'hns3-Some-cleanup-and-bugfix-for-desc-filling'  (David S. Miller)
Yunsheng Lin says: ==================== Some cleanup and bugfix for desc filling When retransmitting packets, skb_cow_head, which is called in hns3_set_tso, may clone a new header. And the driver will clear the checksum of the header after doing the DMA map, so the HW will read the old header whose L3 checksum is not cleared and calculate a wrong L3 checksum. Also, when sending a big fragment using multiple buffer descriptors, hns3 does one mapping but multiple unmappings when TX is done, which may cause an unmapping problem. This patchset does some cleanup before fixing the above problems. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  net: hns3: fix for multiple unmapping DMA problem  (Fuyun Liang)
When sending a big fragment using multiple buffer descriptors, hns3 does one mapping but multiple unmappings when TX is done, which may cause an unmapping problem. To fix it, this patch makes sure the value of desc_cb.length of the non-first BD is zero. If desc_cb.length is zero, we do not unmap the buffer. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
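A rough sketch of the unmap-side rule this implies (ring/descriptor structure and helper names are assumed for illustration): only the first BD of a fragment keeps a non-zero desc_cb.length, so only that BD is unmapped.

	static void hns3_unmap_buffer(struct hns3_enet_ring *ring,
				      struct hns3_desc_cb *cb)
	{
		/* non-first BDs of a big fragment have cb->length == 0:
		 * they never owned a mapping, so never unmap them */
		if (!cb->length)
			return;

		if (cb->type == DESC_TYPE_SKB)
			dma_unmap_single(ring_to_dev(ring), cb->dma,
					 cb->length, DMA_TO_DEVICE);
		else
			dma_unmap_page(ring_to_dev(ring), cb->dma,
				       cb->length, DMA_TO_DEVICE);
	}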
2018-10-16  net: hns3: rename hns_nic_dma_unmap  (Fuyun Liang)
To keep the naming symmetrical, this patch renames hns_nic_dma_unmap to hns3_clear_desc. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  net: hns3: add handling for big TX fragment  (Fuyun Liang)
This patch unifies big TX fragment handling for the TSO and non-TSO cases. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  net: hns3: move DMA map into hns3_fill_desc  (Peng Li)
To solve the L3 checksum error that occurs when the driver does not clear the L3 checksum, the DMA map should be done after calling skb_cow_head. This patch moves the DMA map into hns3_fill_desc to ensure that it is done after calling skb_cow_head. Fixes: 76ad4f0ee747 ("net: hns3: Add support of HNS3 Ethernet Driver for hip08 SoC") Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  net: hns3: remove hns3_fill_desc_tso  (Peng Li)
This patch removes hns3_fill_desc_tso in preparation for fixing a desc filling bug; for both the TSO and non-TSO cases we will use the unified hns3_fill_desc. Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com> Signed-off-by: Peng Li <lipeng321@huawei.com> Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  Merge branch 'qed-Align-PTT-and-add-various-link-modes'  (David S. Miller)
Rahul Verma says: ==================== Align PTT and add various link modes. This series aligns the PTT propagation as local PTT or global PTT, adds new transceiver modes, speed capabilities and board config, which are used to display the enhanced link modes, media types and speeds, and enhances the link output with detailed information. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  qed: Prevent link getting down in case of autoneg-off.  (Rahul Verma)
Newly added link modes need to be included when setting link modes. If a new link mode is not available during qed_set_link, the link may go down because an empty supported-capability mask is passed to the MFW after setting autoneg off/on with the current/supported speed. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  qede: Check available link modes before link set from ethtool.  (Rahul Verma)
Set the link mode after checking the available "supported" link caps of the port. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  qed: Add supported link and advertise link to display in ethtool.  (Rahul Verma)
The transceiver type, speed capability and board types added in the HSI are used to display accurate link information in ethtool. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  qed: Added supported transceiver modes, speed capability and board config to HSI.  (Rahul Verma)
Added transceiver modes with different speeds and media types, speed capability and supported board types to the HSI, which will be used to display the correct specification of link modes and speed type. Signed-off-by: Rahul Verma <Rahul.Verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  qed: Align local and global PTT to propagate through the APIs.  (Rahul Verma)
Align the use of the local PTT to propagate through the qed_mcp* APIs. The global PTT should not be used. Register access should be done through layers: a register address is mapped into a PTT (PF translation table). Several interface functions require a PTT for direct reads/writes of registers. A pool of PTTs is maintained, and several PTTs are used simultaneously to access device registers in different flows. The same PTT should not be used in flows that can run concurrently. To avoid running out of PTT resources, too many PTTs should not be acquired without releasing them. Every PF has a global PTT, which is used throughout the life of the PF in the most important flows for register access. Generic functions acquire the PTT locally and release it after use. This patch aligns the use of the global PTT and local PTT accordingly. Signed-off-by: Rahul Verma <rahul.verma@cavium.com> Signed-off-by: Ariel Elior <ariel.elior@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  net: aquantia: make function aq_fw2x_update_stats static  (YueHaibing)
Fixes the following sparse warning: drivers/net/ethernet/aquantia/atlantic/hw_atl/hw_atl_utils_fw2x.c:282:5: warning: symbol 'aq_fw2x_update_stats' was not declared. Should it be static? Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2018-10-16  sctp: get pr_assoc and pr_stream all status with SCTP_PR_SCTP_ALL instead  (Xin Long)
According to RFC 7496 section 4.3 or 4.4: sprstat_policy: This parameter indicates for which PR-SCTP policy the user wants the information. It is an error to use SCTP_PR_SCTP_NONE in sprstat_policy. If SCTP_PR_SCTP_ALL is used, the counters provided are aggregated over all supported policies. We change to dump all pr_assoc and pr_stream status via SCTP_PR_SCTP_ALL instead, and return an error for SCTP_PR_SCTP_NONE, as the RFC also says "It is an error to use SCTP_PR_SCTP_NONE in sprstat_policy." Fixes: 826d253d57b1 ("sctp: add SCTP_PR_ASSOC_STATUS on sctp sockopt") Fixes: d229d48d183f ("sctp: add SCTP_PR_STREAM_STATUS sockopt for prsctp") Reported-by: Ying Xu <yinxu@redhat.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
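A simplified sketch of the resulting policy handling in the getsockopt path (the loop bounds and counter-array layout are assumptions for illustration, not the exact upstream hunk):

	if (params.sprstat_policy == SCTP_PR_SCTP_NONE)
		return -EINVAL;		/* RFC 7496: error to ask for NONE */

	if (params.sprstat_policy == SCTP_PR_SCTP_ALL) {
		/* aggregate the counters over all supported PR-SCTP policies */
		params.sprstat_abandoned_sent = 0;
		params.sprstat_abandoned_unsent = 0;
		for (policy = 0; policy <= SCTP_PR_INDEX(MAX); policy++) {
			params.sprstat_abandoned_sent +=
				asoc->abandoned_sent[policy];
			params.sprstat_abandoned_unsent +=
				asoc->abandoned_unsent[policy];
		}
	}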
2018-10-16  Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc  (Greg Kroah-Hartman)
David writes: "Sparc fixes 1) Revert the %pOF change, it causes regressions. 2) Wire up io_pgetevents(). 3) Fix perf events on single-PCR sparc64 cpus. 4) Do proper perf event throttling like arm and x86." * git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc: Revert "sparc: Convert to using %pOFn instead of device_node.name" sparc64: Set %l4 properly on trap return after handling signals. sparc64: Make proc_id signed. sparc: Throttle perf events properly. sparc: Fix single-pcr perf event counter management. sparc: Wire up io_pgetevents system call. sunvdc: Remove VLA usage
2018-10-16  Merge tag 'selinux-pr-20181015' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux  (Greg Kroah-Hartman)
Paul writes: "SELinux fixes for v4.19 We've got one SELinux "fix" that I'd like to get into v4.19 if possible. I'm using double quotes on "fix" as this is just an update to the MAINTAINERS file and not a code change. From my perspective, MAINTAINERS updates generally don't warrant inclusion during the -rcX phase, but this is a change to the mailing list location so it seemed prudent to get this in before v4.19 is released" * tag 'selinux-pr-20181015' of git://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux: MAINTAINERS: update the SELinux mailing list location
2018-10-16  RDMA/ucma: Fix Spectre v1 vulnerability  (Gustavo A. R. Silva)
hdr.cmd can be indirectly controlled by user-space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability. This issue was detected with the help of Smatch: drivers/infiniband/core/ucma.c:1686 ucma_write() warn: potential spectre issue 'ucma_cmd_table' [r] (local cap) Fix this by sanitizing hdr.cmd before using it to index ucm_cmd_table. Notice that given that speculation windows are large, the policy is to kill the speculation on the first load and not worry if it can be completed with a dependent load/store [1]. [1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2 Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
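The mitigation follows the standard Spectre-v1 pattern: a bounds check followed by array_index_nospec() to clamp the index under speculation. Roughly (a sketch, not the exact hunk):

	#include <linux/nospec.h>

	if (hdr.cmd >= ARRAY_SIZE(ucma_cmd_table))
		return -EINVAL;

	/* clamp the user-controlled index before it is used to index the table */
	hdr.cmd = array_index_nospec(hdr.cmd, ARRAY_SIZE(ucma_cmd_table));

	ret = ucma_cmd_table[hdr.cmd](file, buf + sizeof(hdr), hdr.in, hdr.out);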
2018-10-16  f2fs: allow to mount, if quota is failed  (Jaegeuk Kim)
Allow the mount to proceed, since we can use the filesystem without quotas until the next boot. Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: update REQ_TIME in f2fs_cross_rename()  (Sahitya Tummala)
Update REQ_TIME in the missing path - f2fs_cross_rename(). Signed-off-by: Sahitya Tummala <stummala@codeaurora.org> [Jaegeuk Kim: add it in f2fs_rename()] Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: do not update REQ_TIME in case of error conditions  (Sahitya Tummala)
REQ_TIME should be updated only in success cases, as is done at all other places in the filesystem. Signed-off-by: Sahitya Tummala <stummala@codeaurora.org> Reviewed-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: remove unneeded disable_nat_bits()  (Chao Yu)
Commit 7735730d39d7 ("f2fs: fix to propagate error from __get_meta_page()") added disable_nat_bits() in the error path of __get_nat_bitmaps(), but it's unneeded: because we will fail the mount, we won't have a chance to change nid usage status without NAT full/empty bitmaps. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: remove unused sbi->trigger_ssr_threshold  (Chao Yu)
Commit a2a12b679f36 ("f2fs: export SSR allocation threshold") introduced two thresholds, .min_ssr_sections and .trigger_ssr_threshold, but only .min_ssr_sections is used, so just remove the redundant one for cleanup. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: shrink sbi->sb_lock coverage in set_file_temperature()  (Chao Yu)
file_set_{cold,hot} doesn't need to hold sbi->sb_lock, so move these calls out of the lock. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: use rb_*_cached friends  (Chao Yu)
As the rbtree supports caching the leftmost node natively, update the f2fs code to use the rb_*_cached helpers to speed up leftmost-node lookups. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: fix to recover cold bit of inode block during POR  (Chao Yu)
Testcase to reproduce this bug:
1. mkfs.f2fs /dev/sdd
2. mount -t f2fs /dev/sdd /mnt/f2fs
3. touch /mnt/f2fs/file
4. sync
5. chattr +A /mnt/f2fs/file
6. xfs_io -f /mnt/f2fs/file -c "fsync"
7. godown /mnt/f2fs
8. umount /mnt/f2fs
9. mount -t f2fs /dev/sdd /mnt/f2fs
10. chattr -A /mnt/f2fs/file
11. xfs_io -f /mnt/f2fs/file -c "fsync"
12. umount /mnt/f2fs
13. mount -t f2fs /dev/sdd /mnt/f2fs
14. lsattr /mnt/f2fs/file
-----------------N- /mnt/f2fs/file
But actually, the correct result we expect is:
-------A---------N- /mnt/f2fs/file
The reason is that in step 9) we missed recovering the cold bit flag in the inode block, so later, in fsync, we will skip writing the inode block due to the condition check below, resulting in losing data in another SPOR.
f2fs_fsync_node_pages()
	if (!IS_DNODE(page) || !is_cold_node(page))
		continue;
Note that I guess some non-dir inodes have already lost the cold bit during POR, so in order to re-enable recovery for those inodes, let's try to recover the cold bit in f2fs_iget() to save more fsynced data.
Fixes: c56675750d7c ("f2fs: remove unneeded set_cold_node()") Cc: <stable@vger.kernel.org> 4.17+ Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
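A sketch of what the recovery step in the inode read path can look like (placement and helper signatures are assumptions for illustration, not the exact upstream hunk):

	/* non-dir inodes are expected to carry the cold bit; if a previous
	 * POR lost it, restore it so fsync keeps writing the inode block */
	if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) {
		f2fs_wait_on_page_writeback(node_page, NODE, true);
		set_cold_node(node_page, false);
		set_page_dirty(node_page);
	}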
2018-10-16  f2fs: submit cached bio to avoid endless PageWriteback  (Chao Yu)
When migrating encrypted blocks from the background GC thread, we only add them to the f2fs internal bio cache but forget to submit the cached bio, which may cause a potential deadlock when we wait for the page to be written back. Fix it. Signed-off-by: Chao Yu <yuchao0@huawei.com> Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  f2fs: checkpoint disabling  (Daniel Rosenberg)
Note that this requires "f2fs: return correct errno in f2fs_gc". This adds a lightweight, non-persistent snapshotting scheme to f2fs. To use it, mount with the option checkpoint=disable, and to return to normal operation, remount with checkpoint=enable. If the filesystem is shut down before remounting with checkpoint=enable, it will revert to its apparent state from when it was first mounted with checkpoint=disable. This is useful for situations where you wish to be able to roll back the state of the disk in case of some critical failure. Signed-off-by: Daniel Rosenberg <drosen@google.com> [Jaegeuk Kim: use SB_RDONLY instead of MS_RDONLY] Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
2018-10-16  sx8: convert to blk-mq  (Jens Axboe)
Convert from the old request_fn style driver to blk-mq. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  z2ram: convert to blk-mq  (Jens Axboe)
Straightforward conversion to blk-mq; nothing special about this driver. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  gdrom: convert to blk-mq  (Jens Axboe)
Ditch the deferred list, lock, and workqueue handling. Just mark the set as blocking, so we are invoked from a workqueue already. Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  floppy: convert to blk-mq  (Omar Sandoval)
This driver likes to fetch requests from all over the place, so make queue_rq put requests on a list so that the logic stays the same. Tested with QEMU. Signed-off-by: Omar Sandoval <osandov@fb.com> Converted to blk_mq_init_sq_queue() and fixed a few spots where the tag_set leaked on cleanup. Signed-off-by: Jens Axboe <axboe@kernel.dk>
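The queue_rq-to-list pattern described above looks roughly like this (the lock, list and helper names are assumptions for illustration, not the exact driver code):

	static blk_status_t floppy_queue_rq(struct blk_mq_hw_ctx *hctx,
					    const struct blk_mq_queue_data *bd)
	{
		blk_mq_start_request(bd->rq);

		spin_lock_irq(&floppy_lock);
		/* park the request on a list so the existing state machine
		 * can keep fetching work the way request_fn used to */
		list_add_tail(&bd->rq->queuelist, &floppy_reqs);
		process_fd_request();		/* kick the old logic */
		spin_unlock_irq(&floppy_lock);

		return BLK_STS_OK;
	}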
2018-10-16  ataflop: convert to blk-mq  (Omar Sandoval)
This driver is already pretty broken, in that it has two wait_events() (one in stdma_lock()) in request_fn. Get rid of the first one by freezing/quiescing the queue on format, and the second one by replacing it with stdma_try_lock(). The rest is straightforward. Compile-tested only and probably incorrect. Cc: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Converted to blk_mq_init_sq_queue() Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  ataflop: fix error handling during setup  (Omar Sandoval)
Move queue allocation next to disk allocation to fix a couple of issues: - If add_disk() hasn't been called, we should clear disk->queue before calling put_disk(). - If we fail to allocate a request queue, we still need to put all of the disks, not just the ones that we allocated queues for. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  ataflop: fold headers into C file  (Omar Sandoval)
atafd.h and atafdreg.h are only used from ataflop.c, so merge them in there. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  amiflop: convert to blk-mq  (Omar Sandoval)
Straightforward conversion, just use the existing amiflop_lock to serialize access to the controller. Compile-tested only. Cc: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Converted to blk_mq_init_sq_queue() Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  amiflop: clean up on errors during setup  (Omar Sandoval)
The error handling in fd_probe_drives() doesn't clean up at all. Fix it up in preparation for converting to blk-mq. While we're here, get rid of the commented out amiga_floppy_remove(). Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  amiflop: fold headers into C file  (Omar Sandoval)
amifd.h and amifdreg.h are only used from amiflop.c, and they're pretty small, so move the contents to amiflop.c and get rid of the .h files. This is preparation for adding a struct blk_mq_tag_set to struct amiga_floppy_struct. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  swim3: convert to blk-mq  (Omar Sandoval)
Pretty simple conversion. grab_drive() could probably be replaced by some freeze/quiesce incantation, but I left it alone, and just used freeze/quiesce for eject. Compile-tested only. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Omar Sandoval <osandov@fb.com> Converted to blk_mq_init_sq_queue(). Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  swim3: add real error handling in setup  (Omar Sandoval)
The driver doesn't have support for removing a device that has already been configured, but with more careful ordering we can avoid the need for that and make sure that we don't leak generic resources. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  swim: convert to blk-mq  (Omar Sandoval)
The only interesting thing here is that there may be two floppies (i.e., request queues) sharing the same controller, so we use the global struct swim_priv->lock to check whether the controller is busy. Compile-tested only. Tested-by: Finn Thain <fthain@telegraphics.com.au> Acked-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Omar Sandoval <osandov@fb.com> Converted to blk_mq_init_sq_queue() Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  swim: fix cleanup on setup error  (Omar Sandoval)
If we fail to allocate the request queue for a disk, we still need to free that disk, not just the previous ones. Additionally, we need to cleanup the previous request queues. Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-10-16  locking/qspinlock, x86: Provide liveness guarantee  (Peter Zijlstra)
On x86 we cannot do fetch_or() with a single instruction and thus end up using a cmpxchg loop, which reduces determinism. Replace the fetch_or() with a composite operation: tas-pending + load. Using two instructions of course opens a window we previously did not have. Consider the scenario:

	CPU0		CPU1		CPU2

 1)	lock
	  trylock -> (0,0,1)

 2)			lock
			  trylock /* fail */

 3)	unlock -> (0,0,0)

 4)					lock
					  trylock -> (0,0,1)

 5)			  tas-pending -> (0,1,1)
			  load-val <- (0,1,0) from 3

 6)			  clear-pending-set-locked -> (0,0,1)

			  FAIL: _2_ owners

where 5) is our new composite operation. When we consider each part of the qspinlock state as a separate variable (as we can when _Q_PENDING_BITS == 8) then the above is entirely possible, because tas-pending will only RmW the pending byte, so the later load is able to observe prior tail and lock state (but not earlier than its own trylock, which operates on the whole word, due to coherence). To avoid this we need two things:
 - the load must come after the tas-pending (obviously, otherwise it can trivially observe prior state);
 - the tas-pending must be a full-word RmW instruction, it cannot be an XCHGB for example, such that we cannot observe other state prior to setting pending.
On x86 we can realize this by using "LOCK BTS m32, r32" for tas-pending followed by a regular load. Note that observing later state is not a problem:
 - if we fail to observe a later unlock, we'll simply spin-wait for that store to become visible;
 - if we observe a later xchg_tail(), there is no difference from that xchg_tail() having taken place before the tas-pending.
Suggested-by: Will Deacon <will.deacon@arm.com> Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: andrea.parri@amarulasolutions.com Cc: longman@redhat.com Fixes: 59fb586b4a07 ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath") Link: https://lkml.kernel.org/r/20181003130957.183726335@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16  x86/asm: 'Simplify' GEN_*_RMWcc() macros  (Peter Zijlstra)
Currently the GEN_*_RMWcc() macros include a return statement, which pretty much mandates that we directly wrap them in an (inline) function. Macros with return statements are tricky and, as per the above, limit use, so remove the return statement and make them statement-expressions. This allows them to be used more widely. Also, shuffle the arguments a bit: place the @cc argument third, which makes it consistent between UNARY and BINARY, but more importantly makes the @arg0 argument last. Since the @arg0 argument is now last, we can do CPP trickery and make it an optional argument, simplifying the users; 17 out of 18 occurrences do not need this argument. Finally, change to asm symbolic names instead of the numeric ordering of operands, which allows us to get rid of __BINARY_RMWcc_ARG and get cleaner code overall. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: JBeulich@suse.com Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@alien8.de Cc: hpa@linux.intel.com Link: https://lkml.kernel.org/r/20181003130957.108960094@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
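For readers unfamiliar with the difference, a toy user-space illustration (not the kernel macros themselves) of why a statement-expression macro composes better than one that ends in a return:

	#include <stdbool.h>
	#include <stdio.h>

	/* A macro built as a GNU statement expression yields a value, so it
	 * can sit directly inside an if(), unlike a macro containing a
	 * return statement, which must be wrapped in a helper function. */
	#define DEC_AND_TEST(var)					\
	({								\
		bool __zero = (--(var) == 0);	/* stand-in for the LOCK-prefixed asm */ \
		__zero;							\
	})

	int main(void)
	{
		int refcount = 2;

		if (DEC_AND_TEST(refcount))
			puts("hit zero");	/* not printed */
		if (DEC_AND_TEST(refcount))
			puts("hit zero");	/* printed */

		return 0;
	}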
2018-10-16  locking/qspinlock: Rework some comments  (Peter Zijlstra)
While working my way through the code again, I felt the comments could use some help. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: andrea.parri@amarulasolutions.com Cc: longman@redhat.com Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16  locking/qspinlock: Re-order code  (Peter Zijlstra)
Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL) such that we lose the indent. This also results in a more natural code flow, IMO. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: andrea.parri@amarulasolutions.com Cc: longman@redhat.com Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16  IB/ucm: Fix Spectre v1 vulnerability  (Gustavo A. R. Silva)
hdr.cmd can be indirectly controlled by user-space, hence leading to a potential exploitation of the Spectre variant 1 vulnerability. This issue was detected with the help of Smatch: drivers/infiniband/core/ucm.c:1127 ib_ucm_write() warn: potential spectre issue 'ucm_cmd_table' [r] (local cap) Fix this by sanitizing hdr.cmd before using it to index ucm_cmd_table. Notice that given that speculation windows are large, the policy is to kill the speculation on the first load and not worry if it can be completed with a dependent load/store [1]. [1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2 Cc: stable@vger.kernel.org Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2018-10-16  Merge branch 'x86/build' into locking/core, to pick up dependent patches and unify jump-label work  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-10-16  perf cpu_map: Align cpu map synthesized events properly.  (David Miller)
The resulting cpu map can have a size that is not a multiple of sizeof(u64), resulting in SIGBUS on CPUs like Sparc, as the next event will not be aligned properly. Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kan Liang <kan.liang@intel.com> Fixes: 6c872901af07 ("perf cpu_map: Add cpu_map event synthesize function") Link: http://lkml.kernel.org/r/20181011.224655.716771175766946817.davem@davemloft.net Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
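The underlying fix amounts to padding the synthesized record size up to an 8-byte boundary; a generic sketch of that rounding (hypothetical helper, not the perf tooling code itself):

	#include <stddef.h>
	#include <stdint.h>

	/* Round a payload size up to a multiple of sizeof(uint64_t) so the
	 * next event header starts on an 8-byte boundary. */
	static inline size_t align_event_size(size_t size)
	{
		return (size + sizeof(uint64_t) - 1) & ~(sizeof(uint64_t) - 1);
	}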
2018-10-16  perf/x86/intel: Export mem events only if there's PEBS support  (Jiri Olsa)
Memory events depend on PEBS support and access to the LDLAT MSR, but we display them in /sys/devices/cpu/events even if the CPU does not provide those, as is the case for KVM guests. That brings the false assumption that those events should be available, while they fail when one tries to open them. Separate the mem-* event attributes and merge them with cpu_events only if PEBS support is detected. We could also check whether the LDLAT MSR is available, but the PEBS check seems to cover the need for now. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Michael Petlan <mpetlan@redhat.com> Cc: Stephane Eranian <eranian@google.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: http://lkml.kernel.org/r/20180906135748.GC9577@krava Signed-off-by: Ingo Molnar <mingo@kernel.org>