2017-04-21nvme_fc: Add ls aborts on remote port teardownJames Smart
remoteport teardown never aborted the LS operations. Add support. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvme_fc: Move LS's to rportJames Smart
Link LS's on the remoteport rather than the controller. LS's are between nports, so it makes more sense to have them on the remoteport, especially on async teardown where the controller is torn down regardless of the LS (the LS is more of a notifier to the target of the teardown). While revising the ls send/done routines, issues were seen relative to refcounting and cleanup, especially in the async path; these code paths were reworked. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvmet_fc: add missing reference in add_portJames Smart
Add missing reference in add_port Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvmet_fc: Rework target side abort handlingJames Smart
target transport:
----------------------
There are cases when there is a need to abort in-progress target operations (writedata) so that controller termination or errors can clean up. That can't happen currently as the abort is another target op type, so it can't be used until the running one finishes (and it may not). Solve this by removing the abort op type and creating a separate downcall from the transport to the lldd to request an io to be aborted. The transport will abort ios on queue teardown or io errors. In general the transport tries to call the lldd abort only when the io state is idle. Meaning: ops that transmit data (readdata or rsp) will always finish their transmit (or the lldd will see a state on the link or initiator port that fails the transmit) and the done call for the operation will occur. The transport will wait for the op done upcall before calling the abort function, and as the io is idle, the io can be cleaned up immediately after the abort call. Similarly, ios that are not waiting for data or transmitting data must be in the nvmet layer being processed. The transport will wait for the nvmet layer completion before calling the abort function, and as the io is idle, the io can be cleaned up immediately after the abort call. As for ops that are waiting for data (writedata), they may be outstanding indefinitely if the lldd doesn't see a condition where the initiator port or link is bad. In those cases, the transport will call the abort function and wait for the lldd's op done upcall for the operation, where it will then clean up the io. Additionally, if an lldd receives an ABTS and matches it to an outstanding request in the transport, a new transport upcall was created to abort the outstanding request in the transport. The transport expects any outstanding op call (readdata or writedata) will be completed by the lldd and the operation upcall made. The transport doesn't act on the reported abort (e.g. clean up the io) until an op done upcall occurs, a new op is attempted, or the nvmet layer completes the io processing.

fcloop:
----------------------
Updated to support the new target apis. On fcp io aborts from the initiator, the loopback context is updated to NULL out the half that has completed. The initiator side is immediately called after the abort request with an io completion (abort status). On fcp io aborts from the target, the io is stopped and the initiator side sees it as an aborted io. Target side ops, perhaps in progress while the initiator side is done, continue but noop the data movement as there's no structure on the initiator side to reference.

patch also contains:
----------------------
Revised lpfc to support the new abort api. Commonized rsp buffer syncing and nulling of private data based on calling paths. Errors in op done calls don't take action on the fod; they're bad operations which imply the fod may be bad.

Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvme_fcloop: split job struct from transport for req_releaseJames Smart
Current design has the fcloop job struct, used for both initiator and target processing, allocated as part of the initiator request structure. On aborts, the initiator side (based on the request) may terminate, yet the target side wants to continue processing; the target side can't do that if the initiator side goes away. Revise fcloop to allocate an independent target side structure when it starts an io from the initiator. Added a lock to the request struct as well to synchronize pointer updates on abort calls. Modified target downcalls to recognize conditions where the initiator has aborted the io (and thus nulled the pointer between job structs), avoiding references to sgl lists which are gone and no longer making upcalls to the initiator. In conditions where the targetport is no longer connected, have the initiator return an access failure rather than simulating a command completion. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvmet_fc: add req_release to lldd apiJames Smart
With the advent of the opdone calls changing context, the lldd can no longer assume that, once the op->done call returns for RSP operations, the request struct is no longer being accessed. As such, revise the lldd api for a req_release callback that the transport will call when the job is complete. This will also be used with abort cases. Fixed text in api header for the change in io complete semantics. Revised lpfc to support the new req_release api. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21nvmet_fc: add target feature flags for upcall isr contextsJames Smart
Two new feature flags were added to control whether upcalls to the transport result in context switches or stay in the calling context.

NVMET_FCTGTFEAT_CMD_IN_ISR:
By default, if the flag is not set, the transport assumes the lldd is in a non-isr context and in the cpu context it should be for the io queue. As such, the cmd handler is called directly in the calling context. If the flag is set, indicating the upcall is in an isr context, the transport mandates a transition to a workqueue. The workqueue assigned to the queue is used for the context.

NVMET_FCTGTFEAT_OPDONE_IN_ISR:
By default, if the flag is not set, the transport assumes the lldd is in a non-isr context and in the cpu context it should be for the io queue. As such, the fcp operation done callback is called directly in the calling context. If the flag is set, indicating the upcall is in an isr context, the transport mandates a transition to a workqueue. The workqueue assigned to the queue is used for the context.

Updated lpfc for the flags. Signed-off-by: James Smart <james.smart@broadcom.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
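A minimal sketch of the dispatch decision these flags drive; the structures and handler names are simplified placeholders for illustration, not the actual transport code:

    #include <linux/workqueue.h>

    #define NVMET_FCTGTFEAT_CMD_IN_ISR	(1 << 0)	/* bit value illustrative */

    struct tgt_queue {
    	struct workqueue_struct *work_q;	/* workqueue assigned to the io queue */
    	u32 lldd_features;			/* NVMET_FCTGTFEAT_* flags from the lldd */
    };

    struct tgt_cmd {
    	struct work_struct work;		/* runs the cmd handler when deferred */
    };

    static void handle_cmd(struct tgt_cmd *cmd);	/* hypothetical handler */

    /* called on the lldd's new-command upcall */
    static void cmd_upcall(struct tgt_queue *q, struct tgt_cmd *cmd)
    {
    	if (q->lldd_features & NVMET_FCTGTFEAT_CMD_IN_ISR)
    		queue_work(q->work_q, &cmd->work);	/* ISR caller: defer to workqueue */
    	else
    		handle_cmd(cmd);			/* safe context: call directly */
    }

The OPDONE flag gates the fcp operation done callback the same way.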
2017-04-21nvmet: convert from kmap to nvmet_copy_from_sglLogan Gunthorpe
This is safer as it doesn't rely on the data being stored in a single page in an sgl. It also aids our effort to start phasing out users of sg_page. See [1]. For this we kmalloc some memory, copy to it and free at the end. Note: we can't allocate this memory on the stack as the kbuild test robot reports some frame size overflows on i386. [1] https://lwn.net/Articles/720053/ Signed-off-by: Logan Gunthorpe <logang@deltatee.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Max Gurtovoy <maxg@mellanox.com> Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
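The resulting pattern, as a minimal sketch (error handling abbreviated; nvmet_copy_from_sgl() is the existing transport helper being switched to, use_data() is a hypothetical consumer):

    /* before (fragile): assumes the payload sits in a single page */
    /* void *buf = kmap(sg_page(req->sg)) + req->sg->offset; */

    /* after: copy out of the sgl regardless of how pages are laid out */
    void *buf = kmalloc(len, GFP_KERNEL);	/* heap, not stack: i386 frame size */
    if (!buf)
    	return;			/* the real code returns an NVMe status here */
    if (!nvmet_copy_from_sgl(req, 0, buf, len))	/* 0 == success */
    	use_data(buf);
    kfree(buf);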
2017-04-21nvme: improve performance for virtual NVMe devicesHelen Koike
This change provides a mechanism to reduce the number of MMIO doorbell writes for the NVMe driver. When running in a virtualized environment like QEMU, the cost of an MMIO write is quite hefty. The main idea of the patch is to provide the device two memory locations:

1) to store the doorbell values so they can be looked up without the doorbell MMIO write
2) to store an event index

I believe the doorbell value is obvious, the event index not so much. Similar to the virtio specification, the virtual device can tell the driver (guest OS) not to write MMIO unless you are writing past this value. FYI: doorbell values are written by the nvme driver (guest OS) and the event index is written by the virtual device (host OS). The patch implements a new admin command that will communicate where these two memory locations reside. If the command fails, the nvme driver will work as before without any optimizations.

Contributions: Eric Northup <digitaleric@google.com> Frank Swiderski <fes@google.com> Ted Tso <tytso@mit.edu> Keith Busch <keith.busch@intel.com>

To give an idea of the performance boost with the vendor extension: running fio [1], with a stock NVMe driver I get about 200K read IOPS; with my vendor patch I get about 1000K read IOPS. This was running with a null device, i.e. the backing device simply returned success on every read IO request.

[1] Running on a 4 core machine: fio --time_based --name=benchmark --runtime=30 --filename=/dev/nvme0n1 --nrfiles=1 --ioengine=libaio --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randread --blocksize=4k --randrepeat=false

Signed-off-by: Rob Nelson <rlnelson@google.com> [mlin: port for upstream] Signed-off-by: Ming Lin <mlin@kernel.org> [koike: updated for upstream] Signed-off-by: Helen Koike <helen.koike@collabora.co.uk> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <keith.busch@intel.com>
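The event-index test is the same wrap-safe comparison the virtio spec uses; a standalone sketch (the helper name is illustrative, the arithmetic follows the virtio convention the message references):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Ring the MMIO doorbell only if the new doorbell value moves past the
     * event index the device advertised. All arithmetic is mod 2^16, which
     * keeps the comparison correct across index wrap-around.
     */
    static bool db_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
    {
    	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
    }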
2017-04-21nvme/pci: Don't set reserved SQ create flagsKeith Busch
The QPRIO field is only valid if weighted round robin arbitration is used, and this driver doesn't enable that controller configuration option. Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
2017-04-21blk-stat: kill blk_stat_rq_ddir()Jens Axboe
No point in providing and exporting this helper. There's just one (real) user of it, just use rq_data_dir(). Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20nbd: set the max segments to USHRT_MAXJosef Bacik
I lack the basic understanding of what segments mean, so we were being limited to 512KiB requests even with higher max_sectors sizes set. Setting the maximum number of segments to unlimited allows us to actually have arbitrarily large IOs go through NBD. Signed-off-by: Josef Bacik <jbacik@fb.com> Signed-off-by: Jens Axboe <axboe@fb.com>
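The change itself is a single queue-limits call; roughly (the device variable name is illustrative):

    /* stop capping the segment count; max_sectors remains the effective limit */
    blk_queue_max_segments(nbd->disk->queue, USHRT_MAX);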
2017-04-20blk-mq: Remove blk_mq_sched_move_to_dispatch()Bart Van Assche
commit c13660a08c8b ("blk-mq-sched: change ->dispatch_requests() to ->dispatch_request()") removed the last user of this function. Hence also remove the function itself. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Cc: Omar Sandoval <osandov@fb.com> Cc: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20blk-mq: add might_sleep check to blk_mq_get_driver_tag()Jens Axboe
If the caller passes in wait=true, it has to be able to block for a driver tag. We just had a bug where flush insertion would block on tag allocation, while we had preempt disabled. Ensure that we catch cases like that earlier next time. Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com> Signed-off-by: Jens Axboe <axboe@fb.com>
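The added check, in essence, is a one-line debug assertion near the top of blk_mq_get_driver_tag():

    /* a caller asking to wait must be in a context where sleeping is legal;
     * with CONFIG_DEBUG_ATOMIC_SLEEP this warns immediately instead of only
     * when tag allocation actually blocks */
    might_sleep_if(wait);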
2017-04-20blk-mq: Fix poll_stat for new size-based bucketing.Stephen Bates
Fixes an issue where the size of the poll_stat array in request_queue does not match the size expected by the new size based bucketing for IO completion polling. Fixes: 720b8ccc4500 ("blk-mq: Add a polling specific stats function") Signed-off-by: Stephen Bates <sbates@raithlin.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20blk-mq: fix schedule-while-atomic with scheduler attachedJens Axboe
We must have dropped the ctx before we call blk_mq_sched_insert_request() with can_block=true, otherwise we risk that a flush request can block on insertion if we are currently out of tags.

[ 47.667190] BUG: scheduling while atomic: jbd2/sda2-8/2089/0x00000002
[ 47.674493] Modules linked in: x86_pkg_temp_thermal btrfs xor zlib_deflate raid6_pq sr_mod cdre
[ 47.690572] Preemption disabled at:
[ 47.690584] [<ffffffff81326c7c>] blk_mq_sched_get_request+0x6c/0x280
[ 47.701764] CPU: 1 PID: 2089 Comm: jbd2/sda2-8 Not tainted 4.11.0-rc7+ #271
[ 47.709630] Hardware name: Dell Inc. PowerEdge T630/0NT78X, BIOS 2.3.4 11/09/2016
[ 47.718081] Call Trace:
[ 47.720903]  dump_stack+0x4f/0x73
[ 47.724694]  ? blk_mq_sched_get_request+0x6c/0x280
[ 47.730137]  __schedule_bug+0x6c/0xc0
[ 47.734314]  __schedule+0x559/0x780
[ 47.738302]  schedule+0x3b/0x90
[ 47.741899]  io_schedule+0x11/0x40
[ 47.745788]  blk_mq_get_tag+0x167/0x2a0
[ 47.750162]  ? remove_wait_queue+0x70/0x70
[ 47.754901]  blk_mq_get_driver_tag+0x92/0xf0
[ 47.759758]  blk_mq_sched_insert_request+0x134/0x170
[ 47.765398]  ? blk_account_io_start+0xd0/0x270
[ 47.770679]  blk_mq_make_request+0x1b2/0x850
[ 47.775766]  generic_make_request+0xf7/0x2d0
[ 47.780860]  submit_bio+0x5f/0x120
[ 47.784979]  ? submit_bio+0x5f/0x120
[ 47.789631]  submit_bh_wbc.isra.46+0x10d/0x130
[ 47.794902]  submit_bh+0xb/0x10
[ 47.798719]  journal_submit_commit_record+0x190/0x210
[ 47.804686]  ? _raw_spin_unlock+0x13/0x30
[ 47.809480]  jbd2_journal_commit_transaction+0x180a/0x1d00
[ 47.815925]  kjournald2+0xb6/0x250
[ 47.820022]  ? kjournald2+0xb6/0x250
[ 47.824328]  ? remove_wait_queue+0x70/0x70
[ 47.829223]  kthread+0x10e/0x140
[ 47.833147]  ? commit_timeout+0x10/0x10
[ 47.837742]  ? kthread_create_on_node+0x40/0x40
[ 47.843122]  ret_from_fork+0x29/0x40

Fixes: a4d907b6a33b ("blk-mq: streamline blk_mq_make_request") Reviewed-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Jens Axboe <axboe@fb.com>
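The ordering the fix enforces on the flush path, sketched (the argument values follow the shape of that era's blk_mq_sched_insert_request() signature and are shown for illustration only):

    blk_mq_put_ctx(data.ctx);	/* drop the per-cpu ctx: re-enables preemption */
    blk_mq_sched_insert_request(rq, false, true, true, true);	/* may now block safely */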
2017-04-20Merge branch 'akpm' (patches from Andrew)Linus Torvalds
Merge two mm fixes from Andrew Morton.

* emailed patches from Andrew Morton <akpm@linux-foundation.org>:
  mm: prevent NR_ISOLATED_* stats from going negative
  Revert "mm, page_alloc: only use per-cpu allocator for irq-safe requests"
2017-04-20mm: prevent NR_ISOLATED_* stats from going negativeRabin Vincent
Commit 6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn based migration") moved the dec_node_page_state() call (along with the page_is_file_cache() call) to after putback_lru_page(). But page_is_file_cache() can change after putback_lru_page() is called, so it should be called before putback_lru_page(), as it was before that patch, to prevent NR_ISOLATED_* stats from going negative. Without this fix, non-CONFIG_SMP kernels end up hanging in the while(too_many_isolated()) { congestion_wait() } loop in shrink_active_list() due to the negative stats.

Mem-Info:
active_anon:32567 inactive_anon:121 isolated_anon:1
active_file:6066 inactive_file:6639 isolated_file:4294967295
                                                  ^^^^^^^^^^
unevictable:0 dirty:115 writeback:0 unstable:0
slab_reclaimable:2086 slab_unreclaimable:3167
mapped:3398 shmem:18366 pagetables:1145 bounce:0
free:1798 free_pcp:13 free_cma:0

Fixes: 6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn based migration") Link: http://lkml.kernel.org/r/1492683865-27549-1-git-send-email-rabin.vincent@axis.com Signed-off-by: Rabin Vincent <rabinv@axis.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Ming Ling <ming.ling@spreadtrum.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
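The ordering constraint, sketched: the file/anon classification must be sampled while the page is still isolated, so the stat decrement that uses it goes before the putback:

    /* page_is_file_cache() can change once the page is back on the LRU */
    dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
    putback_lru_page(page);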
2017-04-20Revert "mm, page_alloc: only use per-cpu allocator for irq-safe requests"Mel Gorman
This reverts commit 374ad05ab64. While the patch worked great for userspace allocations, the fact that softirq loses the per-cpu allocator caused problems. It needs to be redone taking into account that a separate list is needed for hard/soft IRQs, or alternatively finding a cheap way of detecting reentry due to an interrupt. Both are possible but sufficiently tricky that it shouldn't be rushed. Jesper had one method for allowing softirqs but reported that the cost was high enough that it performed similarly to a plain revert. His figures for netperf TCP_STREAM were as follows:

Baseline v4.10.0  : 60316 Mbit/s
Current 4.11.0-rc6: 47491 Mbit/s
Jesper's patch    : 60662 Mbit/s
This patch        : 60106 Mbit/s

As this is a regression, I wish to revert to the noirq allocator for now and go back to the drawing board.

Link: http://lkml.kernel.org/r/20170415145350.ixy7vtrzdzve57mh@techsingularity.net Signed-off-by: Mel Gorman <mgorman@techsingularity.net> Reported-by: Tariq Toukan <ttoukan.linux@gmail.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-04-20blk-mq: Add a polling specific stats functionStephen Bates
Rather than bucketing IO statistics based on direction only, we also bucket based on the IO size. This leads to improved polling performance. Update the bucket callback function and use it in the polling latency estimation. Signed-off-by: Stephen Bates <sbates@raithlin.com> Signed-off-by: Jens Axboe <axboe@fb.com>
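A hedged sketch of a size-and-direction bucket callback in the spirit of this change (and of the signed-callback change in the next entry): direction picks odd/even, ilog2 of the byte count picks the size band, and a negative return tells the stats code to skip the sample. The bucket count is illustrative.

    #include <linux/blk-mq.h>
    #include <linux/log2.h>

    #define POLL_STATS_BKTS 16	/* illustrative bucket count */

    static int poll_stats_bucket(const struct request *rq)
    {
    	int ddir = rq_data_dir(rq);	/* 0 = read, 1 = write */
    	int bucket = ddir + 2 * (ilog2(blk_rq_bytes(rq)) - 9);	/* 512B -> bucket 0/1 */

    	if (bucket < 0)
    		return -1;	/* don't count this request at all */
    	if (bucket >= POLL_STATS_BKTS)
    		return ddir + POLL_STATS_BKTS - 2;	/* clamp to the largest band */
    	return bucket;
    }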
2017-04-20blk-stat: convert blk-stat bucket callback to signedStephen Bates
In order to allow for filtering of IO based on some other properties of the request than direction, we allow the bucket function to return an int. If the bucket callback returns a negative value, do not count it in the stats accumulation. Signed-off-by: Stephen Bates <sbates@raithlin.com> Fixed up Kyber scheduler stat callback. Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20blk-mq: fix potential oops with polling and blk-mq schedulerJens Axboe
If we have a scheduler attached, blk_mq_tag_to_rq() on the scheduled tags will return NULL if a request is no longer in flight. This is different than using the normal tags, where it will always return the fixed request. Check for this condition for polling, in case we happen to enter polling for a completed request. The request address remains valid, so this check and return should be perfectly safe. Fixes: bd166ef183c2 ("blk-mq-sched: add framework for MQ capable IO schedulers") Tested-by: Stephen Bates <sbates@raithlin.com> Signed-off-by: Jens Axboe <axboe@fb.com>
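The guard, sketched (surrounding poll loop elided; variable names are simplified from the blk-mq code of that era):

    /* with a scheduler, the tag may map to a request that already completed
     * and was recycled — blk_mq_tag_to_rq() then returns NULL */
    rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
    if (!rq)
    	return false;	/* nothing in flight: bail out of the poll */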
2017-04-20nvme: Quirk APST off on "THNSF5256GPUK TOSHIBA"Andy Lutomirski
There's a report that it malfunctions with APST on. See https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1678184 Cc: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20nvme: Adjust the Samsung APST quirkAndy Lutomirski
I got a couple more reports: the Samsung APST issue appears to affect multiple 950-series devices in Dell XPS 15 9550 and Precision 5510 laptops. Change the quirk: rather than blacklisting the firmware on the first problematic SSD that was reported, disable APST on all 144d:a802 devices if they're installed in the two affected Dell models. While we're at it, disable only the deepest sleep state instead of all of them -- the reporters say that this is sufficient to fix the problem. (I have a device that appears to be entirely identical to one of the affected devices, but I have a different Dell laptop, so it's not the case that all Samsung devices with firmware BXW75D0Q are broken under all circumstances.) Samsung engineers have an affected system, and hopefully they'll give us a better workaround some time soon. In the mean time, this should minimize regressions. See https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1678184 Cc: Kai-Heng Feng <kai.heng.feng@canonical.com> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20net sched actions: allocate act cookie earlyWolfgang Bumiller
Policing filters do not use the TCA_ACT_* enum, and the tb[] nlattr array in tcf_action_init_1() doesn't get filled for them, so we should not try to look for a TCA_ACT_COOKIE attribute in the then-uninitialized array. The error handling in cookie allocation then calls tcf_hash_release(), leading to invalid memory access later on. Additionally, if cookie allocation fails after an already existing non-policing filter has successfully been changed, tcf_action_release() should not be called; we would also have to roll back the changes in the error handling. So instead we now allocate the cookie early and assign it on success at the end. CVE-2017-7979 Fixes: 1045ba77a596 ("net sched actions: Add support for user cookies") Signed-off-by: Wolfgang Bumiller <w.bumiller@proxmox.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
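The ordering pattern the fix adopts, sketched with a hypothetical helper name (allocate before anything that would need rolling back, commit only on success):

    /* 1. allocate the cookie up front: a failure here needs no unwinding */
    struct tc_cookie *cookie = NULL;
    if (tb[TCA_ACT_COOKIE]) {
    	cookie = alloc_tc_cookie(tb[TCA_ACT_COOKIE]);	/* hypothetical helper */
    	if (!cookie)
    		return -ENOMEM;
    }

    /* 2. ... create or change the action; on error, free the cookie and bail ... */

    /* 3. commit the cookie only once nothing can fail anymore */
    a->act_cookie = cookie;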
2017-04-20Merge branch 'qed-dcbx-fixes'David S. Miller
Sudarsana Reddy Kalluru says:

====================
qed: Dcbx bug fixes

The series has a set of bug fixes for the dcbx implementation of the qed driver. Please consider applying this to the 'net' branch.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20qed: Fix issue in populating the PFC config parameters.sudarsana.kalluru@cavium.com
Change ieee_setpfc() callback implementation to populate traffic class count with the user provided value. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20qed: Fix possible system hang in the dcbnl-getdcbx() path.sudarsana.kalluru@cavium.com
The qed_dcbnl_get_dcbx() API uses kmalloc in GFP_KERNEL mode, but it gets invoked in interrupt context by the qed_dcbnl_getdcbx callback, so the kmalloc needs to be done in atomic mode. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
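The fix amounts to picking the right allocation context; a sketch:

    /* before: GFP_KERNEL may sleep — illegal in the callback's atomic context */
    /* dcbx_info = kmalloc(sizeof(*dcbx_info), GFP_KERNEL); */

    /* after: GFP_ATOMIC never sleeps, but can fail — the caller must cope */
    dcbx_info = kmalloc(sizeof(*dcbx_info), GFP_ATOMIC);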
2017-04-20qed: Fix sending an invalid PFC error mask to MFW.sudarsana.kalluru@cavium.com
The PFC error-mask value is not supported by the MFW, but this bit could be set in the pfc bit-map of the operational parameters if the remote device supports it. These operational parameters are used as the basis for populating the dcbx config parameters. User-provided configs are applied on top of these parameters and then sent to the MFW when requested. The driver needs to clear the error-mask bit before sending the config parameters to the MFW. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20qed: Fix possible error in populating max_tc field.sudarsana.kalluru@cavium.com
Some adapters may not publish the max_tc value. Populate the default value for the max_tc field in case the MFW didn't provide one. Signed-off-by: Sudarsana Reddy Kalluru <Sudarsana.Kalluru@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
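The defaulting logic, sketched; the constant name is illustrative, not the driver's actual symbol:

    /* MFW didn't publish a max TC count — fall back to a sane default */
    if (!p_params->max_tc)
    	p_params->max_tc = QED_DCBX_DEFAULT_TC;	/* hypothetical constant */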
2017-04-20smsc95xx: Use skb_cow_head to deal with cloned skbsJames Hughes
The driver was failing to check that the SKB wasn't cloned before adding checksum data. Replace existing handling to extend/copy the header buffer with skb_cow_head. Signed-off-by: James Hughes <james.hughes@raspberrypi.org> Acked-by: Eric Dumazet <edumazet@google.com> Acked-by: Woojung Huh <Woojung.Huh@microchip.com> Signed-off-by: David S. Miller <davem@davemloft.net>
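The skb_cow_head() idiom the driver converts to, sketched (the headroom constant stands in for the driver's real TX overhead and is an illustrative name):

    /* unshare a cloned skb and guarantee headroom before touching the header */
    if (skb_cow_head(skb, TX_OVERHEAD)) {	/* TX_OVERHEAD: illustrative */
    	dev_kfree_skb_any(skb);
    	return NULL;	/* usbnet tx_fixup convention: drop the packet */
    }
    /* now safe to push the checksum command words onto the front */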
2017-04-20MAINTAINERS: update entry for TI's CPSW driverSekhar Nori
Mugunthan V N, who was reviewing TI's CPSW driver patches, is not working for TI anymore and won't be reviewing patches for that driver. Drop Mugunthan as the maintainer for this driver. Grygorii continues to be a reviewer. Dave Miller applies the patches directly, and adding a maintainer is actually misleading since the get_maintainer.pl script then stops suggesting that Dave Miller be copied. Signed-off-by: Sekhar Nori <nsekhar@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20Merge branch 'master' of ↵David S. Miller
git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec

Steffen Klassert says:

====================
pull request (net): ipsec 2017-04-19

Two fixes for af_key:

1) Add a lock to key dump to prevent a NULL pointer dereference. From Yuejie Shi.

2) Fix slab-out-of-bounds in parse_ipsecrequests. From Herbert Xu.

Please pull or let me know if there are problems.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20dp83640: don't receive time stamps twiceDan Carpenter
This patch is prompted by a static checker warning about a potential use after free. The concern is that netif_rx_ni() can free "skb" and we call it twice. When I look at the commit that added this, it looks like some stray lines were added accidentally. It doesn't make sense to me that we would receive the same data two times. I asked the author but never received a response. I can't test this code, but I'm pretty sure my patch is correct. Fixes: 4b063258ab93 ("dp83640: Delay scheduled work.") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Stefan Sørensen <stefan.sorensen@spectralink.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20ipv6: sr: fix out-of-bounds access in SRH validationDavid Lebrun
This patch fixes an out-of-bounds access in seg6_validate_srh() when the trailing data is less than sizeof(struct sr6_tlv). Reported-by: Andrey Konovalov <andreyknvl@google.com> Cc: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: David Lebrun <david.lebrun@uclouvain.be> Signed-off-by: David S. Miller <davem@davemloft.net>
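A hedged sketch of the kind of bounds check involved when walking the TLVs that trail the segment list (loop and variable names are illustrative; sr6_tlv is the header's type/len/value TLV struct):

    unsigned char *buf = (unsigned char *)srh;
    while (off < srh_len) {
    	struct sr6_tlv *tlv;

    	/* new check: enough trailing bytes for even the TLV header? */
    	if (srh_len - off < sizeof(*tlv))
    		return false;
    	tlv = (struct sr6_tlv *)(buf + off);

    	/* and the declared value length must fit in the buffer too */
    	if (srh_len - off < sizeof(*tlv) + tlv->len)
    		return false;
    	off += sizeof(*tlv) + tlv->len;
    }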
2017-04-20selftests/net: Fix psock_fanout CBPF test caseMike Maloney
'psock_fanout' has been failing since commit 4d7b9dc1f36a9 ("tools: psock_lib: harden socket filter used by psock tests"). That commit changed the CBPF filter to examine the full ethernet frame, and was tested on 'psock_tpacket' which uses SOCK_RAW. But 'psock_fanout' was also using this same CBPF in two places, for filtering and fanout, on a SOCK_DGRAM socket. Change 'psock_fanout' to use SOCK_RAW so that the CBPF program used with SO_ATTACH_FILTER can examine the entire frame. Create a new CBPF program for use with PACKET_FANOUT_DATA which ignores the header, as it cannot see the ethernet header. Tested: Ran tools/testing/selftests/net/psock_{fanout,tpacket} 10 times, and they all passed. Fixes: 4d7b9dc1f36a9 ("tools: psock_lib: harden socket filter used by psock tests") Signed-off-by: 'Mike Maloney <maloneykernel@gmail.com>' Signed-off-by: David S. Miller <davem@davemloft.net>
2017-04-20mac80211: reject ToDS broadcast data framesJohannes Berg
AP/AP_VLAN modes don't accept any real 802.11 multicast data frames, but since they do need to accept broadcast management frames the same is currently permitted for data frames. This opens a security problem because such frames would be decrypted with the GTK, and could even contain unicast L3 frames. Since the spec says that ToDS frames must always have the BSSID as the RA (addr1), reject any other data frames. The problem was originally reported in "Predicting, Decrypting, and Abusing WPA2/802.11 Group Keys" at usenix https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/vanhoef and brought to my attention by Jouni. Cc: stable@vger.kernel.org Reported-by: Jouni Malinen <j@w1.fi> Signed-off-by: Johannes Berg <johannes.berg@intel.com> -- Dave, I didn't want to send you a new pull request for a single commit yet again - can you apply this one patch as is? Signed-off-by: David S. Miller <davem@davemloft.net>
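A sketch of the accept test the message describes, simplified from the real rx path:

    /* a ToDS data frame must have our BSSID as addr1 (the RA) — anything
     * else must never fall through to GTK decryption */
    if (ieee80211_is_data(hdr->frame_control) &&
        ieee80211_has_tods(hdr->frame_control) &&
        !ether_addr_equal(hdr->addr1, sdata->vif.addr))
    	return RX_DROP_MONITOR;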
2017-04-20Merge tag 'trace-v4.11-rc5-5' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace

Pull two more ftrace fixes from Steven Rostedt:
"While continuing my development, I uncovered two more small bugs.

One is a race condition when enabling the snapshot function probe trigger. It enables the probe before allocating the snapshot, and if the probe triggers first, it stops tracing with a warning that the snapshot buffer was not allocated.

The second is that the snapshot file should show how to use it when it is empty. But a bug fix from long ago broke the "is empty" test and the snapshot file no longer displays the help message"

* tag 'trace-v4.11-rc5-5' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
  ring-buffer: Have ring_buffer_iter_empty() return true when empty
  tracing: Allocate the snapshot buffer before enabling probe
2017-04-20Merge branch 'for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid

Pull HID fixes from Jiri Kosina:
"Two last-minute regression fixes for Wacom driver from Jason Gerecke"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid:
  HID: wacom: Override incorrect logical maximum contact identifier
  HID: wacom: Treat HID_DG_TOOLSERIALNUMBER as unsigned
2017-04-20Merge branch 'for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 fix from Martin Schwidefsky:
"There is one more fix I would like to see in 4.11: The combination of KVM, CMMA and heavy paging can cause data corruption, the fix is to clear the _PAGE_UNUSED bit in set_pte_at()"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/mm: fix CMMA vs KSM vs others
2017-04-20block: remove the errors field from struct requestChristoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20blktrace: remove the unused block_rq_abort tracepointChristoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20swim3: remove (commented out) printing of req->errorsChristoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20ataflop: switch from req->errors to req->error_countChristoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20floppy: switch from req->errors to req->error_countChristoph Hellwig
Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20block: add a error_count field to struct requestChristoph Hellwig
This is for the legacy floppy and ataflop drivers that currently abuse ->errors for this purpose. It's stashed away in a union to not grow the struct size; the other fields are either used by modern drivers for different purposes or by the I/O scheduler before queueing the I/O to drivers. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
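Schematically, the stashing looks like this (a sketch: the union partners shown are the kinds of fields the message describes, not a verbatim copy of struct request):

    struct request {
    	/* ... */
    	union {
    		struct call_single_data csd;	/* modern drivers: completion IPI */
    		u64 fifo_time;			/* I/O schedulers, before dispatch */
    		unsigned int error_count;	/* floppy/ataflop retry bookkeeping */
    	};
    	/* ... */
    };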
2017-04-20blk-mq: simplify __blk_mq_complete_requestChristoph Hellwig
Merge blk_mq_ipi_complete_request and blk_mq_stat_add into their only caller. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20blk-mq: remove the error argument to blk_mq_complete_requestChristoph Hellwig
Now that all drivers that call blk_mq_complete_request have a ->complete callback, we can remove the direct call to blk_mq_end_request, as well as the error argument to blk_mq_complete_request. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20xen-blkfront: don't use req->errorsChristoph Hellwig
xen-blkfront is the last user of rq->errors for passing back errors to blk-mq, and I'd like to get rid of that. In the longer run the driver should be moving more of the completion processing into .complete, but this is the minimal change to move forward for now. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Roger Pau Monné <roger.pau@citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-20mtip32xx: add a status field to struct mtip_cmdChristoph Hellwig
Instead of using req->errors, which will go away. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Jens Axboe <axboe@fb.com>