Age  Commit message  Author
2017-04-20  KVM: PPC: Enable IOMMU_API for KVM_BOOK3S_64 permanently  [Alexey Kardashevskiy]
It does not make much sense to have KVM on book3s-64 without the IOMMU bits for PCI pass-through support, as they cost little and allow VFIO to function on book3s KVM. Having IOMMU_API always enabled makes it unnecessary to have a lot of "#ifdef IOMMU_API" in arch/powerpc/kvm/book3s_64_vio*. With those ifdefs, only userspace-emulated devices (but not VFIO) could be accelerated, which does not seem very useful. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Reserve KVM_CAP_SPAPR_TCE_VFIO capability number  [Alexey Kardashevskiy]
This adds a capability number for in-kernel support for VFIO on the SPAPR platform. The capability tells userspace whether the in-kernel handlers of H_PUT_TCE can handle VFIO-targeted requests or not. If not, userspace must not attempt allocating a TCE table in the host kernel via the KVM_CREATE_SPAPR_TCE KVM ioctl, because in that case TCE requests would not be passed to userspace, which is the desired behaviour in that situation. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  Merge remote-tracking branch 'remotes/powerpc/topic/ppc-kvm' into kvm-ppc-next  [Paul Mackerras]
This merges in the commits in the topic/ppc-kvm branch of the powerpc tree to get the changes to arch/powerpc which subsequent patches will rely on. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Align the table size to system page size  [Alexey Kardashevskiy]
At the moment userspace can request a table smaller than a page size, and this value will be stored as kvmppc_spapr_tce_table::size. However, the actual allocated size will still be aligned to the system page size, as alloc_page() is used there. This aligns the table size up to the system page size. It should not change the existing behaviour, but when the in-kernel TCE acceleration patchset reaches the upstream kernel, this will allow small TCE tables to be accelerated as well: the PCI IODA iommu_table allocator already aligns the size and, without this patch, an IOMMU group won't attach to the LIOBN due to the mismatching table size. Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
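For illustration, a minimal sketch of the alignment idea described above, using the kernel's ALIGN()/PAGE_SIZE macros; the function and variable names are placeholders, not the actual book3s_64_vio code:

    #include <linux/kernel.h>   /* ALIGN() */
    #include <linux/mm.h>       /* PAGE_SIZE */
    #include <linux/types.h>    /* u64 */

    /* Sketch only: round the userspace-requested TCE table size up to the
     * system page size before storing it, so the recorded size matches what
     * the page-based allocation will actually provide. */
    static unsigned long aligned_tce_table_size(unsigned long entries)
    {
            unsigned long bytes = entries * sizeof(u64);    /* one TCE entry is 8 bytes */

            bytes = ALIGN(bytes, PAGE_SIZE);                /* round up to the page size */
            return bytes / sizeof(u64);                     /* aligned entry count */
    }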
2017-04-20  KVM: PPC: Book3S PR: Preserve storage control bits  [Alexey Kardashevskiy]
The PR KVM page fault handler performs eaddr-to-pte translation for the guest; however, kvmppc_mmu_book3s_64_xlate() does not preserve the WIMG (storage control) bits in the kvmppc_pte struct. If PR KVM is running as a second-level guest under HV KVM and tries inserting an HPT entry, this fails in HV KVM if it already has this mapping. This preserves the WIMG bits between kvmppc_mmu_book3s_64_xlate() and kvmppc_mmu_map_page(). Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Book3S PR: Exit KVM on failed mapping  [Alexey Kardashevskiy]
At the moment kvmppc_mmu_map_page() returns -1 if mmu_hash_ops.hpte_insert() fails for any reason, so the page fault handler resumes the guest and it faults on the same address again. This changes kvmppc_mmu_map_page() to return -EIO if mmu_hash_ops.hpte_insert() failed for a reason other than a full PTEG. At the moment only pSeries_lpar_hpte_insert() returns -2 if plpar_pte_enter() failed with a code other than H_PTEG_FULL; other mmu_hash_ops.hpte_insert() instances can only fail with -1 ("full PTEG"). With this change, if PR KVM fails to update the HPT, it can signal userspace about this instead of returning to the guest and taking the very same page fault over and over again. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Book3S PR: Get rid of unused local variable  [Alexey Kardashevskiy]
@is_mmio has never been used since introduction in commit 2f4cf5e42d13 ("Add book3s.c") from 2009. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: e500: Use kcalloc() in e500_mmu_host_init()  [Markus Elfring]
* The size of a memory allocation was computed with an explicit multiplication, which indicates that an array is being allocated, so use the dedicated function kcalloc() instead. This issue was detected with the Coccinelle software. * Determine the element size from a pointer dereference rather than an explicit type name, which makes the size determination a bit safer and follows the Linux coding style convention. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
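For illustration, a hedged before/after sketch of the two conversions described above; the pointer, type, and count names are placeholders rather than the actual e500 MMU code:

    #include <linux/slab.h>

    /* Before: open-coded multiplication and an explicit type inside sizeof(). */
    map = kzalloc(sizeof(struct map_entry) * nentries, GFP_KERNEL);

    /* After: kcalloc() makes the "array of nentries elements" intent explicit
     * (and checks the multiplication for overflow), while sizeof(*map) stays
     * correct even if the pointer's type ever changes. */
    map = kcalloc(nentries, sizeof(*map), GFP_KERNEL);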
2017-04-20  KVM: PPC: Book3S HV: Use common error handling code in kvmppc_clr_passthru_irq()  [Markus Elfring]
Add a jump target so that a bit of exception handling can be better reused at the end of this function. Signed-off-by: Markus Elfring <elfring@users.sourceforge.net> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
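The change applies the usual kernel single-exit-label pattern; a generic, hedged sketch follows (the function, helper, and label names here are illustrative, not the real kvmppc_clr_passthru_irq() code):

    /* Sketch only: jump to one shared unwind path instead of duplicating
     * the error handling at every failure point. */
    static int setup_passthru_irq(int host_irq)
    {
            int ret;

            ret = acquire_mapping(host_irq);    /* hypothetical helper */
            if (ret)
                    return ret;                 /* nothing to undo yet */

            ret = program_hardware(host_irq);   /* hypothetical helper */
            if (ret)
                    goto err_release;           /* reuse the unwind code below */

            return 0;

    err_release:
            release_mapping(host_irq);
            return ret;
    }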
2017-04-20  KVM: PPC: Add MMIO emulation for remaining floating-point instructions  [Paul Mackerras]
For completeness, this adds emulation of the lfiwax and lfiwzx instructions. With this, all floating-point load and store instructions as of Power ISA V2.07 are emulated. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Emulation for more integer loads and stores  [Paul Mackerras]
This adds emulation for the following integer loads and stores, thus enabling them to be used in a guest for accessing emulated MMIO locations. - lhaux - lwaux - lwzux - ldu - lwa - stdux - stwux - stdu - ldbrx - stdbrx Previously, most of these would cause an emulation failure exit to userspace, though ldu and lwa got treated incorrectly as ld, and stdu got treated incorrectly as std. This also tidies up some of the formatting and updates the comment listing instructions that still need to be implemented. With this, all integer loads and stores that are defined in the Power ISA v2.07 are emulated, except for those that are permitted to trap when used on cache-inhibited or write-through mappings (and which do in fact trap on POWER8), that is, lmw/stmw, lswi/stswi, lswx/stswx, lq/stq, and l[bhwdq]arx/st[bhwdq]cx. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Add MMIO emulation for stdx (store doubleword indexed)  [Alexey Kardashevskiy]
This adds missing stdx emulation for emulated MMIO accesses by KVM guests. This allows the Mellanox mlx5_core driver from recent kernels to work when MMIO emulation is enforced by userspace. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-20  KVM: PPC: Book3S: Add MMIO emulation for FP and VSX instructions  [Bin Lu]
This patch provides MMIO load/store emulation for instructions operating on the types double, vector unsigned/signed char, vector unsigned/signed short, vector unsigned/signed int, and vector double. The instructions that this adds emulation for are: - ldx, ldux, lwax, - lfs, lfsx, lfsu, lfsux, lfd, lfdx, lfdu, lfdux, - stfs, stfsx, stfsu, stfsux, stfd, stfdx, stfdu, stfdux, stfiwx, - lxsdx, lxsspx, lxsiwax, lxsiwzx, lxvd2x, lxvw4x, lxvdsx, - stxsdx, stxsspx, stxsiwx, stxvd2x, stxvw4x [paulus@ozlabs.org - some cleanups, fixes and rework, make it compile for Book E, fix build when PR KVM is built in] Signed-off-by: Bin Lu <lblulb@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-19  ring-buffer: Have ring_buffer_iter_empty() return true when empty  [Steven Rostedt (VMware)]
I noticed that reading the snapshot file when it is empty no longer gives a status. It is supposed to show the status of the snapshot buffer as well as how to allocate and use it. For example: ># cat snapshot # tracer: nop # # # * Snapshot is allocated * # # Snapshot commands: # echo 0 > snapshot : Clears and frees snapshot buffer # echo 1 > snapshot : Allocates snapshot buffer, if not already allocated. # Takes a snapshot of the main buffer. # echo 2 > snapshot : Clears snapshot buffer (but does not allocate or free) # (Doesn't have to be '2' works with any number that # is not a '0' or '1') But instead it just showed an empty buffer: ># cat snapshot # tracer: nop # # entries-in-buffer/entries-written: 0/0 #P:4 # # _-----=> irqs-off # / _----=> need-resched # | / _---=> hardirq/softirq # || / _--=> preempt-depth # ||| / delay # TASK-PID CPU# |||| TIMESTAMP FUNCTION # | | | |||| | | What happened was that it was using the ring_buffer_iter_empty() function to see if it was empty, and if it was, it showed the status. But that function was returning false when it was empty. The reason was that the iter header page was on the reader page, and the reader page was empty, but so was the buffer itself. The check only tested whether the iter was on the commit page, but the commit page no longer pointed to the reader page; since all pages were empty, the buffer was empty as well. Cc: stable@vger.kernel.org Fixes: 651e22f2701b ("ring-buffer: Always reset iterator to reader page") Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
2017-04-19  i40e: use i40e_stop_rings_no_wait to implement PORT_SUSPENDED state  [Jacob Keller]
This state bit was added as a way for DCB to avoid having to wait for the queues to disable when handling LLDP events. The logic for this was buried deep within the stop Tx and stop Rx queue code. First, let's rename it so that it does not appear to only affect Tx when in fact it modifies both the Tx and Rx flow. Second, we can move it up into the i40e_stop_rings() function, and we can simply re-use i40e_stop_rings_no_wait() so that we don't have to bury the implementation so deep in the call stack. An alternative might be to remove the state bit and instead attempt to shut down everything directly in the DCB flow. This, however, is not ideal because it creates yet another separate shutdown routine that we'd have to maintain. In the current implementation any changes will be made to both flows. Change-ID: I68e1ccb901af320862bca395e9c9746f08e8b17c Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e: reset all VFs in parallel when rebuilding PF  [Jacob Keller]
When there are a lot of active VFs, it can take multiple seconds to finish resetting all of them during certain flows, which can cause some VFs to fail to wait long enough for the reset to occur. The user might see messages like "Never saw reset" or "Reset never finished" and the VF driver will stop functioning properly. The naive solution would be to simply increase the wait timer, but we can be much more clever. Notice that i40e_reset_vf is run in a serialized fashion and includes lots of delays. There are two prominent delays which take most of the time. First, when we begin resetting VFs, we have multiple 10ms delays which accrue because we reset each VF in a serial fashion. These delays accumulate to almost 4 seconds when handling the maximum number of VFs (128). Second, there is a massive 50ms delay each time we disable queues on a VSI. This delay is necessary to allow HW to finish disabling queues before we restore functionality. However, just like in the first case, we pay the cost for each VF rather than disabling all VFs and waiting once. Both of these can be fixed, but that requires some prior refactoring to handle the special cases. First, we need the i40e_vsi_wait_queues_disabled function, which was previously DCB-specific. Second, we need to implement our own i40e_vsi_stop_rings_no_wait function which will handle the stopping of rings without the delays. Finally, implement an i40e_reset_all_vfs function, which will first start the reset of all VFs and pay the wait cost all at once, rather than serially waiting for each VF before we start processing the next one. After the VFs have been reset, we'll disable all the VF queues and then wait for them to disable. Again, we'll organize the flow such that we pay the wait cost only once. Finally, after we've disabled the queues we'll go ahead and begin restoring VF functionality. The result is reducing the wait time by a large factor and ensuring that VFs do not time out when waiting in the VF driver. Change-ID: Ia6e8cf8d98131b78aec89db78afb8d905c9b12be Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
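A hedged sketch of the restructured flow described above; every type and helper name here is a placeholder for illustration and does not match the real i40e functions:

    /* Sketch only: trigger everything first, then pay each wait cost once
     * for all VFs instead of once per VF. */
    static void reset_all_vfs_sketch(struct pf_ctx *pf, int num_vfs)
    {
            int v;

            for (v = 0; v < num_vfs; v++)
                    trigger_vf_reset(pf, v);        /* start every reset, no waiting */

            wait_for_vf_resets(pf, num_vfs);        /* single wait for all resets */

            for (v = 0; v < num_vfs; v++)
                    stop_vf_rings_no_wait(pf, v);   /* disable queues, no waiting */

            wait_for_rings_disabled(pf);            /* single 50ms-class wait */

            for (v = 0; v < num_vfs; v++)
                    finish_vf_reset(pf, v);         /* restore VF functionality */
    }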
2017-04-19  i40e: split some code in i40e_reset_vf into helpers  [Jacob Keller]
A future patch is going to want to re-use some of the code in i40e_reset_vf, so let's break up the beginning and ending parts into their own helper functions. The first function will be used to initialize the reset on a VF, while the second function will be used to finalize the reset and restore functionality. Change-ID: I48df808b8bf09de3c2ed8c521f57b3f0ab9e5907 Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e: remove I40E_FLAG_IN_NETPOLL entirely  [Jacob Keller]
This flag was originally intended to be used to let some driver code know when we were running from netpoll. Ultimately this was not necessary and we never used it, so let's remove it. Change-ID: I43b72483d91c1638071d2a7f389ab171ec5b796a Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-20  KVM: PPC: Provide functions for queueing up FP/VEC/VSX unavailable interrupts  [Paul Mackerras]
This provides functions that can be used for generating interrupts indicating that a given functional unit (floating point, vector, or VSX) is unavailable. These functions will be used in instruction emulation code. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
2017-04-19  i40e: reduce wait time for adminq command completion  [Jacob Keller]
When sending an adminq command, we wait for the command to complete in a loop that delays for an entire millisecond per iteration, when in practice the adminq command is often processed much faster. Change the loop to use i40e_usec_delay instead, waiting 50 usecs each time. This appears to be about the minimum time required, based on some manual observation and testing. The primary benefit of this change is reducing the latency of various operations in the PF driver, especially when a large number of VFs are enabled. For example, on Linux, when instantiating 128 VFs, the time to finish the operation dropped from about 9 seconds down to under 6 seconds. Additionally, the time it takes to finish a PF reset with 128 VFs dropped from 5.1 seconds down to 0.7 seconds. As the examples above show, a significant portion of the delay was wasted waiting for adminq operations which had already finished. This patch shouldn't impact functionality, as we still check and keep waiting until the command does get processed. The only expected change is an increase in CPU utilization, as we now check for completion far more often; in practice, however, the commands generally appear to complete within the first delay window anyway. Change-ID: If8af8388e100da0a14eaf9e1af3afadf73a958cf Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
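A hedged sketch of the polling-loop change described above; the context type, completion check, and timeout value are placeholders, while udelay() is the standard kernel microsecond-delay helper:

    #include <linux/delay.h>

    /* Sketch only: poll in ~50 usec steps rather than sleeping a whole
     * millisecond per iteration, so short adminq commands finish sooner. */
    static bool wait_for_adminq_completion(struct hw_ctx *hw, unsigned int timeout_usec)
    {
            unsigned int total_delay = 0;

            do {
                    if (adminq_command_done(hw))    /* placeholder for the real check */
                            return true;
                    udelay(50);                     /* was: a 1 ms delay per iteration */
                    total_delay += 50;
            } while (total_delay < timeout_usec);

            return false;
    }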
2017-04-19  i40e: fix CONFIG_BUSY checks in i40e_set_settings function  [Jacob Keller]
The check for the I40E_CONFIG_BUSY state bit in the i40e_set_link_ksettings function is fishy. A similar check was introduced by commit c7d05ca89f8e ("i40e: driver ethtool core"), and a check guarding the link settings was later added by commit bf9c71417f72 ("i40e: Implement set_settings for ethtool"). However, this second check was against vsi->state instead of pf->state, and it also failed to set the bit; it only checked it. That indicates the locking was not quite correct. The only other place the state bit in vsi->state gets used is to protect the filter list. Since this code does not care about the MAC filter list, it seems clear the original code should have set the pf->state bit. Fix these issues by using pf->state correctly, and by actually setting the bit so that we properly lock as expected. Since these checks occur while holding rtnl_lock(), let's also add a timeout so that we don't potentially softlock the system. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
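A hedged sketch of the corrected locking described above; the state field, bit name, and retry count are taken loosely from the commit text and may not match the driver exactly:

    #include <linux/bitops.h>
    #include <linux/delay.h>
    #include <linux/errno.h>

    /* Sketch only: actually *set* the busy bit (not just test it), and give
     * up after a bounded number of attempts so we cannot spin forever while
     * holding rtnl_lock(). */
    static int set_link_settings_sketch(struct i40e_pf *pf)
    {
            int timeout = 50;

            while (test_and_set_bit(__I40E_CONFIG_BUSY, &pf->state)) {
                    usleep_range(1000, 2000);
                    if (--timeout == 0)
                            return -EBUSY;      /* bail out instead of softlocking */
            }

            /* ... apply the new link settings here ... */

            clear_bit(__I40E_CONFIG_BUSY, &pf->state);
            return 0;
    }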
2017-04-19  Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux  [Linus Torvalds]
Pull clk fixes from Stephen Boyd: - one stm32f4 fix for a change that introduced the PLL_I2S and PLL_SAI boards - two Allwinner clk driver build fixes - two Allwinner CPU clk driver fixes where we see random CPUFreq crashes because the CPU's PLL locks up sometimes when we change the rate * tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux: clk: sunxi-ng: a33: gate then ungate PLL CPU clk after rate change clk: sunxi-ng: Add clk notifier to gate then ungate PLL clocks clk: sunxi-ng: fix build failure in ccu-sun9i-a80 driver clk: sunxi-ng: fix build error without CONFIG_RESET_CONTROLLER clk: stm32f4: fix: exclude values 0 and 1 for PLLQ
2017-04-19  i40e: factor out queue control from i40e_vsi_control_(tx|rx)  [Jacob Keller]
A future patch will need to be able to handle controlling queues without waiting until all VSIs are handled. Factor out the direct queue modification so that we can easily re-use this code. The result is also a bit easier to read since we don't embed multiple single-letter loop counters. Change-ID: Id923cbfa43127b1c24d8ed4f809b1012c736d9ac Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6  [Linus Torvalds]
Pull CIFS fix from Steve French: "One more cifs fix for stable" * 'for-next' of git://git.samba.org/sfrench/cifs-2.6: cifs: Do not send echoes before Negotiate is complete
2017-04-19  i40e: don't hold RTNL lock while waiting for VF reset to finish  [Jacob Keller]
We made some effort to reduce the RTNL lock scope when resetting and rebuilding the PF. Unfortunately we still held the RTNL lock during the VF reset operation, which meant that multiple PFs could not reset in parallel due to the global lock. For now, further reduce the scope by not holding the RTNL lock while resetting VFs. This allows multiple PFs to reset in a timely manner. Change-ID: I2fbf823a0063f24dff67676cad09f0bbf83ee4ce Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e: new AQ commands  [Jingjing Wu]
Add admin queue functions for Pipeline Personalization Profile AQ commands: - Write Recipe Command buffer (Opcode: 0x0270) - Get Applied Profiles list (Opcode: 0x0271) Change-ID: I558b4145364140f624013af48d4bbf79d21ebb0d Signed-off-by: Jingjing Wu <jingjing.wu@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e/i40evf: Add tracepoints  [Scott Peterson]
This patch adds tracepoints to the i40e and i40evf drivers to which BPF programs can be attached for feature testing and verification. It's expected that an attached BPF program will identify and count or log some interesting subset of traffic. The bcc-tools package is helpful there for containing all the BPF arcana in a handy Python wrapper. Though you can make these tracepoints log trace messages, the messages themselves probably won't be very useful (other than to verify that the tracepoint is being called while you're debugging your BPF program). The idea here is that tracepoints have such low performance cost when disabled that we can leave these in the upstream drivers. This may eventually enable the instrumentation of unmodified customer systems should the need arise to verify a NIC feature is working as expected. In general this enables one set of feature verification tools to be used on these drivers whether they're built with the kernel or separately. Users are advised against using these tracepoints for anything other than a diagnostic tool. They have a performance impact when enabled, and their exact placement and form may change as we see how well they work in practice for the purposes above. Change-ID: Id6014a7322c0e6d08068114dd20bd156f2f6435e Signed-off-by: Scott Peterson <scott.d.peterson@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  block: Optimize ioprio_best()  [Bart Van Assche]
Since ioprio_best() translates IOPRIO_CLASS_NONE into IOPRIO_CLASS_BE and since lower numerical priority values represent a higher priority a simple numerical comparison is sufficient. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Adam Manzanares <adam.manzanares@wdc.com> Tested-by: Adam Manzanares <adam.manzanares@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
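Based on the description above, the simplified helper amounts to something like the following sketch (ioprio_valid(), IOPRIO_PRIO_VALUE(), and the class constants are the existing kernel ioprio helpers; the exact body of the real patch may differ):

    #include <linux/ioprio.h>
    #include <linux/kernel.h>   /* min() */

    int ioprio_best(unsigned short aprio, unsigned short bprio)
    {
            /* Treat "no class set" as the default best-effort priority ... */
            if (!ioprio_valid(aprio))
                    aprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);
            if (!ioprio_valid(bprio))
                    bprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, IOPRIO_NORM);

            /* ... then the numerically smaller value is the higher priority. */
            return min(aprio, bprio);
    }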
2017-04-19  block: Inline blk_rq_set_prio()  [Bart Van Assche]
Since only a single caller remains, inline blk_rq_set_prio(). Initialize req->ioprio even if no I/O priority has been set in the bio nor in the I/O context. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Adam Manzanares <adam.manzanares@wdc.com> Tested-by: Adam Manzanares <adam.manzanares@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  lightnvm: Use blk_init_request_from_bio() instead of open-coding it  [Bart Van Assche]
This patch changes the behavior of the lightnvm driver as follows: * REQ_FAILFAST_MASK is set for read-ahead requests. * If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context. * The rq_disk member is initialized if bio->bi_bdev != NULL. * The bio sector offset is copied into req->__sector instead of retaining the value -1 set by blk_mq_alloc_request(). * req->errors is initialized to zero. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  null_blk: Use blk_init_request_from_bio() instead of open-coding it  [Bart Van Assche]
This patch changes the behavior of the null_blk driver for the LightNVM mode as follows: * REQ_FAILFAST_MASK is set for read-ahead requests. * If no I/O priority has been set in the bio, the I/O priority is copied from the I/O context. * The rq_disk member is initialized if bio->bi_bdev != NULL. * req->errors is initialized to zero. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  block: Export blk_init_request_from_bio()  [Bart Van Assche]
Export this function such that it becomes available to block drivers. Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Matias Bjørling <m@bjorling.me> Cc: Adam Manzanares <adam.manzanares@wdc.com> Signed-off-by: Jens Axboe <axboe@fb.com>
2017-04-19  i40e: dump VF information in debugfs  [Mitch Williams]
Dump some internal state about VFs through debugfs. This provides information not available with 'ip link show'. To use, write "dump vf <id>" to the command file, or just "dump vf" to dump information on all of the VFs. Change-ID: Ibe32b7f4ae55d4358c0b903217475f708ada1ecd Signed-off-by: Mitch Williams <mitch.a.williams@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e: Fix support for flow director programming status  [Alexander Duyck]
This patch fixes an issue I introduced when I converted the code over to using the length field to determine if a descriptor was done or not. It turns out that we are also processing programming descriptors in the Rx path and need to have these processed even though the length field will be 0 on these packets. What will happen with a programming descriptor is that we will receive a descriptor that has the SPH bit set, and the header length and packet length fields cleared. To account for this we should check for the split header bit being set even though we aren't actually using header split. This bit is set in the length field to indicate if a programming descriptor response is contained in the descriptor. Since we don't support header split we don't need to perform the extra checks of using a fixed value for the entire length field. In addition, I am moving the function for checking if a filter is a programming status filter into the i40e_txrx.c file; since there is no longer support for FCoE, it doesn't make sense to keep this function in i40e.h. Change-ID: I12c359c3dc70adb9d6b92b27324bb2c7f04c1a06 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40e/i40evf: Remove VF Rx csum offload for tunneled packets  [alice michael]
Rx checksum offload for tunneled packets was never being negotiated or requested by VF. This capability was assumed by default and enabled in current hardware for VF. Going forward, this feature needs to be disabled or advanced ptypes should be negotiated with PF in the future. Change-ID: I9e54cfa8a90e03ab6956db4412f1e337ccd2c2e0 Signed-off-by: Preethi Banala <preethi.banala@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  i40evf: Use net_device_stats from struct net_device  [Tobias Klauser]
Instead of using a private copy of struct net_device_stats in struct i40evf_adapter, use stats from struct net_device. Also remove the now unnecessary .ndo_get_stats function. Signed-off-by: Tobias Klauser <tklauser@distanz.ch> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
2017-04-19  scsi: storvsc: Add support for FC rport.  [Cathy Avery]
Included in the current storvsc driver for Hyper-V is the ability to access luns on an FC fabric via a virtualized fiber channel adapter exposed by the Hyper-V host. The driver also attaches to the FC transport to allow host and port names to be published under /sys/class/fc_host/hostX. Current customer tools running on the VM require that these names be available in the well known standard location under fc_host/hostX. This patch stubs in an rport per fc_host and sets its rport role as FC_PORT_ROLE_FCP_DUMMY_INITIATOR to indicate to the fc_transport that it is a pseudo rport in order to scan the scsi stack via echo "- - -" > /sys/class/scsi_host/hostX/scan. Signed-off-by: Cathy Avery <cavery@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-04-19  scsi: scsi_transport_fc: Add dummy initiator role to rport  [Cathy Avery]
This patch allows scsi drivers that expose virtualized fibre channel devices but that do not expose rports to successfully rescan the scsi bus via echo "- - -" > /sys/class/scsi_host/hostX/scan. Drivers can create a pseudo rport and indicate FC_PORT_ROLE_FCP_DUMMY_INITIATOR as the rport's role in fc_rport_identifiers. This ensures that a valid scsi_target_id is assigned to the newly created rport and that it can meet the requirements of fc_user_scan_tgt calling scsi_scan_target. Signed-off-by: Cathy Avery <cavery@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
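A hedged sketch of how a driver might use the new role when creating its pseudo rport; the zeroed WWN/port values are placeholders and the surrounding function is illustrative only:

    #include <scsi/scsi_host.h>
    #include <scsi/scsi_transport_fc.h>

    /* Sketch only: publish one pseudo rport per fc_host so that
     * "echo '- - -' > /sys/class/scsi_host/hostX/scan" can work. */
    static struct fc_rport *add_dummy_initiator_rport(struct Scsi_Host *shost)
    {
            struct fc_rport_identifiers ids = {
                    .node_name = 0,     /* placeholder WWNN */
                    .port_name = 0,     /* placeholder WWPN */
                    .port_id   = 0,     /* placeholder port id */
                    .roles     = FC_PORT_ROLE_FCP_DUMMY_INITIATOR,
            };

            return fc_remote_port_add(shost, 0, &ids);
    }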
2017-04-19  scsi: virtio_scsi: Always try to read VPD pages  [David Gibson]
Passed through SCSI targets may have transfer limits which come from the host SCSI controller or something on the host side other than the target itself. To make this work properly, the hypervisor can adjust the target's VPD information to advertise these limits. But for that to work, the guest has to look at the VPD pages, which we won't do by default if it is an SPC-2 device, even if it does actually support it. This adds a workaround to address this, forcing devices attached to a virtio-scsi controller to always check the VPD pages. This is modelled on a similar workaround for the storvsc (Hyper-V) SCSI controller, although that exists for slightly different reasons. A specific case which causes this is a volume from IBM's IPR RAID controller (which presents as an SPC-2 device, although it does support VPD) passed through with qemu's 'scsi-block' device. [mkp: fixed typo] Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Acked-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-04-19  nsfs: mark dentry with DCACHE_RCUACCESS  [Cong Wang]
Andrey reported a use-after-free in __ns_get_path(): spin_lock include/linux/spinlock.h:299 [inline] lockref_get_not_dead+0x19/0x80 lib/lockref.c:179 __ns_get_path+0x197/0x860 fs/nsfs.c:66 open_related_ns+0xda/0x200 fs/nsfs.c:143 sock_ioctl+0x39d/0x440 net/socket.c:1001 vfs_ioctl fs/ioctl.c:45 [inline] do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:685 SYSC_ioctl fs/ioctl.c:700 [inline] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691 We are under rcu read lock protection at that point: rcu_read_lock(); d = atomic_long_read(&ns->stashed); if (!d) goto slow; dentry = (struct dentry *)d; if (!lockref_get_not_dead(&dentry->d_lockref)) goto slow; rcu_read_unlock(); but don't use a proper RCU API on the free path, therefore a parallel __d_free() could free it at the same time. We need to mark the stashed dentry with DCACHE_RCUACCESS so that __d_free() will be called after all readers leave RCU. Fixes: e149ed2b805f ("take the targets of /proc/*/ns/* symlinks to separate fs") Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
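The essence of the fix, sketched below; the exact placement inside __ns_get_path() may differ:

    /* Sketch only: mark the stashed dentry so __d_free() defers the actual
     * free until after an RCU grace period, making the lockless
     * lockref_get_not_dead() fast path above safe. */
    dentry->d_flags |= DCACHE_RCUACCESS;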
2017-04-19  mm: make mm_percpu_wq non freezable  [Michal Hocko]
Geert has reported a freeze during PM resume and some additional debugging has shown that the device_resume worker cannot make a forward progress because it waits for an event which is stuck waiting in drain_all_pages: INFO: task kworker/u4:0:5 blocked for more than 120 seconds. Not tainted 4.11.0-rc7-koelsch-00029-g005882e53d62f25d-dirty #3476 "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. kworker/u4:0 D 0 5 2 0x00000000 Workqueue: events_unbound async_run_entry_fn __schedule schedule schedule_timeout wait_for_common dpm_wait_for_superior device_resume async_resume async_run_entry_fn process_one_work worker_thread kthread [...] bash D 0 1703 1694 0x00000000 __schedule schedule schedule_timeout wait_for_common flush_work drain_all_pages start_isolate_page_range alloc_contig_range cma_alloc __alloc_from_contiguous cma_allocator_alloc __dma_alloc arm_dma_alloc sh_eth_ring_init sh_eth_open sh_eth_resume dpm_run_callback device_resume dpm_resume dpm_resume_end suspend_devices_and_enter pm_suspend state_store kernfs_fop_write __vfs_write vfs_write SyS_write [...] Showing busy workqueues and worker pools: [...] workqueue mm_percpu_wq: flags=0xc pwq 2: cpus=1 node=0 flags=0x0 nice=0 active=0/0 delayed: drain_local_pages_wq, vmstat_update pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=0/0 delayed: drain_local_pages_wq BAR(1703), vmstat_update Tetsuo has properly noted that mm_percpu_wq is created as WQ_FREEZABLE so it is frozen this early during resume so we are effectively deadlocked. Fix this by dropping WQ_FREEZABLE when creating mm_percpu_wq. We really want to have it operational all the time. Fixes: ce612879ddc7 ("mm: move pcp and lru-pcp draining into single wq") Reported-and-tested-by: Geert Uytterhoeven <geert@linux-m68k.org> Debugged-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp> Signed-off-by: Michal Hocko <mhocko@suse.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
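A hedged sketch of the one-line nature of the fix; the exact flags in the real patch may differ slightly:

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/workqueue.h>

    static struct workqueue_struct *mm_percpu_wq;

    static int __init init_mm_wq_sketch(void)
    {
            /* Sketch only: drop WQ_FREEZABLE so the workqueue keeps running
             * across suspend/resume; previously the flags were roughly
             * WQ_FREEZABLE | WQ_MEM_RECLAIM. */
            mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
            return mm_percpu_wq ? 0 : -ENOMEM;
    }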
2017-04-19  dcssblk: add dax_operations support  [Dan Williams]
Setup a dax_dev to have the same lifetime as the dcssblk block device and add a ->direct_access() method that is equivalent to dcssblk_direct_access(). Once fs/dax.c has been converted to use dax_operations the old dcssblk_direct_access() will be removed. Reported-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-19  brd: add dax_operations support  [Dan Williams]
Setup a dax_inode to have the same lifetime as the brd block device and add a ->direct_access() method that is equivalent to brd_direct_access(). Once fs/dax.c has been converted to use dax_operations the old brd_direct_access() will be removed. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-19  axon_ram: add dax_operations support  [Dan Williams]
Setup a dax_device to have the same lifetime as the axon_ram block device and add a ->direct_access() method that is equivalent to axon_ram_direct_access(). Once fs/dax.c has been converted to use dax_operations the old axon_ram_direct_access() will be removed. Reported-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-19  pmem: add dax_operations support  [Dan Williams]
Setup a dax_device to have the same lifetime as the pmem block device and add a ->direct_access() method that is equivalent to pmem_direct_access(). Once fs/dax.c has been converted to use dax_operations the old pmem_direct_access() will be removed. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-19  dax: introduce dax_operations  [Dan Williams]
Track a set of dax_operations per dax_device that can be set at alloc_dax() time. These operations will be used to stop the abuse of block_device_operations for communicating dax capabilities to filesystems. It will also be used to replace the "pmem api" and move pmem-specific cache maintenance, and other dax-driver-specific filesystem-dax operations, to dax device methods. In particular this allows us to stop abusing __copy_user_nocache(), via memcpy_to_pmem(), with a driver specific replacement. This is a standalone introduction of the operations. Follow on patches convert each dax-driver and teach fs/dax.c to use ->direct_access() from dax_operations instead of block_device_operations. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
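A rough, hedged sketch of what such an operations table looks like; the real struct dax_operations and the alloc_dax() signature may differ in detail:

    /* Sketch only: dax capabilities become methods on the dax_device
     * instead of overloading block_device_operations. */
    struct dax_operations {
            /* Map nr_pages at pgoff; return the mapped length in pages and
             * fill in the kernel address and pfn for the mapping. */
            long (*direct_access)(struct dax_device *dax_dev, pgoff_t pgoff,
                                  long nr_pages, void **kaddr, pfn_t *pfn);
    };

    /* Drivers would then register their ops at allocation time, roughly: */
    dax_dev = alloc_dax(driver_private, disk_name, &driver_dax_ops);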
2017-04-19  dax: add a facility to lookup a dax device by 'host' device name  [Dan Williams]
For the current block_device based filesystem-dax path, we need a way for it to lookup the dax_device associated with a block_device. Add a 'host' property of a dax_device that can be used for this purpose. It is a free form string, but for a dax_device associated with a block device it is the bdev name. This is a stop-gap until filesystems are able to mount on a dax-inode directly. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-19  Merge tag 'backlight-for-v4.11' of git://git.linaro.org/people/daniel.thompson/linux  [Linus Torvalds]
Pull backlight fix from Daniel Thompson: "Normally pull requests for backlight come from Lee Jones (and will continue to do so) but the bug fixed here is annoying for a few people so I'm providing a little holiday cover. Fix a single bug in the PWM backlight driver and make it play nice with a wider range of GPIO devices. This bug is a regression and was independently discovered by Geert Uytterhoeven and Paul Kocialkowski (and is tested by both)" * tag 'backlight-for-v4.11' of git://git.linaro.org/people/daniel.thompson/linux: backlight: pwm_bl: Fix GPIO out for unimplemented .get_direction()
2017-04-19  PM / runtime: Document autosuspend-helper side effects  [Johan Hovold]
Document the fact that the autosuspend delay and enable helpers may change the power.usage_count and resume or suspend a device depending on the values of power.autosuspend_delay and power.use_autosuspend. Note that this means that a driver must disable autosuspend before disabling runtime pm on probe errors and on driver unbind if the device is to be suspended upon return (as a negative delay may otherwise keep the device resumed). Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-04-19  PM / runtime: Fix autosuspend documentation  [Johan Hovold]
Update the autosuspend documentation which claimed that the autosuspend delay is not taken into account when using the non-autosuspend helper functions, something which is no longer true since commit d66e6db28df3 ("PM / Runtime: Respect autosuspend when idle triggers suspend"). This specifically means that drivers must now disable autosuspend before disabling runtime pm in probe error paths and remove callbacks if pm_runtime_put_sync was being used to suspend the device before returning. (If an idle callback can prevent suspend, pm_runtime_put_sync_suspend must be used instead of pm_runtime_put_sync as before.) Also remove the claim that the autosuspend helpers behave "just like the non-autosuspend counterparts", something which have never really been true as some of the latter use idle notifications. Signed-off-by: Johan Hovold <johan@kernel.org> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
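A minimal, hedged sketch of the probe error-path rule spelled out in the two entries above; foo_hw_init() and the delay value are made up for illustration, while the pm_runtime_* helpers are the standard runtime-PM API:

    #include <linux/platform_device.h>
    #include <linux/pm_runtime.h>

    static int foo_probe(struct platform_device *pdev)
    {
            int ret;

            pm_runtime_set_autosuspend_delay(&pdev->dev, 1000);  /* illustrative delay */
            pm_runtime_use_autosuspend(&pdev->dev);
            pm_runtime_enable(&pdev->dev);

            ret = foo_hw_init(pdev);    /* hypothetical driver init that may fail */
            if (ret) {
                    /* Disable autosuspend *before* disabling runtime PM, so that
                     * e.g. a negative autosuspend delay cannot keep the device
                     * resumed after we return the error. */
                    pm_runtime_dont_use_autosuspend(&pdev->dev);
                    pm_runtime_disable(&pdev->dev);
                    return ret;
            }

            return 0;
    }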