Age    Commit message    Author
2017-04-25  nfsd: check for oversized NFSv2/v3 arguments  (J. Bruce Fields)
A client can append random data to the end of an NFSv2 or NFSv3 RPC call without our complaining; we'll just stop parsing at the end of the expected data and ignore the rest. Encoded arguments and replies are stored together in an array of pages, and if a call is too large it could leave inadequate space for the reply. This is normally OK because NFS RPCs typically have either short arguments and long replies (like READ) or long arguments and short replies (like WRITE). But a client that sends an incorrectly long call can violate those assumptions. This was observed to cause crashes. Also, several operations increment rq_next_page in the decode routine before checking the argument size, which can leave rq_next_page pointing well past the end of the page array, causing trouble later in svc_free_pages. So, following a suggestion from Neil Brown, add a central check to enforce our expectation that no NFSv2/v3 request has both a large argument and a large reply. As a followup we may also want to rewrite the encoding routines to check more carefully that they aren't running off the end of the page array. We may also consider rejecting calls that have any extra garbage appended. That would be safer, and within our rights by spec, but given the age of our server and the NFS protocol, and the fact that we've never enforced this before, we may need to balance that against the possibility of breaking some oddball client. Reported-by: Tuomas Haanpää <thaan@synopsys.com> Reported-by: Ari Kauppi <ari@synopsys.com> Cc: stable@vger.kernel.org Reviewed-by: NeilBrown <neilb@suse.com> Signed-off-by: J. Bruce Fields <bfields@redhat.com>
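To illustrate the idea of the central check, a minimal user-space sketch (not the actual nfsd code; the function and parameter names are hypothetical):

    #include <stdbool.h>
    #include <stddef.h>

    /* Reject a request whose decoded argument would leave too little
     * room in the shared page buffer for the largest possible reply.
     * All names here are illustrative only. */
    static bool request_too_large(size_t arg_bytes, size_t max_reply_bytes,
                                  size_t shared_buf_bytes)
    {
            return arg_bytes + max_reply_bytes > shared_buf_bytes;
    }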
2017-04-25  net: move xdp_prog field in RX cache lines  (Eric Dumazet)
The (struct net_device, xdp_prog) field should be moved into the RX cache lines, reducing latency when a single packet is received on an idle host, since netif_elide_gro() needs it. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
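A toy illustration of the layout idea (not the actual struct net_device): fields the RX fast path reads for every packet are grouped so a single cache line covers them.

    struct bpf_prog;                        /* opaque here */

    struct toy_netdev {
            /* RX hot fields, read once per received packet */
            void *rx_handler_data;
            struct bpf_prog *xdp_prog;      /* checked by netif_elide_gro() */
            /* ... colder, mostly-written fields follow ... */
            unsigned long tx_dropped;
    };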
2017-04-25  NFSv3: nfs3_nlm_alloc_call should be declared static  (Trond Myklebust)
Fix compiler warnings. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-04-25  x86, dax, pmem: remove indirection around memcpy_from_pmem()  (Dan Williams)
memcpy_from_pmem() maps directly to memcpy_mcsafe(). The wrapper provides no real benefit aside from affording a more generic function name than the x86-specific 'mcsafe'. However, this would not be the first time that x86 terminology leaked into the global namespace. For lack of a better name, just use memcpy_mcsafe() directly. This conversion also catches a place where we should have been using plain memcpy, acpi_nfit_blk_single_io(). Cc: <x86@kernel.org> Cc: Jan Kara <jack@suse.cz> Cc: Jeff Moyer <jmoyer@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Christoph Hellwig <hch@lst.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Matthew Wilcox <mawilcox@microsoft.com> Cc: Ross Zwisler <ross.zwisler@linux.intel.com> Acked-by: Tony Luck <tony.luck@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
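A hedged sketch of a caller after the conversion (kernel-style fragment; the return convention assumed here is zero on success, non-zero if a machine check was consumed mid-copy):

    static int read_from_pmem(void *dst, const void *pmem_src, size_t len)
    {
            if (memcpy_mcsafe(dst, pmem_src, len))
                    return -EIO;    /* media error: report it, don't panic */
            return 0;
    }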
2017-04-25  block: remove block_device_operations ->direct_access()  (Dan Williams)
Now that all the producers and consumers of dax interfaces have been converted to using dax_operations on a dax_device, remove the block device direct_access enabling. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-25  block, dax: convert bdev_dax_supported() to dax_direct_access()  (Dan Williams)
Kill off the final user of bdev_direct_access() and struct blk_dax_ctl. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-25  filesystem-dax: convert to dax_direct_access()  (Dan Williams)
Now that a dax_device is plumbed through all dax-capable drivers we can switch from block_device_operations to dax_operations for invoking ->direct_access. This also lets us kill off some usages of struct blk_dax_ctl on the way to its eventual removal. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
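A hedged sketch of the new call pattern (the dax_direct_access() signature assumed here is (dax_dev, pgoff, nr_pages, &kaddr, &pfn), returning the number of pages available at that offset or a negative errno):

    void *kaddr;
    pfn_t pfn;
    long avail;

    avail = dax_direct_access(dax_dev, pgoff, nr_pages, &kaddr, &pfn);
    if (avail < 0)
            return avail;   /* mapping error */
    /* kaddr now points at 'avail' pages of directly addressable memory */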
2017-04-25Revert "block: use DAX for partition table reads"Dan Williams
commit d1a5f2b4d8a1 ("block: use DAX for partition table reads") was part of a stalled effort to allow dax mappings of block devices. Since then the device-dax mechanism has filled the role of dax-mapping static device ranges. Now that we are moving ->direct_access() from a block_device operation to a dax_inode operation we would need block devices to map and carry their own dax_inode reference. Unless / until we decide to revive dax mapping of raw block devices through the dax_inode scheme, there is no need to carry read_dax_sector(). Its removal in turn allows for the removal of bdev_direct_access() and should have been included in commit 223757016837 ("block_dev: remove DAX leftovers"). Cc: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-25  ext2, ext4, xfs: retrieve dax_device for iomap operations  (Dan Williams)
In preparation for converting fs/dax.c to use dax_direct_access() instead of bdev_direct_access(), add the plumbing to retrieve the dax_device associated with a given block_device. Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-25  dm: teach dm-targets to use a dax_device + dax_operations  (Dan Williams)
Arrange for dm to look up the dax services available from member devices. Update the dax-capable targets, linear and stripe, to route dax operations to the underlying device. Change the target-internal ->direct_access() method to more closely align with the dax_operations ->direct_access() calling convention. Cc: Toshi Kani <toshi.kani@hpe.com> Reviewed-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2017-04-25  xprtrdma: Remove rpcrdma_buffer::rb_pool  (Chuck Lever)
Since commit 1e465fd4ff47 ("xprtrdma: Replace send and receive arrays"), this field is no longer used. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  sunrpc: Fix xdr_init_decode_pages() documenting comment  (Chuck Lever)
Clean up. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Squelch ENOBUFS warnings  (Chuck Lever)
When ro_map is out of buffers, that's not a permanent error, so don't report a problem. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Annotate receive workqueue  (Chuck Lever)
Micro-optimize the receive workqueue by marking its anchor "read-mostly". Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
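The annotation in question is roughly of this shape (the variable name is illustrative); __read_mostly groups the object away from frequently written data:

    static struct workqueue_struct *receive_wq __read_mostly;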
2017-04-25  xprtrdma: Revert commit d0f36c46deea  (Chuck Lever)
Device removal is now adequately supported. Pinning the underlying device driver to prevent removal while an NFS mount is active is no longer necessary. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Restore transport after device removal  (Chuck Lever)
After a device removal, enable the transport connect worker to restore normal operation if there is another device with connectivity to the server. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Refactor rpcrdma_ep_connect  (Chuck Lever)
I'm about to add another arm to the "if (ep->rep_connected != 0)" test, so it will be cleaner to use a switch statement here. We'll be looking for a couple of specific errnos, or "anything else," basically to sort out the difference between a normal reconnect and recovery from device removal. This is a refactoring change only. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
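A sketch of the shape the refactor enables (fragment only; the specific errno shown for a future arm is illustrative):

    switch (ep->rep_connected) {
    case 0:
            /* first connect */
            break;
    case -ENODEV:                   /* illustrative errno */
            /* e.g. recover from device removal */
            break;
    default:
            /* ordinary reconnect */
            break;
    }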
2017-04-25  xprtrdma: Support unplugging an HCA from under an NFS mount  (Chuck Lever)
The device driver for the underlying physical device associated with an RPC-over-RDMA transport can be removed while RPC-over-RDMA transports are still in use (ie, while NFS filesystems are still mounted and active). The IB core performs a connection event upcall to request that consumers free all RDMA resources associated with a transport. There may be pending RPCs when this occurs. Care must be taken to release associated resources without leaving references that can trigger a subsequent crash if a signal or soft timeout occurs. We rely on the caller of the transport's ->close method to ensure that the previous RPC task has invoked xprt_release but the transport remains write-locked. A DEVICE_REMOVE upcall forces a disconnect then sleeps. When ->close is invoked, it destroys the transport's H/W resources, then wakes the upcall, which completes and allows the core driver unload to continue. BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=266 Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Use same device when mapping or syncing DMA buffers  (Chuck Lever)
When the underlying device driver is reloaded, ia->ri_device will be replaced. All cached copies of that device pointer have to be updated as well. Commit 54cbd6b0c6b9 ("xprtrdma: Delay DMA mapping Send and Receive buffers") added the rg_device field to each regbuf. As part of handling a device removal, rpcrdma_dma_unmap_regbuf is invoked on all regbufs for a transport. Simply calling rpcrdma_dma_map_regbuf for each Receive buffer after the driver has been reloaded should reinitialize rg_device correctly for every case except rpcrdma_wc_receive, which still uses rpcrdma_rep::rr_device. Ensure the same device that was used to map a Receive buffer is also used to sync it in rpcrdma_wc_receive by using rg_device there instead of rr_device. This is the only use of rr_device, so it can be removed. The use of regbufs in the send path is also updated, for completeness. Fixes: 54cbd6b0c6b9 ("xprtrdma: Delay DMA mapping Send and ... ") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Refactor rpcrdma_ia_open()  (Chuck Lever)
In order to unload a device driver and reload it, xprtrdma will need to close a transport's interface adapter, and then call rpcrdma_ia_open again, possibly finding a different interface adapter. Make rpcrdma_ia_open safe to call on the same transport multiple times. This is a refactoring change only. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  xprtrdma: Detect unreachable NFS/RDMA servers more reliably  (Chuck Lever)
Current NFS clients rely on connection loss to determine when to retransmit. In particular, for protocols like NFSv4, clients no longer rely on RPC timeouts to drive retransmission: NFSv4 servers are required to terminate a connection when they need a client to retransmit pending RPCs. When a server is no longer reachable, either because it has crashed or because the network path has broken, the server cannot actively terminate a connection. Thus NFS clients depend on transport-level keepalive to determine when a connection must be replaced and pending RPCs retransmitted. However, RDMA RC connections do not have a native keepalive mechanism. If an NFS/RDMA server crashes after a client has sent RPCs successfully (an RC ACK has been received for all OTW RDMA requests), there is no way for the client to know the connection is moribund. In addition, new RDMA requests are subject to the RPC-over-RDMA credit limit. If the client has consumed all granted credits with NFS traffic, it is not allowed to send another RDMA request until the server replies. Thus it has no way to send a true keepalive when the workload has already consumed all credits with pending RPCs. To address this, forcibly disconnect a transport when an RPC times out. This prevents moribund connections from stopping the detection of failover or other configuration changes on the server. Note that even if the connection is still good, retransmitting any RPC will trigger a disconnect thanks to this logic in xprt_rdma_send_request: /* Must suppress retransmit to maintain credits */ if (req->rl_connect_cookie == xprt->connect_cookie) goto drop_connection; req->rl_connect_cookie = xprt->connect_cookie; Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
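A hedged sketch of what "forcibly disconnect on RPC timeout" can look like at the transport level (the callback name is illustrative and the ->timer method signature is assumed):

    static void example_rdma_timer(struct rpc_xprt *xprt, struct rpc_task *task)
    {
            /* kick the connection so pending RPCs get retransmitted
             * on a fresh one */
            xprt_force_disconnect(xprt);
    }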
2017-04-25  sunrpc: Export xprt_force_disconnect()  (Chuck Lever)
xprt_force_disconnect() is already invoked from the socket transport. I want to invoke xprt_force_disconnect() from the RPC-over-RDMA transport, which is a separate module from sunrpc.ko. Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
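The export itself is a one-line change of this shape (sketch, not the exact patch):

    void xprt_force_disconnect(struct rpc_xprt *xprt)
    {
            /* existing body unchanged */
    }
    EXPORT_SYMBOL_GPL(xprt_force_disconnect);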
2017-04-25  xprtrdma: Cancel refresh worker during buffer shutdown  (Chuck Lever)
Trying to create MRs while the transport is being torn down can cause a crash. Fixes: e2ac236c0b65 ("xprtrdma: Allocate MRs on demand") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
2017-04-25  dm crypt: remove obsolete references to per-CPU state  (Eric Biggers)
dm-crypt used to use separate crypto transforms for each CPU, but this is no longer the case. To avoid confusion, fix up obsolete comments and rename setup_essiv_cpu(). Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
2017-04-25  iwlwifi: adjust NVM parsing APIs for new a000 method  (Sara Sharon)
In a000 devices we will get all nvm data from the firmware, and can save most of the parsing. Export two APIs that op mode will still use. Adjust API of init_sbands to be independent of NVM file structure so it can be used by op mode as well. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: pcie: apply no-reclaim logic only to group 0  (Johannes Berg)
Applying the no-reclaim logic to commands outside group 0 (where the legacy commands live) means that command IDs such as 0x1c (TX_CMD in group 0) can't be used in any other group. Fix that by applying this logic only for group 0 - it's not, and should never be, needed for any other groups. Reported-by: Sharon Dvir <sharon.dvir@intel.com> Reported-by: Shaul Triebitz <shaul.triebitz@intel.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: mvm: memset binding before setting values  (Sara Sharon)
The changes in commit 9415af7f306b ("iwlwifi: mvm: support new binding API") assigned values that were later memset to 0. Move the memset earlier. Fixes: 9415af7f306b ("iwlwifi: mvm: support new binding API") Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: rename wait_for_tx_queues_empty  (Sara Sharon)
Rename the current wait_tx_queue_empty to wait_tx_queues_empty since it waits for multiple queues (up to 32). The next patch will add a wait for a single TX queue, which is needed for gen2 to scale to 512 queues. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: move to 512 queues  (Sara Sharon)
Avoid using the old define since it will enlarge necessary structs for previous HW. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: mvm: support station type API  (Sara Sharon)
Support the change to the ADD_STA API that introduces station types. Each station is assigned its type. This simplifies FW handling of the broadcast and multicast stations:
* the broadcast station is identified by its type and not the mac address.
* the multicast queue is no longer treated differently. Opening and closing it is done by referring to its station, so there is no need to specify it in the MAC command.
* when disabling TX to all stations, the driver can disable the traffic on the multicast station, so the FW doesn't have to do it.
The change is backward compatible. Change the order of adding and removing the stations according to FW requirements. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  MIPS: Octeon: cavium_octeon_defconfig: Enable Octeon MMC  (Steven J. Hill)
Enable the Octeon MMC driver in the defconfig. Signed-off-by: Steven J. Hill <Steven.Hill@cavium.com> Acked-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
2017-04-25  iwlwifi: mvm: remove references to queue_info in new TX path  (Sara Sharon)
Most of the fields aren't needed in new TX path. Enlarging the struct to 512 queues will consume a lot of memory. Remove all references to the struct in the new TX path. Move mac80211 queue mapping outside, since it will be needed per queue for TVQM mode. Add warning in paths that shouldn't be hit. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: gen2: support nmi triggering from host  (Liad Kaufman)
For gen2 there is a new register. Signed-off-by: Liad Kaufman <liad.kaufman@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: remove module loading failure message  (Johannes Berg)
When CONFIG_DEBUG_TEST_DRIVER_REMOVE is set, iwlwifi crashes when the opmode module cannot be loaded, due to completing the completion before using drv->dev, which can then already be freed. Fix this by removing the (fairly useless) message. Moving the completion later causes a deadlock instead, so that's not an option. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: don't leak memory on allocation failure  (Johannes Berg)
If we fail to allocate the small chunk of memory for the pieces of the firmware file, we leak the whole firmware image instead... Since the allocation failure is really unlikely, just bail out at that point instead. Remove the error message at the label, since we now (and actually have been) reaching it for various reasons. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
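The shape of the fix, sketched with illustrative names and labels: on the unlikely small-allocation failure, bail out through a path that releases the firmware image instead of continuing.

    pieces = kzalloc(sizeof(*pieces), GFP_KERNEL);
    if (!pieces)
            goto out_free_fw;       /* illustrative label: releases the
                                     * firmware image and bails out */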
2017-04-25  iwlwifi: pcie: remove superfluous trans->dev assignment  (Johannes Berg)
This struct member is already assigned in the previous call to iwl_trans_alloc(), so assigning the same value again is superfluous - remove it. Signed-off-by: Johannes Berg <johannes.berg@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: mvm: use defines instead of variables for shared dwell times  (Sara Sharon)
Most of the dwells are constant across different scan types. Use defines instead of depending on scan type. This is needed as preparation to having different scan type per band. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: mvm: remove color definition  (Sara Sharon)
We use the full station field as sta_id, and never use the color. The FW is about to clean the color out, so those defines are incorrect now (and were redundant before). Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  iwlwifi: mvm: move internally to use bigger INVALID_TXQ  (Sara Sharon)
We can't use IEEE80211_INVAL_HW_QUEUE to mark a queue as invalid since 255 will be a valid value for a TVQM queue index. Use IWL_MVM_INVALID_QUEUE instead for accessing txq_id. reserved_queue can stay a u8 since it is not used when TVQM is enabled. Signed-off-by: Sara Sharon <sara.sharon@intel.com> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
2017-04-25  mac80211: rewrite monitor mode delivery logic  (Johannes Berg)
The monitor mode delivery logic makes it hard to add any kind of filtering in an efficient way, because the monitor SKB is created first and then passed to all interfaces. Rewrite the logic to create the monitor SKB the first time it's actually needed, and then keep delivering it. Signed-off-by: Johannes Berg <johannes.berg@intel.com>
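A hedged sketch of the create-on-first-need pattern described above (all helper and variable names are hypothetical):

    struct sk_buff *monskb = NULL;

    list_for_each_entry(sdata, &monitor_interfaces, list) {    /* hypothetical list */
            if (!monskb)
                    monskb = build_monitor_skb(origskb);       /* hypothetical helper */
            if (monskb)
                    deliver_to_monitor(sdata, monskb);         /* hypothetical helper */
    }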
2017-04-25  cfg80211: Fix dfs state propagation for non-DFS center channel  (Vasanthakumar Thiagarajan)
When part of a wider-bandwidth (160 MHz) channel falls in the DFS channel range, the center frequency itself may not be a radar channel. Remove the sanity check on the IEEE80211_CHAN_RADAR channel flag in regulatory_propagate_dfs_state(); this should fix the dfs state propagation for a non-DFS center freq which has DFS channels in its bandwidth, and should also fix unnecessary WARN_ON() spam in regulatory_propagate_dfs_state(). Fixes: 8976672736d6 ("cfg80211: Share Channel DFS state across wiphys of same DFS domain") Reported-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Vasanthakumar Thiagarajan <vthiagar@qti.qualcomm.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2017-04-25  IB/vmw_pvrdma: Sparse annotate imm_data  (Jason Gunthorpe)
imm_data is copied directly from the ib_send_wr and ib_wc which have it marked as __be32, copy that mark into the uapi structures as well. Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Tested-by: Adit Ranadive <aditr@vmware.com> Acked-by: Adit Ranadive <aditr@vmware.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  NFS: Don't write back further requests if there is a pending write error  (Trond Myklebust)
If the server has already returned a fatal write error that the user has not yet received on this file, then don't write back the other pages. Instead, act as if they have been sent, and have returned with the same error. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-04-25  pNFS: Fix use after free issues in pnfs_do_read()  (Trond Myklebust)
The assumption should be that if the caller returns PNFS_ATTEMPTED, then hdr has been consumed, and so we should not be testing hdr->task.tk_status. If the caller returns PNFS_TRY_AGAIN, then we need to recoalesce and free hdr. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
2017-04-25  IB/mlx5: Add ODP support to MW  (Artemy Kovalyov)
Internally, an MW is implemented as a KLM MKey and filled by userspace UMR post sends. Handle page faults triggered by operations on these MKeys. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  IB/mlx5: Extract page fault code  (Artemy Kovalyov)
To make page fault handling code more flexible split pagefault_single_data_segment() function. Keep MR resolution in pagefault_single_data_segment() and move actual updates into pagefault_single_mr(). Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  IB/umem: Add support to huge ODP  (Artemy Kovalyov)
Add the IB_ACCESS_HUGETLB ib_reg_mr flag. A hugetlb region registered with this flag will use a single translation entry per huge page. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  IB/mlx5: Add contiguous ODP support  (Artemy Kovalyov)
Currently, ODP supports only regular MMU pages. Add ODP support for regions consisting of physically contiguous chunks of arbitrary order (huge pages, for instance) to improve performance. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  IB/umem: Add contiguous ODP support  (Artemy Kovalyov)
Currently, ODP supports only regular MMU pages. Add ODP support for regions consisting of physically contiguous chunks of arbitrary order (huge pages, for instance) to improve performance. Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
2017-04-25  IB/mlx5: Decrease verbosity level of ODP errors  (Artemy Kovalyov)
Decrease the verbosity level of ODP error flow messages to debug level. Remove one redundant print since a debug level message already exists in this flow. Fixes: d9aaed838765 ('{net,IB}/mlx5: Refactor page fault handling') Signed-off-by: Artemy Kovalyov <artemyko@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>