path: root/include/rdma/ib_umem.h
author	Shiraz Saleem <shiraz.saleem@intel.com>	2019-04-02 14:52:52 -0500
committer	Jason Gunthorpe <jgg@mellanox.com>	2019-04-08 13:05:24 -0300
commit	d10bcf947a3ea240351a8182d71e4aa9c8ddba56 (patch)
tree	e2aff327b72cf6a8591935aaad4c77f70294c0a6 /include/rdma/ib_umem.h
parent	c7252a6532995fe6971295b7878e5a74b4f85d0c (diff)
RDMA/umem: Combine contiguous PAGE_SIZE regions in SGEs
Combine contiguous regions of PAGE_SIZE pages into a single scatter list entry while building the scatter table for a umem. This minimizes the number of entries in the scatter list and reduces the DMA mapping overhead, particularly with the IOMMU.

Set default max_seg_size in core for IB devices to 2G and do not combine if we exceed this limit.

Also, purge npages in struct ib_umem as we now DMA map the umem SGL with sg_nents, and the npages computation is not needed. Drivers should now be using ib_umem_num_pages(), so fix the last stragglers.

Move npages tracking to ib_umem_odp as ODP drivers still need it.

Suggested-by: Jason Gunthorpe <jgg@ziepe.ca>
Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Acked-by: Adit Ranadive <aditr@vmware.com>
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Tested-by: Gal Pressman <galpress@amazon.com>
Tested-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
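As a rough illustration of the coalescing pass the message describes (not the kernel implementation, which builds the umem scatterlist directly), here is a minimal userspace sketch. The names coalesce_pages and struct seg are hypothetical; the logic is the stated rule: extend the current segment while the next page is physically adjacent and the segment stays within max_seg_size, otherwise start a new one.

/*
 * Illustrative sketch only: merge an array of PAGE_SIZE page addresses
 * into the fewest segments, splitting on discontiguity or when a
 * segment would exceed max_seg_size.
 */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

struct seg {
	uint64_t addr;	/* start address of the combined region */
	uint64_t len;	/* total length in bytes */
};

/* Returns the number of segments written to segs[] (capacity npages). */
static size_t coalesce_pages(const uint64_t *pages, size_t npages,
			     struct seg *segs, uint64_t max_seg_size)
{
	size_t nsegs = 0;

	for (size_t i = 0; i < npages; i++) {
		if (nsegs &&
		    segs[nsegs - 1].addr + segs[nsegs - 1].len == pages[i] &&
		    segs[nsegs - 1].len + PAGE_SIZE <= max_seg_size) {
			/* Contiguous with the previous segment: extend it. */
			segs[nsegs - 1].len += PAGE_SIZE;
		} else {
			/* Gap, or size cap hit: start a new segment. */
			segs[nsegs].addr = pages[i];
			segs[nsegs].len = PAGE_SIZE;
			nsegs++;
		}
	}
	return nsegs;
}

int main(void)
{
	/* Three contiguous pages, then a gap, then one more page. */
	uint64_t pages[] = { 0x10000, 0x11000, 0x12000, 0x20000 };
	struct seg segs[4];
	size_t n = coalesce_pages(pages, 4, segs, 1UL << 31); /* 2G cap */

	for (size_t i = 0; i < n; i++)
		printf("seg %zu: addr=0x%llx len=%llu\n", i,
		       (unsigned long long)segs[i].addr,
		       (unsigned long long)segs[i].len);
	return 0;
}

With the inputs above this prints two segments: the first three pages collapse into one 12288-byte entry, and the discontiguous fourth page starts a second.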
Diffstat (limited to 'include/rdma/ib_umem.h')
-rw-r--r--	include/rdma/ib_umem.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h
index 73af05db04c7..b13a2e9a50d4 100644
--- a/include/rdma/ib_umem.h
+++ b/include/rdma/ib_umem.h
@@ -53,7 +53,7 @@ struct ib_umem {
 	struct work_struct work;
 	struct sg_table sg_head;
 	int nmap;
-	int npages;
+	unsigned int sg_nents;
 };
 
 /* Returns the offset of the umem start relative to the first page. */
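On the driver side, the visible change is that umem->npages is gone; the page count is derived on demand with ib_umem_num_pages(), which the message says drivers should now use. A hedged sketch of that pattern follows, where example_alloc_page_list is a hypothetical helper, not code from this patch.

/*
 * Hypothetical driver helper, for illustration only: where a driver
 * previously sized a page array from umem->npages, it now calls
 * ib_umem_num_pages(), which computes the count from the umem's
 * address and length.
 */
#include <linux/slab.h>
#include <rdma/ib_umem.h>

static int example_alloc_page_list(struct ib_umem *umem, u64 **pages_out)
{
	int npages = ib_umem_num_pages(umem);	/* was: umem->npages */
	u64 *pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);

	if (!pages)
		return -ENOMEM;
	*pages_out = pages;
	return npages;
}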