author	Linus Torvalds <torvalds@linux-foundation.org>	2017-07-14 13:31:52 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2017-07-14 13:31:52 -0700
commit	ccd5d1b91f22351b55feb6fdee504cb84d97752f (patch)
tree	c85966f14a3c1efbb2379a3697b00d43937e20b2
parent	4d25ec19669292a65a32498eabdabdd32b1a8747 (diff)
parent	854b1dd9c39d8c8c8647a44de47ef18506ae11f9 (diff)
Merge tag 'ntb-4.13' of git://github.com/jonmason/ntb
Pull NTB updates from Jon Mason:
 "The major change in the series is a rework of the NTB infrastructure
  to allow for IDT hardware to be supported (and resulting fallout from
  that). There are also a few clean-ups, etc.

  New IDT NTB driver and changes to the NTB infrastructure to allow for
  this different kind of NTB HW, some style fixes (per Greg KH
  recommendation), and some ntb_test tweaks"

* tag 'ntb-4.13' of git://github.com/jonmason/ntb:
  ntb_netdev: set the net_device's parent
  ntb: Add error path/handling to Debug FS entry creation
  ntb: Add more debugfs support for ntb_perf testing options
  ntb: Remove debug-fs variables from the context structure
  ntb: Add a module option to control affinity of DMA channels
  NTB: Add IDT 89HPESxNTx PCIe-switches support
  ntb_hw_intel: Style fixes: open code macros that just obfuscate code
  ntb_hw_amd: Style fixes: open code macros that just obfuscate code
  NTB: Add ntb.h comments
  NTB: Add PCIe Gen4 link speed
  NTB: Add new Memory Windows API documentation
  NTB: Add Messaging NTB API
  NTB: Alter Scratchpads API to support multi-ports devices
  NTB: Alter MW API to support multi-ports devices
  NTB: Alter link-state API to support multi-port devices
  NTB: Add indexed ports NTB API
  NTB: Make link-state API being declared first
  NTB: ntb_test: add parameter for doorbell bitmask
  NTB: ntb_test: modprobe on remote host
-rw-r--r--  Documentation/ntb.txt                    |   99
-rw-r--r--  MAINTAINERS                              |    6
-rw-r--r--  drivers/net/ntb_netdev.c                 |    2
-rw-r--r--  drivers/ntb/hw/Kconfig                   |    1
-rw-r--r--  drivers/ntb/hw/Makefile                  |    1
-rw-r--r--  drivers/ntb/hw/amd/ntb_hw_amd.c          |  139
-rw-r--r--  drivers/ntb/hw/amd/ntb_hw_amd.h          |    3
-rw-r--r--  drivers/ntb/hw/idt/Kconfig               |   31
-rw-r--r--  drivers/ntb/hw/idt/Makefile              |    1
-rw-r--r--  drivers/ntb/hw/idt/ntb_hw_idt.c          | 2712
-rw-r--r--  drivers/ntb/hw/idt/ntb_hw_idt.h          | 1149
-rw-r--r--  drivers/ntb/hw/intel/ntb_hw_intel.c      |  298
-rw-r--r--  drivers/ntb/hw/intel/ntb_hw_intel.h      |    3
-rw-r--r--  drivers/ntb/ntb.c                        |   69
-rw-r--r--  drivers/ntb/ntb_transport.c              |   42
-rw-r--r--  drivers/ntb/test/ntb_perf.c              |  109
-rw-r--r--  drivers/ntb/test/ntb_pingpong.c          |   14
-rw-r--r--  drivers/ntb/test/ntb_tool.c              |   69
-rw-r--r--  include/linux/ntb.h                      |  721
-rwxr-xr-x  tools/testing/selftests/ntb/ntb_test.sh  |   11
20 files changed, 5120 insertions(+), 360 deletions(-)
diff --git a/Documentation/ntb.txt b/Documentation/ntb.txt
index 1d9bbabb6c79..a5af4f0159f3 100644
--- a/Documentation/ntb.txt
+++ b/Documentation/ntb.txt
@@ -1,14 +1,16 @@
# NTB Drivers
NTB (Non-Transparent Bridge) is a type of PCI-Express bridge chip that connects
-the separate memory systems of two computers to the same PCI-Express fabric.
-Existing NTB hardware supports a common feature set, including scratchpad
-registers, doorbell registers, and memory translation windows. Scratchpad
-registers are read-and-writable registers that are accessible from either side
-of the device, so that peers can exchange a small amount of information at a
-fixed address. Doorbell registers provide a way for peers to send interrupt
-events. Memory windows allow translated read and write access to the peer
-memory.
+the separate memory systems of two or more computers to the same PCI-Express
+fabric. Existing NTB hardware supports a common feature set: doorbell
+registers and memory translation windows, as well as less common features
+such as scratchpad and message registers. Scratchpad registers are
+read-and-writable registers accessible from either side of the device, so
+that peers can exchange a small amount of information at a fixed address.
+Message registers can be utilized for the same purpose. Additionally, they are
+provided with special status bits to make sure the information isn't rewritten
+by another peer. Doorbell registers provide a way for peers to send interrupt events.
+Memory windows allow translated read and write access to the peer memory.
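
As a hedged illustration only (not part of this patch), a client could combine
these primitives to announce a value and then interrupt its peer; pidx and val
are hypothetical variables of the client, and the calls follow the multi-port
API introduced by this series:

	ntb_peer_spad_write(ntb, pidx, 0, val);	/* publish a word to the peer */
	ntb_peer_db_set(ntb, BIT_ULL(0));	/* ring doorbell bit 0 on the peer */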
## NTB Core Driver (ntb)
@@ -26,6 +28,87 @@ as ntb hardware, or hardware drivers, are inserted and removed. The
registration uses the Linux Device framework, so it should feel familiar to
anyone who has written a pci driver.
+### Typical NTB client driver implementation
+
+The primary purpose of NTB is to share some piece of memory between at least
+two systems. So the NTB device features like Scratchpad/Message registers are
+mainly used to perform the proper memory window initialization. Typically
+there are two types of memory window interfaces supported by the NTB API:
+inbound translation configured on the local ntb port and outbound translation
+configured by the peer, on the peer ntb port. The first type is depicted in
+the following figure.
+
+Inbound translation:
+ Memory: Local NTB Port: Peer NTB Port: Peer MMIO:
+ ____________
+ | dma-mapped |-ntb_mw_set_trans(addr) |
+ | memory | _v____________ | ______________
+ | (addr) |<======| MW xlat addr |<====| MW base addr |<== memory-mapped IO
+ |------------| |--------------| | |--------------|
+
+So a typical scenario of the first-type memory window initialization looks
+like this: 1) allocate a memory region, 2) put the translated address into the
+NTB config, 3) somehow notify the peer device of the performed initialization,
+4) the peer device maps the corresponding outbound memory window so as to have
+access to the shared memory region.
+
+The second type of interface, which implies the shared windows being
+initialized by a peer device, is depicted in the following figure:
+
+Outbound translation:
+ Memory: Local NTB Port: Peer NTB Port: Peer MMIO:
+ ____________ ______________
+ | dma-mapped | | | MW base addr |<== memory-mapped IO
+ | memory | | |--------------|
+ | (addr) |<===================| MW xlat addr |<-ntb_peer_mw_set_trans(addr)
+ |------------| | |--------------|
+
+A typical scenario of the second-type interface initialization would be:
+1) allocate a memory region, 2) somehow deliver the translated address to the
+peer device, 3) the peer puts the translated address into its NTB config,
+4) the peer device maps the outbound memory window so as to have access to
+the shared memory region.
+
+As one can see, the described scenarios can be combined into one portable
+algorithm:
+ Local device:
+  1) Allocate memory for a shared window
+  2) Initialize the memory window with the translated address of the allocated
+     region (it may fail if local memory window initialization is unsupported)
+  3) Send the translated address and memory window index to the peer device
+ Peer device:
+  1) Initialize the memory window with the address of the memory region
+     allocated by the other device (it may fail if peer memory window
+     initialization is unsupported)
+  2) Map the outbound memory window
+
+In accordance with this scenario, the NTB Memory Window API can be used as
+follows:
+ Local device:
+  1) ntb_mw_count(pidx) - retrieve the number of memory ranges that can
+     be allocated for memory windows between the local device and the peer
+     device with the specified port index.
+  2) ntb_mw_get_align(pidx, midx) - retrieve the parameters restricting the
+     shared memory region alignment and size. Then the memory can be properly
+     allocated.
+  3) Allocate a physically contiguous memory region in compliance with the
+     restrictions retrieved in 2).
+  4) ntb_mw_set_trans(pidx, midx) - try to set the translation address of
+     the memory window with the specified index for the defined peer device
+     (it may fail if setting the local translated address is not supported).
+  5) Send the translated base address (usually together with the memory
+     window number) to the peer device using, for instance, scratchpad or
+     message registers.
+ Peer device:
+  1) ntb_peer_mw_set_trans(pidx, midx) - try to set the translated address
+     received from the other device (related to pidx) for the specified
+     memory window. It may fail if, for instance, the retrieved address
+     exceeds the maximum possible address or isn't properly aligned.
+  2) ntb_peer_mw_get_addr(widx) - retrieve the MMIO address to map the
+     memory window so as to have access to the shared memory.
+
+It is also worth noting that ntb_mw_count(pidx) should return the same value
+as ntb_peer_mw_count() on the peer with port index pidx. A minimal sketch of
+the local-device half of this sequence is given below.
+
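To make the walkthrough concrete, here is a minimal, hypothetical sketch of the
local-device half of the sequence, written against the multi-port API added by
this series. The helper name client_setup_mw, the scratchpad layout and the
shortened error handling are illustrative assumptions, not part of the patch;
real code must also honor the returned alignment restrictions.

	/* Illustrative sketch only; assumes <linux/ntb.h> and <linux/dma-mapping.h> */
	static int client_setup_mw(struct ntb_dev *ntb, int pidx, int widx)
	{
		resource_size_t addr_align, size_align, size_max;
		dma_addr_t dma_addr;
		void *buf;
		int rc;

		/* 1-2) Query the inbound window restrictions for this peer */
		rc = ntb_mw_get_align(ntb, pidx, widx, &addr_align,
				      &size_align, &size_max);
		if (rc)
			return rc;

		/* 3) Allocate a physically contiguous DMA-able region
		 *    (alignment checks skipped for brevity)
		 */
		buf = dma_alloc_coherent(&ntb->pdev->dev, size_max, &dma_addr,
					 GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* 4) Program the inbound translation (may be unsupported) */
		rc = ntb_mw_set_trans(ntb, pidx, widx, dma_addr, size_max);
		if (rc) {
			dma_free_coherent(&ntb->pdev->dev, size_max, buf, dma_addr);
			return rc;
		}

		/* 5) Publish the translated address, e.g. over scratchpads */
		ntb_peer_spad_write(ntb, pidx, 0, lower_32_bits(dma_addr));
		ntb_peer_spad_write(ntb, pidx, 1, upper_32_bits(dma_addr));

		return 0;
	}

The peer-side counterpart would then call ntb_peer_mw_set_trans() with the
received address and map the range reported by ntb_peer_mw_get_addr().
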
### NTB Transport Client (ntb\_transport) and NTB Netdev (ntb\_netdev)
The primary client for NTB is the Transport client, used in tandem with NTB
diff --git a/MAINTAINERS b/MAINTAINERS
index 7d9bd4a041af..4bae99c37635 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9381,6 +9381,12 @@ F: include/linux/ntb.h
F: include/linux/ntb_transport.h
F: tools/testing/selftests/ntb/
+NTB IDT DRIVER
+M: Serge Semin <fancer.lancer@gmail.com>
+L: linux-ntb@googlegroups.com
+S: Supported
+F: drivers/ntb/hw/idt/
+
NTB INTEL DRIVER
M: Jon Mason <jdmason@kudzu.us>
M: Dave Jiang <dave.jiang@intel.com>
diff --git a/drivers/net/ntb_netdev.c b/drivers/net/ntb_netdev.c
index 4daf3d0926a8..0250aa9ae2cb 100644
--- a/drivers/net/ntb_netdev.c
+++ b/drivers/net/ntb_netdev.c
@@ -418,6 +418,8 @@ static int ntb_netdev_probe(struct device *client_dev)
if (!ndev)
return -ENOMEM;
+ SET_NETDEV_DEV(ndev, client_dev);
+
dev = netdev_priv(ndev);
dev->ndev = ndev;
dev->pdev = pdev;
diff --git a/drivers/ntb/hw/Kconfig b/drivers/ntb/hw/Kconfig
index 7116472b4625..a89243c9fdd3 100644
--- a/drivers/ntb/hw/Kconfig
+++ b/drivers/ntb/hw/Kconfig
@@ -1,2 +1,3 @@
source "drivers/ntb/hw/amd/Kconfig"
+source "drivers/ntb/hw/idt/Kconfig"
source "drivers/ntb/hw/intel/Kconfig"
diff --git a/drivers/ntb/hw/Makefile b/drivers/ntb/hw/Makefile
index 532e0859b4a1..87332c3905f0 100644
--- a/drivers/ntb/hw/Makefile
+++ b/drivers/ntb/hw/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_NTB_AMD) += amd/
+obj-$(CONFIG_NTB_IDT) += idt/
obj-$(CONFIG_NTB_INTEL) += intel/
diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.c b/drivers/ntb/hw/amd/ntb_hw_amd.c
index 019a158e1128..f0788aae05c9 100644
--- a/drivers/ntb/hw/amd/ntb_hw_amd.c
+++ b/drivers/ntb/hw/amd/ntb_hw_amd.c
@@ -5,6 +5,7 @@
* GPL LICENSE SUMMARY
*
* Copyright (C) 2016 Advanced Micro Devices, Inc. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -13,6 +14,7 @@
* BSD LICENSE
*
* Copyright (C) 2016 Advanced Micro Devices, Inc. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -79,40 +81,42 @@ static int ndev_mw_to_bar(struct amd_ntb_dev *ndev, int idx)
return 1 << idx;
}
-static int amd_ntb_mw_count(struct ntb_dev *ntb)
+static int amd_ntb_mw_count(struct ntb_dev *ntb, int pidx)
{
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
return ntb_ndev(ntb)->mw_count;
}
-static int amd_ntb_mw_get_range(struct ntb_dev *ntb, int idx,
- phys_addr_t *base,
- resource_size_t *size,
- resource_size_t *align,
- resource_size_t *align_size)
+static int amd_ntb_mw_get_align(struct ntb_dev *ntb, int pidx, int idx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
{
struct amd_ntb_dev *ndev = ntb_ndev(ntb);
int bar;
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
bar = ndev_mw_to_bar(ndev, idx);
if (bar < 0)
return bar;
- if (base)
- *base = pci_resource_start(ndev->ntb.pdev, bar);
-
- if (size)
- *size = pci_resource_len(ndev->ntb.pdev, bar);
+ if (addr_align)
+ *addr_align = SZ_4K;
- if (align)
- *align = SZ_4K;
+ if (size_align)
+ *size_align = 1;
- if (align_size)
- *align_size = 1;
+ if (size_max)
+ *size_max = pci_resource_len(ndev->ntb.pdev, bar);
return 0;
}
-static int amd_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
+static int amd_ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
dma_addr_t addr, resource_size_t size)
{
struct amd_ntb_dev *ndev = ntb_ndev(ntb);
@@ -122,11 +126,14 @@ static int amd_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
u64 base_addr, limit, reg_val;
int bar;
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
bar = ndev_mw_to_bar(ndev, idx);
if (bar < 0)
return bar;
- mw_size = pci_resource_len(ndev->ntb.pdev, bar);
+ mw_size = pci_resource_len(ntb->pdev, bar);
/* make sure the range fits in the usable mw size */
if (size > mw_size)
@@ -135,7 +142,7 @@ static int amd_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
mmio = ndev->self_mmio;
peer_mmio = ndev->peer_mmio;
- base_addr = pci_resource_start(ndev->ntb.pdev, bar);
+ base_addr = pci_resource_start(ntb->pdev, bar);
if (bar != 1) {
xlat_reg = AMD_BAR23XLAT_OFFSET + ((bar - 2) << 2);
@@ -212,7 +219,7 @@ static int amd_link_is_up(struct amd_ntb_dev *ndev)
return 0;
}
-static int amd_ntb_link_is_up(struct ntb_dev *ntb,
+static u64 amd_ntb_link_is_up(struct ntb_dev *ntb,
enum ntb_speed *speed,
enum ntb_width *width)
{
@@ -225,7 +232,7 @@ static int amd_ntb_link_is_up(struct ntb_dev *ntb,
if (width)
*width = NTB_LNK_STA_WIDTH(ndev->lnk_sta);
- dev_dbg(ndev_dev(ndev), "link is up.\n");
+ dev_dbg(&ntb->pdev->dev, "link is up.\n");
ret = 1;
} else {
@@ -234,7 +241,7 @@ static int amd_ntb_link_is_up(struct ntb_dev *ntb,
if (width)
*width = NTB_WIDTH_NONE;
- dev_dbg(ndev_dev(ndev), "link is down.\n");
+ dev_dbg(&ntb->pdev->dev, "link is down.\n");
}
return ret;
@@ -254,7 +261,7 @@ static int amd_ntb_link_enable(struct ntb_dev *ntb,
if (ndev->ntb.topo == NTB_TOPO_SEC)
return -EINVAL;
- dev_dbg(ndev_dev(ndev), "Enabling Link.\n");
+ dev_dbg(&ntb->pdev->dev, "Enabling Link.\n");
ntb_ctl = readl(mmio + AMD_CNTL_OFFSET);
ntb_ctl |= (PMM_REG_CTL | SMM_REG_CTL);
@@ -275,7 +282,7 @@ static int amd_ntb_link_disable(struct ntb_dev *ntb)
if (ndev->ntb.topo == NTB_TOPO_SEC)
return -EINVAL;
- dev_dbg(ndev_dev(ndev), "Enabling Link.\n");
+ dev_dbg(&ntb->pdev->dev, "Disabling Link.\n");
ntb_ctl = readl(mmio + AMD_CNTL_OFFSET);
ntb_ctl &= ~(PMM_REG_CTL | SMM_REG_CTL);
@@ -284,6 +291,31 @@ static int amd_ntb_link_disable(struct ntb_dev *ntb)
return 0;
}
+static int amd_ntb_peer_mw_count(struct ntb_dev *ntb)
+{
+ /* The same as for inbound MWs */
+ return ntb_ndev(ntb)->mw_count;
+}
+
+static int amd_ntb_peer_mw_get_addr(struct ntb_dev *ntb, int idx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ struct amd_ntb_dev *ndev = ntb_ndev(ntb);
+ int bar;
+
+ bar = ndev_mw_to_bar(ndev, idx);
+ if (bar < 0)
+ return bar;
+
+ if (base)
+ *base = pci_resource_start(ndev->ntb.pdev, bar);
+
+ if (size)
+ *size = pci_resource_len(ndev->ntb.pdev, bar);
+
+ return 0;
+}
+
static u64 amd_ntb_db_valid_mask(struct ntb_dev *ntb)
{
return ntb_ndev(ntb)->db_valid_mask;
@@ -400,30 +432,30 @@ static int amd_ntb_spad_write(struct ntb_dev *ntb,
return 0;
}
-static u32 amd_ntb_peer_spad_read(struct ntb_dev *ntb, int idx)
+static u32 amd_ntb_peer_spad_read(struct ntb_dev *ntb, int pidx, int sidx)
{
struct amd_ntb_dev *ndev = ntb_ndev(ntb);
void __iomem *mmio = ndev->self_mmio;
u32 offset;
- if (idx < 0 || idx >= ndev->spad_count)
+ if (sidx < 0 || sidx >= ndev->spad_count)
return -EINVAL;
- offset = ndev->peer_spad + (idx << 2);
+ offset = ndev->peer_spad + (sidx << 2);
return readl(mmio + AMD_SPAD_OFFSET + offset);
}
-static int amd_ntb_peer_spad_write(struct ntb_dev *ntb,
- int idx, u32 val)
+static int amd_ntb_peer_spad_write(struct ntb_dev *ntb, int pidx,
+ int sidx, u32 val)
{
struct amd_ntb_dev *ndev = ntb_ndev(ntb);
void __iomem *mmio = ndev->self_mmio;
u32 offset;
- if (idx < 0 || idx >= ndev->spad_count)
+ if (sidx < 0 || sidx >= ndev->spad_count)
return -EINVAL;
- offset = ndev->peer_spad + (idx << 2);
+ offset = ndev->peer_spad + (sidx << 2);
writel(val, mmio + AMD_SPAD_OFFSET + offset);
return 0;
@@ -431,8 +463,10 @@ static int amd_ntb_peer_spad_write(struct ntb_dev *ntb,
static const struct ntb_dev_ops amd_ntb_ops = {
.mw_count = amd_ntb_mw_count,
- .mw_get_range = amd_ntb_mw_get_range,
+ .mw_get_align = amd_ntb_mw_get_align,
.mw_set_trans = amd_ntb_mw_set_trans,
+ .peer_mw_count = amd_ntb_peer_mw_count,
+ .peer_mw_get_addr = amd_ntb_peer_mw_get_addr,
.link_is_up = amd_ntb_link_is_up,
.link_enable = amd_ntb_link_enable,
.link_disable = amd_ntb_link_disable,
@@ -466,18 +500,19 @@ static void amd_ack_smu(struct amd_ntb_dev *ndev, u32 bit)
static void amd_handle_event(struct amd_ntb_dev *ndev, int vec)
{
void __iomem *mmio = ndev->self_mmio;
+ struct device *dev = &ndev->ntb.pdev->dev;
u32 status;
status = readl(mmio + AMD_INTSTAT_OFFSET);
if (!(status & AMD_EVENT_INTMASK))
return;
- dev_dbg(ndev_dev(ndev), "status = 0x%x and vec = %d\n", status, vec);
+ dev_dbg(dev, "status = 0x%x and vec = %d\n", status, vec);
status &= AMD_EVENT_INTMASK;
switch (status) {
case AMD_PEER_FLUSH_EVENT:
- dev_info(ndev_dev(ndev), "Flush is done.\n");
+ dev_info(dev, "Flush is done.\n");
break;
case AMD_PEER_RESET_EVENT:
amd_ack_smu(ndev, AMD_PEER_RESET_EVENT);
@@ -503,7 +538,7 @@ static void amd_handle_event(struct amd_ntb_dev *ndev, int vec)
status = readl(mmio + AMD_PMESTAT_OFFSET);
/* check if this is WAKEUP event */
if (status & 0x1)
- dev_info(ndev_dev(ndev), "Wakeup is done.\n");
+ dev_info(dev, "Wakeup is done.\n");
amd_ack_smu(ndev, AMD_PEER_D0_EVENT);
@@ -512,14 +547,14 @@ static void amd_handle_event(struct amd_ntb_dev *ndev, int vec)
AMD_LINK_HB_TIMEOUT);
break;
default:
- dev_info(ndev_dev(ndev), "event status = 0x%x.\n", status);
+ dev_info(dev, "event status = 0x%x.\n", status);
break;
}
}
static irqreturn_t ndev_interrupt(struct amd_ntb_dev *ndev, int vec)
{
- dev_dbg(ndev_dev(ndev), "vec %d\n", vec);
+ dev_dbg(&ndev->ntb.pdev->dev, "vec %d\n", vec);
if (vec > (AMD_DB_CNT - 1) || (ndev->msix_vec_count == 1))
amd_handle_event(ndev, vec);
@@ -541,7 +576,7 @@ static irqreturn_t ndev_irq_isr(int irq, void *dev)
{
struct amd_ntb_dev *ndev = dev;
- return ndev_interrupt(ndev, irq - ndev_pdev(ndev)->irq);
+ return ndev_interrupt(ndev, irq - ndev->ntb.pdev->irq);
}
static int ndev_init_isr(struct amd_ntb_dev *ndev,
@@ -550,7 +585,7 @@ static int ndev_init_isr(struct amd_ntb_dev *ndev,
struct pci_dev *pdev;
int rc, i, msix_count, node;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
node = dev_to_node(&pdev->dev);
@@ -592,7 +627,7 @@ static int ndev_init_isr(struct amd_ntb_dev *ndev,
goto err_msix_request;
}
- dev_dbg(ndev_dev(ndev), "Using msix interrupts\n");
+ dev_dbg(&pdev->dev, "Using msix interrupts\n");
ndev->db_count = msix_min;
ndev->msix_vec_count = msix_max;
return 0;
@@ -619,7 +654,7 @@ err_msix_vec_alloc:
if (rc)
goto err_msi_request;
- dev_dbg(ndev_dev(ndev), "Using msi interrupts\n");
+ dev_dbg(&pdev->dev, "Using msi interrupts\n");
ndev->db_count = 1;
ndev->msix_vec_count = 1;
return 0;
@@ -636,7 +671,7 @@ err_msi_enable:
if (rc)
goto err_intx_request;
- dev_dbg(ndev_dev(ndev), "Using intx interrupts\n");
+ dev_dbg(&pdev->dev, "Using intx interrupts\n");
ndev->db_count = 1;
ndev->msix_vec_count = 1;
return 0;
@@ -651,7 +686,7 @@ static void ndev_deinit_isr(struct amd_ntb_dev *ndev)
void __iomem *mmio = ndev->self_mmio;
int i;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
/* Mask all doorbell interrupts */
ndev->db_mask = ndev->db_valid_mask;
@@ -777,7 +812,8 @@ static void ndev_init_debugfs(struct amd_ntb_dev *ndev)
ndev->debugfs_info = NULL;
} else {
ndev->debugfs_dir =
- debugfs_create_dir(ndev_name(ndev), debugfs_dir);
+ debugfs_create_dir(pci_name(ndev->ntb.pdev),
+ debugfs_dir);
if (!ndev->debugfs_dir)
ndev->debugfs_info = NULL;
else
@@ -812,7 +848,7 @@ static int amd_poll_link(struct amd_ntb_dev *ndev)
reg = readl(mmio + AMD_SIDEINFO_OFFSET);
reg &= NTB_LIN_STA_ACTIVE_BIT;
- dev_dbg(ndev_dev(ndev), "%s: reg_val = 0x%x.\n", __func__, reg);
+ dev_dbg(&ndev->ntb.pdev->dev, "%s: reg_val = 0x%x.\n", __func__, reg);
if (reg == ndev->cntl_sta)
return 0;
@@ -894,7 +930,8 @@ static int amd_init_ntb(struct amd_ntb_dev *ndev)
break;
default:
- dev_err(ndev_dev(ndev), "AMD NTB does not support B2B mode.\n");
+ dev_err(&ndev->ntb.pdev->dev,
+ "AMD NTB does not support B2B mode.\n");
return -EINVAL;
}
@@ -923,10 +960,10 @@ static int amd_init_dev(struct amd_ntb_dev *ndev)
struct pci_dev *pdev;
int rc = 0;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
ndev->ntb.topo = amd_get_topo(ndev);
- dev_dbg(ndev_dev(ndev), "AMD NTB topo is %s\n",
+ dev_dbg(&pdev->dev, "AMD NTB topo is %s\n",
ntb_topo_string(ndev->ntb.topo));
rc = amd_init_ntb(ndev);
@@ -935,7 +972,7 @@ static int amd_init_dev(struct amd_ntb_dev *ndev)
rc = amd_init_isr(ndev);
if (rc) {
- dev_err(ndev_dev(ndev), "fail to init isr.\n");
+ dev_err(&pdev->dev, "fail to init isr.\n");
return rc;
}
@@ -973,7 +1010,7 @@ static int amd_ntb_init_pci(struct amd_ntb_dev *ndev,
rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (rc)
goto err_dma_mask;
- dev_warn(ndev_dev(ndev), "Cannot DMA highmem\n");
+ dev_warn(&pdev->dev, "Cannot DMA highmem\n");
}
rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
@@ -981,7 +1018,7 @@ static int amd_ntb_init_pci(struct amd_ntb_dev *ndev,
rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
if (rc)
goto err_dma_mask;
- dev_warn(ndev_dev(ndev), "Cannot DMA consistent highmem\n");
+ dev_warn(&pdev->dev, "Cannot DMA consistent highmem\n");
}
ndev->self_mmio = pci_iomap(pdev, 0, 0);
@@ -1004,7 +1041,7 @@ err_pci_enable:
static void amd_ntb_deinit_pci(struct amd_ntb_dev *ndev)
{
- struct pci_dev *pdev = ndev_pdev(ndev);
+ struct pci_dev *pdev = ndev->ntb.pdev;
pci_iounmap(pdev, ndev->self_mmio);
diff --git a/drivers/ntb/hw/amd/ntb_hw_amd.h b/drivers/ntb/hw/amd/ntb_hw_amd.h
index 13d73ed94a52..8f3617a46292 100644
--- a/drivers/ntb/hw/amd/ntb_hw_amd.h
+++ b/drivers/ntb/hw/amd/ntb_hw_amd.h
@@ -211,9 +211,6 @@ struct amd_ntb_dev {
struct dentry *debugfs_info;
};
-#define ndev_pdev(ndev) ((ndev)->ntb.pdev)
-#define ndev_name(ndev) pci_name(ndev_pdev(ndev))
-#define ndev_dev(ndev) (&ndev_pdev(ndev)->dev)
#define ntb_ndev(__ntb) container_of(__ntb, struct amd_ntb_dev, ntb)
#define hb_ndev(__work) container_of(__work, struct amd_ntb_dev, hb_timer.work)
diff --git a/drivers/ntb/hw/idt/Kconfig b/drivers/ntb/hw/idt/Kconfig
new file mode 100644
index 000000000000..b360e5613b9f
--- /dev/null
+++ b/drivers/ntb/hw/idt/Kconfig
@@ -0,0 +1,31 @@
+config NTB_IDT
+ tristate "IDT PCIe-switch Non-Transparent Bridge support"
+ depends on PCI
+ help
+ This driver supports the NTB functions of capable IDT PCIe-switches.
+
+ Some pre-initialization must be done before the IDT PCIe-switch
+ exposes its NT-functions correctly. It should be done either by
+ properly initializing the EEPROM connected to the master SMBus of the
+ switch or by the BIOS changing the corresponding register values over
+ the slave SMBus interface. Evidently it must be done before the PCI
+ bus enumeration is finished in the Linux kernel.
+
+ First of all, partitions must be activated and properly assigned to
+ all the ports with NT-functions intended to be activated (see the
+ SWPARTxCTL and SWPORTxCTL registers). Then all NT-function BARs must
+ be enabled with a chosen valid aperture. For memory-window-related
+ BARs the aperture settings determine the maximum size of the memory
+ windows accepted by a BAR. Note that BAR0 must map the PCI
+ configuration space registers.
+
+ It's worth noting that since a part of this driver relies on the
+ BAR settings of peer NT-functions, the BAR setup can't be done via
+ kernel PCI fixups. That's why alternative pre-initialization
+ techniques, like the BIOS using the SMBus interface or an EEPROM,
+ should be utilized. Additionally, if one needs temperature sensor
+ information printed to the system log, the corresponding registers
+ must be initialized by the BIOS/EEPROM as well.
+
+ If unsure, say N.
+
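As a usage note (an assumption for illustration, not part of the patch),
enabling the new driver in a kernel configuration is expected to look roughly
like the following fragment; CONFIG_PCI and CONFIG_NTB are existing options
this driver builds under:

	CONFIG_PCI=y
	CONFIG_NTB=m
	CONFIG_NTB_IDT=m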
diff --git a/drivers/ntb/hw/idt/Makefile b/drivers/ntb/hw/idt/Makefile
new file mode 100644
index 000000000000..a102cf154be0
--- /dev/null
+++ b/drivers/ntb/hw/idt/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_NTB_IDT) += ntb_hw_idt.o
diff --git a/drivers/ntb/hw/idt/ntb_hw_idt.c b/drivers/ntb/hw/idt/ntb_hw_idt.c
new file mode 100644
index 000000000000..d44d7ef38fe8
--- /dev/null
+++ b/drivers/ntb/hw/idt/ntb_hw_idt.c
@@ -0,0 +1,2712 @@
+/*
+ * This file is provided under a GPLv2 license. When using or
+ * redistributing this file, you may do so under that license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright (C) 2016 T-Platforms All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, one can be found http://www.gnu.org/licenses/.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * IDT PCIe-switch NTB Linux driver
+ *
+ * Contact Information:
+ * Serge Semin <fancer.lancer@gmail.com>, <Sergey.Semin@t-platforms.ru>
+ */
+
+#include <linux/stddef.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <linux/sizes.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/debugfs.h>
+#include <linux/ntb.h>
+
+#include "ntb_hw_idt.h"
+
+#define NTB_NAME "ntb_hw_idt"
+#define NTB_DESC "IDT PCI-E Non-Transparent Bridge Driver"
+#define NTB_VER "2.0"
+#define NTB_IRQNAME "ntb_irq_idt"
+
+MODULE_DESCRIPTION(NTB_DESC);
+MODULE_VERSION(NTB_VER);
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("T-platforms");
+
+/*
+ * NT Endpoint registers table simplifying a loop access to the functionally
+ * related registers
+ */
+static const struct idt_ntb_regs ntdata_tbl = {
+ { {IDT_NT_BARSETUP0, IDT_NT_BARLIMIT0,
+ IDT_NT_BARLTBASE0, IDT_NT_BARUTBASE0},
+ {IDT_NT_BARSETUP1, IDT_NT_BARLIMIT1,
+ IDT_NT_BARLTBASE1, IDT_NT_BARUTBASE1},
+ {IDT_NT_BARSETUP2, IDT_NT_BARLIMIT2,
+ IDT_NT_BARLTBASE2, IDT_NT_BARUTBASE2},
+ {IDT_NT_BARSETUP3, IDT_NT_BARLIMIT3,
+ IDT_NT_BARLTBASE3, IDT_NT_BARUTBASE3},
+ {IDT_NT_BARSETUP4, IDT_NT_BARLIMIT4,
+ IDT_NT_BARLTBASE4, IDT_NT_BARUTBASE4},
+ {IDT_NT_BARSETUP5, IDT_NT_BARLIMIT5,
+ IDT_NT_BARLTBASE5, IDT_NT_BARUTBASE5} },
+ { {IDT_NT_INMSG0, IDT_NT_OUTMSG0, IDT_NT_INMSGSRC0},
+ {IDT_NT_INMSG1, IDT_NT_OUTMSG1, IDT_NT_INMSGSRC1},
+ {IDT_NT_INMSG2, IDT_NT_OUTMSG2, IDT_NT_INMSGSRC2},
+ {IDT_NT_INMSG3, IDT_NT_OUTMSG3, IDT_NT_INMSGSRC3} }
+};
+
+/*
+ * NT Endpoint ports data table with the corresponding pcie command, link
+ * status, control and BAR-related registers
+ */
+static const struct idt_ntb_port portdata_tbl[IDT_MAX_NR_PORTS] = {
+/*0*/ { IDT_SW_NTP0_PCIECMDSTS, IDT_SW_NTP0_PCIELCTLSTS,
+ IDT_SW_NTP0_NTCTL,
+ IDT_SW_SWPORT0CTL, IDT_SW_SWPORT0STS,
+ { {IDT_SW_NTP0_BARSETUP0, IDT_SW_NTP0_BARLIMIT0,
+ IDT_SW_NTP0_BARLTBASE0, IDT_SW_NTP0_BARUTBASE0},
+ {IDT_SW_NTP0_BARSETUP1, IDT_SW_NTP0_BARLIMIT1,
+ IDT_SW_NTP0_BARLTBASE1, IDT_SW_NTP0_BARUTBASE1},
+ {IDT_SW_NTP0_BARSETUP2, IDT_SW_NTP0_BARLIMIT2,
+ IDT_SW_NTP0_BARLTBASE2, IDT_SW_NTP0_BARUTBASE2},
+ {IDT_SW_NTP0_BARSETUP3, IDT_SW_NTP0_BARLIMIT3,
+ IDT_SW_NTP0_BARLTBASE3, IDT_SW_NTP0_BARUTBASE3},
+ {IDT_SW_NTP0_BARSETUP4, IDT_SW_NTP0_BARLIMIT4,
+ IDT_SW_NTP0_BARLTBASE4, IDT_SW_NTP0_BARUTBASE4},
+ {IDT_SW_NTP0_BARSETUP5, IDT_SW_NTP0_BARLIMIT5,
+ IDT_SW_NTP0_BARLTBASE5, IDT_SW_NTP0_BARUTBASE5} } },
+/*1*/ {0},
+/*2*/ { IDT_SW_NTP2_PCIECMDSTS, IDT_SW_NTP2_PCIELCTLSTS,
+ IDT_SW_NTP2_NTCTL,
+ IDT_SW_SWPORT2CTL, IDT_SW_SWPORT2STS,
+ { {IDT_SW_NTP2_BARSETUP0, IDT_SW_NTP2_BARLIMIT0,
+ IDT_SW_NTP2_BARLTBASE0, IDT_SW_NTP2_BARUTBASE0},
+ {IDT_SW_NTP2_BARSETUP1, IDT_SW_NTP2_BARLIMIT1,
+ IDT_SW_NTP2_BARLTBASE1, IDT_SW_NTP2_BARUTBASE1},
+ {IDT_SW_NTP2_BARSETUP2, IDT_SW_NTP2_BARLIMIT2,
+ IDT_SW_NTP2_BARLTBASE2, IDT_SW_NTP2_BARUTBASE2},
+ {IDT_SW_NTP2_BARSETUP3, IDT_SW_NTP2_BARLIMIT3,
+ IDT_SW_NTP2_BARLTBASE3, IDT_SW_NTP2_BARUTBASE3},
+ {IDT_SW_NTP2_BARSETUP4, IDT_SW_NTP2_BARLIMIT4,
+ IDT_SW_NTP2_BARLTBASE4, IDT_SW_NTP2_BARUTBASE4},
+ {IDT_SW_NTP2_BARSETUP5, IDT_SW_NTP2_BARLIMIT5,
+ IDT_SW_NTP2_BARLTBASE5, IDT_SW_NTP2_BARUTBASE5} } },
+/*3*/ {0},
+/*4*/ { IDT_SW_NTP4_PCIECMDSTS, IDT_SW_NTP4_PCIELCTLSTS,
+ IDT_SW_NTP4_NTCTL,
+ IDT_SW_SWPORT4CTL, IDT_SW_SWPORT4STS,
+ { {IDT_SW_NTP4_BARSETUP0, IDT_SW_NTP4_BARLIMIT0,
+ IDT_SW_NTP4_BARLTBASE0, IDT_SW_NTP4_BARUTBASE0},
+ {IDT_SW_NTP4_BARSETUP1, IDT_SW_NTP4_BARLIMIT1,
+ IDT_SW_NTP4_BARLTBASE1, IDT_SW_NTP4_BARUTBASE1},
+ {IDT_SW_NTP4_BARSETUP2, IDT_SW_NTP4_BARLIMIT2,
+ IDT_SW_NTP4_BARLTBASE2, IDT_SW_NTP4_BARUTBASE2},
+ {IDT_SW_NTP4_BARSETUP3, IDT_SW_NTP4_BARLIMIT3,
+ IDT_SW_NTP4_BARLTBASE3, IDT_SW_NTP4_BARUTBASE3},
+ {IDT_SW_NTP4_BARSETUP4, IDT_SW_NTP4_BARLIMIT4,
+ IDT_SW_NTP4_BARLTBASE4, IDT_SW_NTP4_BARUTBASE4},
+ {IDT_SW_NTP4_BARSETUP5, IDT_SW_NTP4_BARLIMIT5,
+ IDT_SW_NTP4_BARLTBASE5, IDT_SW_NTP4_BARUTBASE5} } },
+/*5*/ {0},
+/*6*/ { IDT_SW_NTP6_PCIECMDSTS, IDT_SW_NTP6_PCIELCTLSTS,
+ IDT_SW_NTP6_NTCTL,
+ IDT_SW_SWPORT6CTL, IDT_SW_SWPORT6STS,
+ { {IDT_SW_NTP6_BARSETUP0, IDT_SW_NTP6_BARLIMIT0,
+ IDT_SW_NTP6_BARLTBASE0, IDT_SW_NTP6_BARUTBASE0},
+ {IDT_SW_NTP6_BARSETUP1, IDT_SW_NTP6_BARLIMIT1,
+ IDT_SW_NTP6_BARLTBASE1, IDT_SW_NTP6_BARUTBASE1},
+ {IDT_SW_NTP6_BARSETUP2, IDT_SW_NTP6_BARLIMIT2,
+ IDT_SW_NTP6_BARLTBASE2, IDT_SW_NTP6_BARUTBASE2},
+ {IDT_SW_NTP6_BARSETUP3, IDT_SW_NTP6_BARLIMIT3,
+ IDT_SW_NTP6_BARLTBASE3, IDT_SW_NTP6_BARUTBASE3},
+ {IDT_SW_NTP6_BARSETUP4, IDT_SW_NTP6_BARLIMIT4,
+ IDT_SW_NTP6_BARLTBASE4, IDT_SW_NTP6_BARUTBASE4},
+ {IDT_SW_NTP6_BARSETUP5, IDT_SW_NTP6_BARLIMIT5,
+ IDT_SW_NTP6_BARLTBASE5, IDT_SW_NTP6_BARUTBASE5} } },
+/*7*/ {0},
+/*8*/ { IDT_SW_NTP8_PCIECMDSTS, IDT_SW_NTP8_PCIELCTLSTS,
+ IDT_SW_NTP8_NTCTL,
+ IDT_SW_SWPORT8CTL, IDT_SW_SWPORT8STS,
+ { {IDT_SW_NTP8_BARSETUP0, IDT_SW_NTP8_BARLIMIT0,
+ IDT_SW_NTP8_BARLTBASE0, IDT_SW_NTP8_BARUTBASE0},
+ {IDT_SW_NTP8_BARSETUP1, IDT_SW_NTP8_BARLIMIT1,
+ IDT_SW_NTP8_BARLTBASE1, IDT_SW_NTP8_BARUTBASE1},
+ {IDT_SW_NTP8_BARSETUP2, IDT_SW_NTP8_BARLIMIT2,
+ IDT_SW_NTP8_BARLTBASE2, IDT_SW_NTP8_BARUTBASE2},
+ {IDT_SW_NTP8_BARSETUP3, IDT_SW_NTP8_BARLIMIT3,
+ IDT_SW_NTP8_BARLTBASE3, IDT_SW_NTP8_BARUTBASE3},
+ {IDT_SW_NTP8_BARSETUP4, IDT_SW_NTP8_BARLIMIT4,
+ IDT_SW_NTP8_BARLTBASE4, IDT_SW_NTP8_BARUTBASE4},
+ {IDT_SW_NTP8_BARSETUP5, IDT_SW_NTP8_BARLIMIT5,
+ IDT_SW_NTP8_BARLTBASE5, IDT_SW_NTP8_BARUTBASE5} } },
+/*9*/ {0},
+/*10*/ {0},
+/*11*/ {0},
+/*12*/ { IDT_SW_NTP12_PCIECMDSTS, IDT_SW_NTP12_PCIELCTLSTS,
+ IDT_SW_NTP12_NTCTL,
+ IDT_SW_SWPORT12CTL, IDT_SW_SWPORT12STS,
+ { {IDT_SW_NTP12_BARSETUP0, IDT_SW_NTP12_BARLIMIT0,
+ IDT_SW_NTP12_BARLTBASE0, IDT_SW_NTP12_BARUTBASE0},
+ {IDT_SW_NTP12_BARSETUP1, IDT_SW_NTP12_BARLIMIT1,
+ IDT_SW_NTP12_BARLTBASE1, IDT_SW_NTP12_BARUTBASE1},
+ {IDT_SW_NTP12_BARSETUP2, IDT_SW_NTP12_BARLIMIT2,
+ IDT_SW_NTP12_BARLTBASE2, IDT_SW_NTP12_BARUTBASE2},
+ {IDT_SW_NTP12_BARSETUP3, IDT_SW_NTP12_BARLIMIT3,
+ IDT_SW_NTP12_BARLTBASE3, IDT_SW_NTP12_BARUTBASE3},
+ {IDT_SW_NTP12_BARSETUP4, IDT_SW_NTP12_BARLIMIT4,
+ IDT_SW_NTP12_BARLTBASE4, IDT_SW_NTP12_BARUTBASE4},
+ {IDT_SW_NTP12_BARSETUP5, IDT_SW_NTP12_BARLIMIT5,
+ IDT_SW_NTP12_BARLTBASE5, IDT_SW_NTP12_BARUTBASE5} } },
+/*13*/ {0},
+/*14*/ {0},
+/*15*/ {0},
+/*16*/ { IDT_SW_NTP16_PCIECMDSTS, IDT_SW_NTP16_PCIELCTLSTS,
+ IDT_SW_NTP16_NTCTL,
+ IDT_SW_SWPORT16CTL, IDT_SW_SWPORT16STS,
+ { {IDT_SW_NTP16_BARSETUP0, IDT_SW_NTP16_BARLIMIT0,
+ IDT_SW_NTP16_BARLTBASE0, IDT_SW_NTP16_BARUTBASE0},
+ {IDT_SW_NTP16_BARSETUP1, IDT_SW_NTP16_BARLIMIT1,
+ IDT_SW_NTP16_BARLTBASE1, IDT_SW_NTP16_BARUTBASE1},
+ {IDT_SW_NTP16_BARSETUP2, IDT_SW_NTP16_BARLIMIT2,
+ IDT_SW_NTP16_BARLTBASE2, IDT_SW_NTP16_BARUTBASE2},
+ {IDT_SW_NTP16_BARSETUP3, IDT_SW_NTP16_BARLIMIT3,
+ IDT_SW_NTP16_BARLTBASE3, IDT_SW_NTP16_BARUTBASE3},
+ {IDT_SW_NTP16_BARSETUP4, IDT_SW_NTP16_BARLIMIT4,
+ IDT_SW_NTP16_BARLTBASE4, IDT_SW_NTP16_BARUTBASE4},
+ {IDT_SW_NTP16_BARSETUP5, IDT_SW_NTP16_BARLIMIT5,
+ IDT_SW_NTP16_BARLTBASE5, IDT_SW_NTP16_BARUTBASE5} } },
+/*17*/ {0},
+/*18*/ {0},
+/*19*/ {0},
+/*20*/ { IDT_SW_NTP20_PCIECMDSTS, IDT_SW_NTP20_PCIELCTLSTS,
+ IDT_SW_NTP20_NTCTL,
+ IDT_SW_SWPORT20CTL, IDT_SW_SWPORT20STS,
+ { {IDT_SW_NTP20_BARSETUP0, IDT_SW_NTP20_BARLIMIT0,
+ IDT_SW_NTP20_BARLTBASE0, IDT_SW_NTP20_BARUTBASE0},
+ {IDT_SW_NTP20_BARSETUP1, IDT_SW_NTP20_BARLIMIT1,
+ IDT_SW_NTP20_BARLTBASE1, IDT_SW_NTP20_BARUTBASE1},
+ {IDT_SW_NTP20_BARSETUP2, IDT_SW_NTP20_BARLIMIT2,
+ IDT_SW_NTP20_BARLTBASE2, IDT_SW_NTP20_BARUTBASE2},
+ {IDT_SW_NTP20_BARSETUP3, IDT_SW_NTP20_BARLIMIT3,
+ IDT_SW_NTP20_BARLTBASE3, IDT_SW_NTP20_BARUTBASE3},
+ {IDT_SW_NTP20_BARSETUP4, IDT_SW_NTP20_BARLIMIT4,
+ IDT_SW_NTP20_BARLTBASE4, IDT_SW_NTP20_BARUTBASE4},
+ {IDT_SW_NTP20_BARSETUP5, IDT_SW_NTP20_BARLIMIT5,
+ IDT_SW_NTP20_BARLTBASE5, IDT_SW_NTP20_BARUTBASE5} } },
+/*21*/ {0},
+/*22*/ {0},
+/*23*/ {0}
+};
+
+/*
+ * IDT PCIe-switch partitions table with the corresponding control, status
+ * and messages control registers
+ */
+static const struct idt_ntb_part partdata_tbl[IDT_MAX_NR_PARTS] = {
+/*0*/ { IDT_SW_SWPART0CTL, IDT_SW_SWPART0STS,
+ {IDT_SW_SWP0MSGCTL0, IDT_SW_SWP0MSGCTL1,
+ IDT_SW_SWP0MSGCTL2, IDT_SW_SWP0MSGCTL3} },
+/*1*/ { IDT_SW_SWPART1CTL, IDT_SW_SWPART1STS,
+ {IDT_SW_SWP1MSGCTL0, IDT_SW_SWP1MSGCTL1,
+ IDT_SW_SWP1MSGCTL2, IDT_SW_SWP1MSGCTL3} },
+/*2*/ { IDT_SW_SWPART2CTL, IDT_SW_SWPART2STS,
+ {IDT_SW_SWP2MSGCTL0, IDT_SW_SWP2MSGCTL1,
+ IDT_SW_SWP2MSGCTL2, IDT_SW_SWP2MSGCTL3} },
+/*3*/ { IDT_SW_SWPART3CTL, IDT_SW_SWPART3STS,
+ {IDT_SW_SWP3MSGCTL0, IDT_SW_SWP3MSGCTL1,
+ IDT_SW_SWP3MSGCTL2, IDT_SW_SWP3MSGCTL3} },
+/*4*/ { IDT_SW_SWPART4CTL, IDT_SW_SWPART4STS,
+ {IDT_SW_SWP4MSGCTL0, IDT_SW_SWP4MSGCTL1,
+ IDT_SW_SWP4MSGCTL2, IDT_SW_SWP4MSGCTL3} },
+/*5*/ { IDT_SW_SWPART5CTL, IDT_SW_SWPART5STS,
+ {IDT_SW_SWP5MSGCTL0, IDT_SW_SWP5MSGCTL1,
+ IDT_SW_SWP5MSGCTL2, IDT_SW_SWP5MSGCTL3} },
+/*6*/ { IDT_SW_SWPART6CTL, IDT_SW_SWPART6STS,
+ {IDT_SW_SWP6MSGCTL0, IDT_SW_SWP6MSGCTL1,
+ IDT_SW_SWP6MSGCTL2, IDT_SW_SWP6MSGCTL3} },
+/*7*/ { IDT_SW_SWPART7CTL, IDT_SW_SWPART7STS,
+ {IDT_SW_SWP7MSGCTL0, IDT_SW_SWP7MSGCTL1,
+ IDT_SW_SWP7MSGCTL2, IDT_SW_SWP7MSGCTL3} }
+};
+
+/*
+ * DebugFS directory to place the driver debug file
+ */
+static struct dentry *dbgfs_topdir;
+
+/*=============================================================================
+ * 1. IDT PCIe-switch registers IO-functions
+ *
+ * Besides ordinary configuration space registers, the IDT PCIe-switch exposes
+ * global configuration registers, which are used to determine the state of
+ * other device ports as well as to get notified of some switch-related events.
+ * Additionally all the configuration space registers of all the IDT
+ * PCIe-switch functions are mapped to the Global Address space, so each
+ * function can determine the configuration of any other PCI-function.
+ * The functions declared in this chapter encapsulate access to the
+ * configuration and global registers, so the driver code just needs to
+ * provide the IDT NTB hardware descriptor and a register address.
+ *=============================================================================
+ */
+
+/*
+ * idt_nt_write() - PCI configuration space registers write method
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to write data to
+ * @data: Value to write to the register
+ *
+ * IDT PCIe-switch registers are all Little endian.
+ */
+static void idt_nt_write(struct idt_ntb_dev *ndev,
+ const unsigned int reg, const u32 data)
+{
+ /*
+ * It's an obvious bug to request a register exceeding the maximum possible
+ * value, as well as to have it unaligned.
+ */
+ if (WARN_ON(reg > IDT_REG_PCI_MAX || !IS_ALIGNED(reg, IDT_REG_ALIGN)))
+ return;
+
+ /* Just write the value to the specified register */
+ iowrite32(data, ndev->cfgspc + (ptrdiff_t)reg);
+}
+
+/*
+ * idt_nt_read() - PCI configuration space registers read method
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to read data from
+ *
+ * IDT PCIe-switch Global configuration registers are all Little endian.
+ *
+ * Return: register value
+ */
+static u32 idt_nt_read(struct idt_ntb_dev *ndev, const unsigned int reg)
+{
+ /*
+ * It's an obvious bug to request a register exceeding the maximum possible
+ * value, as well as to have it unaligned.
+ */
+ if (WARN_ON(reg > IDT_REG_PCI_MAX || !IS_ALIGNED(reg, IDT_REG_ALIGN)))
+ return ~0;
+
+ /* Just read the value from the specified register */
+ return ioread32(ndev->cfgspc + (ptrdiff_t)reg);
+}
+
+/*
+ * idt_sw_write() - Global registers write method
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to write data to
+ * @data: Value to write to the register
+ *
+ * IDT PCIe-switch Global configuration registers are all Little endian.
+ */
+static void idt_sw_write(struct idt_ntb_dev *ndev,
+ const unsigned int reg, const u32 data)
+{
+ unsigned long irqflags;
+
+ /*
+ * It's an obvious bug to request a register exceeding the maximum possible
+ * value, as well as to have it unaligned.
+ */
+ if (WARN_ON(reg > IDT_REG_SW_MAX || !IS_ALIGNED(reg, IDT_REG_ALIGN)))
+ return;
+
+ /* Lock GASA registers operations */
+ spin_lock_irqsave(&ndev->gasa_lock, irqflags);
+ /* Set the global register address */
+ iowrite32((u32)reg, ndev->cfgspc + (ptrdiff_t)IDT_NT_GASAADDR);
+ /* Put the new value of the register */
+ iowrite32(data, ndev->cfgspc + (ptrdiff_t)IDT_NT_GASADATA);
+ /* Make sure the PCIe transactions are executed */
+ mmiowb();
+ /* Unlock GASA registers operations */
+ spin_unlock_irqrestore(&ndev->gasa_lock, irqflags);
+}
+
+/*
+ * idt_sw_read() - Global registers read method
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to read data from
+ *
+ * IDT PCIe-switch Global configuration registers are all Little endian.
+ *
+ * Return: register value
+ */
+static u32 idt_sw_read(struct idt_ntb_dev *ndev, const unsigned int reg)
+{
+ unsigned long irqflags;
+ u32 data;
+
+ /*
+ * It's an obvious bug to request a register exceeding the maximum possible
+ * value, as well as to have it unaligned.
+ */
+ if (WARN_ON(reg > IDT_REG_SW_MAX || !IS_ALIGNED(reg, IDT_REG_ALIGN)))
+ return ~0;
+
+ /* Lock GASA registers operations */
+ spin_lock_irqsave(&ndev->gasa_lock, irqflags);
+ /* Set the global register address */
+ iowrite32((u32)reg, ndev->cfgspc + (ptrdiff_t)IDT_NT_GASAADDR);
+ /* Get the data of the register (read ops acts as MMIO barrier) */
+ data = ioread32(ndev->cfgspc + (ptrdiff_t)IDT_NT_GASADATA);
+ /* Unlock GASA registers operations */
+ spin_unlock_irqrestore(&ndev->gasa_lock, irqflags);
+
+ return data;
+}
+
+/*
+ * idt_reg_set_bits() - set bits of a passed register
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to change bits of
+ * @reg_lock: Register access spin lock
+ * @valid_mask: Mask of valid bits
+ * @set_bits: Bitmask to set
+ *
+ * Helper method to check whether a passed bitfield is valid and set
+ * corresponding bits of a register.
+ *
+ * WARNING! Make sure the passed register isn't accessed over the plain
+ * idt_nt_write() method (the read method is ok to be used concurrently).
+ *
+ * Return: zero on success, negative error on invalid bitmask.
+ */
+static inline int idt_reg_set_bits(struct idt_ntb_dev *ndev, unsigned int reg,
+ spinlock_t *reg_lock,
+ u64 valid_mask, u64 set_bits)
+{
+ unsigned long irqflags;
+ u32 data;
+
+ if (set_bits & ~(u64)valid_mask)
+ return -EINVAL;
+
+ /* Lock access to the register until the change is written back */
+ spin_lock_irqsave(reg_lock, irqflags);
+ data = idt_nt_read(ndev, reg) | (u32)set_bits;
+ idt_nt_write(ndev, reg, data);
+ /* Unlock the register */
+ spin_unlock_irqrestore(reg_lock, irqflags);
+
+ return 0;
+}
+
+/*
+ * idt_reg_clear_bits() - clear bits of a passed register
+ * @ndev: IDT NTB hardware driver descriptor
+ * @reg: Register to change bits of
+ * @reg_lock: Register access spin lock
+ * @clear_bits: Bitmask to clear
+ *
+ * Helper method to check whether a passed bitfield is valid and clear
+ * corresponding bits of a register.
+ *
+ * NOTE! Invalid bits are always considered cleared, so it's not an error
+ * to clear them again.
+ *
+ * WARNING! Make sure the passed register isn't accessed over the plain
+ * idt_nt_write() method (the read method is ok to use concurrently).
+ */
+static inline void idt_reg_clear_bits(struct idt_ntb_dev *ndev,
+ unsigned int reg, spinlock_t *reg_lock,
+ u64 clear_bits)
+{
+ unsigned long irqflags;
+ u32 data;
+
+ /* Lock access to the register until the change is written back */
+ spin_lock_irqsave(reg_lock, irqflags);
+ data = idt_nt_read(ndev, reg) & ~(u32)clear_bits;
+ idt_nt_write(ndev, reg, data);
+ /* Unlock the register */
+ spin_unlock_irqrestore(reg_lock, irqflags);
+}
+
+/*===========================================================================
+ * 2. Ports operations
+ *
+ * IDT PCIe-switches can have from 3 up to 8 ports with NT-functions possibly
+ * enabled. So all the possible ports need to be scanned looking for activated
+ * NT-functions. The NTB API will enumerate only the ports with NTB enabled.
+ *===========================================================================
+ */
+
+/*
+ * idt_scan_ports() - scan IDT PCIe-switch ports collecting info in the tables
+ * @ndev: Pointer to the PCI device descriptor
+ *
+ * Return: zero on success, otherwise a negative error number.
+ */
+static int idt_scan_ports(struct idt_ntb_dev *ndev)
+{
+ unsigned char pidx, port, part;
+ u32 data, portsts, partsts;
+
+ /* Retrieve the local port number */
+ data = idt_nt_read(ndev, IDT_NT_PCIELCAP);
+ ndev->port = GET_FIELD(PCIELCAP_PORTNUM, data);
+
+ /* Retrieve the local partition number */
+ portsts = idt_sw_read(ndev, portdata_tbl[ndev->port].sts);
+ ndev->part = GET_FIELD(SWPORTxSTS_SWPART, portsts);
+
+ /* Initialize port/partition -> index tables with invalid values */
+ memset(ndev->port_idx_map, -EINVAL, sizeof(ndev->port_idx_map));
+ memset(ndev->part_idx_map, -EINVAL, sizeof(ndev->part_idx_map));
+
+ /*
+ * Walk over all the possible ports checking whether any of them has
+ * NT-function activated
+ */
+ ndev->peer_cnt = 0;
+ for (pidx = 0; pidx < ndev->swcfg->port_cnt; pidx++) {
+ port = ndev->swcfg->ports[pidx];
+ /* Skip local port */
+ if (port == ndev->port)
+ continue;
+
+ /* Read the port status register to get its partition */
+ portsts = idt_sw_read(ndev, portdata_tbl[port].sts);
+ part = GET_FIELD(SWPORTxSTS_SWPART, portsts);
+
+ /* Retrieve the partition status */
+ partsts = idt_sw_read(ndev, partdata_tbl[part].sts);
+ /* Check if partition state is active and port has NTB */
+ if (IS_FLD_SET(SWPARTxSTS_STATE, partsts, ACT) &&
+ (IS_FLD_SET(SWPORTxSTS_MODE, portsts, NT) ||
+ IS_FLD_SET(SWPORTxSTS_MODE, portsts, USNT) ||
+ IS_FLD_SET(SWPORTxSTS_MODE, portsts, USNTDMA) ||
+ IS_FLD_SET(SWPORTxSTS_MODE, portsts, NTDMA))) {
+ /* Save the port and partition numbers */
+ ndev->peers[ndev->peer_cnt].port = port;
+ ndev->peers[ndev->peer_cnt].part = part;
+ /* Fill in the port/partition -> index tables */
+ ndev->port_idx_map[port] = ndev->peer_cnt;
+ ndev->part_idx_map[part] = ndev->peer_cnt;
+ ndev->peer_cnt++;
+ }
+ }
+
+ dev_dbg(&ndev->ntb.pdev->dev, "Local port: %hhu, num of peers: %hhu\n",
+ ndev->port, ndev->peer_cnt);
+
+ /* It's useless to have this driver loaded if there is no peer */
+ if (ndev->peer_cnt == 0) {
+ dev_warn(&ndev->ntb.pdev->dev, "No active peer found\n");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_ntb_port_number() - get the local port number
+ * @ntb: NTB device context.
+ *
+ * Return: the local port number
+ */
+static int idt_ntb_port_number(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return ndev->port;
+}
+
+/*
+ * idt_ntb_peer_port_count() - get the number of peer ports
+ * @ntb: NTB device context.
+ *
+ * Return the count of detected peer NT-functions.
+ *
+ * Return: number of peer ports
+ */
+static int idt_ntb_peer_port_count(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return ndev->peer_cnt;
+}
+
+/*
+ * idt_ntb_peer_port_number() - get peer port by given index
+ * @ntb: NTB device context.
+ * @pidx: Peer port index.
+ *
+ * Return: peer port or negative error
+ */
+static int idt_ntb_peer_port_number(struct ntb_dev *ntb, int pidx)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ /* Return the detected NT-function port number */
+ return ndev->peers[pidx].port;
+}
+
+/*
+ * idt_ntb_peer_port_idx() - get peer port index by given port number
+ * @ntb: NTB device context.
+ * @port: Peer port number.
+ *
+ * The internal port -> index table is pre-initialized with -EINVAL values,
+ * so we just need to return its value.
+ *
+ * Return: peer NT-function port index or negative error
+ */
+static int idt_ntb_peer_port_idx(struct ntb_dev *ntb, int port)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (port < 0 || IDT_MAX_NR_PORTS <= port)
+ return -EINVAL;
+
+ return ndev->port_idx_map[port];
+}
+
+/*===========================================================================
+ * 3. Link status operations
+ * There is no ready-to-use method to have the peer ports notified when the
+ * NTB link is set up or goes down. Instead, a global signal is used.
+ * When any one of the ports changes its local NTB link state, it sends a
+ * global signal and clears the corresponding global state bit. Then all the
+ * ports receive a notification of that, which makes the client drivers aware
+ * of a possible NTB link change.
+ * Additionally each of the active NT-functions is subscribed to PCIe-link
+ * state changes of the peer ports.
+ *===========================================================================
+ */
+
+static void idt_ntb_local_link_disable(struct idt_ntb_dev *ndev);
+
+/*
+ * idt_init_link() - Initialize NTB link state notification subsystem
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Function performs the basic initialization of some global registers
+ * needed to enable IRQ-based notifications of PCIe Link Up/Down and
+ * Global Signal events.
+ * NOTE: Since it's not possible to determine when all the NTB peer drivers
+ * are unloaded, and since those registers can be accessed concurrently, we
+ * must preinitialize them with the same value and leave them uncleared on
+ * local driver unload.
+ */
+static void idt_init_link(struct idt_ntb_dev *ndev)
+{
+ u32 part_mask, port_mask, se_mask;
+ unsigned char pidx;
+
+ /* Initialize spin locker of Mapping Table access registers */
+ spin_lock_init(&ndev->mtbl_lock);
+
+ /* Walk over all detected peers collecting port and partition masks */
+ port_mask = ~BIT(ndev->port);
+ part_mask = ~BIT(ndev->part);
+ for (pidx = 0; pidx < ndev->peer_cnt; pidx++) {
+ port_mask &= ~BIT(ndev->peers[pidx].port);
+ part_mask &= ~BIT(ndev->peers[pidx].part);
+ }
+
+ /* Clean the Link Up/Down and Global Signal status registers */
+ idt_sw_write(ndev, IDT_SW_SELINKUPSTS, (u32)-1);
+ idt_sw_write(ndev, IDT_SW_SELINKDNSTS, (u32)-1);
+ idt_sw_write(ndev, IDT_SW_SEGSIGSTS, (u32)-1);
+
+ /* Unmask NT-activated partitions to receive Global Switch events */
+ idt_sw_write(ndev, IDT_SW_SEPMSK, part_mask);
+
+ /* Enable PCIe Link Up events of NT-activated ports */
+ idt_sw_write(ndev, IDT_SW_SELINKUPMSK, port_mask);
+
+ /* Enable PCIe Link Down events of NT-activated ports */
+ idt_sw_write(ndev, IDT_SW_SELINKDNMSK, port_mask);
+
+ /* Unmask NT-activated partitions to receive Global Signal events */
+ idt_sw_write(ndev, IDT_SW_SEGSIGMSK, part_mask);
+
+ /* Unmask Link Up/Down and Global Switch Events */
+ se_mask = ~(IDT_SEMSK_LINKUP | IDT_SEMSK_LINKDN | IDT_SEMSK_GSIGNAL);
+ idt_sw_write(ndev, IDT_SW_SEMSK, se_mask);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB link status events initialized");
+}
+
+/*
+ * idt_deinit_link() - deinitialize link subsystem
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Just disable the link back.
+ */
+static void idt_deinit_link(struct idt_ntb_dev *ndev)
+{
+ /* Disable the link */
+ idt_ntb_local_link_disable(ndev);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB link status events deinitialized");
+}
+
+/*
+ * idt_se_isr() - switch events ISR
+ * @ndev: IDT NTB hardware driver descriptor
+ * @ntint_sts: NT-function interrupt status
+ *
+ * This driver doesn't support IDT PCIe-switch dynamic reconfigurations,
+ * Failover capability, etc, so switch events are utilized to notify of
+ * PCIe and NTB link events.
+ * The method is called from PCIe ISR bottom-half routine.
+ */
+static void idt_se_isr(struct idt_ntb_dev *ndev, u32 ntint_sts)
+{
+ u32 sests;
+
+ /* Read Switch Events status */
+ sests = idt_sw_read(ndev, IDT_SW_SESTS);
+
+ /* Clean the Link Up/Down and Global Signal status registers */
+ idt_sw_write(ndev, IDT_SW_SELINKUPSTS, (u32)-1);
+ idt_sw_write(ndev, IDT_SW_SELINKDNSTS, (u32)-1);
+ idt_sw_write(ndev, IDT_SW_SEGSIGSTS, (u32)-1);
+
+ /* Clean the corresponding interrupt bit */
+ idt_nt_write(ndev, IDT_NT_NTINTSTS, IDT_NTINTSTS_SEVENT);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "SE IRQ detected %#08x (SESTS %#08x)",
+ ntint_sts, sests);
+
+ /* Notify the client driver of possible link state change */
+ ntb_link_event(&ndev->ntb);
+}
+
+/*
+ * idt_ntb_local_link_enable() - enable the local NTB link.
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * In order to enable the NTB link we need:
+ * - enable Completion TLPs translation
+ * - initialize mapping table to enable the Request ID translation
+ * - notify peers of NTB link state change
+ */
+static void idt_ntb_local_link_enable(struct idt_ntb_dev *ndev)
+{
+ u32 reqid, mtbldata = 0;
+ unsigned long irqflags;
+
+ /* Enable the ID protection and Completion TLPs translation */
+ idt_nt_write(ndev, IDT_NT_NTCTL, IDT_NTCTL_CPEN);
+
+ /* Retrieve the current Requester ID (Bus:Device:Function) */
+ reqid = idt_nt_read(ndev, IDT_NT_REQIDCAP);
+
+ /*
+ * Set the corresponding NT Mapping table entry of port partition index
+ * with the data to perform the Request ID translation
+ */
+ mtbldata = SET_FIELD(NTMTBLDATA_REQID, 0, reqid) |
+ SET_FIELD(NTMTBLDATA_PART, 0, ndev->part) |
+ IDT_NTMTBLDATA_VALID;
+ spin_lock_irqsave(&ndev->mtbl_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_NTMTBLADDR, ndev->part);
+ idt_nt_write(ndev, IDT_NT_NTMTBLDATA, mtbldata);
+ mmiowb();
+ spin_unlock_irqrestore(&ndev->mtbl_lock, irqflags);
+
+ /* Notify the peers by setting and clearing the global signal bit */
+ idt_nt_write(ndev, IDT_NT_NTGSIGNAL, IDT_NTGSIGNAL_SET);
+ idt_sw_write(ndev, IDT_SW_SEGSIGSTS, (u32)1 << ndev->part);
+}
+
+/*
+ * idt_ntb_local_link_disable() - disable the local NTB link.
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * In order to disable the NTB link we need:
+ * - disable Completion TLPs translation
+ * - clear corresponding mapping table entry
+ * - notify peers of NTB link state change
+ */
+static void idt_ntb_local_link_disable(struct idt_ntb_dev *ndev)
+{
+ unsigned long irqflags;
+
+ /* Disable Completion TLPs translation */
+ idt_nt_write(ndev, IDT_NT_NTCTL, 0);
+
+ /* Clear the corresponding NT Mapping table entry */
+ spin_lock_irqsave(&ndev->mtbl_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_NTMTBLADDR, ndev->part);
+ idt_nt_write(ndev, IDT_NT_NTMTBLDATA, 0);
+ mmiowb();
+ spin_unlock_irqrestore(&ndev->mtbl_lock, irqflags);
+
+ /* Notify the peers by setting and clearing the global signal bit */
+ idt_nt_write(ndev, IDT_NT_NTGSIGNAL, IDT_NTGSIGNAL_SET);
+ idt_sw_write(ndev, IDT_SW_SEGSIGSTS, (u32)1 << ndev->part);
+}
+
+/*
+ * idt_ntb_local_link_is_up() - test whether the local NTB link is up
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Local link is up under the following conditions:
+ * - Bus mastering is enabled
+ * - NTCTL has Completion TLPs translation enabled
+ * - Mapping table permits Request TLPs translation
+ * NOTE: We don't need to check PCIe link state since it's obviously
+ * up while we are able to communicate with IDT PCIe-switch
+ *
+ * Return: true if link is up, otherwise false
+ */
+static bool idt_ntb_local_link_is_up(struct idt_ntb_dev *ndev)
+{
+ unsigned long irqflags;
+ u32 data;
+
+ /* Read the local Bus Master Enable status */
+ data = idt_nt_read(ndev, IDT_NT_PCICMDSTS);
+ if (!(data & IDT_PCICMDSTS_BME))
+ return false;
+
+ /* Read the local Completion TLPs translation enable status */
+ data = idt_nt_read(ndev, IDT_NT_NTCTL);
+ if (!(data & IDT_NTCTL_CPEN))
+ return false;
+
+ /* Read Mapping table entry corresponding to the local partition */
+ spin_lock_irqsave(&ndev->mtbl_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_NTMTBLADDR, ndev->part);
+ data = idt_nt_read(ndev, IDT_NT_NTMTBLDATA);
+ spin_unlock_irqrestore(&ndev->mtbl_lock, irqflags);
+
+ return !!(data & IDT_NTMTBLDATA_VALID);
+}
+
+/*
+ * idt_ntb_peer_link_is_up() - test whether peer NTB link is up
+ * @ndev: IDT NTB hardware driver descriptor
+ * @pidx: Peer port index
+ *
+ * Peer link is up under the following conditions:
+ * - PCIe link is up
+ * - Bus mastering is enabled
+ * - NTCTL has Completion TLPs translation enabled
+ * - Mapping table permits Request TLPs translation
+ *
+ * Return: true if link is up, otherwise false
+ */
+static bool idt_ntb_peer_link_is_up(struct idt_ntb_dev *ndev, int pidx)
+{
+ unsigned long irqflags;
+ unsigned char port;
+ u32 data;
+
+ /* Retrieve the device port number */
+ port = ndev->peers[pidx].port;
+
+ /* Check whether PCIe link is up */
+ data = idt_sw_read(ndev, portdata_tbl[port].sts);
+ if (!(data & IDT_SWPORTxSTS_LINKUP))
+ return false;
+
+ /* Check whether bus mastering is enabled on the peer port */
+ data = idt_sw_read(ndev, portdata_tbl[port].pcicmdsts);
+ if (!(data & IDT_PCICMDSTS_BME))
+ return false;
+
+ /* Check if Completion TLPs translation is enabled on the peer port */
+ data = idt_sw_read(ndev, portdata_tbl[port].ntctl);
+ if (!(data & IDT_NTCTL_CPEN))
+ return false;
+
+ /* Read Mapping table entry corresponding to the peer partition */
+ spin_lock_irqsave(&ndev->mtbl_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_NTMTBLADDR, ndev->peers[pidx].part);
+ data = idt_nt_read(ndev, IDT_NT_NTMTBLDATA);
+ spin_unlock_irqrestore(&ndev->mtbl_lock, irqflags);
+
+ return !!(data & IDT_NTMTBLDATA_VALID);
+}
+
+/*
+ * idt_ntb_link_is_up() - get the current ntb link state (NTB API callback)
+ * @ntb: NTB device context.
+ * @speed: OUT - The link speed expressed as PCIe generation number.
+ * @width: OUT - The link width expressed as the number of PCIe lanes.
+ *
+ * Get the bitfield of NTB link states for all peer ports
+ *
+ * Return: bitfield of indexed ports link state: bit is set/cleared if the
+ * link is up/down respectively.
+ */
+static u64 idt_ntb_link_is_up(struct ntb_dev *ntb,
+ enum ntb_speed *speed, enum ntb_width *width)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+ unsigned char pidx;
+ u64 status;
+ u32 data;
+
+ /* Retrieve the local link speed and width */
+ if (speed != NULL || width != NULL) {
+ data = idt_nt_read(ndev, IDT_NT_PCIELCTLSTS);
+ if (speed != NULL)
+ *speed = GET_FIELD(PCIELCTLSTS_CLS, data);
+ if (width != NULL)
+ *width = GET_FIELD(PCIELCTLSTS_NLW, data);
+ }
+
+ /* If local NTB link isn't up then all the links are considered down */
+ if (!idt_ntb_local_link_is_up(ndev))
+ return 0;
+
+ /* Collect all the peer ports link states into the bitfield */
+ status = 0;
+ for (pidx = 0; pidx < ndev->peer_cnt; pidx++) {
+ if (idt_ntb_peer_link_is_up(ndev, pidx))
+ status |= ((u64)1 << pidx);
+ }
+
+ return status;
+}
+
+/*
+ * idt_ntb_link_enable() - enable local port ntb link (NTB API callback)
+ * @ntb: NTB device context.
+ * @speed: The maximum link speed expressed as PCIe generation number.
+ * @width: The maximum link width expressed as the number of PCIe lanes.
+ *
+ * Enable just local NTB link. PCIe link parameters are ignored.
+ *
+ * Return: always zero.
+ */
+static int idt_ntb_link_enable(struct ntb_dev *ntb, enum ntb_speed speed,
+ enum ntb_width width)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ /* Just enable the local NTB link */
+ idt_ntb_local_link_enable(ndev);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "Local NTB link enabled");
+
+ return 0;
+}
+
+/*
+ * idt_ntb_link_disable() - disable local port ntb link (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * Disable just local NTB link.
+ *
+ * Return: always zero.
+ */
+static int idt_ntb_link_disable(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ /* Just disable the local NTB link */
+ idt_ntb_local_link_disable(ndev);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "Local NTB link disabled");
+
+ return 0;
+}
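As a usage illustration (not part of this patch), here is a minimal client-side sketch of the link API, assuming the generic ntb_link_enable()/ntb_link_is_up() wrappers this series reworks in include/linux/ntb.h; the helper name and how pidx is chosen are hypothetical:

static bool client_check_peer_link(struct ntb_dev *ntb, int pidx)
{
	/* Ask the hardware driver to bring the local NTB link up */
	ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO);

	/* Bit 'pidx' of the returned bitfield reflects that peer's link */
	return ntb_link_is_up(ntb, NULL, NULL) & BIT_ULL(pidx);
}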
+
+/*=============================================================================
+ * 4. Memory Window operations
+ *
+ * IDT PCIe-switches have two types of memory windows: MWs with direct
+ * address translation and MWs with LUT based translation. The first type of
+ * MWs is a simple mapping of the corresponding BAR address space to a memory
+ * space of the specified target port, so it implements just a one-to-one
+ * mapping. A lookup table in its turn can map one BAR address space to up to
+ * 24 different memory spaces of different ports.
+ * NT-function BARs can be set up to implement either direct or lookup
+ * table based address translations, so:
+ * BAR0 - NT configuration registers space/direct address translation
+ * BAR1 - direct address translation/upper address of BAR0x64
+ * BAR2 - direct address translation/Lookup table with either 12 or 24 entries
+ * BAR3 - direct address translation/upper address of BAR2x64
+ * BAR4 - direct address translation/Lookup table with either 12 or 24 entries
+ * BAR5 - direct address translation/upper address of BAR4x64
+ * Additionally BAR2 and BAR4 can't have a 24-entry LUT enabled at the same
+ * time. Since the BAR setup can be rather complicated, this driver implements
+ * a scanning algorithm to cover all the possible memory window configurations.
+ *
+ * NOTE 1 BAR setup must be done before the Linux kernel enumerates the
+ * NT-function of any port, so that this driver sees fixed memory window
+ * configurations. Thus all initializations must be performed either by the
+ * platform BIOS or by an EEPROM connected to the IDT PCIe-switch master SMBus.
+ *
+ * NOTE 2 This driver expects BAR0 to map the NT-function configuration space.
+ * A simple calculation gives an upper bound of 29 possible memory windows
+ * per NT-function if all the BARs are of 32-bit type.
+ *=============================================================================
+ */
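As an illustration of how a client consumes the memory window API described above (not part of this patch): a sketch assuming the ntb_peer_mw_get_addr()/ntb_peer_mw_set_trans() wrappers from include/linux/ntb.h as altered by this series; the helper name and the way the peer DMA address is exchanged are hypothetical.

static void __iomem *client_map_peer_buf(struct ntb_dev *ntb, int pidx,
					 int widx, u64 peer_addr,
					 resource_size_t size)
{
	phys_addr_t base;
	resource_size_t mw_size;

	/* BAR (or LUT entry) region the CPU should map for this window */
	if (ntb_peer_mw_get_addr(ntb, widx, &base, &mw_size) || size > mw_size)
		return NULL;

	/* Route accesses through the window to the peer's buffer */
	if (ntb_peer_mw_set_trans(ntb, pidx, widx, peer_addr, size))
		return NULL;

	/* CPU writes via this mapping now land in the peer's memory */
	return ioremap_wc(base, size);
}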
+
+/*
+ * idt_get_mw_count() - get memory window count
+ * @mw_type: Memory window type
+ *
+ * Return: number of memory windows with respect to the memory window type
+ */
+static inline unsigned char idt_get_mw_count(enum idt_mw_type mw_type)
+{
+ switch (mw_type) {
+ case IDT_MW_DIR:
+ return 1;
+ case IDT_MW_LUT12:
+ return 12;
+ case IDT_MW_LUT24:
+ return 24;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+/*
+ * idt_get_mw_name() - get memory window name
+ * @mw_type: Memory window type
+ *
+ * Return: pointer to a string with name
+ */
+static inline char *idt_get_mw_name(enum idt_mw_type mw_type)
+{
+ switch (mw_type) {
+ case IDT_MW_DIR:
+ return "DIR ";
+ case IDT_MW_LUT12:
+ return "LUT12";
+ case IDT_MW_LUT24:
+ return "LUT24";
+ default:
+ break;
+ }
+
+ return "unknown";
+}
+
+/*
+ * idt_scan_mws() - scan memory windows of the port
+ * @ndev: IDT NTB hardware driver descriptor
+ * @port: Port to get number of memory windows for
+ * @mw_cnt: Out - number of memory windows
+ *
+ * It walks over the BAR setup registers of the specified port and determines
+ * the parameters of any activated memory windows.
+ *
+ * Return: array of memory windows
+ */
+static struct idt_mw_cfg *idt_scan_mws(struct idt_ntb_dev *ndev, int port,
+ unsigned char *mw_cnt)
+{
+ struct idt_mw_cfg mws[IDT_MAX_NR_MWS], *ret_mws;
+ const struct idt_ntb_bar *bars;
+ enum idt_mw_type mw_type;
+ unsigned char widx, bidx, en_cnt;
+ bool bar_64bit = false;
+ int aprt_size;
+ u32 data;
+
+ /* Retrieve the array of the BARs registers */
+ bars = portdata_tbl[port].bars;
+
+ /* Scan all the BARs belonging to the port */
+ *mw_cnt = 0;
+ for (bidx = 0; bidx < IDT_BAR_CNT; bidx += 1 + bar_64bit) {
+ /* Read BARSETUP register value */
+ data = idt_sw_read(ndev, bars[bidx].setup);
+
+ /* Skip disabled BARs */
+ if (!(data & IDT_BARSETUP_EN)) {
+ bar_64bit = false;
+ continue;
+ }
+
+ /* Skip next BARSETUP if current one has 64bit addressing */
+ bar_64bit = IS_FLD_SET(BARSETUP_TYPE, data, 64);
+
+ /* Skip configuration space mapping BARs */
+ if (data & IDT_BARSETUP_MODE_CFG)
+ continue;
+
+ /* Retrieve MW type/entries count and aperture size */
+ mw_type = GET_FIELD(BARSETUP_ATRAN, data);
+ en_cnt = idt_get_mw_count(mw_type);
+ aprt_size = (u64)1 << GET_FIELD(BARSETUP_SIZE, data);
+
+ /* Save configurations of all available memory windows */
+ for (widx = 0; widx < en_cnt; widx++, (*mw_cnt)++) {
+ /*
+			 * IDT can expose a limited number of MWs, so it's a bug
+ * to have more than the driver expects
+ */
+ if (*mw_cnt >= IDT_MAX_NR_MWS)
+ return ERR_PTR(-EINVAL);
+
+ /* Save basic MW info */
+ mws[*mw_cnt].type = mw_type;
+ mws[*mw_cnt].bar = bidx;
+ mws[*mw_cnt].idx = widx;
+ /* It's always DWORD aligned */
+ mws[*mw_cnt].addr_align = IDT_TRANS_ALIGN;
+			/* DIR and LUT approaches configure MWs differently */
+ if (mw_type == IDT_MW_DIR)
+ mws[*mw_cnt].size_max = aprt_size;
+ else if (mw_type == IDT_MW_LUT12)
+ mws[*mw_cnt].size_max = aprt_size / 16;
+ else
+ mws[*mw_cnt].size_max = aprt_size / 32;
+ mws[*mw_cnt].size_align = (mw_type == IDT_MW_DIR) ?
+ IDT_DIR_SIZE_ALIGN : mws[*mw_cnt].size_max;
+ }
+ }
+
+ /* Allocate memory for memory window descriptors */
+ ret_mws = devm_kcalloc(&ndev->ntb.pdev->dev, *mw_cnt,
+ sizeof(*ret_mws), GFP_KERNEL);
+ if (IS_ERR_OR_NULL(ret_mws))
+ return ERR_PTR(-ENOMEM);
+
+ /* Copy the info of detected memory windows */
+ memcpy(ret_mws, mws, (*mw_cnt)*sizeof(*ret_mws));
+
+ return ret_mws;
+}
+
+/*
+ * idt_init_mws() - initialize memory windows subsystem
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Scan BAR setup registers of local and peer ports to determine the
+ * outbound and inbound memory windows parameters
+ *
+ * Return: zero on success, otherwise a negative error number
+ */
+static int idt_init_mws(struct idt_ntb_dev *ndev)
+{
+ struct idt_ntb_peer *peer;
+ unsigned char pidx;
+
+ /* Scan memory windows of the local port */
+ ndev->mws = idt_scan_mws(ndev, ndev->port, &ndev->mw_cnt);
+ if (IS_ERR(ndev->mws)) {
+ dev_err(&ndev->ntb.pdev->dev,
+ "Failed to scan mws of local port %hhu", ndev->port);
+ return PTR_ERR(ndev->mws);
+ }
+
+ /* Scan memory windows of the peer ports */
+ for (pidx = 0; pidx < ndev->peer_cnt; pidx++) {
+ peer = &ndev->peers[pidx];
+ peer->mws = idt_scan_mws(ndev, peer->port, &peer->mw_cnt);
+ if (IS_ERR(peer->mws)) {
+ dev_err(&ndev->ntb.pdev->dev,
+ "Failed to scan mws of port %hhu", peer->port);
+ return PTR_ERR(peer->mws);
+ }
+ }
+
+ /* Initialize spin locker of the LUT registers */
+ spin_lock_init(&ndev->lut_lock);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "Outbound and inbound MWs initialized");
+
+ return 0;
+}
+
+/*
+ * idt_ntb_mw_count() - number of inbound memory windows (NTB API callback)
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ *
+ * The value is returned for the specified peer, so generally speaking it can
+ * be different for different ports depending on the IDT PCIe-switch
+ * initialization.
+ *
+ * Return: the number of memory windows.
+ */
+static int idt_ntb_mw_count(struct ntb_dev *ntb, int pidx)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ return ndev->peers[pidx].mw_cnt;
+}
+
+/*
+ * idt_ntb_mw_get_align() - inbound memory window parameters (NTB API callback)
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ * @addr_align: OUT - the base alignment for translating the memory window
+ * @size_align: OUT - the size alignment for translating the memory window
+ * @size_max: OUT - the maximum size of the memory window
+ *
+ * The peer memory window parameters have already been determined, so just
+ * return the corresponding values, which mustn't change within a session.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static int idt_ntb_mw_get_align(struct ntb_dev *ntb, int pidx, int widx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+ struct idt_ntb_peer *peer;
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ peer = &ndev->peers[pidx];
+
+ if (widx < 0 || peer->mw_cnt <= widx)
+ return -EINVAL;
+
+ if (addr_align != NULL)
+ *addr_align = peer->mws[widx].addr_align;
+
+ if (size_align != NULL)
+ *size_align = peer->mws[widx].size_align;
+
+ if (size_max != NULL)
+ *size_max = peer->mws[widx].size_max;
+
+ return 0;
+}
+
+/*
+ * idt_ntb_peer_mw_count() - number of outbound memory windows
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * Outbound memory windows parameters have been determined based on the
+ * BAR setup registers value, which are mostly constants within one session.
+ *
+ * Return: the number of memory windows.
+ */
+static int idt_ntb_peer_mw_count(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return ndev->mw_cnt;
+}
+
+/*
+ * idt_ntb_peer_mw_get_addr() - get map address of an outbound memory window
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @widx: Memory window index (within ntb_peer_mw_count() return value).
+ * @base: OUT - the base address of mapping region.
+ * @size: OUT - the size of mapping region.
+ *
+ * Return just the parameters of the BAR resource mapping. Size reflects the
+ * size of the resource.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static int idt_ntb_peer_mw_get_addr(struct ntb_dev *ntb, int widx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (widx < 0 || ndev->mw_cnt <= widx)
+ return -EINVAL;
+
+ /* Mapping address is just properly shifted BAR resource start */
+ if (base != NULL)
+ *base = pci_resource_start(ntb->pdev, ndev->mws[widx].bar) +
+ ndev->mws[widx].idx * ndev->mws[widx].size_max;
+
+ /* Mapping size has already been calculated at MWs scanning */
+ if (size != NULL)
+ *size = ndev->mws[widx].size_max;
+
+ return 0;
+}
+
+/*
+ * idt_ntb_peer_mw_set_trans() - set a translation address of a memory window
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device the translation address received from.
+ * @widx: Memory window index.
+ * @addr: The dma address of the shared memory to access.
+ * @size: The size of the shared memory to access.
+ *
+ * Direct address translation and LUT-based translation are initialized a
+ * bit differently, although the parameter restrictions are determined by
+ * the same code.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static int idt_ntb_peer_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx,
+ u64 addr, resource_size_t size)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+ struct idt_mw_cfg *mw_cfg;
+ u32 data = 0, lutoff = 0;
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ if (widx < 0 || ndev->mw_cnt <= widx)
+ return -EINVAL;
+
+ /*
+ * Retrieve the memory window config to make sure the passed arguments
+	 * fit its restrictions
+ */
+ mw_cfg = &ndev->mws[widx];
+ if (!IS_ALIGNED(addr, mw_cfg->addr_align))
+ return -EINVAL;
+ if (!IS_ALIGNED(size, mw_cfg->size_align) || size > mw_cfg->size_max)
+ return -EINVAL;
+
+ /* DIR and LUT based translations are initialized differently */
+ if (mw_cfg->type == IDT_MW_DIR) {
+ const struct idt_ntb_bar *bar = &ntdata_tbl.bars[mw_cfg->bar];
+ u64 limit;
+ /* Set destination partition of translation */
+ data = idt_nt_read(ndev, bar->setup);
+ data = SET_FIELD(BARSETUP_TPART, data, ndev->peers[pidx].part);
+ idt_nt_write(ndev, bar->setup, data);
+ /* Set translation base address */
+ idt_nt_write(ndev, bar->ltbase, (u32)addr);
+ idt_nt_write(ndev, bar->utbase, (u32)(addr >> 32));
+ /* Set the custom BAR aperture limit */
+ limit = pci_resource_start(ntb->pdev, mw_cfg->bar) + size;
+ idt_nt_write(ndev, bar->limit, (u32)limit);
+ if (IS_FLD_SET(BARSETUP_TYPE, data, 64))
+ idt_nt_write(ndev, (bar + 1)->limit, (limit >> 32));
+ } else {
+ unsigned long irqflags;
+ /* Initialize corresponding LUT entry */
+ lutoff = SET_FIELD(LUTOFFSET_INDEX, 0, mw_cfg->idx) |
+ SET_FIELD(LUTOFFSET_BAR, 0, mw_cfg->bar);
+ data = SET_FIELD(LUTUDATA_PART, 0, ndev->peers[pidx].part) |
+ IDT_LUTUDATA_VALID;
+ spin_lock_irqsave(&ndev->lut_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_LUTOFFSET, lutoff);
+ idt_nt_write(ndev, IDT_NT_LUTLDATA, (u32)addr);
+ idt_nt_write(ndev, IDT_NT_LUTMDATA, (u32)(addr >> 32));
+ idt_nt_write(ndev, IDT_NT_LUTUDATA, data);
+ mmiowb();
+ spin_unlock_irqrestore(&ndev->lut_lock, irqflags);
+ /* Limit address isn't specified since size is fixed for LUT */
+ }
+
+ return 0;
+}
+
+/*
+ * idt_ntb_peer_mw_clear_trans() - clear the outbound MW translation address
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ *
+ * It effectively disables the translation over the specified outbound MW.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static int idt_ntb_peer_mw_clear_trans(struct ntb_dev *ntb, int pidx,
+ int widx)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+ struct idt_mw_cfg *mw_cfg;
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ if (widx < 0 || ndev->mw_cnt <= widx)
+ return -EINVAL;
+
+ mw_cfg = &ndev->mws[widx];
+
+ /* DIR and LUT based translations are initialized differently */
+ if (mw_cfg->type == IDT_MW_DIR) {
+ const struct idt_ntb_bar *bar = &ntdata_tbl.bars[mw_cfg->bar];
+ u32 data;
+ /* Read BARSETUP to check BAR type */
+ data = idt_nt_read(ndev, bar->setup);
+ /* Disable translation by specifying zero BAR limit */
+ idt_nt_write(ndev, bar->limit, 0);
+ if (IS_FLD_SET(BARSETUP_TYPE, data, 64))
+ idt_nt_write(ndev, (bar + 1)->limit, 0);
+ } else {
+ unsigned long irqflags;
+ u32 lutoff;
+		/* Clear the corresponding LUT entry */
+ lutoff = SET_FIELD(LUTOFFSET_INDEX, 0, mw_cfg->idx) |
+ SET_FIELD(LUTOFFSET_BAR, 0, mw_cfg->bar);
+ spin_lock_irqsave(&ndev->lut_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_LUTOFFSET, lutoff);
+ idt_nt_write(ndev, IDT_NT_LUTLDATA, 0);
+ idt_nt_write(ndev, IDT_NT_LUTMDATA, 0);
+ idt_nt_write(ndev, IDT_NT_LUTUDATA, 0);
+ mmiowb();
+ spin_unlock_irqrestore(&ndev->lut_lock, irqflags);
+ }
+
+ return 0;
+}
+
+/*=============================================================================
+ * 5. Doorbell operations
+ *
+ * Doorbell functionality of IDT PCIe-switches is pretty unusual. First of
+ * all there is a global doorbell register whose state can be changed by any
+ * NT-function of the IDT device in accordance with global permissions. These
+ * permission configs are not supported by the NTB API, so they must be set
+ * up by either BIOS or EEPROM settings. In the same way the state of the
+ * global doorbell is reflected in the NT-functions' local inbound doorbell
+ * registers. It can lead to situations when a client driver sets some peer
+ * doorbell bits and gets them bounced back to the local inbound doorbell if
+ * permissions are granted.
+ * Secondly there is just one IRQ vector for Doorbell, Message, Temperature
+ * and Switch events, so if a client driver leaves any of the doorbell bits
+ * set and some other event occurs, the driver will be notified of a Doorbell
+ * event again.
+ *=============================================================================
+ */
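A hedged sketch of how a client typically consumes this doorbell model (not part of this patch); the handler name is hypothetical and it assumes the generic ntb_db_*()/ntb_peer_db_set() wrappers and the ntb_ctx_ops db_event callback:

/* The peer side would ring bit 0 with: ntb_peer_db_set(ntb, BIT_ULL(0)); */

static void client_db_event(void *ctx, int db_vector)
{
	struct ntb_dev *ntb = ctx;	/* assuming ctx holds the ntb device */
	u64 db_bits = ntb_db_read(ntb) & ntb_db_valid_mask(ntb);

	if (!db_bits)
		return;

	/* ... react to the peer's notification ... */

	/* Clearing the bits lets the shared NTINTSTS doorbell status drop */
	ntb_db_clear(ntb, db_bits);
}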
+
+/*
+ * idt_db_isr() - doorbell event ISR
+ * @ndev: IDT NTB hardware driver descriptor
+ * @ntint_sts: NT-function interrupt status
+ *
+ * Doorbell event happens when the DBELL bit of NTINTSTS switches from 0 to 1.
+ * It happens only when unmasked doorbell bits are set to ones on completely
+ * zeroed doorbell register.
+ * The method is called from PCIe ISR bottom-half routine.
+ */
+static void idt_db_isr(struct idt_ntb_dev *ndev, u32 ntint_sts)
+{
+ /*
+	 * Doorbell IRQ status will be cleared only when the client
+	 * driver unsets all the doorbell bits.
+ */
+ dev_dbg(&ndev->ntb.pdev->dev, "DB IRQ detected %#08x", ntint_sts);
+
+ /* Notify the client driver of possible doorbell state change */
+ ntb_db_event(&ndev->ntb, 0);
+}
+
+/*
+ * idt_ntb_db_valid_mask() - get a mask of doorbell bits supported by the ntb
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * IDT PCIe-switches expose just one Doorbell register of DWORD size.
+ *
+ * Return: A mask of doorbell bits supported by the ntb.
+ */
+static u64 idt_ntb_db_valid_mask(struct ntb_dev *ntb)
+{
+ return IDT_DBELL_MASK;
+}
+
+/*
+ * idt_ntb_db_read() - read the local doorbell register (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * There is just one inbound doorbell register in each NT-function, so
+ * this method returns its value.
+ *
+ * Return: The bits currently set in the local doorbell register.
+ */
+static u64 idt_ntb_db_read(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return idt_nt_read(ndev, IDT_NT_INDBELLSTS);
+}
+
+/*
+ * idt_ntb_db_clear() - clear bits in the local doorbell register
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @db_bits: Doorbell bits to clear.
+ *
+ * Clear bits of inbound doorbell register by writing ones to it.
+ *
+ * NOTE! Invalid bits are always considered cleared so it's not an error
+ * to clear them again.
+ *
+ * Return: always zero as success.
+ */
+static int idt_ntb_db_clear(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ idt_nt_write(ndev, IDT_NT_INDBELLSTS, (u32)db_bits);
+
+ return 0;
+}
+
+/*
+ * idt_ntb_db_read_mask() - read the local doorbell mask (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * Each inbound doorbell bit can be masked from generating an IRQ by setting
+ * the corresponding bit in the inbound doorbell mask. So this method returns
+ * the value of the register.
+ *
+ * Return: The bits currently set in the local doorbell mask register.
+ */
+static u64 idt_ntb_db_read_mask(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return idt_nt_read(ndev, IDT_NT_INDBELLMSK);
+}
+
+/*
+ * idt_ntb_db_set_mask() - set bits in the local doorbell mask
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @db_bits: Doorbell mask bits to set.
+ *
+ * The inbound doorbell register mask value must be read, then OR'ed with the
+ * passed field and only then written back.
+ *
+ * Return: zero on success, negative error if invalid argument passed.
+ */
+static int idt_ntb_db_set_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return idt_reg_set_bits(ndev, IDT_NT_INDBELLMSK, &ndev->db_mask_lock,
+ IDT_DBELL_MASK, db_bits);
+}
+
+/*
+ * idt_ntb_db_clear_mask() - clear bits in the local doorbell mask
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @db_bits: Doorbell bits to clear.
+ *
+ * The method just clears the set bits in accordance with the passed
+ * bitfield. The IDT PCIe-switch shall generate an interrupt only if no
+ * unmasked bit was set before the current unmasking. Otherwise an IRQ won't
+ * be generated since there is only one IRQ vector for all doorbells.
+ *
+ * Return: always zero as success
+ */
+static int idt_ntb_db_clear_mask(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ idt_reg_clear_bits(ndev, IDT_NT_INDBELLMSK, &ndev->db_mask_lock,
+ db_bits);
+
+ return 0;
+}
+
+/*
+ * idt_ntb_peer_db_set() - set bits in the peer doorbell register
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @db_bits: Doorbell bits to set.
+ *
+ * IDT PCIe-switches expose a local outbound doorbell register to change the
+ * peer inbound doorbell register state.
+ *
+ * Return: zero on success, negative error if invalid argument passed.
+ */
+static int idt_ntb_peer_db_set(struct ntb_dev *ntb, u64 db_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (db_bits & ~(u64)IDT_DBELL_MASK)
+ return -EINVAL;
+
+ idt_nt_write(ndev, IDT_NT_OUTDBELLSET, (u32)db_bits);
+ return 0;
+}
+
+/*=============================================================================
+ * 6. Messaging operations
+ *
+ * Each NT-function of IDT PCIe-switch has four inbound and four outbound
+ * message registers. Each outbound message register can be connected to one or
+ * even more than one peer inbound message register by setting global
+ * configurations. Since the NTB API permits one-on-one message register
+ * mapping only, the driver acts in accordance with that restriction.
+ *=============================================================================
+ */
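A hedged client-side sketch of the messaging flow just described (not part of this patch), assuming the ntb_msg_*() wrappers this series adds to include/linux/ntb.h; the helper name and error policy are hypothetical:

static int client_msg_send(struct ntb_dev *ntb, int midx, int pidx, u32 data)
{
	u64 outbits = ntb_msg_outbits(ntb);
	int ret;

	ret = ntb_msg_write(ntb, midx, pidx, data);
	if (ret)
		return ret;

	/* The switch flags a failure if the peer inbound register is full */
	if (ntb_msg_read_sts(ntb) & outbits) {
		ntb_msg_clear_sts(ntb, outbits);
		return -EBUSY;
	}

	return 0;
}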
+
+/*
+ * idt_init_msg() - initialize messaging interface
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Just initialize the message register routing table locks.
+ */
+static void idt_init_msg(struct idt_ntb_dev *ndev)
+{
+ unsigned char midx;
+
+	/* Init the message routing table locks */
+ for (midx = 0; midx < IDT_MSG_CNT; midx++)
+ spin_lock_init(&ndev->msg_locks[midx]);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB Messaging initialized");
+}
+
+/*
+ * idt_msg_isr() - message event ISR
+ * @ndev: IDT NTB hardware driver descriptor
+ * @ntint_sts: NT-function interrupt status
+ *
+ * Message event happens when MSG bit of NTINTSTS switches from 0 to 1.
+ * It happens only when unmasked message status bits are set to ones on
+ * completely zeroed message status register.
+ * The method is called from PCIe ISR bottom-half routine.
+ */
+static void idt_msg_isr(struct idt_ntb_dev *ndev, u32 ntint_sts)
+{
+ /*
+	 * Message IRQ status will be cleared only when the client
+	 * driver unsets all the message status bits.
+ */
+ dev_dbg(&ndev->ntb.pdev->dev, "Message IRQ detected %#08x", ntint_sts);
+
+ /* Notify the client driver of possible message status change */
+ ntb_msg_event(&ndev->ntb);
+}
+
+/*
+ * idt_ntb_msg_count() - get the number of message registers (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * IDT PCIe-switches support four message registers.
+ *
+ * Return: the number of message registers.
+ */
+static int idt_ntb_msg_count(struct ntb_dev *ntb)
+{
+ return IDT_MSG_CNT;
+}
+
+/*
+ * idt_ntb_msg_inbits() - get a bitfield of inbound message registers status
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * The NT message status register is shared between the inbound and outbound
+ * message register statuses.
+ *
+ * Return: bitfield of inbound message registers.
+ */
+static u64 idt_ntb_msg_inbits(struct ntb_dev *ntb)
+{
+ return (u64)IDT_INMSG_MASK;
+}
+
+/*
+ * idt_ntb_msg_outbits() - get a bitfield of outbound message registers status
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * The NT message status register is shared between the inbound and outbound
+ * message register statuses.
+ *
+ * Return: bitfield of outbound message registers.
+ */
+static u64 idt_ntb_msg_outbits(struct ntb_dev *ntb)
+{
+ return (u64)IDT_OUTMSG_MASK;
+}
+
+/*
+ * idt_ntb_msg_read_sts() - read the message registers status (NTB API callback)
+ * @ntb: NTB device context.
+ *
+ * IDT PCIe-switches expose message status registers to notify drivers of
+ * incoming data and of failures in case the peer message register isn't freed.
+ *
+ * Return: status bits of message registers
+ */
+static u64 idt_ntb_msg_read_sts(struct ntb_dev *ntb)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return idt_nt_read(ndev, IDT_NT_MSGSTS);
+}
+
+/*
+ * idt_ntb_msg_clear_sts() - clear status bits of message registers
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @sts_bits: Status bits to clear.
+ *
+ * Clear bits in the status register by writing ones.
+ *
+ * NOTE! Invalid bits are always considered cleared so it's not an error
+ * to clear them again.
+ *
+ * Return: always zero as success.
+ */
+static int idt_ntb_msg_clear_sts(struct ntb_dev *ntb, u64 sts_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ idt_nt_write(ndev, IDT_NT_MSGSTS, sts_bits);
+
+ return 0;
+}
+
+/*
+ * idt_ntb_msg_set_mask() - set mask of message register status bits
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @mask_bits: Mask bits.
+ *
+ * Mask the message status bits from raising an IRQ.
+ *
+ * Return: zero on success, negative error if invalid argument passed.
+ */
+static int idt_ntb_msg_set_mask(struct ntb_dev *ntb, u64 mask_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ return idt_reg_set_bits(ndev, IDT_NT_MSGSTSMSK, &ndev->msg_mask_lock,
+ IDT_MSG_MASK, mask_bits);
+}
+
+/*
+ * idt_ntb_msg_clear_mask() - clear message registers mask
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @mask_bits: Mask bits.
+ *
+ * Clear the mask of message status bit IRQs.
+ *
+ * Return: always zero as success.
+ */
+static int idt_ntb_msg_clear_mask(struct ntb_dev *ntb, u64 mask_bits)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ idt_reg_clear_bits(ndev, IDT_NT_MSGSTSMSK, &ndev->msg_mask_lock,
+ mask_bits);
+
+ return 0;
+}
+
+/*
+ * idt_ntb_msg_read() - read message register with specified index
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @midx: Message register index
+ * @pidx: OUT - Port index of peer device a message retrieved from
+ * @msg: OUT - Data
+ *
+ * Read data from the specified message register and source register.
+ *
+ * Return: zero on success, negative error if invalid argument passed.
+ */
+static int idt_ntb_msg_read(struct ntb_dev *ntb, int midx, int *pidx, u32 *msg)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+
+ if (midx < 0 || IDT_MSG_CNT <= midx)
+ return -EINVAL;
+
+ /* Retrieve source port index of the message */
+ if (pidx != NULL) {
+ u32 srcpart;
+
+ srcpart = idt_nt_read(ndev, ntdata_tbl.msgs[midx].src);
+ *pidx = ndev->part_idx_map[srcpart];
+
+ /* Sanity check partition index (for initial case) */
+ if (*pidx == -EINVAL)
+ *pidx = 0;
+ }
+
+ /* Retrieve data of the corresponding message register */
+ if (msg != NULL)
+ *msg = idt_nt_read(ndev, ntdata_tbl.msgs[midx].in);
+
+ return 0;
+}
+
+/*
+ * idt_ntb_msg_write() - write data to the specified message register
+ * (NTB API callback)
+ * @ntb: NTB device context.
+ * @midx: Message register index
+ * @pidx: Port index of peer device a message being sent to
+ * @msg: Data to send
+ *
+ * Just try to send data to a peer. The message status register should be
+ * checked by the client driver.
+ *
+ * Return: zero on success, negative error if invalid argument passed.
+ */
+static int idt_ntb_msg_write(struct ntb_dev *ntb, int midx, int pidx, u32 msg)
+{
+ struct idt_ntb_dev *ndev = to_ndev_ntb(ntb);
+ unsigned long irqflags;
+ u32 swpmsgctl = 0;
+
+ if (midx < 0 || IDT_MSG_CNT <= midx)
+ return -EINVAL;
+
+ if (pidx < 0 || ndev->peer_cnt <= pidx)
+ return -EINVAL;
+
+ /* Collect the routing information */
+ swpmsgctl = SET_FIELD(SWPxMSGCTL_REG, 0, midx) |
+ SET_FIELD(SWPxMSGCTL_PART, 0, ndev->peers[pidx].part);
+
+ /* Lock the messages routing table of the specified register */
+ spin_lock_irqsave(&ndev->msg_locks[midx], irqflags);
+ /* Set the route and send the data */
+ idt_sw_write(ndev, partdata_tbl[ndev->part].msgctl[midx], swpmsgctl);
+ idt_nt_write(ndev, ntdata_tbl.msgs[midx].out, msg);
+ mmiowb();
+ /* Unlock the messages routing table */
+ spin_unlock_irqrestore(&ndev->msg_locks[midx], irqflags);
+
+ /* Client driver shall check the status register */
+ return 0;
+}
+
+/*=============================================================================
+ * 7. Temperature sensor operations
+ *
+ * The IDT PCIe-switch has an embedded temperature sensor, which can be used
+ * to warn user-space of possible chip overheating. Since the workload
+ * temperature can differ between platforms, the temperature thresholds as
+ * well as the general sensor settings must be set up as part of the
+ * BIOS/EEPROM initialization. That includes enabling the sensor itself.
+ *=============================================================================
+ */
+
+/*
+ * idt_read_temp() - read temperature from chip sensor
+ * @ndev:	IDT NTB hardware driver descriptor
+ * @val: OUT - integer value of temperature
+ * @frac: OUT - fraction
+ */
+static void idt_read_temp(struct idt_ntb_dev *ndev, unsigned char *val,
+ unsigned char *frac)
+{
+ u32 data;
+
+ /* Read the data from TEMP field of the TMPSTS register */
+ data = idt_sw_read(ndev, IDT_SW_TMPSTS);
+ data = GET_FIELD(TMPSTS_TEMP, data);
+ /* TEMP field has one fractional bit and seven integer bits */
+ *val = data >> 1;
+ *frac = ((data & 0x1) ? 5 : 0);
+}
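A worked example of the decoding performed above (illustrative only; the helper name is hypothetical): the TEMP field carries seven integer bits plus one half-degree bit, so a raw value of 0x5B (91) decodes to 45.5 C.

/* Illustrative only: decode a raw TEMP field value into tenths of a degree */
static unsigned int example_temp_tenths(u32 temp_field)
{
	/* 0x5B -> (91 >> 1) * 10 + 5 = 455, i.e. 45.5 C */
	return (temp_field >> 1) * 10 + ((temp_field & 0x1) ? 5 : 0);
}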
+
+/*
+ * idt_temp_isr() - temperature sensor alarm events ISR
+ * @ndev: IDT NTB hardware driver descriptor
+ * @ntint_sts: NT-function interrupt status
+ *
+ * It handles events of the temperature crossing alarm thresholds. Since
+ * reading the TMPALARM register clears it, the function doesn't analyze the
+ * read value; instead the current temperature value is just printed to the
+ * log as a warning.
+ * The method is called from PCIe ISR bottom-half routine.
+ */
+static void idt_temp_isr(struct idt_ntb_dev *ndev, u32 ntint_sts)
+{
+ unsigned char val, frac;
+
+ /* Read the current temperature value */
+ idt_read_temp(ndev, &val, &frac);
+
+ /* Read the temperature alarm to clean the alarm status out */
+ /*(void)idt_sw_read(ndev, IDT_SW_TMPALARM);*/
+
+ /* Clean the corresponding interrupt bit */
+ idt_nt_write(ndev, IDT_NT_NTINTSTS, IDT_NTINTSTS_TMPSENSOR);
+
+ dev_dbg(&ndev->ntb.pdev->dev,
+ "Temp sensor IRQ detected %#08x", ntint_sts);
+
+ /* Print temperature value to log */
+ dev_warn(&ndev->ntb.pdev->dev, "Temperature %hhu.%hhu", val, frac);
+}
+
+/*=============================================================================
+ * 8. ISRs related operations
+ *
+ * The IDT PCIe-switch has an unusual IRQ design. There is just one
+ * interrupt vector for the doorbell and message registers. So the hardware
+ * driver can't determine the actual source of an IRQ if, for example, a
+ * message event happened while any unmasked doorbell bit is still set. A
+ * similar situation may occur if switch or temperature sensor events pop up.
+ * The difference is that the SEVENT and TMPSENSOR bits of the NT interrupt
+ * status register can be cleared by the IRQ handler, so the next interrupt
+ * request won't falsely handle the corresponding events.
+ * The hardware driver has only a bottom-half handler of the IRQ, since if any
+ * of the events happened the device won't raise it again before the last one
+ * is handled by clearing the corresponding NTINTSTS bit.
+ *=============================================================================
+ */
+
+static irqreturn_t idt_thread_isr(int irq, void *devid);
+
+/*
+ * idt_init_isr() - initialize PCIe interrupt handler
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Return: zero on success, otherwise a negative error number.
+ */
+static int idt_init_isr(struct idt_ntb_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ u32 ntint_mask;
+ int ret;
+
+ /* Allocate just one interrupt vector for the ISR */
+ ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSI | PCI_IRQ_LEGACY);
+ if (ret != 1) {
+ dev_err(&pdev->dev, "Failed to allocate IRQ vector");
+ return ret;
+ }
+
+ /* Retrieve the IRQ vector */
+ ret = pci_irq_vector(pdev, 0);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to get IRQ vector");
+ goto err_free_vectors;
+ }
+
+ /* Set the IRQ handler */
+ ret = devm_request_threaded_irq(&pdev->dev, ret, NULL, idt_thread_isr,
+ IRQF_ONESHOT, NTB_IRQNAME, ndev);
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Failed to set MSI IRQ handler, %d", ret);
+ goto err_free_vectors;
+ }
+
+ /* Unmask Message/Doorbell/SE/Temperature interrupts */
+ ntint_mask = idt_nt_read(ndev, IDT_NT_NTINTMSK) & ~IDT_NTINTMSK_ALL;
+ idt_nt_write(ndev, IDT_NT_NTINTMSK, ntint_mask);
+
+ /* From now on the interrupts are enabled */
+ dev_dbg(&pdev->dev, "NTB interrupts initialized");
+
+ return 0;
+
+err_free_vectors:
+ pci_free_irq_vectors(pdev);
+
+ return ret;
+}
+
+
+/*
+ * idt_deinit_isr() - deinitialize PCIe interrupt handler
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Disable corresponding interrupts and free allocated IRQ vectors.
+ */
+static void idt_deinit_isr(struct idt_ntb_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ u32 ntint_mask;
+
+ /* Mask interrupts back */
+ ntint_mask = idt_nt_read(ndev, IDT_NT_NTINTMSK) | IDT_NTINTMSK_ALL;
+ idt_nt_write(ndev, IDT_NT_NTINTMSK, ntint_mask);
+
+ /* Manually free IRQ otherwise PCI free irq vectors will fail */
+ devm_free_irq(&pdev->dev, pci_irq_vector(pdev, 0), ndev);
+
+ /* Free allocated IRQ vectors */
+ pci_free_irq_vectors(pdev);
+
+ dev_dbg(&pdev->dev, "NTB interrupts deinitialized");
+}
+
+/*
+ * idt_thread_isr() - NT function interrupts handler
+ * @irq: IRQ number
+ * @devid: Custom buffer
+ *
+ * It reads the current NT interrupt status register and handles all the
+ * events it declares.
+ * The method is the bottom-half routine of the default PCIe IRQ handler.
+ */
+static irqreturn_t idt_thread_isr(int irq, void *devid)
+{
+ struct idt_ntb_dev *ndev = devid;
+ bool handled = false;
+ u32 ntint_sts;
+
+ /* Read the NT interrupts status register */
+ ntint_sts = idt_nt_read(ndev, IDT_NT_NTINTSTS);
+
+ /* Handle messaging interrupts */
+ if (ntint_sts & IDT_NTINTSTS_MSG) {
+ idt_msg_isr(ndev, ntint_sts);
+ handled = true;
+ }
+
+ /* Handle doorbell interrupts */
+ if (ntint_sts & IDT_NTINTSTS_DBELL) {
+ idt_db_isr(ndev, ntint_sts);
+ handled = true;
+ }
+
+ /* Handle switch event interrupts */
+ if (ntint_sts & IDT_NTINTSTS_SEVENT) {
+ idt_se_isr(ndev, ntint_sts);
+ handled = true;
+ }
+
+ /* Handle temperature sensor interrupt */
+ if (ntint_sts & IDT_NTINTSTS_TMPSENSOR) {
+ idt_temp_isr(ndev, ntint_sts);
+ handled = true;
+ }
+
+ dev_dbg(&ndev->ntb.pdev->dev, "IDT IRQs 0x%08x handled", ntint_sts);
+
+ return handled ? IRQ_HANDLED : IRQ_NONE;
+}
+
+/*===========================================================================
+ * 9. NTB hardware driver initialization
+ *===========================================================================
+ */
+
+/*
+ * NTB API operations
+ */
+static const struct ntb_dev_ops idt_ntb_ops = {
+ .port_number = idt_ntb_port_number,
+ .peer_port_count = idt_ntb_peer_port_count,
+ .peer_port_number = idt_ntb_peer_port_number,
+ .peer_port_idx = idt_ntb_peer_port_idx,
+ .link_is_up = idt_ntb_link_is_up,
+ .link_enable = idt_ntb_link_enable,
+ .link_disable = idt_ntb_link_disable,
+ .mw_count = idt_ntb_mw_count,
+ .mw_get_align = idt_ntb_mw_get_align,
+ .peer_mw_count = idt_ntb_peer_mw_count,
+ .peer_mw_get_addr = idt_ntb_peer_mw_get_addr,
+ .peer_mw_set_trans = idt_ntb_peer_mw_set_trans,
+ .peer_mw_clear_trans = idt_ntb_peer_mw_clear_trans,
+ .db_valid_mask = idt_ntb_db_valid_mask,
+ .db_read = idt_ntb_db_read,
+ .db_clear = idt_ntb_db_clear,
+ .db_read_mask = idt_ntb_db_read_mask,
+ .db_set_mask = idt_ntb_db_set_mask,
+ .db_clear_mask = idt_ntb_db_clear_mask,
+ .peer_db_set = idt_ntb_peer_db_set,
+ .msg_count = idt_ntb_msg_count,
+ .msg_inbits = idt_ntb_msg_inbits,
+ .msg_outbits = idt_ntb_msg_outbits,
+ .msg_read_sts = idt_ntb_msg_read_sts,
+ .msg_clear_sts = idt_ntb_msg_clear_sts,
+ .msg_set_mask = idt_ntb_msg_set_mask,
+ .msg_clear_mask = idt_ntb_msg_clear_mask,
+ .msg_read = idt_ntb_msg_read,
+ .msg_write = idt_ntb_msg_write
+};
+
+/*
+ * idt_register_device() - register IDT NTB device
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Return: zero on success, otherwise a negative error number.
+ */
+static int idt_register_device(struct idt_ntb_dev *ndev)
+{
+ int ret;
+
+ /* Initialize the rest of NTB device structure and register it */
+ ndev->ntb.ops = &idt_ntb_ops;
+ ndev->ntb.topo = NTB_TOPO_PRI;
+
+ ret = ntb_register_device(&ndev->ntb);
+ if (ret != 0) {
+ dev_err(&ndev->ntb.pdev->dev, "Failed to register NTB device");
+ return ret;
+ }
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB device successfully registered");
+
+ return 0;
+}
+
+/*
+ * idt_unregister_device() - unregister IDT NTB device
+ * @ndev: IDT NTB hardware driver descriptor
+ */
+static void idt_unregister_device(struct idt_ntb_dev *ndev)
+{
+ /* Just unregister the NTB device */
+ ntb_unregister_device(&ndev->ntb);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB device unregistered");
+}
+
+/*=============================================================================
+ * 10. DebugFS node initialization
+ *=============================================================================
+ */
+
+static ssize_t idt_dbgfs_info_read(struct file *filp, char __user *ubuf,
+ size_t count, loff_t *offp);
+
+/*
+ * Driver DebugFS info file operations
+ */
+static const struct file_operations idt_dbgfs_info_ops = {
+ .owner = THIS_MODULE,
+ .open = simple_open,
+ .read = idt_dbgfs_info_read
+};
+
+/*
+ * idt_dbgfs_info_read() - DebugFS read info node callback
+ * @file: File node descriptor.
+ * @ubuf: User-space buffer to put data to
+ * @count: Size of the buffer
+ * @offp: Offset within the buffer
+ */
+static ssize_t idt_dbgfs_info_read(struct file *filp, char __user *ubuf,
+ size_t count, loff_t *offp)
+{
+ struct idt_ntb_dev *ndev = filp->private_data;
+ unsigned char temp, frac, idx, pidx, cnt;
+ ssize_t ret = 0, off = 0;
+ unsigned long irqflags;
+ enum ntb_speed speed;
+ enum ntb_width width;
+ char *strbuf;
+ size_t size;
+ u32 data;
+
+	/* Let's limit the buffer size the way the Intel/AMD drivers do */
+ size = min_t(size_t, count, 0x1000U);
+
+ /* Allocate the memory for the buffer */
+ strbuf = kmalloc(size, GFP_KERNEL);
+ if (strbuf == NULL)
+ return -ENOMEM;
+
+ /* Put the data into the string buffer */
+ off += scnprintf(strbuf + off, size - off,
+ "\n\t\tIDT NTB device Information:\n\n");
+
+ /* General local device configurations */
+ off += scnprintf(strbuf + off, size - off,
+ "Local Port %hhu, Partition %hhu\n", ndev->port, ndev->part);
+
+ /* Peer ports information */
+ off += scnprintf(strbuf + off, size - off, "Peers:\n");
+ for (idx = 0; idx < ndev->peer_cnt; idx++) {
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu. Port %hhu, Partition %hhu\n",
+ idx, ndev->peers[idx].port, ndev->peers[idx].part);
+ }
+
+ /* Links status */
+ data = idt_ntb_link_is_up(&ndev->ntb, &speed, &width);
+ off += scnprintf(strbuf + off, size - off,
+ "NTB link status\t- 0x%08x, ", data);
+ off += scnprintf(strbuf + off, size - off, "PCIe Gen %d x%d lanes\n",
+ speed, width);
+
+ /* Mapping table entries */
+ off += scnprintf(strbuf + off, size - off, "NTB Mapping Table:\n");
+ for (idx = 0; idx < IDT_MTBL_ENTRY_CNT; idx++) {
+ spin_lock_irqsave(&ndev->mtbl_lock, irqflags);
+ idt_nt_write(ndev, IDT_NT_NTMTBLADDR, idx);
+ data = idt_nt_read(ndev, IDT_NT_NTMTBLDATA);
+ spin_unlock_irqrestore(&ndev->mtbl_lock, irqflags);
+
+ /* Print valid entries only */
+ if (data & IDT_NTMTBLDATA_VALID) {
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu. Partition %d, Requester ID 0x%04x\n",
+ idx, GET_FIELD(NTMTBLDATA_PART, data),
+ GET_FIELD(NTMTBLDATA_REQID, data));
+ }
+ }
+ off += scnprintf(strbuf + off, size - off, "\n");
+
+ /* Outbound memory windows information */
+ off += scnprintf(strbuf + off, size - off,
+ "Outbound Memory Windows:\n");
+ for (idx = 0; idx < ndev->mw_cnt; idx += cnt) {
+ data = ndev->mws[idx].type;
+ cnt = idt_get_mw_count(data);
+
+ /* Print Memory Window information */
+ if (data == IDT_MW_DIR)
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu.\t", idx);
+ else
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu-%hhu.\t", idx, idx + cnt - 1);
+
+ off += scnprintf(strbuf + off, size - off, "%s BAR%hhu, ",
+ idt_get_mw_name(data), ndev->mws[idx].bar);
+
+ off += scnprintf(strbuf + off, size - off,
+ "Address align 0x%08llx, ", ndev->mws[idx].addr_align);
+
+ off += scnprintf(strbuf + off, size - off,
+ "Size align 0x%08llx, Size max %llu\n",
+ ndev->mws[idx].size_align, ndev->mws[idx].size_max);
+ }
+
+ /* Inbound memory windows information */
+ for (pidx = 0; pidx < ndev->peer_cnt; pidx++) {
+ off += scnprintf(strbuf + off, size - off,
+ "Inbound Memory Windows for peer %hhu (Port %hhu):\n",
+ pidx, ndev->peers[pidx].port);
+
+ /* Print Memory Windows information */
+ for (idx = 0; idx < ndev->peers[pidx].mw_cnt; idx += cnt) {
+ data = ndev->peers[pidx].mws[idx].type;
+ cnt = idt_get_mw_count(data);
+
+ if (data == IDT_MW_DIR)
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu.\t", idx);
+ else
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu-%hhu.\t", idx, idx + cnt - 1);
+
+ off += scnprintf(strbuf + off, size - off,
+ "%s BAR%hhu, ", idt_get_mw_name(data),
+ ndev->peers[pidx].mws[idx].bar);
+
+ off += scnprintf(strbuf + off, size - off,
+ "Address align 0x%08llx, ",
+ ndev->peers[pidx].mws[idx].addr_align);
+
+ off += scnprintf(strbuf + off, size - off,
+ "Size align 0x%08llx, Size max %llu\n",
+ ndev->peers[pidx].mws[idx].size_align,
+ ndev->peers[pidx].mws[idx].size_max);
+ }
+ }
+ off += scnprintf(strbuf + off, size - off, "\n");
+
+ /* Doorbell information */
+ data = idt_sw_read(ndev, IDT_SW_GDBELLSTS);
+ off += scnprintf(strbuf + off, size - off,
+ "Global Doorbell state\t- 0x%08x\n", data);
+ data = idt_ntb_db_read(&ndev->ntb);
+ off += scnprintf(strbuf + off, size - off,
+ "Local Doorbell state\t- 0x%08x\n", data);
+ data = idt_nt_read(ndev, IDT_NT_INDBELLMSK);
+ off += scnprintf(strbuf + off, size - off,
+ "Local Doorbell mask\t- 0x%08x\n", data);
+ off += scnprintf(strbuf + off, size - off, "\n");
+
+ /* Messaging information */
+ off += scnprintf(strbuf + off, size - off,
+ "Message event valid\t- 0x%08x\n", IDT_MSG_MASK);
+ data = idt_ntb_msg_read_sts(&ndev->ntb);
+ off += scnprintf(strbuf + off, size - off,
+ "Message event status\t- 0x%08x\n", data);
+ data = idt_nt_read(ndev, IDT_NT_MSGSTSMSK);
+ off += scnprintf(strbuf + off, size - off,
+ "Message event mask\t- 0x%08x\n", data);
+ off += scnprintf(strbuf + off, size - off,
+ "Message data:\n");
+ for (idx = 0; idx < IDT_MSG_CNT; idx++) {
+ int src;
+ (void)idt_ntb_msg_read(&ndev->ntb, idx, &src, &data);
+ off += scnprintf(strbuf + off, size - off,
+ "\t%hhu. 0x%08x from peer %hhu (Port %hhu)\n",
+ idx, data, src, ndev->peers[src].port);
+ }
+ off += scnprintf(strbuf + off, size - off, "\n");
+
+ /* Current temperature */
+ idt_read_temp(ndev, &temp, &frac);
+ off += scnprintf(strbuf + off, size - off,
+ "Switch temperature\t\t- %hhu.%hhuC\n", temp, frac);
+
+ /* Copy the buffer to the User Space */
+ ret = simple_read_from_buffer(ubuf, count, offp, strbuf, off);
+ kfree(strbuf);
+
+ return ret;
+}
+
+/*
+ * idt_init_dbgfs() - initialize DebugFS node
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Return: zero on success, otherwise a negative error number.
+ */
+static int idt_init_dbgfs(struct idt_ntb_dev *ndev)
+{
+ char devname[64];
+
+ /* If the top directory is not created then do nothing */
+ if (IS_ERR_OR_NULL(dbgfs_topdir)) {
+ dev_info(&ndev->ntb.pdev->dev, "Top DebugFS directory absent");
+ return PTR_ERR(dbgfs_topdir);
+ }
+
+ /* Create the info file node */
+ snprintf(devname, 64, "info:%s", pci_name(ndev->ntb.pdev));
+ ndev->dbgfs_info = debugfs_create_file(devname, 0400, dbgfs_topdir,
+ ndev, &idt_dbgfs_info_ops);
+ if (IS_ERR(ndev->dbgfs_info)) {
+ dev_dbg(&ndev->ntb.pdev->dev, "Failed to create DebugFS node");
+ return PTR_ERR(ndev->dbgfs_info);
+ }
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB device DebugFS node created");
+
+ return 0;
+}
+
+/*
+ * idt_deinit_dbgfs() - deinitialize DebugFS node
+ * @ndev: IDT NTB hardware driver descriptor
+ *
+ * Just discard the info node from DebugFS
+ */
+static void idt_deinit_dbgfs(struct idt_ntb_dev *ndev)
+{
+ debugfs_remove(ndev->dbgfs_info);
+
+ dev_dbg(&ndev->ntb.pdev->dev, "NTB device DebugFS node discarded");
+}
+
+/*=============================================================================
+ * 11. Basic PCIe device initialization
+ *=============================================================================
+ */
+
+/*
+ * idt_check_setup() - Check whether the IDT PCIe-switch is properly
+ * pre-initialized
+ * @pdev: Pointer to the PCI device descriptor
+ *
+ * Return: zero on success, otherwise a negative error number.
+ */
+static int idt_check_setup(struct pci_dev *pdev)
+{
+ u32 data;
+ int ret;
+
+ /* Read the BARSETUP0 */
+ ret = pci_read_config_dword(pdev, IDT_NT_BARSETUP0, &data);
+ if (ret != 0) {
+ dev_err(&pdev->dev,
+ "Failed to read BARSETUP0 config register");
+ return ret;
+ }
+
+	/* Check whether BAR0 is enabled and maps the config space */
+ if (!(data & IDT_BARSETUP_EN) || !(data & IDT_BARSETUP_MODE_CFG)) {
+ dev_err(&pdev->dev, "BAR0 doesn't map config space");
+ return -EINVAL;
+ }
+
+	/* Configuration space BAR0 must have a certain size */
+ if ((data & IDT_BARSETUP_SIZE_MASK) != IDT_BARSETUP_SIZE_CFG) {
+ dev_err(&pdev->dev, "Invalid size of config space");
+ return -EINVAL;
+ }
+
+ dev_dbg(&pdev->dev, "NTB device pre-initialized correctly");
+
+ return 0;
+}
+
+/*
+ * idt_create_dev() - create the IDT PCIe-switch driver descriptor
+ * @pdev: Pointer to the PCI device descriptor
+ * @id: IDT PCIe-device configuration
+ *
+ * It just allocates memory for the IDT PCIe-switch device structure and
+ * initializes some commonly used fields.
+ *
+ * No release method is needed, since managed device resources are used for
+ * the memory allocation.
+ *
+ * Return: pointer to the descriptor, otherwise a negative error number.
+ */
+static struct idt_ntb_dev *idt_create_dev(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ struct idt_ntb_dev *ndev;
+
+ /* Allocate memory for the IDT PCIe-device descriptor */
+ ndev = devm_kzalloc(&pdev->dev, sizeof(*ndev), GFP_KERNEL);
+ if (IS_ERR_OR_NULL(ndev)) {
+ dev_err(&pdev->dev, "Memory allocation failed for descriptor");
+ return ERR_PTR(-ENOMEM);
+ }
+
+ /* Save the IDT PCIe-switch ports configuration */
+ ndev->swcfg = (struct idt_89hpes_cfg *)id->driver_data;
+ /* Save the PCI-device pointer inside the NTB device structure */
+ ndev->ntb.pdev = pdev;
+
+ /* Initialize spin locker of Doorbell, Message and GASA registers */
+ spin_lock_init(&ndev->db_mask_lock);
+ spin_lock_init(&ndev->msg_mask_lock);
+ spin_lock_init(&ndev->gasa_lock);
+
+ dev_info(&pdev->dev, "IDT %s discovered", ndev->swcfg->name);
+
+ dev_dbg(&pdev->dev, "NTB device descriptor created");
+
+ return ndev;
+}
+
+/*
+ * idt_init_pci() - initialize the basic PCI-related subsystem
+ * @ndev: Pointer to the IDT PCIe-switch driver descriptor
+ *
+ * Managed device resources will be freed automatically in case of failure or
+ * driver detachment.
+ *
+ * Return: zero on success, otherwise negative error number.
+ */
+static int idt_init_pci(struct idt_ntb_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+ int ret;
+
+ /* Initialize the bit mask of DMA */
+ ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (ret != 0) {
+ ret = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Failed to set DMA bit mask\n");
+ return ret;
+ }
+ dev_warn(&pdev->dev, "Cannot set DMA highmem bit mask\n");
+ }
+ ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
+ if (ret != 0) {
+ ret = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (ret != 0) {
+ dev_err(&pdev->dev,
+ "Failed to set consistent DMA bit mask\n");
+ return ret;
+ }
+ dev_warn(&pdev->dev,
+ "Cannot set consistent DMA highmem bit mask\n");
+ }
+
+ /*
+ * Enable the device advanced error reporting. It's not critical to
+ * have AER disabled in the kernel.
+ */
+ ret = pci_enable_pcie_error_reporting(pdev);
+ if (ret != 0)
+ dev_warn(&pdev->dev, "PCIe AER capability disabled\n");
+ else /* Cleanup uncorrectable error status before getting to init */
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+
+ /* First enable the PCI device */
+ ret = pcim_enable_device(pdev);
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Failed to enable PCIe device\n");
+ goto err_disable_aer;
+ }
+
+ /*
+ * Enable the bus mastering, which effectively enables MSI IRQs and
+ * Request TLPs translation
+ */
+ pci_set_master(pdev);
+
+ /* Request all BARs resources and map BAR0 only */
+ ret = pcim_iomap_regions_request_all(pdev, 1, NTB_NAME);
+ if (ret != 0) {
+ dev_err(&pdev->dev, "Failed to request resources\n");
+ goto err_clear_master;
+ }
+
+ /* Retrieve virtual address of BAR0 - PCI configuration space */
+ ndev->cfgspc = pcim_iomap_table(pdev)[0];
+
+ /* Put the IDT driver data pointer to the PCI-device private pointer */
+ pci_set_drvdata(pdev, ndev);
+
+ dev_dbg(&pdev->dev, "NT-function PCIe interface initialized");
+
+ return 0;
+
+err_clear_master:
+ pci_clear_master(pdev);
+err_disable_aer:
+ (void)pci_disable_pcie_error_reporting(pdev);
+
+ return ret;
+}
+
+/*
+ * idt_deinit_pci() - deinitialize the basic PCI-related subsystem
+ * @ndev: Pointer to the IDT PCIe-switch driver descriptor
+ *
+ * Managed resources will be freed on the driver detachment
+ */
+static void idt_deinit_pci(struct idt_ntb_dev *ndev)
+{
+ struct pci_dev *pdev = ndev->ntb.pdev;
+
+ /* Clean up the PCI-device private data pointer */
+ pci_set_drvdata(pdev, NULL);
+
+ /* Clear the bus master disabling the Request TLPs translation */
+ pci_clear_master(pdev);
+
+ /* Disable the AER capability */
+ (void)pci_disable_pcie_error_reporting(pdev);
+
+ dev_dbg(&pdev->dev, "NT-function PCIe interface cleared");
+}
+
+/*===========================================================================
+ * 12. PCI bus callback functions
+ *===========================================================================
+ */
+
+/*
+ * idt_pci_probe() - PCI device probe callback
+ * @pdev: Pointer to PCI device structure
+ * @id: PCIe device custom descriptor
+ *
+ * Return: zero on success, otherwise negative error number
+ */
+static int idt_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ struct idt_ntb_dev *ndev;
+ int ret;
+
+ /* Check whether IDT PCIe-switch is properly pre-initialized */
+ ret = idt_check_setup(pdev);
+ if (ret != 0)
+ return ret;
+
+ /* Allocate the memory for IDT NTB device data */
+ ndev = idt_create_dev(pdev, id);
+ if (IS_ERR_OR_NULL(ndev))
+ return PTR_ERR(ndev);
+
+ /* Initialize the basic PCI subsystem of the device */
+ ret = idt_init_pci(ndev);
+ if (ret != 0)
+ return ret;
+
+ /* Scan ports of the IDT PCIe-switch */
+ (void)idt_scan_ports(ndev);
+
+ /* Initialize NTB link events subsystem */
+ idt_init_link(ndev);
+
+ /* Initialize MWs subsystem */
+ ret = idt_init_mws(ndev);
+ if (ret != 0)
+ goto err_deinit_link;
+
+ /* Initialize Messaging subsystem */
+ idt_init_msg(ndev);
+
+ /* Initialize IDT interrupts handler */
+ ret = idt_init_isr(ndev);
+ if (ret != 0)
+ goto err_deinit_link;
+
+ /* Register IDT NTB devices on the NTB bus */
+ ret = idt_register_device(ndev);
+ if (ret != 0)
+ goto err_deinit_isr;
+
+ /* Initialize DebugFS info node */
+ (void)idt_init_dbgfs(ndev);
+
+ /* IDT PCIe-switch NTB driver is finally initialized */
+ dev_info(&pdev->dev, "IDT NTB device is ready");
+
+ /* May the force be with us... */
+ return 0;
+
+err_deinit_isr:
+ idt_deinit_isr(ndev);
+err_deinit_link:
+ idt_deinit_link(ndev);
+ idt_deinit_pci(ndev);
+
+ return ret;
+}
+
+/*
+ * idt_pci_remove() - PCI device remove callback
+ * @pdev: Pointer to PCI device structure
+ */
+static void idt_pci_remove(struct pci_dev *pdev)
+{
+ struct idt_ntb_dev *ndev = pci_get_drvdata(pdev);
+
+ /* Deinit the DebugFS node */
+ idt_deinit_dbgfs(ndev);
+
+ /* Unregister NTB device */
+ idt_unregister_device(ndev);
+
+ /* Stop the interrupts handling */
+ idt_deinit_isr(ndev);
+
+ /* Deinitialize link event subsystem */
+ idt_deinit_link(ndev);
+
+ /* Deinit basic PCI subsystem */
+ idt_deinit_pci(ndev);
+
+	/* IDT PCIe-switch NTB driver is finally deinitialized */
+ dev_info(&pdev->dev, "IDT NTB device is removed");
+
+ /* Sayonara... */
+}
+
+/*
+ * IDT PCIe-switch models ports configuration structures
+ */
+static struct idt_89hpes_cfg idt_89hpes24nt6ag2_config = {
+ .name = "89HPES24NT6AG2",
+ .port_cnt = 6, .ports = {0, 2, 4, 6, 8, 12}
+};
+static struct idt_89hpes_cfg idt_89hpes32nt8ag2_config = {
+ .name = "89HPES32NT8AG2",
+ .port_cnt = 8, .ports = {0, 2, 4, 6, 8, 12, 16, 20}
+};
+static struct idt_89hpes_cfg idt_89hpes32nt8bg2_config = {
+ .name = "89HPES32NT8BG2",
+ .port_cnt = 8, .ports = {0, 2, 4, 6, 8, 12, 16, 20}
+};
+static struct idt_89hpes_cfg idt_89hpes12nt12g2_config = {
+ .name = "89HPES12NT12G2",
+ .port_cnt = 3, .ports = {0, 8, 16}
+};
+static struct idt_89hpes_cfg idt_89hpes16nt16g2_config = {
+ .name = "89HPES16NT16G2",
+ .port_cnt = 4, .ports = {0, 8, 12, 16}
+};
+static struct idt_89hpes_cfg idt_89hpes24nt24g2_config = {
+ .name = "89HPES24NT24G2",
+ .port_cnt = 8, .ports = {0, 2, 4, 6, 8, 12, 16, 20}
+};
+static struct idt_89hpes_cfg idt_89hpes32nt24ag2_config = {
+ .name = "89HPES32NT24AG2",
+ .port_cnt = 8, .ports = {0, 2, 4, 6, 8, 12, 16, 20}
+};
+static struct idt_89hpes_cfg idt_89hpes32nt24bg2_config = {
+ .name = "89HPES32NT24BG2",
+ .port_cnt = 8, .ports = {0, 2, 4, 6, 8, 12, 16, 20}
+};
+
+/*
+ * PCI-ids table of the supported IDT PCIe-switch devices
+ */
+static const struct pci_device_id idt_pci_tbl[] = {
+ {IDT_PCI_DEVICE_IDS(89HPES24NT6AG2, idt_89hpes24nt6ag2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES32NT8AG2, idt_89hpes32nt8ag2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES32NT8BG2, idt_89hpes32nt8bg2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES12NT12G2, idt_89hpes12nt12g2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES16NT16G2, idt_89hpes16nt16g2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES24NT24G2, idt_89hpes24nt24g2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES32NT24AG2, idt_89hpes32nt24ag2_config)},
+ {IDT_PCI_DEVICE_IDS(89HPES32NT24BG2, idt_89hpes32nt24bg2_config)},
+ {0}
+};
+MODULE_DEVICE_TABLE(pci, idt_pci_tbl);
+
+/*
+ * IDT PCIe-switch NT-function device driver structure definition
+ */
+static struct pci_driver idt_pci_driver = {
+ .name = KBUILD_MODNAME,
+ .probe = idt_pci_probe,
+ .remove = idt_pci_remove,
+ .id_table = idt_pci_tbl,
+};
+
+static int __init idt_pci_driver_init(void)
+{
+ pr_info("%s %s\n", NTB_DESC, NTB_VER);
+
+ /* Create the top DebugFS directory if the FS is initialized */
+ if (debugfs_initialized())
+ dbgfs_topdir = debugfs_create_dir(KBUILD_MODNAME, NULL);
+
+ /* Register the NTB hardware driver to handle the PCI device */
+ return pci_register_driver(&idt_pci_driver);
+}
+module_init(idt_pci_driver_init);
+
+static void __exit idt_pci_driver_exit(void)
+{
+ /* Unregister the NTB hardware driver */
+ pci_unregister_driver(&idt_pci_driver);
+
+ /* Discard the top DebugFS directory */
+ debugfs_remove_recursive(dbgfs_topdir);
+}
+module_exit(idt_pci_driver_exit);
+
diff --git a/drivers/ntb/hw/idt/ntb_hw_idt.h b/drivers/ntb/hw/idt/ntb_hw_idt.h
new file mode 100644
index 000000000000..856fd182f6f4
--- /dev/null
+++ b/drivers/ntb/hw/idt/ntb_hw_idt.h
@@ -0,0 +1,1149 @@
+/*
+ * This file is provided under a GPLv2 license. When using or
+ * redistributing this file, you may do so under that license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright (C) 2016 T-Platforms All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ * Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, one can be found http://www.gnu.org/licenses/.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * IDT PCIe-switch NTB Linux driver
+ *
+ * Contact Information:
+ * Serge Semin <fancer.lancer@gmail.com>, <Sergey.Semin@t-platforms.ru>
+ */
+
+#ifndef NTB_HW_IDT_H
+#define NTB_HW_IDT_H
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/ntb.h>
+
+
+/*
+ * Macro is used to create the struct pci_device_id that matches
+ * the supported IDT PCIe-switches
+ * @devname: Capitalized name of the particular device
+ * @data: Variable passed to the driver of the particular device
+ */
+#define IDT_PCI_DEVICE_IDS(devname, data) \
+ .vendor = PCI_VENDOR_ID_IDT, .device = PCI_DEVICE_ID_IDT_##devname, \
+ .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, \
+ .class = (PCI_CLASS_BRIDGE_OTHER << 8), .class_mask = (0xFFFF00), \
+ .driver_data = (kernel_ulong_t)&data
+
+/*
+ * IDT PCIe-switches device IDs
+ */
+#define PCI_DEVICE_ID_IDT_89HPES24NT6AG2 0x8091
+#define PCI_DEVICE_ID_IDT_89HPES32NT8AG2 0x808F
+#define PCI_DEVICE_ID_IDT_89HPES32NT8BG2 0x8088
+#define PCI_DEVICE_ID_IDT_89HPES12NT12G2 0x8092
+#define PCI_DEVICE_ID_IDT_89HPES16NT16G2 0x8090
+#define PCI_DEVICE_ID_IDT_89HPES24NT24G2 0x808E
+#define PCI_DEVICE_ID_IDT_89HPES32NT24AG2 0x808C
+#define PCI_DEVICE_ID_IDT_89HPES32NT24BG2 0x808A
+
+/*
+ * NT-function Configuration Space registers
+ * NOTE 1) The IDT PCIe-switch internal data is little-endian
+ * so it must be taken into account in the driver
+ * internals.
+ * 2) Additionally the registers should be accessed either
+ * with byte-enables corresponding to their native size or
+ * the size of one DWORD
+ *
+ * So to simplify the driver code, only DWORD-sized read/write operations
+ * are utilized.
+ */
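+/*
+ * Illustrative sketch (not part of the original register map): with only
+ * DWORD-sized accesses permitted, the NT-function config space helpers can
+ * be reduced to plain 32-bit MMIO wrappers over the mapped space, e.g.
+ * (names and signatures here are assumptions for illustration):
+ *
+ *	static void idt_nt_write(struct idt_ntb_dev *ndev,
+ *				 const unsigned int reg, const u32 data)
+ *	{
+ *		iowrite32(data, ndev->cfgspc + (ptrdiff_t)reg);
+ *	}
+ *
+ *	static u32 idt_nt_read(struct idt_ntb_dev *ndev, const unsigned int reg)
+ *	{
+ *		return ioread32(ndev->cfgspc + (ptrdiff_t)reg);
+ *	}
+ */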
+/* PCI Express Configuration Space */
+/* PCI Express command/status register (DWORD) */
+#define IDT_NT_PCICMDSTS 0x00004U
+/* PCI Express Device Capabilities (DWORD) */
+#define IDT_NT_PCIEDCAP 0x00044U
+/* PCI Express Device Control/Status (WORD+WORD) */
+#define IDT_NT_PCIEDCTLSTS 0x00048U
+/* PCI Express Link Capabilities (DWORD) */
+#define IDT_NT_PCIELCAP 0x0004CU
+/* PCI Express Link Control/Status (WORD+WORD) */
+#define IDT_NT_PCIELCTLSTS 0x00050U
+/* PCI Express Device Capabilities 2 (DWORD) */
+#define IDT_NT_PCIEDCAP2 0x00064U
+/* PCI Express Device Control 2 (WORD+WORD) */
+#define IDT_NT_PCIEDCTL2 0x00068U
+/* PCI Power Management Control and Status (DWORD) */
+#define IDT_NT_PMCSR 0x000C4U
+/*==========================================*/
+/* IDT Proprietary NT-port-specific registers */
+/* NT-function main control registers */
+/* NT Endpoint Control (DWORD) */
+#define IDT_NT_NTCTL 0x00400U
+/* NT Endpoint Interrupt Status/Mask (DWORD) */
+#define IDT_NT_NTINTSTS 0x00404U
+#define IDT_NT_NTINTMSK 0x00408U
+/* NT Endpoint Signal Data (DWORD) */
+#define IDT_NT_NTSDATA 0x0040CU
+/* NT Endpoint Global Signal (DWORD) */
+#define IDT_NT_NTGSIGNAL 0x00410U
+/* Internal Error Reporting Mask 0/1 (DWORD) */
+#define IDT_NT_NTIERRORMSK0 0x00414U
+#define IDT_NT_NTIERRORMSK1 0x00418U
+/* Doorbell registers */
+/* NT Outbound Doorbell Set (DWORD) */
+#define IDT_NT_OUTDBELLSET 0x00420U
+/* NT Inbound Doorbell Status/Mask (DWORD) */
+#define IDT_NT_INDBELLSTS 0x00428U
+#define IDT_NT_INDBELLMSK 0x0042CU
+/* Message registers */
+/* Outbound Message N (DWORD) */
+#define IDT_NT_OUTMSG0 0x00430U
+#define IDT_NT_OUTMSG1 0x00434U
+#define IDT_NT_OUTMSG2 0x00438U
+#define IDT_NT_OUTMSG3 0x0043CU
+/* Inbound Message N (DWORD) */
+#define IDT_NT_INMSG0 0x00440U
+#define IDT_NT_INMSG1 0x00444U
+#define IDT_NT_INMSG2 0x00448U
+#define IDT_NT_INMSG3 0x0044CU
+/* Inbound Message Source N (DWORD) */
+#define IDT_NT_INMSGSRC0 0x00450U
+#define IDT_NT_INMSGSRC1 0x00454U
+#define IDT_NT_INMSGSRC2 0x00458U
+#define IDT_NT_INMSGSRC3 0x0045CU
+/* Message Status (DWORD) */
+#define IDT_NT_MSGSTS 0x00460U
+/* Message Status Mask (DWORD) */
+#define IDT_NT_MSGSTSMSK 0x00464U
+/* BAR-setup registers */
+/* BAR N Setup/Limit Address/Lower and Upper Translated Base Address (DWORD) */
+#define IDT_NT_BARSETUP0 0x00470U
+#define IDT_NT_BARLIMIT0 0x00474U
+#define IDT_NT_BARLTBASE0 0x00478U
+#define IDT_NT_BARUTBASE0 0x0047CU
+#define IDT_NT_BARSETUP1 0x00480U
+#define IDT_NT_BARLIMIT1 0x00484U
+#define IDT_NT_BARLTBASE1 0x00488U
+#define IDT_NT_BARUTBASE1 0x0048CU
+#define IDT_NT_BARSETUP2 0x00490U
+#define IDT_NT_BARLIMIT2 0x00494U
+#define IDT_NT_BARLTBASE2 0x00498U
+#define IDT_NT_BARUTBASE2 0x0049CU
+#define IDT_NT_BARSETUP3 0x004A0U
+#define IDT_NT_BARLIMIT3 0x004A4U
+#define IDT_NT_BARLTBASE3 0x004A8U
+#define IDT_NT_BARUTBASE3 0x004ACU
+#define IDT_NT_BARSETUP4 0x004B0U
+#define IDT_NT_BARLIMIT4 0x004B4U
+#define IDT_NT_BARLTBASE4 0x004B8U
+#define IDT_NT_BARUTBASE4 0x004BCU
+#define IDT_NT_BARSETUP5 0x004C0U
+#define IDT_NT_BARLIMIT5 0x004C4U
+#define IDT_NT_BARLTBASE5 0x004C8U
+#define IDT_NT_BARUTBASE5 0x004CCU
+/* NT mapping table registers */
+/* NT Mapping Table Address/Status/Data (DWORD) */
+#define IDT_NT_NTMTBLADDR 0x004D0U
+#define IDT_NT_NTMTBLSTS 0x004D4U
+#define IDT_NT_NTMTBLDATA 0x004D8U
+/* Requester ID (Bus:Device:Function) Capture (DWORD) */
+#define IDT_NT_REQIDCAP 0x004DCU
+/* Memory Windows Lookup table registers */
+/* Lookup Table Offset/Lower, Middle and Upper data (DWORD) */
+#define IDT_NT_LUTOFFSET 0x004E0U
+#define IDT_NT_LUTLDATA 0x004E4U
+#define IDT_NT_LUTMDATA 0x004E8U
+#define IDT_NT_LUTUDATA 0x004ECU
+/* NT Endpoint Uncorrectable/Correctable Errors Emulation registers (DWORD) */
+#define IDT_NT_NTUEEM 0x004F0U
+#define IDT_NT_NTCEEM 0x004F4U
+/* Global Address Space Access/Data registers (DWORD) */
+#define IDT_NT_GASAADDR 0x00FF8U
+#define IDT_NT_GASADATA 0x00FFCU
+
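+/*
+ * Illustrative note (an assumption, not part of the original patch): the
+ * global switch registers below are reached indirectly from the NT-function
+ * config space through the GASA address/data pair - the target register
+ * offset is written to GASAADDR first and the data is then read from or
+ * written to GASADATA, roughly:
+ *
+ *	iowrite32((u32)reg, ndev->cfgspc + (ptrdiff_t)IDT_NT_GASAADDR);
+ *	data = ioread32(ndev->cfgspc + (ptrdiff_t)IDT_NT_GASADATA);
+ *
+ * Such accesses have to be serialized, which is what the gasa_lock spinlock
+ * declared further down is intended for.
+ */
+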
+/*
+ * IDT PCIe-switch Global Configuration and Status registers
+ */
+/* Port N Configuration register in global space */
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP0_PCIECMDSTS 0x01004U
+#define IDT_SW_NTP0_PCIELCTLSTS 0x01050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP0_NTCTL 0x01400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP0_BARSETUP0 0x01470U
+#define IDT_SW_NTP0_BARLIMIT0 0x01474U
+#define IDT_SW_NTP0_BARLTBASE0 0x01478U
+#define IDT_SW_NTP0_BARUTBASE0 0x0147CU
+#define IDT_SW_NTP0_BARSETUP1 0x01480U
+#define IDT_SW_NTP0_BARLIMIT1 0x01484U
+#define IDT_SW_NTP0_BARLTBASE1 0x01488U
+#define IDT_SW_NTP0_BARUTBASE1 0x0148CU
+#define IDT_SW_NTP0_BARSETUP2 0x01490U
+#define IDT_SW_NTP0_BARLIMIT2 0x01494U
+#define IDT_SW_NTP0_BARLTBASE2 0x01498U
+#define IDT_SW_NTP0_BARUTBASE2 0x0149CU
+#define IDT_SW_NTP0_BARSETUP3 0x014A0U
+#define IDT_SW_NTP0_BARLIMIT3 0x014A4U
+#define IDT_SW_NTP0_BARLTBASE3 0x014A8U
+#define IDT_SW_NTP0_BARUTBASE3 0x014ACU
+#define IDT_SW_NTP0_BARSETUP4 0x014B0U
+#define IDT_SW_NTP0_BARLIMIT4 0x014B4U
+#define IDT_SW_NTP0_BARLTBASE4 0x014B8U
+#define IDT_SW_NTP0_BARUTBASE4 0x014BCU
+#define IDT_SW_NTP0_BARSETUP5 0x014C0U
+#define IDT_SW_NTP0_BARLIMIT5 0x014C4U
+#define IDT_SW_NTP0_BARLTBASE5 0x014C8U
+#define IDT_SW_NTP0_BARUTBASE5 0x014CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP2_PCIECMDSTS 0x05004U
+#define IDT_SW_NTP2_PCIELCTLSTS 0x05050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP2_NTCTL 0x05400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP2_BARSETUP0 0x05470U
+#define IDT_SW_NTP2_BARLIMIT0 0x05474U
+#define IDT_SW_NTP2_BARLTBASE0 0x05478U
+#define IDT_SW_NTP2_BARUTBASE0 0x0547CU
+#define IDT_SW_NTP2_BARSETUP1 0x05480U
+#define IDT_SW_NTP2_BARLIMIT1 0x05484U
+#define IDT_SW_NTP2_BARLTBASE1 0x05488U
+#define IDT_SW_NTP2_BARUTBASE1 0x0548CU
+#define IDT_SW_NTP2_BARSETUP2 0x05490U
+#define IDT_SW_NTP2_BARLIMIT2 0x05494U
+#define IDT_SW_NTP2_BARLTBASE2 0x05498U
+#define IDT_SW_NTP2_BARUTBASE2 0x0549CU
+#define IDT_SW_NTP2_BARSETUP3 0x054A0U
+#define IDT_SW_NTP2_BARLIMIT3 0x054A4U
+#define IDT_SW_NTP2_BARLTBASE3 0x054A8U
+#define IDT_SW_NTP2_BARUTBASE3 0x054ACU
+#define IDT_SW_NTP2_BARSETUP4 0x054B0U
+#define IDT_SW_NTP2_BARLIMIT4 0x054B4U
+#define IDT_SW_NTP2_BARLTBASE4 0x054B8U
+#define IDT_SW_NTP2_BARUTBASE4 0x054BCU
+#define IDT_SW_NTP2_BARSETUP5 0x054C0U
+#define IDT_SW_NTP2_BARLIMIT5 0x054C4U
+#define IDT_SW_NTP2_BARLTBASE5 0x054C8U
+#define IDT_SW_NTP2_BARUTBASE5 0x054CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP4_PCIECMDSTS 0x09004U
+#define IDT_SW_NTP4_PCIELCTLSTS 0x09050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP4_NTCTL 0x09400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP4_BARSETUP0 0x09470U
+#define IDT_SW_NTP4_BARLIMIT0 0x09474U
+#define IDT_SW_NTP4_BARLTBASE0 0x09478U
+#define IDT_SW_NTP4_BARUTBASE0 0x0947CU
+#define IDT_SW_NTP4_BARSETUP1 0x09480U
+#define IDT_SW_NTP4_BARLIMIT1 0x09484U
+#define IDT_SW_NTP4_BARLTBASE1 0x09488U
+#define IDT_SW_NTP4_BARUTBASE1 0x0948CU
+#define IDT_SW_NTP4_BARSETUP2 0x09490U
+#define IDT_SW_NTP4_BARLIMIT2 0x09494U
+#define IDT_SW_NTP4_BARLTBASE2 0x09498U
+#define IDT_SW_NTP4_BARUTBASE2 0x0949CU
+#define IDT_SW_NTP4_BARSETUP3 0x094A0U
+#define IDT_SW_NTP4_BARLIMIT3 0x094A4U
+#define IDT_SW_NTP4_BARLTBASE3 0x094A8U
+#define IDT_SW_NTP4_BARUTBASE3 0x094ACU
+#define IDT_SW_NTP4_BARSETUP4 0x094B0U
+#define IDT_SW_NTP4_BARLIMIT4 0x094B4U
+#define IDT_SW_NTP4_BARLTBASE4 0x094B8U
+#define IDT_SW_NTP4_BARUTBASE4 0x094BCU
+#define IDT_SW_NTP4_BARSETUP5 0x094C0U
+#define IDT_SW_NTP4_BARLIMIT5 0x094C4U
+#define IDT_SW_NTP4_BARLTBASE5 0x094C8U
+#define IDT_SW_NTP4_BARUTBASE5 0x094CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP6_PCIECMDSTS 0x0D004U
+#define IDT_SW_NTP6_PCIELCTLSTS 0x0D050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP6_NTCTL 0x0D400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP6_BARSETUP0 0x0D470U
+#define IDT_SW_NTP6_BARLIMIT0 0x0D474U
+#define IDT_SW_NTP6_BARLTBASE0 0x0D478U
+#define IDT_SW_NTP6_BARUTBASE0 0x0D47CU
+#define IDT_SW_NTP6_BARSETUP1 0x0D480U
+#define IDT_SW_NTP6_BARLIMIT1 0x0D484U
+#define IDT_SW_NTP6_BARLTBASE1 0x0D488U
+#define IDT_SW_NTP6_BARUTBASE1 0x0D48CU
+#define IDT_SW_NTP6_BARSETUP2 0x0D490U
+#define IDT_SW_NTP6_BARLIMIT2 0x0D494U
+#define IDT_SW_NTP6_BARLTBASE2 0x0D498U
+#define IDT_SW_NTP6_BARUTBASE2 0x0D49CU
+#define IDT_SW_NTP6_BARSETUP3 0x0D4A0U
+#define IDT_SW_NTP6_BARLIMIT3 0x0D4A4U
+#define IDT_SW_NTP6_BARLTBASE3 0x0D4A8U
+#define IDT_SW_NTP6_BARUTBASE3 0x0D4ACU
+#define IDT_SW_NTP6_BARSETUP4 0x0D4B0U
+#define IDT_SW_NTP6_BARLIMIT4 0x0D4B4U
+#define IDT_SW_NTP6_BARLTBASE4 0x0D4B8U
+#define IDT_SW_NTP6_BARUTBASE4 0x0D4BCU
+#define IDT_SW_NTP6_BARSETUP5 0x0D4C0U
+#define IDT_SW_NTP6_BARLIMIT5 0x0D4C4U
+#define IDT_SW_NTP6_BARLTBASE5 0x0D4C8U
+#define IDT_SW_NTP6_BARUTBASE5 0x0D4CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP8_PCIECMDSTS 0x11004U
+#define IDT_SW_NTP8_PCIELCTLSTS 0x11050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP8_NTCTL 0x11400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP8_BARSETUP0 0x11470U
+#define IDT_SW_NTP8_BARLIMIT0 0x11474U
+#define IDT_SW_NTP8_BARLTBASE0 0x11478U
+#define IDT_SW_NTP8_BARUTBASE0 0x1147CU
+#define IDT_SW_NTP8_BARSETUP1 0x11480U
+#define IDT_SW_NTP8_BARLIMIT1 0x11484U
+#define IDT_SW_NTP8_BARLTBASE1 0x11488U
+#define IDT_SW_NTP8_BARUTBASE1 0x1148CU
+#define IDT_SW_NTP8_BARSETUP2 0x11490U
+#define IDT_SW_NTP8_BARLIMIT2 0x11494U
+#define IDT_SW_NTP8_BARLTBASE2 0x11498U
+#define IDT_SW_NTP8_BARUTBASE2 0x1149CU
+#define IDT_SW_NTP8_BARSETUP3 0x114A0U
+#define IDT_SW_NTP8_BARLIMIT3 0x114A4U
+#define IDT_SW_NTP8_BARLTBASE3 0x114A8U
+#define IDT_SW_NTP8_BARUTBASE3 0x114ACU
+#define IDT_SW_NTP8_BARSETUP4 0x114B0U
+#define IDT_SW_NTP8_BARLIMIT4 0x114B4U
+#define IDT_SW_NTP8_BARLTBASE4 0x114B8U
+#define IDT_SW_NTP8_BARUTBASE4 0x114BCU
+#define IDT_SW_NTP8_BARSETUP5 0x114C0U
+#define IDT_SW_NTP8_BARLIMIT5 0x114C4U
+#define IDT_SW_NTP8_BARLTBASE5 0x114C8U
+#define IDT_SW_NTP8_BARUTBASE5 0x114CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP12_PCIECMDSTS 0x19004U
+#define IDT_SW_NTP12_PCIELCTLSTS 0x19050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP12_NTCTL 0x19400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP12_BARSETUP0 0x19470U
+#define IDT_SW_NTP12_BARLIMIT0 0x19474U
+#define IDT_SW_NTP12_BARLTBASE0 0x19478U
+#define IDT_SW_NTP12_BARUTBASE0 0x1947CU
+#define IDT_SW_NTP12_BARSETUP1 0x19480U
+#define IDT_SW_NTP12_BARLIMIT1 0x19484U
+#define IDT_SW_NTP12_BARLTBASE1 0x19488U
+#define IDT_SW_NTP12_BARUTBASE1 0x1948CU
+#define IDT_SW_NTP12_BARSETUP2 0x19490U
+#define IDT_SW_NTP12_BARLIMIT2 0x19494U
+#define IDT_SW_NTP12_BARLTBASE2 0x19498U
+#define IDT_SW_NTP12_BARUTBASE2 0x1949CU
+#define IDT_SW_NTP12_BARSETUP3 0x194A0U
+#define IDT_SW_NTP12_BARLIMIT3 0x194A4U
+#define IDT_SW_NTP12_BARLTBASE3 0x194A8U
+#define IDT_SW_NTP12_BARUTBASE3 0x194ACU
+#define IDT_SW_NTP12_BARSETUP4 0x194B0U
+#define IDT_SW_NTP12_BARLIMIT4 0x194B4U
+#define IDT_SW_NTP12_BARLTBASE4 0x194B8U
+#define IDT_SW_NTP12_BARUTBASE4 0x194BCU
+#define IDT_SW_NTP12_BARSETUP5 0x194C0U
+#define IDT_SW_NTP12_BARLIMIT5 0x194C4U
+#define IDT_SW_NTP12_BARLTBASE5 0x194C8U
+#define IDT_SW_NTP12_BARUTBASE5 0x194CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP16_PCIECMDSTS 0x21004U
+#define IDT_SW_NTP16_PCIELCTLSTS 0x21050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP16_NTCTL 0x21400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP16_BARSETUP0 0x21470U
+#define IDT_SW_NTP16_BARLIMIT0 0x21474U
+#define IDT_SW_NTP16_BARLTBASE0 0x21478U
+#define IDT_SW_NTP16_BARUTBASE0 0x2147CU
+#define IDT_SW_NTP16_BARSETUP1 0x21480U
+#define IDT_SW_NTP16_BARLIMIT1 0x21484U
+#define IDT_SW_NTP16_BARLTBASE1 0x21488U
+#define IDT_SW_NTP16_BARUTBASE1 0x2148CU
+#define IDT_SW_NTP16_BARSETUP2 0x21490U
+#define IDT_SW_NTP16_BARLIMIT2 0x21494U
+#define IDT_SW_NTP16_BARLTBASE2 0x21498U
+#define IDT_SW_NTP16_BARUTBASE2 0x2149CU
+#define IDT_SW_NTP16_BARSETUP3 0x214A0U
+#define IDT_SW_NTP16_BARLIMIT3 0x214A4U
+#define IDT_SW_NTP16_BARLTBASE3 0x214A8U
+#define IDT_SW_NTP16_BARUTBASE3 0x214ACU
+#define IDT_SW_NTP16_BARSETUP4 0x214B0U
+#define IDT_SW_NTP16_BARLIMIT4 0x214B4U
+#define IDT_SW_NTP16_BARLTBASE4 0x214B8U
+#define IDT_SW_NTP16_BARUTBASE4 0x214BCU
+#define IDT_SW_NTP16_BARSETUP5 0x214C0U
+#define IDT_SW_NTP16_BARLIMIT5 0x214C4U
+#define IDT_SW_NTP16_BARLTBASE5 0x214C8U
+#define IDT_SW_NTP16_BARUTBASE5 0x214CCU
+/* PCI Express command/status and link control/status registers (WORD+WORD) */
+#define IDT_SW_NTP20_PCIECMDSTS 0x29004U
+#define IDT_SW_NTP20_PCIELCTLSTS 0x29050U
+/* NT-function control register (DWORD) */
+#define IDT_SW_NTP20_NTCTL 0x29400U
+/* BAR setup/limit/base address registers (DWORD) */
+#define IDT_SW_NTP20_BARSETUP0 0x29470U
+#define IDT_SW_NTP20_BARLIMIT0 0x29474U
+#define IDT_SW_NTP20_BARLTBASE0 0x29478U
+#define IDT_SW_NTP20_BARUTBASE0 0x2947CU
+#define IDT_SW_NTP20_BARSETUP1 0x29480U
+#define IDT_SW_NTP20_BARLIMIT1 0x29484U
+#define IDT_SW_NTP20_BARLTBASE1 0x29488U
+#define IDT_SW_NTP20_BARUTBASE1 0x2948CU
+#define IDT_SW_NTP20_BARSETUP2 0x29490U
+#define IDT_SW_NTP20_BARLIMIT2 0x29494U
+#define IDT_SW_NTP20_BARLTBASE2 0x29498U
+#define IDT_SW_NTP20_BARUTBASE2 0x2949CU
+#define IDT_SW_NTP20_BARSETUP3 0x294A0U
+#define IDT_SW_NTP20_BARLIMIT3 0x294A4U
+#define IDT_SW_NTP20_BARLTBASE3 0x294A8U
+#define IDT_SW_NTP20_BARUTBASE3 0x294ACU
+#define IDT_SW_NTP20_BARSETUP4 0x294B0U
+#define IDT_SW_NTP20_BARLIMIT4 0x294B4U
+#define IDT_SW_NTP20_BARLTBASE4 0x294B8U
+#define IDT_SW_NTP20_BARUTBASE4 0x294BCU
+#define IDT_SW_NTP20_BARSETUP5 0x294C0U
+#define IDT_SW_NTP20_BARLIMIT5 0x294C4U
+#define IDT_SW_NTP20_BARLTBASE5 0x294C8U
+#define IDT_SW_NTP20_BARUTBASE5 0x294CCU
+/* IDT PCIe-switch control register (DWORD) */
+#define IDT_SW_CTL 0x3E000U
+/* Boot Configuration Vector Status (DWORD) */
+#define IDT_SW_BCVSTS 0x3E004U
+/* Port Clocking Mode (DWORD) */
+#define IDT_SW_PCLKMODE 0x3E008U
+/* Reset Drain Delay (DWORD) */
+#define IDT_SW_RDRAINDELAY 0x3E080U
+/* Port Operating Mode Change Drain Delay (DWORD) */
+#define IDT_SW_POMCDELAY 0x3E084U
+/* Side Effect Delay (DWORD) */
+#define IDT_SW_SEDELAY 0x3E088U
+/* Upstream Secondary Bus Reset Delay (DWORD) */
+#define IDT_SW_SSBRDELAY 0x3E08CU
+/* Switch partition N Control/Status/Failover registers */
+#define IDT_SW_SWPART0CTL 0x3E100U
+#define IDT_SW_SWPART0STS 0x3E104U
+#define IDT_SW_SWPART0FCTL 0x3E108U
+#define IDT_SW_SWPART1CTL 0x3E120U
+#define IDT_SW_SWPART1STS 0x3E124U
+#define IDT_SW_SWPART1FCTL 0x3E128U
+#define IDT_SW_SWPART2CTL 0x3E140U
+#define IDT_SW_SWPART2STS 0x3E144U
+#define IDT_SW_SWPART2FCTL 0x3E148U
+#define IDT_SW_SWPART3CTL 0x3E160U
+#define IDT_SW_SWPART3STS 0x3E164U
+#define IDT_SW_SWPART3FCTL 0x3E168U
+#define IDT_SW_SWPART4CTL 0x3E180U
+#define IDT_SW_SWPART4STS 0x3E184U
+#define IDT_SW_SWPART4FCTL 0x3E188U
+#define IDT_SW_SWPART5CTL 0x3E1A0U
+#define IDT_SW_SWPART5STS 0x3E1A4U
+#define IDT_SW_SWPART5FCTL 0x3E1A8U
+#define IDT_SW_SWPART6CTL 0x3E1C0U
+#define IDT_SW_SWPART6STS 0x3E1C4U
+#define IDT_SW_SWPART6FCTL 0x3E1C8U
+#define IDT_SW_SWPART7CTL 0x3E1E0U
+#define IDT_SW_SWPART7STS 0x3E1E4U
+#define IDT_SW_SWPART7FCTL 0x3E1E8U
+/* Switch port N control and status registers */
+#define IDT_SW_SWPORT0CTL 0x3E200U
+#define IDT_SW_SWPORT0STS 0x3E204U
+#define IDT_SW_SWPORT0FCTL 0x3E208U
+#define IDT_SW_SWPORT2CTL 0x3E240U
+#define IDT_SW_SWPORT2STS 0x3E244U
+#define IDT_SW_SWPORT2FCTL 0x3E248U
+#define IDT_SW_SWPORT4CTL 0x3E280U
+#define IDT_SW_SWPORT4STS 0x3E284U
+#define IDT_SW_SWPORT4FCTL 0x3E288U
+#define IDT_SW_SWPORT6CTL 0x3E2C0U
+#define IDT_SW_SWPORT6STS 0x3E2C4U
+#define IDT_SW_SWPORT6FCTL 0x3E2C8U
+#define IDT_SW_SWPORT8CTL 0x3E300U
+#define IDT_SW_SWPORT8STS 0x3E304U
+#define IDT_SW_SWPORT8FCTL 0x3E308U
+#define IDT_SW_SWPORT12CTL 0x3E380U
+#define IDT_SW_SWPORT12STS 0x3E384U
+#define IDT_SW_SWPORT12FCTL 0x3E388U
+#define IDT_SW_SWPORT16CTL 0x3E400U
+#define IDT_SW_SWPORT16STS 0x3E404U
+#define IDT_SW_SWPORT16FCTL 0x3E408U
+#define IDT_SW_SWPORT20CTL 0x3E480U
+#define IDT_SW_SWPORT20STS 0x3E484U
+#define IDT_SW_SWPORT20FCTL 0x3E488U
+/* Switch Event registers */
+/* Switch Event Status/Mask/Partition mask (DWORD) */
+#define IDT_SW_SESTS 0x3EC00U
+#define IDT_SW_SEMSK 0x3EC04U
+#define IDT_SW_SEPMSK 0x3EC08U
+/* Switch Event Link Up/Down Status/Mask (DWORD) */
+#define IDT_SW_SELINKUPSTS 0x3EC0CU
+#define IDT_SW_SELINKUPMSK 0x3EC10U
+#define IDT_SW_SELINKDNSTS 0x3EC14U
+#define IDT_SW_SELINKDNMSK 0x3EC18U
+/* Switch Event Fundamental Reset Status/Mask (DWORD) */
+#define IDT_SW_SEFRSTSTS 0x3EC1CU
+#define IDT_SW_SEFRSTMSK 0x3EC20U
+/* Switch Event Hot Reset Status/Mask (DWORD) */
+#define IDT_SW_SEHRSTSTS 0x3EC24U
+#define IDT_SW_SEHRSTMSK 0x3EC28U
+/* Switch Event Failover Mask (DWORD) */
+#define IDT_SW_SEFOVRMSK 0x3EC2CU
+/* Switch Event Global Signal Status/Mask (DWORD) */
+#define IDT_SW_SEGSIGSTS 0x3EC30U
+#define IDT_SW_SEGSIGMSK 0x3EC34U
+/* NT Global Doorbell Status (DWORD) */
+#define IDT_SW_GDBELLSTS 0x3EC3CU
+/* Switch partition N message M control (msgs routing table) (DWORD) */
+#define IDT_SW_SWP0MSGCTL0 0x3EE00U
+#define IDT_SW_SWP1MSGCTL0 0x3EE04U
+#define IDT_SW_SWP2MSGCTL0 0x3EE08U
+#define IDT_SW_SWP3MSGCTL0 0x3EE0CU
+#define IDT_SW_SWP4MSGCTL0 0x3EE10U
+#define IDT_SW_SWP5MSGCTL0 0x3EE14U
+#define IDT_SW_SWP6MSGCTL0 0x3EE18U
+#define IDT_SW_SWP7MSGCTL0 0x3EE1CU
+#define IDT_SW_SWP0MSGCTL1 0x3EE20U
+#define IDT_SW_SWP1MSGCTL1 0x3EE24U
+#define IDT_SW_SWP2MSGCTL1 0x3EE28U
+#define IDT_SW_SWP3MSGCTL1 0x3EE2CU
+#define IDT_SW_SWP4MSGCTL1 0x3EE30U
+#define IDT_SW_SWP5MSGCTL1 0x3EE34U
+#define IDT_SW_SWP6MSGCTL1 0x3EE38U
+#define IDT_SW_SWP7MSGCTL1 0x3EE3CU
+#define IDT_SW_SWP0MSGCTL2 0x3EE40U
+#define IDT_SW_SWP1MSGCTL2 0x3EE44U
+#define IDT_SW_SWP2MSGCTL2 0x3EE48U
+#define IDT_SW_SWP3MSGCTL2 0x3EE4CU
+#define IDT_SW_SWP4MSGCTL2 0x3EE50U
+#define IDT_SW_SWP5MSGCTL2 0x3EE54U
+#define IDT_SW_SWP6MSGCTL2 0x3EE58U
+#define IDT_SW_SWP7MSGCTL2 0x3EE5CU
+#define IDT_SW_SWP0MSGCTL3 0x3EE60U
+#define IDT_SW_SWP1MSGCTL3 0x3EE64U
+#define IDT_SW_SWP2MSGCTL3 0x3EE68U
+#define IDT_SW_SWP3MSGCTL3 0x3EE6CU
+#define IDT_SW_SWP4MSGCTL3 0x3EE70U
+#define IDT_SW_SWP5MSGCTL3 0x3EE74U
+#define IDT_SW_SWP6MSGCTL3 0x3EE78U
+#define IDT_SW_SWP7MSGCTL3 0x3EE7CU
+/* SMBus Status and Control registers (DWORD) */
+#define IDT_SW_SMBUSSTS 0x3F188U
+#define IDT_SW_SMBUSCTL 0x3F18CU
+/* Serial EEPROM Interface (DWORD) */
+#define IDT_SW_EEPROMINTF 0x3F190U
+/* MBus I/O Expander Address N (DWORD) */
+#define IDT_SW_IOEXPADDR0 0x3F198U
+#define IDT_SW_IOEXPADDR1 0x3F19CU
+#define IDT_SW_IOEXPADDR2 0x3F1A0U
+#define IDT_SW_IOEXPADDR3 0x3F1A4U
+#define IDT_SW_IOEXPADDR4 0x3F1A8U
+#define IDT_SW_IOEXPADDR5 0x3F1ACU
+/* General Purpose Events Control and Status registers (DWORD) */
+#define IDT_SW_GPECTL 0x3F1B0U
+#define IDT_SW_GPESTS 0x3F1B4U
+/* Temperature sensor Control/Status/Alarm/Adjustment/Slope registers */
+#define IDT_SW_TMPCTL 0x3F1D4U
+#define IDT_SW_TMPSTS 0x3F1D8U
+#define IDT_SW_TMPALARM 0x3F1DCU
+#define IDT_SW_TMPADJ 0x3F1E0U
+#define IDT_SW_TSSLOPE 0x3F1E4U
+/* SMBus Configuration Block header log (DWORD) */
+#define IDT_SW_SMBUSCBHL 0x3F1E8U
+
+/*
+ * Common registers related constants
+ * @IDT_REG_ALIGN: Registers alignment used in the driver
+ * @IDT_REG_PCI_MAX: Maximum PCI configuration space register value
+ * @IDT_REG_SW_MAX: Maximum global register value
+ */
+#define IDT_REG_ALIGN 4
+#define IDT_REG_PCI_MAX 0x00FFFU
+#define IDT_REG_SW_MAX 0x3FFFFU
+
+/*
+ * PCICMDSTS register fields related constants
+ * @IDT_PCICMDSTS_IOAE: I/O access enable
+ * @IDT_PCICMDSTS_MAE: Memory access enable
+ * @IDT_PCICMDSTS_BME: Bus master enable
+ */
+#define IDT_PCICMDSTS_IOAE 0x00000001U
+#define IDT_PCICMDSTS_MAE 0x00000002U
+#define IDT_PCICMDSTS_BME 0x00000004U
+
+/*
+ * PCIEDCAP register fields related constants
+ * @IDT_PCIEDCAP_MPAYLOAD_MASK: Maximum payload size mask
+ * @IDT_PCIEDCAP_MPAYLOAD_FLD: Maximum payload size field offset
+ * @IDT_PCIEDCAP_MPAYLOAD_S128: Max supported payload size of 128 bytes
+ * @IDT_PCIEDCAP_MPAYLOAD_S256: Max supported payload size of 256 bytes
+ * @IDT_PCIEDCAP_MPAYLOAD_S512: Max supported payload size of 512 bytes
+ * @IDT_PCIEDCAP_MPAYLOAD_S1024: Max supported payload size of 1024 bytes
+ * @IDT_PCIEDCAP_MPAYLOAD_S2048: Max supported payload size of 2048 bytes
+ */
+#define IDT_PCIEDCAP_MPAYLOAD_MASK 0x00000007U
+#define IDT_PCIEDCAP_MPAYLOAD_FLD 0
+#define IDT_PCIEDCAP_MPAYLOAD_S128 0x00000000U
+#define IDT_PCIEDCAP_MPAYLOAD_S256 0x00000001U
+#define IDT_PCIEDCAP_MPAYLOAD_S512 0x00000002U
+#define IDT_PCIEDCAP_MPAYLOAD_S1024 0x00000003U
+#define IDT_PCIEDCAP_MPAYLOAD_S2048 0x00000004U
+
+/*
+ * PCIEDCTLSTS registers fields related constants
+ * @IDT_PCIEDCTLSTS_MPS_MASK:	Maximum payload size mask
+ * @IDT_PCIEDCTLSTS_MPS_FLD:	MPS field offset
+ * @IDT_PCIEDCTLSTS_MPS_S128:	Max payload size of 128 bytes
+ * @IDT_PCIEDCTLSTS_MPS_S256:	Max payload size of 256 bytes
+ * @IDT_PCIEDCTLSTS_MPS_S512:	Max payload size of 512 bytes
+ * @IDT_PCIEDCTLSTS_MPS_S1024:	Max payload size of 1024 bytes
+ * @IDT_PCIEDCTLSTS_MPS_S2048:	Max payload size of 2048 bytes
+ * @IDT_PCIEDCTLSTS_MPS_S4096:	Max payload size of 4096 bytes
+ */
+#define IDT_PCIEDCTLSTS_MPS_MASK 0x000000E0U
+#define IDT_PCIEDCTLSTS_MPS_FLD 5
+#define IDT_PCIEDCTLSTS_MPS_S128 0x00000000U
+#define IDT_PCIEDCTLSTS_MPS_S256 0x00000020U
+#define IDT_PCIEDCTLSTS_MPS_S512 0x00000040U
+#define IDT_PCIEDCTLSTS_MPS_S1024 0x00000060U
+#define IDT_PCIEDCTLSTS_MPS_S2048 0x00000080U
+#define IDT_PCIEDCTLSTS_MPS_S4096 0x000000A0U
+
+/*
+ * PCIELCAP register fields related constants
+ * @IDT_PCIELCAP_PORTNUM_MASK: Port number field mask
+ * @IDT_PCIELCAP_PORTNUM_FLD: Port number field offset
+ */
+#define IDT_PCIELCAP_PORTNUM_MASK 0xFF000000U
+#define IDT_PCIELCAP_PORTNUM_FLD 24
+
+/*
+ * PCIELCTLSTS registers fields related constants
+ * @IDT_PCIELCTLSTS_CLS_MASK:	Current link speed mask
+ * @IDT_PCIELCTLSTS_CLS_FLD:	Current link speed field offset
+ * @IDT_PCIELCTLSTS_NLW_MASK:	Negotiated link width mask
+ * @IDT_PCIELCTLSTS_NLW_FLD:	Negotiated link width field offset
+ * @IDT_PCIELCTLSTS_SCLK_COM:	Common slot clock configuration
+ */
+#define IDT_PCIELCTLSTS_CLS_MASK 0x000F0000U
+#define IDT_PCIELCTLSTS_CLS_FLD 16
+#define IDT_PCIELCTLSTS_NLW_MASK 0x03F00000U
+#define IDT_PCIELCTLSTS_NLW_FLD 20
+#define IDT_PCIELCTLSTS_SCLK_COM 0x10000000U
+
+/*
+ * NTCTL register fields related constants
+ * @IDT_NTCTL_IDPROTDIS: ID Protection check disable (disable MTBL)
+ * @IDT_NTCTL_CPEN: Completion enable
+ * @IDT_NTCTL_RNS: Request no snoop processing (if MTBL disabled)
+ * @IDT_NTCTL_ATP: Address type processing (if MTBL disabled)
+ */
+#define IDT_NTCTL_IDPROTDIS 0x00000001U
+#define IDT_NTCTL_CPEN 0x00000002U
+#define IDT_NTCTL_RNS 0x00000004U
+#define IDT_NTCTL_ATP 0x00000008U
+
+/*
+ * NTINTSTS register fields related constants
+ * @IDT_NTINTSTS_MSG: Message interrupt bit
+ * @IDT_NTINTSTS_DBELL: Doorbell interrupt bit
+ * @IDT_NTINTSTS_SEVENT: Switch Event interrupt bit
+ * @IDT_NTINTSTS_TMPSENSOR: Temperature sensor interrupt bit
+ */
+#define IDT_NTINTSTS_MSG 0x00000001U
+#define IDT_NTINTSTS_DBELL 0x00000002U
+#define IDT_NTINTSTS_SEVENT 0x00000008U
+#define IDT_NTINTSTS_TMPSENSOR 0x00000080U
+
+/*
+ * NTINTMSK register fields related constants
+ * @IDT_NTINTMSK_MSG: Message interrupt mask bit
+ * @IDT_NTINTMSK_DBELL: Doorbell interrupt mask bit
+ * @IDT_NTINTMSK_SEVENT: Switch Event interrupt mask bit
+ * @IDT_NTINTMSK_TMPSENSOR: Temperature sensor interrupt mask bit
+ * @IDT_NTINTMSK_ALL: All the useful interrupts mask
+ */
+#define IDT_NTINTMSK_MSG 0x00000001U
+#define IDT_NTINTMSK_DBELL 0x00000002U
+#define IDT_NTINTMSK_SEVENT 0x00000008U
+#define IDT_NTINTMSK_TMPSENSOR 0x00000080U
+#define IDT_NTINTMSK_ALL \
+ (IDT_NTINTMSK_MSG | IDT_NTINTMSK_DBELL | \
+ IDT_NTINTMSK_SEVENT | IDT_NTINTMSK_TMPSENSOR)
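+
+/*
+ * Usage sketch (illustrative assumption): enabling all the supported
+ * interrupt sources amounts to clearing their bits in the NT interrupt mask
+ * register, e.g. with the DWORD accessor sketched earlier:
+ *
+ *	idt_nt_write(ndev, IDT_NT_NTINTMSK, ~(u32)IDT_NTINTMSK_ALL);
+ */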
+
+/*
+ * NTGSIGNAL register fields related constants
+ * @IDT_NTGSIGNAL_SET: Set global signal of the local partition
+ */
+#define IDT_NTGSIGNAL_SET 0x00000001U
+
+/*
+ * BARSETUP register fields related constants
+ * @IDT_BARSETUP_TYPE_MASK:	Mask of the TYPE field
+ * @IDT_BARSETUP_TYPE_FLD:	TYPE field offset
+ * @IDT_BARSETUP_TYPE_32: 32-bit addressing BAR
+ * @IDT_BARSETUP_TYPE_64: 64-bit addressing BAR
+ * @IDT_BARSETUP_PREF: Value of the BAR prefetchable field
+ * @IDT_BARSETUP_SIZE_MASK: Mask of the SIZE field
+ * @IDT_BARSETUP_SIZE_FLD: SIZE field offset
+ * @IDT_BARSETUP_SIZE_CFG: SIZE field value in case of config space MODE
+ * @IDT_BARSETUP_MODE_CFG: Configuration space BAR mode
+ * @IDT_BARSETUP_ATRAN_MASK: ATRAN field mask
+ * @IDT_BARSETUP_ATRAN_FLD: ATRAN field offset
+ * @IDT_BARSETUP_ATRAN_DIR: Direct address translation memory window
+ * @IDT_BARSETUP_ATRAN_LUT12: 12-entry lookup table
+ * @IDT_BARSETUP_ATRAN_LUT24: 24-entry lookup table
+ * @IDT_BARSETUP_TPART_MASK: TPART field mask
+ * @IDT_BARSETUP_TPART_FLD: TPART field offset
+ * @IDT_BARSETUP_EN: BAR enable bit
+ */
+#define IDT_BARSETUP_TYPE_MASK 0x00000006U
+#define IDT_BARSETUP_TYPE_FLD 0
+#define IDT_BARSETUP_TYPE_32 0x00000000U
+#define IDT_BARSETUP_TYPE_64 0x00000004U
+#define IDT_BARSETUP_PREF 0x00000008U
+#define IDT_BARSETUP_SIZE_MASK 0x000003F0U
+#define IDT_BARSETUP_SIZE_FLD 4
+#define IDT_BARSETUP_SIZE_CFG 0x000000C0U
+#define IDT_BARSETUP_MODE_CFG 0x00000400U
+#define IDT_BARSETUP_ATRAN_MASK 0x00001800U
+#define IDT_BARSETUP_ATRAN_FLD 11
+#define IDT_BARSETUP_ATRAN_DIR 0x00000000U
+#define IDT_BARSETUP_ATRAN_LUT12 0x00000800U
+#define IDT_BARSETUP_ATRAN_LUT24 0x00001000U
+#define IDT_BARSETUP_TPART_MASK 0x0000E000U
+#define IDT_BARSETUP_TPART_FLD 13
+#define IDT_BARSETUP_EN 0x80000000U
+
+/*
+ * NTMTBLDATA register fields related constants
+ * @IDT_NTMTBLDATA_VALID: Set the MTBL entry being valid
+ * @IDT_NTMTBLDATA_REQID_MASK: Bus:Device:Function field mask
+ * @IDT_NTMTBLDATA_REQID_FLD: Bus:Device:Function field offset
+ * @IDT_NTMTBLDATA_PART_MASK: Partition field mask
+ * @IDT_NTMTBLDATA_PART_FLD: Partition field offset
+ * @IDT_NTMTBLDATA_ATP_TRANS: Enable AT field translation on request TLPs
+ * @IDT_NTMTBLDATA_CNS_INV: Enable No Snoop attribute inversion of
+ * Completion TLPs
+ * @IDT_NTMTBLDATA_RNS_INV: Enable No Snoop attribute inversion of
+ * Request TLPs
+ */
+#define IDT_NTMTBLDATA_VALID 0x00000001U
+#define IDT_NTMTBLDATA_REQID_MASK 0x0001FFFEU
+#define IDT_NTMTBLDATA_REQID_FLD 1
+#define IDT_NTMTBLDATA_PART_MASK 0x000E0000U
+#define IDT_NTMTBLDATA_PART_FLD 17
+#define IDT_NTMTBLDATA_ATP_TRANS 0x20000000U
+#define IDT_NTMTBLDATA_CNS_INV 0x40000000U
+#define IDT_NTMTBLDATA_RNS_INV 0x80000000U
+
+/*
+ * REQIDCAP register fields related constants
+ * @IDT_REQIDCAP_REQID_MASK: Request ID field mask
+ * @IDT_REQIDCAP_REQID_FLD: Request ID field offset
+ */
+#define IDT_REQIDCAP_REQID_MASK 0x0000FFFFU
+#define IDT_REQIDCAP_REQID_FLD 0
+
+/*
+ * LUTOFFSET register fields related constants
+ * @IDT_LUTOFFSET_INDEX_MASK: Lookup table index field mask
+ * @IDT_LUTOFFSET_INDEX_FLD: Lookup table index field offset
+ * @IDT_LUTOFFSET_BAR_MASK: Lookup table BAR select field mask
+ * @IDT_LUTOFFSET_BAR_FLD: Lookup table BAR select field offset
+ */
+#define IDT_LUTOFFSET_INDEX_MASK 0x0000001FU
+#define IDT_LUTOFFSET_INDEX_FLD 0
+#define IDT_LUTOFFSET_BAR_MASK 0x00000700U
+#define IDT_LUTOFFSET_BAR_FLD 8
+
+/*
+ * LUTUDATA register fields related constants
+ * @IDT_LUTUDATA_PART_MASK: Partition field mask
+ * @IDT_LUTUDATA_PART_FLD: Partition field offset
+ * @IDT_LUTUDATA_VALID: Lookup table entry valid bit
+ */
+#define IDT_LUTUDATA_PART_MASK 0x0000000FU
+#define IDT_LUTUDATA_PART_FLD 0
+#define IDT_LUTUDATA_VALID 0x80000000U
+
+/*
+ * SWPARTxSTS register fields related constants
+ * @IDT_SWPARTxSTS_SCI: Switch partition state change initiated
+ * @IDT_SWPARTxSTS_SCC: Switch partition state change completed
+ * @IDT_SWPARTxSTS_STATE_MASK: Switch partition state mask
+ * @IDT_SWPARTxSTS_STATE_FLD: Switch partition state field offset
+ * @IDT_SWPARTxSTS_STATE_DIS: Switch partition disabled
+ * @IDT_SWPARTxSTS_STATE_ACT: Switch partition enabled
+ * @IDT_SWPARTxSTS_STATE_RES: Switch partition in reset
+ * @IDT_SWPARTxSTS_US: Switch partition has upstream port
+ * @IDT_SWPARTxSTS_USID_MASK: Switch partition upstream port ID mask
+ * @IDT_SWPARTxSTS_USID_FLD: Switch partition upstream port ID field offset
+ * @IDT_SWPARTxSTS_NT: Upstream port has NT function
+ * @IDT_SWPARTxSTS_DMA: Upstream port has DMA function
+ */
+#define IDT_SWPARTxSTS_SCI 0x00000001U
+#define IDT_SWPARTxSTS_SCC 0x00000002U
+#define IDT_SWPARTxSTS_STATE_MASK 0x00000060U
+#define IDT_SWPARTxSTS_STATE_FLD 5
+#define IDT_SWPARTxSTS_STATE_DIS 0x00000000U
+#define IDT_SWPARTxSTS_STATE_ACT 0x00000020U
+#define IDT_SWPARTxSTS_STATE_RES 0x00000060U
+#define IDT_SWPARTxSTS_US 0x00000100U
+#define IDT_SWPARTxSTS_USID_MASK 0x00003E00U
+#define IDT_SWPARTxSTS_USID_FLD 9
+#define IDT_SWPARTxSTS_NT 0x00004000U
+#define IDT_SWPARTxSTS_DMA 0x00008000U
+
+/*
+ * SWPORTxSTS register fields related constants
+ * @IDT_SWPORTxSTS_OMCI: Operation mode change initiated
+ * @IDT_SWPORTxSTS_OMCC: Operation mode change completed
+ * @IDT_SWPORTxSTS_LINKUP: Link up status
+ * @IDT_SWPORTxSTS_DS: Port lanes behave as downstream lanes
+ * @IDT_SWPORTxSTS_MODE_MASK: Port mode field mask
+ * @IDT_SWPORTxSTS_MODE_FLD: Port mode field offset
+ * @IDT_SWPORTxSTS_MODE_DIS: Port mode - disabled
+ * @IDT_SWPORTxSTS_MODE_DS: Port mode - downstream switch port
+ * @IDT_SWPORTxSTS_MODE_US: Port mode - upstream switch port
+ * @IDT_SWPORTxSTS_MODE_NT: Port mode - NT function
+ * @IDT_SWPORTxSTS_MODE_USNT: Port mode - upstream switch port with NTB
+ * @IDT_SWPORTxSTS_MODE_UNAT: Port mode - unattached
+ * @IDT_SWPORTxSTS_MODE_USDMA: Port mode - upstream switch port with DMA
+ * @IDT_SWPORTxSTS_MODE_USNTDMA:Port mode - upstream port with NTB and DMA
+ * @IDT_SWPORTxSTS_MODE_NTDMA: Port mode - NT function with DMA
+ * @IDT_SWPORTxSTS_SWPART_MASK: Port partition field mask
+ * @IDT_SWPORTxSTS_SWPART_FLD: Port partition field offset
+ * @IDT_SWPORTxSTS_DEVNUM_MASK: Port device number field mask
+ * @IDT_SWPORTxSTS_DEVNUM_FLD: Port device number field offset
+ */
+#define IDT_SWPORTxSTS_OMCI 0x00000001U
+#define IDT_SWPORTxSTS_OMCC 0x00000002U
+#define IDT_SWPORTxSTS_LINKUP 0x00000010U
+#define IDT_SWPORTxSTS_DS 0x00000020U
+#define IDT_SWPORTxSTS_MODE_MASK 0x000003C0U
+#define IDT_SWPORTxSTS_MODE_FLD 6
+#define IDT_SWPORTxSTS_MODE_DIS 0x00000000U
+#define IDT_SWPORTxSTS_MODE_DS 0x00000040U
+#define IDT_SWPORTxSTS_MODE_US 0x00000080U
+#define IDT_SWPORTxSTS_MODE_NT 0x000000C0U
+#define IDT_SWPORTxSTS_MODE_USNT 0x00000100U
+#define IDT_SWPORTxSTS_MODE_UNAT 0x00000140U
+#define IDT_SWPORTxSTS_MODE_USDMA 0x00000180U
+#define IDT_SWPORTxSTS_MODE_USNTDMA 0x000001C0U
+#define IDT_SWPORTxSTS_MODE_NTDMA 0x00000200U
+#define IDT_SWPORTxSTS_SWPART_MASK 0x00001C00U
+#define IDT_SWPORTxSTS_SWPART_FLD 10
+#define IDT_SWPORTxSTS_DEVNUM_MASK 0x001F0000U
+#define IDT_SWPORTxSTS_DEVNUM_FLD 16
+
+/*
+ * SEMSK register fields related constants
+ * @IDT_SEMSK_LINKUP: Link Up event mask bit
+ * @IDT_SEMSK_LINKDN: Link Down event mask bit
+ * @IDT_SEMSK_GSIGNAL: Global Signal event mask bit
+ */
+#define IDT_SEMSK_LINKUP 0x00000001U
+#define IDT_SEMSK_LINKDN 0x00000002U
+#define IDT_SEMSK_GSIGNAL 0x00000020U
+
+/*
+ * SWPxMSGCTL register fields related constants
+ * @IDT_SWPxMSGCTL_REG_MASK: Register select field mask
+ * @IDT_SWPxMSGCTL_REG_FLD: Register select field offset
+ * @IDT_SWPxMSGCTL_PART_MASK: Partition select field mask
+ * @IDT_SWPxMSGCTL_PART_FLD: Partition select field offset
+ */
+#define IDT_SWPxMSGCTL_REG_MASK 0x00000003U
+#define IDT_SWPxMSGCTL_REG_FLD 0
+#define IDT_SWPxMSGCTL_PART_MASK 0x00000070U
+#define IDT_SWPxMSGCTL_PART_FLD 4
+
+/*
+ * TMPSTS register fields related constants
+ * @IDT_TMPSTS_TEMP_MASK: Current temperature field mask
+ * @IDT_TMPSTS_TEMP_FLD: Current temperature field offset
+ */
+#define IDT_TMPSTS_TEMP_MASK 0x000000FFU
+#define IDT_TMPSTS_TEMP_FLD 0
+
+/*
+ * Helper macros to get/set the corresponding field value
+ * @GET_FIELD: Retrieve the value of the corresponding field
+ * @SET_FIELD: Set the specified field up
+ * @IS_FLD_SET:	Check whether a field is set to the specified value
+ */
+#define GET_FIELD(field, data) \
+ (((u32)(data) & IDT_ ##field## _MASK) >> IDT_ ##field## _FLD)
+#define SET_FIELD(field, data, value) \
+ (((u32)(data) & ~IDT_ ##field## _MASK) | \
+ ((u32)(value) << IDT_ ##field## _FLD))
+#define IS_FLD_SET(field, data, value) \
+ (((u32)(data) & IDT_ ##field## _MASK) == IDT_ ##field## _ ##value)
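+
+/*
+ * Usage sketch (illustrative only): the helpers above glue the "IDT_" prefix
+ * and the "_MASK"/"_FLD" suffixes to the field name, so a SWPORTxSTS value
+ * can be parsed like this:
+ *
+ *	part = GET_FIELD(SWPORTxSTS_SWPART, data);
+ *	mode_is_nt = IS_FLD_SET(SWPORTxSTS_MODE, data, NT);
+ */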
+
+/*
+ * Useful registers masks:
+ * @IDT_DBELL_MASK: Doorbell bits mask
+ * @IDT_OUTMSG_MASK: Out messages status bits mask
+ * @IDT_INMSG_MASK: In messages status bits mask
+ * @IDT_MSG_MASK: Any message status bits mask
+ */
+#define IDT_DBELL_MASK ((u32)0xFFFFFFFFU)
+#define IDT_OUTMSG_MASK ((u32)0x0000000FU)
+#define IDT_INMSG_MASK ((u32)0x000F0000U)
+#define IDT_MSG_MASK (IDT_INMSG_MASK | IDT_OUTMSG_MASK)
+
+/*
+ * Number of IDT NTB resources:
+ * @IDT_MSG_CNT: Number of Message registers
+ * @IDT_BAR_CNT: Number of BARs of each port
+ * @IDT_MTBL_ENTRY_CNT:	Number of mapping table entries
+ */
+#define IDT_MSG_CNT 4
+#define IDT_BAR_CNT 6
+#define IDT_MTBL_ENTRY_CNT 64
+
+/*
+ * General IDT PCIe-switch constants
+ * @IDT_MAX_NR_PORTS: Maximum number of ports per IDT PCIe-switch
+ * @IDT_MAX_NR_PARTS: Maximum number of partitions per IDT PCIe-switch
+ * @IDT_MAX_NR_PEERS: Maximum number of NT-peers per IDT PCIe-switch
+ * @IDT_MAX_NR_MWS:	Maximum number of Memory Windows
+ * @IDT_PCIE_REGSIZE: Size of the registers in bytes
+ * @IDT_TRANS_ALIGN: Alignment of translated base address
+ * @IDT_DIR_SIZE_ALIGN: Alignment of size setting for direct translated MWs.
+ *			Even though the lower 10 bits are reserved, they are
+ *			treated by IDT as ones, so effectively there is no
+ *			size alignment limit for DIR address translation.
+ */
+#define IDT_MAX_NR_PORTS 24
+#define IDT_MAX_NR_PARTS 8
+#define IDT_MAX_NR_PEERS 8
+#define IDT_MAX_NR_MWS 29
+#define IDT_PCIE_REGSIZE 4
+#define IDT_TRANS_ALIGN 4
+#define IDT_DIR_SIZE_ALIGN 1
+
+/*
+ * IDT Memory Windows type. Depending on the device settings, IDT supports
+ * Direct Address Translation MW registers and Lookup Table registers
+ * @IDT_MW_DIR: Direct address translation
+ * @IDT_MW_LUT12: 12-entry lookup table entry
+ * @IDT_MW_LUT24: 24-entry lookup table entry
+ *
+ * NOTE These values are exactly the same as the ones of the BARSETUP ATRAN field
+ */
+enum idt_mw_type {
+ IDT_MW_DIR = 0x0,
+ IDT_MW_LUT12 = 0x1,
+ IDT_MW_LUT24 = 0x2
+};
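+
+/*
+ * Illustrative sketch (an assumption, not from the original patch): since
+ * these values mirror the BARSETUP ATRAN field encoding, the window type of
+ * a BAR can be recovered straight from its BARSETUP register value, e.g.
+ * (idt_sw_read() stands for a hypothetical global config space read helper):
+ *
+ *	data = idt_sw_read(ndev, bar->setup);
+ *	mw_type = GET_FIELD(BARSETUP_ATRAN, data);
+ */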
+
+/*
+ * IDT PCIe-switch model private data
+ * @name: Device name
+ * @port_cnt: Total number of NT endpoint ports
+ * @ports: Port ids
+ */
+struct idt_89hpes_cfg {
+ char *name;
+ unsigned char port_cnt;
+ unsigned char ports[];
+};
+
+/*
+ * Memory window configuration structure
+ * @type: Type of the memory window (direct address translation or lookup
+ * table)
+ *
+ * @bar:		PCIe BAR the memory window refers to
+ * @idx: Index of the memory window within the BAR
+ *
+ * @addr_align: Alignment of translated address
+ * @size_align: Alignment of memory window size
+ * @size_max: Maximum size of memory window
+ */
+struct idt_mw_cfg {
+ enum idt_mw_type type;
+
+ unsigned char bar;
+ unsigned char idx;
+
+ u64 addr_align;
+ u64 size_align;
+ u64 size_max;
+};
+
+/*
+ * Description structure of peer IDT NT-functions:
+ * @port: NT-function port
+ * @part: NT-function partition
+ *
+ * @mw_cnt: Number of memory windows supported by NT-function
+ * @mws: Array of memory windows descriptors
+ */
+struct idt_ntb_peer {
+ unsigned char port;
+ unsigned char part;
+
+ unsigned char mw_cnt;
+ struct idt_mw_cfg *mws;
+};
+
+/*
+ * Description structure of local IDT NT-function:
+ * @ntb: Linux NTB-device description structure
+ * @swcfg: Pointer to the structure of local IDT PCIe-switch
+ *			specific configurations
+ *
+ * @port: Local NT-function port
+ * @part: Local NT-function partition
+ *
+ * @peer_cnt: Number of peers with activated NTB-function
+ * @peers:		Array of peer descriptor structures
+ * @port_idx_map: Map of port number -> peer index
+ * @part_idx_map: Map of partition number -> peer index
+ *
+ * @mtbl_lock: Mapping table access lock
+ *
+ * @mw_cnt: Number of memory windows supported by NT-function
+ * @mws: Array of memory windows descriptors
+ * @lut_lock: Lookup table access lock
+ *
+ * @msg_locks:		Message registers mapping table locks
+ *
+ * @cfgspc: Virtual address of the memory mapped configuration
+ * space of the NT-function
+ * @db_mask_lock: Doorbell mask register lock
+ * @msg_mask_lock: Message mask register lock
+ * @gasa_lock: GASA registers access lock
+ *
+ * @dbgfs_info: DebugFS info node
+ */
+struct idt_ntb_dev {
+ struct ntb_dev ntb;
+ struct idt_89hpes_cfg *swcfg;
+
+ unsigned char port;
+ unsigned char part;
+
+ unsigned char peer_cnt;
+ struct idt_ntb_peer peers[IDT_MAX_NR_PEERS];
+ char port_idx_map[IDT_MAX_NR_PORTS];
+ char part_idx_map[IDT_MAX_NR_PARTS];
+
+ spinlock_t mtbl_lock;
+
+ unsigned char mw_cnt;
+ struct idt_mw_cfg *mws;
+ spinlock_t lut_lock;
+
+ spinlock_t msg_locks[IDT_MSG_CNT];
+
+ void __iomem *cfgspc;
+ spinlock_t db_mask_lock;
+ spinlock_t msg_mask_lock;
+ spinlock_t gasa_lock;
+
+ struct dentry *dbgfs_info;
+};
+#define to_ndev_ntb(__ntb) container_of(__ntb, struct idt_ntb_dev, ntb)
+
+/*
+ * Descriptor of the IDT PCIe-switch BAR resources
+ * @setup: BAR setup register
+ * @limit: BAR limit register
+ * @ltbase: Lower translated base address
+ * @utbase: Upper translated base address
+ */
+struct idt_ntb_bar {
+ unsigned int setup;
+ unsigned int limit;
+ unsigned int ltbase;
+ unsigned int utbase;
+};
+
+/*
+ * Descriptor of the IDT PCIe-switch message resources
+ * @in: Inbound message register
+ * @out: Outbound message register
+ * @src: Source of inbound message register
+ */
+struct idt_ntb_msg {
+ unsigned int in;
+ unsigned int out;
+ unsigned int src;
+};
+
+/*
+ * Descriptor of the IDT PCIe-switch NT-function specific parameters in the
+ * PCI Configuration Space
+ * @bars: BARs related registers
+ * @msgs: Messaging related registers
+ */
+struct idt_ntb_regs {
+ struct idt_ntb_bar bars[IDT_BAR_CNT];
+ struct idt_ntb_msg msgs[IDT_MSG_CNT];
+};
+
+/*
+ * Descriptor of the IDT PCIe-switch port specific parameters in the
+ * Global Configuration Space
+ * @pcicmdsts: PCI command/status register
+ * @pcielctlsts:	PCIe link control/status
+ * @ntctl:		NT-function control register
+ *
+ * @ctl: Port control register
+ * @sts: Port status register
+ *
+ * @bars: BARs related registers
+ */
+struct idt_ntb_port {
+ unsigned int pcicmdsts;
+ unsigned int pcielctlsts;
+ unsigned int ntctl;
+
+ unsigned int ctl;
+ unsigned int sts;
+
+ struct idt_ntb_bar bars[IDT_BAR_CNT];
+};
+
+/*
+ * Descriptor of the IDT PCIe-switch partition specific parameters.
+ * @ctl: Partition control register in the Global Address Space
+ * @sts: Partition status register in the Global Address Space
+ * @msgctl: Messages control registers
+ */
+struct idt_ntb_part {
+ unsigned int ctl;
+ unsigned int sts;
+ unsigned int msgctl[IDT_MSG_CNT];
+};
+
+#endif /* NTB_HW_IDT_H */
diff --git a/drivers/ntb/hw/intel/ntb_hw_intel.c b/drivers/ntb/hw/intel/ntb_hw_intel.c
index 7b3b6fd63d7d..2557e2c05b90 100644
--- a/drivers/ntb/hw/intel/ntb_hw_intel.c
+++ b/drivers/ntb/hw/intel/ntb_hw_intel.c
@@ -6,6 +6,7 @@
*
* Copyright(c) 2012 Intel Corporation. All rights reserved.
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -15,6 +16,7 @@
*
* Copyright(c) 2012 Intel Corporation. All rights reserved.
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -270,12 +272,12 @@ static inline int ndev_db_addr(struct intel_ntb_dev *ndev,
if (db_addr) {
*db_addr = reg_addr + reg;
- dev_dbg(ndev_dev(ndev), "Peer db addr %llx\n", *db_addr);
+ dev_dbg(&ndev->ntb.pdev->dev, "Peer db addr %llx\n", *db_addr);
}
if (db_size) {
*db_size = ndev->reg->db_size;
- dev_dbg(ndev_dev(ndev), "Peer db size %llx\n", *db_size);
+ dev_dbg(&ndev->ntb.pdev->dev, "Peer db size %llx\n", *db_size);
}
return 0;
@@ -368,7 +370,8 @@ static inline int ndev_spad_addr(struct intel_ntb_dev *ndev, int idx,
if (spad_addr) {
*spad_addr = reg_addr + reg + (idx << 2);
- dev_dbg(ndev_dev(ndev), "Peer spad addr %llx\n", *spad_addr);
+ dev_dbg(&ndev->ntb.pdev->dev, "Peer spad addr %llx\n",
+ *spad_addr);
}
return 0;
@@ -409,7 +412,7 @@ static irqreturn_t ndev_interrupt(struct intel_ntb_dev *ndev, int vec)
if ((ndev->hwerr_flags & NTB_HWERR_MSIX_VECTOR32_BAD) && (vec == 31))
vec_mask |= ndev->db_link_mask;
- dev_dbg(ndev_dev(ndev), "vec %d vec_mask %llx\n", vec, vec_mask);
+ dev_dbg(&ndev->ntb.pdev->dev, "vec %d vec_mask %llx\n", vec, vec_mask);
ndev->last_ts = jiffies;
@@ -428,7 +431,7 @@ static irqreturn_t ndev_vec_isr(int irq, void *dev)
{
struct intel_ntb_vec *nvec = dev;
- dev_dbg(ndev_dev(nvec->ndev), "irq: %d nvec->num: %d\n",
+ dev_dbg(&nvec->ndev->ntb.pdev->dev, "irq: %d nvec->num: %d\n",
irq, nvec->num);
return ndev_interrupt(nvec->ndev, nvec->num);
@@ -438,7 +441,7 @@ static irqreturn_t ndev_irq_isr(int irq, void *dev)
{
struct intel_ntb_dev *ndev = dev;
- return ndev_interrupt(ndev, irq - ndev_pdev(ndev)->irq);
+ return ndev_interrupt(ndev, irq - ndev->ntb.pdev->irq);
}
static int ndev_init_isr(struct intel_ntb_dev *ndev,
@@ -448,7 +451,7 @@ static int ndev_init_isr(struct intel_ntb_dev *ndev,
struct pci_dev *pdev;
int rc, i, msix_count, node;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
node = dev_to_node(&pdev->dev);
@@ -487,7 +490,7 @@ static int ndev_init_isr(struct intel_ntb_dev *ndev,
goto err_msix_request;
}
- dev_dbg(ndev_dev(ndev), "Using %d msix interrupts\n", msix_count);
+ dev_dbg(&pdev->dev, "Using %d msix interrupts\n", msix_count);
ndev->db_vec_count = msix_count;
ndev->db_vec_shift = msix_shift;
return 0;
@@ -515,7 +518,7 @@ err_msix_vec_alloc:
if (rc)
goto err_msi_request;
- dev_dbg(ndev_dev(ndev), "Using msi interrupts\n");
+ dev_dbg(&pdev->dev, "Using msi interrupts\n");
ndev->db_vec_count = 1;
ndev->db_vec_shift = total_shift;
return 0;
@@ -533,7 +536,7 @@ err_msi_enable:
if (rc)
goto err_intx_request;
- dev_dbg(ndev_dev(ndev), "Using intx interrupts\n");
+ dev_dbg(&pdev->dev, "Using intx interrupts\n");
ndev->db_vec_count = 1;
ndev->db_vec_shift = total_shift;
return 0;
@@ -547,7 +550,7 @@ static void ndev_deinit_isr(struct intel_ntb_dev *ndev)
struct pci_dev *pdev;
int i;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
/* Mask all doorbell interrupts */
ndev->db_mask = ndev->db_valid_mask;
@@ -744,7 +747,7 @@ static ssize_t ndev_ntb_debugfs_read(struct file *filp, char __user *ubuf,
union { u64 v64; u32 v32; u16 v16; u8 v8; } u;
ndev = filp->private_data;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
mmio = ndev->self_mmio;
buf_size = min(count, 0x800ul);
@@ -1019,7 +1022,8 @@ static void ndev_init_debugfs(struct intel_ntb_dev *ndev)
ndev->debugfs_info = NULL;
} else {
ndev->debugfs_dir =
- debugfs_create_dir(ndev_name(ndev), debugfs_dir);
+ debugfs_create_dir(pci_name(ndev->ntb.pdev),
+ debugfs_dir);
if (!ndev->debugfs_dir)
ndev->debugfs_info = NULL;
else
@@ -1035,20 +1039,26 @@ static void ndev_deinit_debugfs(struct intel_ntb_dev *ndev)
debugfs_remove_recursive(ndev->debugfs_dir);
}
-static int intel_ntb_mw_count(struct ntb_dev *ntb)
+static int intel_ntb_mw_count(struct ntb_dev *ntb, int pidx)
{
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
return ntb_ndev(ntb)->mw_count;
}
-static int intel_ntb_mw_get_range(struct ntb_dev *ntb, int idx,
- phys_addr_t *base,
- resource_size_t *size,
- resource_size_t *align,
- resource_size_t *align_size)
+static int intel_ntb_mw_get_align(struct ntb_dev *ntb, int pidx, int idx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
+ resource_size_t bar_size, mw_size;
int bar;
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
if (idx >= ndev->b2b_idx && !ndev->b2b_off)
idx += 1;
@@ -1056,24 +1066,26 @@ static int intel_ntb_mw_get_range(struct ntb_dev *ntb, int idx,
if (bar < 0)
return bar;
- if (base)
- *base = pci_resource_start(ndev->ntb.pdev, bar) +
- (idx == ndev->b2b_idx ? ndev->b2b_off : 0);
+ bar_size = pci_resource_len(ndev->ntb.pdev, bar);
- if (size)
- *size = pci_resource_len(ndev->ntb.pdev, bar) -
- (idx == ndev->b2b_idx ? ndev->b2b_off : 0);
+ if (idx == ndev->b2b_idx)
+ mw_size = bar_size - ndev->b2b_off;
+ else
+ mw_size = bar_size;
+
+ if (addr_align)
+ *addr_align = pci_resource_len(ndev->ntb.pdev, bar);
- if (align)
- *align = pci_resource_len(ndev->ntb.pdev, bar);
+ if (size_align)
+ *size_align = 1;
- if (align_size)
- *align_size = 1;
+ if (size_max)
+ *size_max = mw_size;
return 0;
}
-static int intel_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
+static int intel_ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
dma_addr_t addr, resource_size_t size)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
@@ -1083,6 +1095,9 @@ static int intel_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
u64 base, limit, reg_val;
int bar;
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
if (idx >= ndev->b2b_idx && !ndev->b2b_off)
idx += 1;
@@ -1171,7 +1186,7 @@ static int intel_ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
return 0;
}
-static int intel_ntb_link_is_up(struct ntb_dev *ntb,
+static u64 intel_ntb_link_is_up(struct ntb_dev *ntb,
enum ntb_speed *speed,
enum ntb_width *width)
{
@@ -1206,13 +1221,13 @@ static int intel_ntb_link_enable(struct ntb_dev *ntb,
if (ndev->ntb.topo == NTB_TOPO_SEC)
return -EINVAL;
- dev_dbg(ndev_dev(ndev),
+ dev_dbg(&ntb->pdev->dev,
"Enabling link with max_speed %d max_width %d\n",
max_speed, max_width);
if (max_speed != NTB_SPEED_AUTO)
- dev_dbg(ndev_dev(ndev), "ignoring max_speed %d\n", max_speed);
+ dev_dbg(&ntb->pdev->dev, "ignoring max_speed %d\n", max_speed);
if (max_width != NTB_WIDTH_AUTO)
- dev_dbg(ndev_dev(ndev), "ignoring max_width %d\n", max_width);
+ dev_dbg(&ntb->pdev->dev, "ignoring max_width %d\n", max_width);
ntb_ctl = ioread32(ndev->self_mmio + ndev->reg->ntb_ctl);
ntb_ctl &= ~(NTB_CTL_DISABLE | NTB_CTL_CFG_LOCK);
@@ -1235,7 +1250,7 @@ static int intel_ntb_link_disable(struct ntb_dev *ntb)
if (ndev->ntb.topo == NTB_TOPO_SEC)
return -EINVAL;
- dev_dbg(ndev_dev(ndev), "Disabling link\n");
+ dev_dbg(&ntb->pdev->dev, "Disabling link\n");
/* Bring NTB link down */
ntb_cntl = ioread32(ndev->self_mmio + ndev->reg->ntb_ctl);
@@ -1249,6 +1264,36 @@ static int intel_ntb_link_disable(struct ntb_dev *ntb)
return 0;
}
+static int intel_ntb_peer_mw_count(struct ntb_dev *ntb)
+{
+ /* Numbers of inbound and outbound memory windows match */
+ return ntb_ndev(ntb)->mw_count;
+}
+
+static int intel_ntb_peer_mw_get_addr(struct ntb_dev *ntb, int idx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ struct intel_ntb_dev *ndev = ntb_ndev(ntb);
+ int bar;
+
+ if (idx >= ndev->b2b_idx && !ndev->b2b_off)
+ idx += 1;
+
+ bar = ndev_mw_to_bar(ndev, idx);
+ if (bar < 0)
+ return bar;
+
+ if (base)
+ *base = pci_resource_start(ndev->ntb.pdev, bar) +
+ (idx == ndev->b2b_idx ? ndev->b2b_off : 0);
+
+ if (size)
+ *size = pci_resource_len(ndev->ntb.pdev, bar) -
+ (idx == ndev->b2b_idx ? ndev->b2b_off : 0);
+
+ return 0;
+}
+
static int intel_ntb_db_is_unsafe(struct ntb_dev *ntb)
{
return ndev_ignore_unsafe(ntb_ndev(ntb), NTB_UNSAFE_DB);
@@ -1366,30 +1411,30 @@ static int intel_ntb_spad_write(struct ntb_dev *ntb,
ndev->self_reg->spad);
}
-static int intel_ntb_peer_spad_addr(struct ntb_dev *ntb, int idx,
+static int intel_ntb_peer_spad_addr(struct ntb_dev *ntb, int pidx, int sidx,
phys_addr_t *spad_addr)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
- return ndev_spad_addr(ndev, idx, spad_addr, ndev->peer_addr,
+ return ndev_spad_addr(ndev, sidx, spad_addr, ndev->peer_addr,
ndev->peer_reg->spad);
}
-static u32 intel_ntb_peer_spad_read(struct ntb_dev *ntb, int idx)
+static u32 intel_ntb_peer_spad_read(struct ntb_dev *ntb, int pidx, int sidx)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
- return ndev_spad_read(ndev, idx,
+ return ndev_spad_read(ndev, sidx,
ndev->peer_mmio +
ndev->peer_reg->spad);
}
-static int intel_ntb_peer_spad_write(struct ntb_dev *ntb,
- int idx, u32 val)
+static int intel_ntb_peer_spad_write(struct ntb_dev *ntb, int pidx,
+ int sidx, u32 val)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
- return ndev_spad_write(ndev, idx, val,
+ return ndev_spad_write(ndev, sidx, val,
ndev->peer_mmio +
ndev->peer_reg->spad);
}
@@ -1442,30 +1487,33 @@ static int atom_link_is_err(struct intel_ntb_dev *ndev)
static inline enum ntb_topo atom_ppd_topo(struct intel_ntb_dev *ndev, u32 ppd)
{
+ struct device *dev = &ndev->ntb.pdev->dev;
+
switch (ppd & ATOM_PPD_TOPO_MASK) {
case ATOM_PPD_TOPO_B2B_USD:
- dev_dbg(ndev_dev(ndev), "PPD %d B2B USD\n", ppd);
+ dev_dbg(dev, "PPD %d B2B USD\n", ppd);
return NTB_TOPO_B2B_USD;
case ATOM_PPD_TOPO_B2B_DSD:
- dev_dbg(ndev_dev(ndev), "PPD %d B2B DSD\n", ppd);
+ dev_dbg(dev, "PPD %d B2B DSD\n", ppd);
return NTB_TOPO_B2B_DSD;
case ATOM_PPD_TOPO_PRI_USD:
case ATOM_PPD_TOPO_PRI_DSD: /* accept bogus PRI_DSD */
case ATOM_PPD_TOPO_SEC_USD:
case ATOM_PPD_TOPO_SEC_DSD: /* accept bogus SEC_DSD */
- dev_dbg(ndev_dev(ndev), "PPD %d non B2B disabled\n", ppd);
+ dev_dbg(dev, "PPD %d non B2B disabled\n", ppd);
return NTB_TOPO_NONE;
}
- dev_dbg(ndev_dev(ndev), "PPD %d invalid\n", ppd);
+ dev_dbg(dev, "PPD %d invalid\n", ppd);
return NTB_TOPO_NONE;
}
static void atom_link_hb(struct work_struct *work)
{
struct intel_ntb_dev *ndev = hb_ndev(work);
+ struct device *dev = &ndev->ntb.pdev->dev;
unsigned long poll_ts;
void __iomem *mmio;
u32 status32;
@@ -1503,30 +1551,30 @@ static void atom_link_hb(struct work_struct *work)
/* Clear AER Errors, write to clear */
status32 = ioread32(mmio + ATOM_ERRCORSTS_OFFSET);
- dev_dbg(ndev_dev(ndev), "ERRCORSTS = %x\n", status32);
+ dev_dbg(dev, "ERRCORSTS = %x\n", status32);
status32 &= PCI_ERR_COR_REP_ROLL;
iowrite32(status32, mmio + ATOM_ERRCORSTS_OFFSET);
/* Clear unexpected electrical idle event in LTSSM, write to clear */
status32 = ioread32(mmio + ATOM_LTSSMERRSTS0_OFFSET);
- dev_dbg(ndev_dev(ndev), "LTSSMERRSTS0 = %x\n", status32);
+ dev_dbg(dev, "LTSSMERRSTS0 = %x\n", status32);
status32 |= ATOM_LTSSMERRSTS0_UNEXPECTEDEI;
iowrite32(status32, mmio + ATOM_LTSSMERRSTS0_OFFSET);
/* Clear DeSkew Buffer error, write to clear */
status32 = ioread32(mmio + ATOM_DESKEWSTS_OFFSET);
- dev_dbg(ndev_dev(ndev), "DESKEWSTS = %x\n", status32);
+ dev_dbg(dev, "DESKEWSTS = %x\n", status32);
status32 |= ATOM_DESKEWSTS_DBERR;
iowrite32(status32, mmio + ATOM_DESKEWSTS_OFFSET);
status32 = ioread32(mmio + ATOM_IBSTERRRCRVSTS0_OFFSET);
- dev_dbg(ndev_dev(ndev), "IBSTERRRCRVSTS0 = %x\n", status32);
+ dev_dbg(dev, "IBSTERRRCRVSTS0 = %x\n", status32);
status32 &= ATOM_IBIST_ERR_OFLOW;
iowrite32(status32, mmio + ATOM_IBSTERRRCRVSTS0_OFFSET);
/* Releases the NTB state machine to allow the link to retrain */
status32 = ioread32(mmio + ATOM_LTSSMSTATEJMP_OFFSET);
- dev_dbg(ndev_dev(ndev), "LTSSMSTATEJMP = %x\n", status32);
+ dev_dbg(dev, "LTSSMSTATEJMP = %x\n", status32);
status32 &= ~ATOM_LTSSMSTATEJMP_FORCEDETECT;
iowrite32(status32, mmio + ATOM_LTSSMSTATEJMP_OFFSET);
@@ -1699,11 +1747,11 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
int b2b_bar;
u8 bar_sz;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
mmio = ndev->self_mmio;
if (ndev->b2b_idx == UINT_MAX) {
- dev_dbg(ndev_dev(ndev), "not using b2b mw\n");
+ dev_dbg(&pdev->dev, "not using b2b mw\n");
b2b_bar = 0;
ndev->b2b_off = 0;
} else {
@@ -1711,24 +1759,21 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
if (b2b_bar < 0)
return -EIO;
- dev_dbg(ndev_dev(ndev), "using b2b mw bar %d\n", b2b_bar);
+ dev_dbg(&pdev->dev, "using b2b mw bar %d\n", b2b_bar);
bar_size = pci_resource_len(ndev->ntb.pdev, b2b_bar);
- dev_dbg(ndev_dev(ndev), "b2b bar size %#llx\n", bar_size);
+ dev_dbg(&pdev->dev, "b2b bar size %#llx\n", bar_size);
if (b2b_mw_share && ((bar_size >> 1) >= XEON_B2B_MIN_SIZE)) {
- dev_dbg(ndev_dev(ndev),
- "b2b using first half of bar\n");
+ dev_dbg(&pdev->dev, "b2b using first half of bar\n");
ndev->b2b_off = bar_size >> 1;
} else if (bar_size >= XEON_B2B_MIN_SIZE) {
- dev_dbg(ndev_dev(ndev),
- "b2b using whole bar\n");
+ dev_dbg(&pdev->dev, "b2b using whole bar\n");
ndev->b2b_off = 0;
--ndev->mw_count;
} else {
- dev_dbg(ndev_dev(ndev),
- "b2b bar size is too small\n");
+ dev_dbg(&pdev->dev, "b2b bar size is too small\n");
return -EIO;
}
}
@@ -1738,7 +1783,7 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
* except disable or halve the size of the b2b secondary bar.
*/
pci_read_config_byte(pdev, SKX_IMBAR1SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "IMBAR1SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "IMBAR1SZ %#x\n", bar_sz);
if (b2b_bar == 1) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -1748,10 +1793,10 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
pci_write_config_byte(pdev, SKX_EMBAR1SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, SKX_EMBAR1SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "EMBAR1SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "EMBAR1SZ %#x\n", bar_sz);
pci_read_config_byte(pdev, SKX_IMBAR2SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "IMBAR2SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "IMBAR2SZ %#x\n", bar_sz);
if (b2b_bar == 2) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -1761,7 +1806,7 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
pci_write_config_byte(pdev, SKX_EMBAR2SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, SKX_EMBAR2SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "EMBAR2SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "EMBAR2SZ %#x\n", bar_sz);
/* SBAR01 hit by first part of the b2b bar */
if (b2b_bar == 0)
@@ -1777,12 +1822,12 @@ static int skx_setup_b2b_mw(struct intel_ntb_dev *ndev,
bar_addr = addr->bar2_addr64 + (b2b_bar == 1 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + SKX_IMBAR1XLMT_OFFSET);
bar_addr = ioread64(mmio + SKX_IMBAR1XLMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "IMBAR1XLMT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "IMBAR1XLMT %#018llx\n", bar_addr);
bar_addr = addr->bar4_addr64 + (b2b_bar == 2 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + SKX_IMBAR2XLMT_OFFSET);
bar_addr = ioread64(mmio + SKX_IMBAR2XLMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "IMBAR2XLMT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "IMBAR2XLMT %#018llx\n", bar_addr);
/* zero incoming translation addrs */
iowrite64(0, mmio + SKX_IMBAR1XBASE_OFFSET);
@@ -1852,7 +1897,7 @@ static int skx_init_dev(struct intel_ntb_dev *ndev)
u8 ppd;
int rc;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
ndev->reg = &skx_reg;
@@ -1861,7 +1906,7 @@ static int skx_init_dev(struct intel_ntb_dev *ndev)
return -EIO;
ndev->ntb.topo = xeon_ppd_topo(ndev, ppd);
- dev_dbg(ndev_dev(ndev), "ppd %#x topo %s\n", ppd,
+ dev_dbg(&pdev->dev, "ppd %#x topo %s\n", ppd,
ntb_topo_string(ndev->ntb.topo));
if (ndev->ntb.topo == NTB_TOPO_NONE)
return -EINVAL;
@@ -1885,14 +1930,14 @@ static int intel_ntb3_link_enable(struct ntb_dev *ntb,
ndev = container_of(ntb, struct intel_ntb_dev, ntb);
- dev_dbg(ndev_dev(ndev),
+ dev_dbg(&ntb->pdev->dev,
"Enabling link with max_speed %d max_width %d\n",
max_speed, max_width);
if (max_speed != NTB_SPEED_AUTO)
- dev_dbg(ndev_dev(ndev), "ignoring max_speed %d\n", max_speed);
+ dev_dbg(&ntb->pdev->dev, "ignoring max_speed %d\n", max_speed);
if (max_width != NTB_WIDTH_AUTO)
- dev_dbg(ndev_dev(ndev), "ignoring max_width %d\n", max_width);
+ dev_dbg(&ntb->pdev->dev, "ignoring max_width %d\n", max_width);
ntb_ctl = ioread32(ndev->self_mmio + ndev->reg->ntb_ctl);
ntb_ctl &= ~(NTB_CTL_DISABLE | NTB_CTL_CFG_LOCK);
@@ -1902,7 +1947,7 @@ static int intel_ntb3_link_enable(struct ntb_dev *ntb,
return 0;
}
-static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int idx,
+static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int pidx, int idx,
dma_addr_t addr, resource_size_t size)
{
struct intel_ntb_dev *ndev = ntb_ndev(ntb);
@@ -1912,6 +1957,9 @@ static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int idx,
u64 base, limit, reg_val;
int bar;
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
if (idx >= ndev->b2b_idx && !ndev->b2b_off)
idx += 1;
@@ -1953,7 +2001,7 @@ static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int idx,
return -EIO;
}
- dev_dbg(ndev_dev(ndev), "BAR %d IMBARXBASE: %#Lx\n", bar, reg_val);
+ dev_dbg(&ntb->pdev->dev, "BAR %d IMBARXBASE: %#Lx\n", bar, reg_val);
/* set and verify setting the limit */
iowrite64(limit, mmio + limit_reg);
@@ -1964,7 +2012,7 @@ static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int idx,
return -EIO;
}
- dev_dbg(ndev_dev(ndev), "BAR %d IMBARXLMT: %#Lx\n", bar, reg_val);
+ dev_dbg(&ntb->pdev->dev, "BAR %d IMBARXLMT: %#Lx\n", bar, reg_val);
/* setup the EP */
limit_reg = ndev->xlat_reg->bar2_limit + (idx * 0x10) + 0x4000;
@@ -1985,7 +2033,7 @@ static int intel_ntb3_mw_set_trans(struct ntb_dev *ntb, int idx,
return -EIO;
}
- dev_dbg(ndev_dev(ndev), "BAR %d EMBARXLMT: %#Lx\n", bar, reg_val);
+ dev_dbg(&ntb->pdev->dev, "BAR %d EMBARXLMT: %#Lx\n", bar, reg_val);
return 0;
}
@@ -2092,7 +2140,7 @@ static inline enum ntb_topo xeon_ppd_topo(struct intel_ntb_dev *ndev, u8 ppd)
static inline int xeon_ppd_bar4_split(struct intel_ntb_dev *ndev, u8 ppd)
{
if (ppd & XEON_PPD_SPLIT_BAR_MASK) {
- dev_dbg(ndev_dev(ndev), "PPD %d split bar\n", ppd);
+ dev_dbg(&ndev->ntb.pdev->dev, "PPD %d split bar\n", ppd);
return 1;
}
return 0;
@@ -2122,11 +2170,11 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
int b2b_bar;
u8 bar_sz;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
mmio = ndev->self_mmio;
if (ndev->b2b_idx == UINT_MAX) {
- dev_dbg(ndev_dev(ndev), "not using b2b mw\n");
+ dev_dbg(&pdev->dev, "not using b2b mw\n");
b2b_bar = 0;
ndev->b2b_off = 0;
} else {
@@ -2134,24 +2182,21 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
if (b2b_bar < 0)
return -EIO;
- dev_dbg(ndev_dev(ndev), "using b2b mw bar %d\n", b2b_bar);
+ dev_dbg(&pdev->dev, "using b2b mw bar %d\n", b2b_bar);
bar_size = pci_resource_len(ndev->ntb.pdev, b2b_bar);
- dev_dbg(ndev_dev(ndev), "b2b bar size %#llx\n", bar_size);
+ dev_dbg(&pdev->dev, "b2b bar size %#llx\n", bar_size);
if (b2b_mw_share && XEON_B2B_MIN_SIZE <= bar_size >> 1) {
- dev_dbg(ndev_dev(ndev),
- "b2b using first half of bar\n");
+ dev_dbg(&pdev->dev, "b2b using first half of bar\n");
ndev->b2b_off = bar_size >> 1;
} else if (XEON_B2B_MIN_SIZE <= bar_size) {
- dev_dbg(ndev_dev(ndev),
- "b2b using whole bar\n");
+ dev_dbg(&pdev->dev, "b2b using whole bar\n");
ndev->b2b_off = 0;
--ndev->mw_count;
} else {
- dev_dbg(ndev_dev(ndev),
- "b2b bar size is too small\n");
+ dev_dbg(&pdev->dev, "b2b bar size is too small\n");
return -EIO;
}
}
@@ -2163,7 +2208,7 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
* offsets are not in a consistent order (bar5sz comes after ppd, odd).
*/
pci_read_config_byte(pdev, XEON_PBAR23SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "PBAR23SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "PBAR23SZ %#x\n", bar_sz);
if (b2b_bar == 2) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -2172,11 +2217,11 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
}
pci_write_config_byte(pdev, XEON_SBAR23SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, XEON_SBAR23SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "SBAR23SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "SBAR23SZ %#x\n", bar_sz);
if (!ndev->bar4_split) {
pci_read_config_byte(pdev, XEON_PBAR45SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "PBAR45SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "PBAR45SZ %#x\n", bar_sz);
if (b2b_bar == 4) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -2185,10 +2230,10 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
}
pci_write_config_byte(pdev, XEON_SBAR45SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, XEON_SBAR45SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "SBAR45SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "SBAR45SZ %#x\n", bar_sz);
} else {
pci_read_config_byte(pdev, XEON_PBAR4SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "PBAR4SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "PBAR4SZ %#x\n", bar_sz);
if (b2b_bar == 4) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -2197,10 +2242,10 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
}
pci_write_config_byte(pdev, XEON_SBAR4SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, XEON_SBAR4SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "SBAR4SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "SBAR4SZ %#x\n", bar_sz);
pci_read_config_byte(pdev, XEON_PBAR5SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "PBAR5SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "PBAR5SZ %#x\n", bar_sz);
if (b2b_bar == 5) {
if (ndev->b2b_off)
bar_sz -= 1;
@@ -2209,7 +2254,7 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
}
pci_write_config_byte(pdev, XEON_SBAR5SZ_OFFSET, bar_sz);
pci_read_config_byte(pdev, XEON_SBAR5SZ_OFFSET, &bar_sz);
- dev_dbg(ndev_dev(ndev), "SBAR5SZ %#x\n", bar_sz);
+ dev_dbg(&pdev->dev, "SBAR5SZ %#x\n", bar_sz);
}
/* SBAR01 hit by first part of the b2b bar */
@@ -2226,7 +2271,7 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
else
return -EIO;
- dev_dbg(ndev_dev(ndev), "SBAR01 %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR01 %#018llx\n", bar_addr);
iowrite64(bar_addr, mmio + XEON_SBAR0BASE_OFFSET);
/* Other SBAR are normally hit by the PBAR xlat, except for b2b bar.
@@ -2237,26 +2282,26 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
bar_addr = addr->bar2_addr64 + (b2b_bar == 2 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + XEON_SBAR23BASE_OFFSET);
bar_addr = ioread64(mmio + XEON_SBAR23BASE_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR23 %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR23 %#018llx\n", bar_addr);
if (!ndev->bar4_split) {
bar_addr = addr->bar4_addr64 +
(b2b_bar == 4 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + XEON_SBAR45BASE_OFFSET);
bar_addr = ioread64(mmio + XEON_SBAR45BASE_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR45 %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR45 %#018llx\n", bar_addr);
} else {
bar_addr = addr->bar4_addr32 +
(b2b_bar == 4 ? ndev->b2b_off : 0);
iowrite32(bar_addr, mmio + XEON_SBAR4BASE_OFFSET);
bar_addr = ioread32(mmio + XEON_SBAR4BASE_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR4 %#010llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR4 %#010llx\n", bar_addr);
bar_addr = addr->bar5_addr32 +
(b2b_bar == 5 ? ndev->b2b_off : 0);
iowrite32(bar_addr, mmio + XEON_SBAR5BASE_OFFSET);
bar_addr = ioread32(mmio + XEON_SBAR5BASE_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR5 %#010llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR5 %#010llx\n", bar_addr);
}
/* setup incoming bar limits == base addrs (zero length windows) */
@@ -2264,26 +2309,26 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
bar_addr = addr->bar2_addr64 + (b2b_bar == 2 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + XEON_SBAR23LMT_OFFSET);
bar_addr = ioread64(mmio + XEON_SBAR23LMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR23LMT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR23LMT %#018llx\n", bar_addr);
if (!ndev->bar4_split) {
bar_addr = addr->bar4_addr64 +
(b2b_bar == 4 ? ndev->b2b_off : 0);
iowrite64(bar_addr, mmio + XEON_SBAR45LMT_OFFSET);
bar_addr = ioread64(mmio + XEON_SBAR45LMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR45LMT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR45LMT %#018llx\n", bar_addr);
} else {
bar_addr = addr->bar4_addr32 +
(b2b_bar == 4 ? ndev->b2b_off : 0);
iowrite32(bar_addr, mmio + XEON_SBAR4LMT_OFFSET);
bar_addr = ioread32(mmio + XEON_SBAR4LMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR4LMT %#010llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR4LMT %#010llx\n", bar_addr);
bar_addr = addr->bar5_addr32 +
(b2b_bar == 5 ? ndev->b2b_off : 0);
iowrite32(bar_addr, mmio + XEON_SBAR5LMT_OFFSET);
bar_addr = ioread32(mmio + XEON_SBAR5LMT_OFFSET);
- dev_dbg(ndev_dev(ndev), "SBAR5LMT %#05llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "SBAR5LMT %#05llx\n", bar_addr);
}
/* zero incoming translation addrs */
@@ -2309,23 +2354,23 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
bar_addr = peer_addr->bar2_addr64;
iowrite64(bar_addr, mmio + XEON_PBAR23XLAT_OFFSET);
bar_addr = ioread64(mmio + XEON_PBAR23XLAT_OFFSET);
- dev_dbg(ndev_dev(ndev), "PBAR23XLAT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "PBAR23XLAT %#018llx\n", bar_addr);
if (!ndev->bar4_split) {
bar_addr = peer_addr->bar4_addr64;
iowrite64(bar_addr, mmio + XEON_PBAR45XLAT_OFFSET);
bar_addr = ioread64(mmio + XEON_PBAR45XLAT_OFFSET);
- dev_dbg(ndev_dev(ndev), "PBAR45XLAT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "PBAR45XLAT %#018llx\n", bar_addr);
} else {
bar_addr = peer_addr->bar4_addr32;
iowrite32(bar_addr, mmio + XEON_PBAR4XLAT_OFFSET);
bar_addr = ioread32(mmio + XEON_PBAR4XLAT_OFFSET);
- dev_dbg(ndev_dev(ndev), "PBAR4XLAT %#010llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "PBAR4XLAT %#010llx\n", bar_addr);
bar_addr = peer_addr->bar5_addr32;
iowrite32(bar_addr, mmio + XEON_PBAR5XLAT_OFFSET);
bar_addr = ioread32(mmio + XEON_PBAR5XLAT_OFFSET);
- dev_dbg(ndev_dev(ndev), "PBAR5XLAT %#010llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "PBAR5XLAT %#010llx\n", bar_addr);
}
/* set the translation offset for b2b registers */
@@ -2343,7 +2388,7 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
return -EIO;
/* B2B_XLAT_OFFSET is 64bit, but can only take 32bit writes */
- dev_dbg(ndev_dev(ndev), "B2BXLAT %#018llx\n", bar_addr);
+ dev_dbg(&pdev->dev, "B2BXLAT %#018llx\n", bar_addr);
iowrite32(bar_addr, mmio + XEON_B2B_XLAT_OFFSETL);
iowrite32(bar_addr >> 32, mmio + XEON_B2B_XLAT_OFFSETU);
@@ -2362,6 +2407,7 @@ static int xeon_setup_b2b_mw(struct intel_ntb_dev *ndev,
static int xeon_init_ntb(struct intel_ntb_dev *ndev)
{
+ struct device *dev = &ndev->ntb.pdev->dev;
int rc;
u32 ntb_ctl;
@@ -2377,7 +2423,7 @@ static int xeon_init_ntb(struct intel_ntb_dev *ndev)
switch (ndev->ntb.topo) {
case NTB_TOPO_PRI:
if (ndev->hwerr_flags & NTB_HWERR_SDOORBELL_LOCKUP) {
- dev_err(ndev_dev(ndev), "NTB Primary config disabled\n");
+ dev_err(dev, "NTB Primary config disabled\n");
return -EINVAL;
}
@@ -2395,7 +2441,7 @@ static int xeon_init_ntb(struct intel_ntb_dev *ndev)
case NTB_TOPO_SEC:
if (ndev->hwerr_flags & NTB_HWERR_SDOORBELL_LOCKUP) {
- dev_err(ndev_dev(ndev), "NTB Secondary config disabled\n");
+ dev_err(dev, "NTB Secondary config disabled\n");
return -EINVAL;
}
/* use half the spads for the peer */
@@ -2420,18 +2466,17 @@ static int xeon_init_ntb(struct intel_ntb_dev *ndev)
ndev->b2b_idx = b2b_mw_idx;
if (ndev->b2b_idx >= ndev->mw_count) {
- dev_dbg(ndev_dev(ndev),
+ dev_dbg(dev,
"b2b_mw_idx %d invalid for mw_count %u\n",
b2b_mw_idx, ndev->mw_count);
return -EINVAL;
}
- dev_dbg(ndev_dev(ndev),
- "setting up b2b mw idx %d means %d\n",
+ dev_dbg(dev, "setting up b2b mw idx %d means %d\n",
b2b_mw_idx, ndev->b2b_idx);
} else if (ndev->hwerr_flags & NTB_HWERR_B2BDOORBELL_BIT14) {
- dev_warn(ndev_dev(ndev), "Reduce doorbell count by 1\n");
+ dev_warn(dev, "Reduce doorbell count by 1\n");
ndev->db_count -= 1;
}
@@ -2472,7 +2517,7 @@ static int xeon_init_dev(struct intel_ntb_dev *ndev)
u8 ppd;
int rc, mem;
- pdev = ndev_pdev(ndev);
+ pdev = ndev->ntb.pdev;
switch (pdev->device) {
/* There is a Xeon hardware errata related to writes to SDOORBELL or
@@ -2548,14 +2593,14 @@ static int xeon_init_dev(struct intel_ntb_dev *ndev)
return -EIO;
ndev->ntb.topo = xeon_ppd_topo(ndev, ppd);
- dev_dbg(ndev_dev(ndev), "ppd %#x topo %s\n", ppd,
+ dev_dbg(&pdev->dev, "ppd %#x topo %s\n", ppd,
ntb_topo_string(ndev->ntb.topo));
if (ndev->ntb.topo == NTB_TOPO_NONE)
return -EINVAL;
if (ndev->ntb.topo != NTB_TOPO_SEC) {
ndev->bar4_split = xeon_ppd_bar4_split(ndev, ppd);
- dev_dbg(ndev_dev(ndev), "ppd %#x bar4_split %d\n",
+ dev_dbg(&pdev->dev, "ppd %#x bar4_split %d\n",
ppd, ndev->bar4_split);
} else {
/* This is a way for transparent BAR to figure out if we are
@@ -2565,7 +2610,7 @@ static int xeon_init_dev(struct intel_ntb_dev *ndev)
mem = pci_select_bars(pdev, IORESOURCE_MEM);
ndev->bar4_split = hweight32(mem) ==
HSX_SPLIT_BAR_MW_COUNT + 1;
- dev_dbg(ndev_dev(ndev), "mem %#x bar4_split %d\n",
+ dev_dbg(&pdev->dev, "mem %#x bar4_split %d\n",
mem, ndev->bar4_split);
}
@@ -2602,7 +2647,7 @@ static int intel_ntb_init_pci(struct intel_ntb_dev *ndev, struct pci_dev *pdev)
rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
if (rc)
goto err_dma_mask;
- dev_warn(ndev_dev(ndev), "Cannot DMA highmem\n");
+ dev_warn(&pdev->dev, "Cannot DMA highmem\n");
}
rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
@@ -2610,7 +2655,7 @@ static int intel_ntb_init_pci(struct intel_ntb_dev *ndev, struct pci_dev *pdev)
rc = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
if (rc)
goto err_dma_mask;
- dev_warn(ndev_dev(ndev), "Cannot DMA consistent highmem\n");
+ dev_warn(&pdev->dev, "Cannot DMA consistent highmem\n");
}
ndev->self_mmio = pci_iomap(pdev, 0, 0);
@@ -2636,7 +2681,7 @@ err_pci_enable:
static void intel_ntb_deinit_pci(struct intel_ntb_dev *ndev)
{
- struct pci_dev *pdev = ndev_pdev(ndev);
+ struct pci_dev *pdev = ndev->ntb.pdev;
if (ndev->peer_mmio && ndev->peer_mmio != ndev->self_mmio)
pci_iounmap(pdev, ndev->peer_mmio);
@@ -2906,8 +2951,10 @@ static const struct intel_ntb_xlat_reg skx_sec_xlat = {
/* operations for primary side of local ntb */
static const struct ntb_dev_ops intel_ntb_ops = {
.mw_count = intel_ntb_mw_count,
- .mw_get_range = intel_ntb_mw_get_range,
+ .mw_get_align = intel_ntb_mw_get_align,
.mw_set_trans = intel_ntb_mw_set_trans,
+ .peer_mw_count = intel_ntb_peer_mw_count,
+ .peer_mw_get_addr = intel_ntb_peer_mw_get_addr,
.link_is_up = intel_ntb_link_is_up,
.link_enable = intel_ntb_link_enable,
.link_disable = intel_ntb_link_disable,
@@ -2932,8 +2979,10 @@ static const struct ntb_dev_ops intel_ntb_ops = {
static const struct ntb_dev_ops intel_ntb3_ops = {
.mw_count = intel_ntb_mw_count,
- .mw_get_range = intel_ntb_mw_get_range,
+ .mw_get_align = intel_ntb_mw_get_align,
.mw_set_trans = intel_ntb3_mw_set_trans,
+ .peer_mw_count = intel_ntb_peer_mw_count,
+ .peer_mw_get_addr = intel_ntb_peer_mw_get_addr,
.link_is_up = intel_ntb_link_is_up,
.link_enable = intel_ntb3_link_enable,
.link_disable = intel_ntb_link_disable,
@@ -3008,4 +3057,3 @@ static void __exit intel_ntb_pci_driver_exit(void)
debugfs_remove_recursive(debugfs_dir);
}
module_exit(intel_ntb_pci_driver_exit);
-
diff --git a/drivers/ntb/hw/intel/ntb_hw_intel.h b/drivers/ntb/hw/intel/ntb_hw_intel.h
index f2cf8a783f1e..2d6c38afb128 100644
--- a/drivers/ntb/hw/intel/ntb_hw_intel.h
+++ b/drivers/ntb/hw/intel/ntb_hw_intel.h
@@ -382,9 +382,6 @@ struct intel_ntb_dev {
struct dentry *debugfs_info;
};
-#define ndev_pdev(ndev) ((ndev)->ntb.pdev)
-#define ndev_name(ndev) pci_name(ndev_pdev(ndev))
-#define ndev_dev(ndev) (&ndev_pdev(ndev)->dev)
#define ntb_ndev(__ntb) container_of(__ntb, struct intel_ntb_dev, ntb)
#define hb_ndev(__work) container_of(__work, struct intel_ntb_dev, \
hb_timer.work)
diff --git a/drivers/ntb/ntb.c b/drivers/ntb/ntb.c
index 2e2530743831..03b80d89b980 100644
--- a/drivers/ntb/ntb.c
+++ b/drivers/ntb/ntb.c
@@ -5,6 +5,7 @@
* GPL LICENSE SUMMARY
*
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -18,6 +19,7 @@
* BSD LICENSE
*
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -191,6 +193,73 @@ void ntb_db_event(struct ntb_dev *ntb, int vector)
}
EXPORT_SYMBOL(ntb_db_event);
+void ntb_msg_event(struct ntb_dev *ntb)
+{
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&ntb->ctx_lock, irqflags);
+ {
+ if (ntb->ctx_ops && ntb->ctx_ops->msg_event)
+ ntb->ctx_ops->msg_event(ntb->ctx);
+ }
+ spin_unlock_irqrestore(&ntb->ctx_lock, irqflags);
+}
+EXPORT_SYMBOL(ntb_msg_event);
+
+int ntb_default_port_number(struct ntb_dev *ntb)
+{
+ switch (ntb->topo) {
+ case NTB_TOPO_PRI:
+ case NTB_TOPO_B2B_USD:
+ return NTB_PORT_PRI_USD;
+ case NTB_TOPO_SEC:
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_SEC_DSD;
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ntb_default_port_number);
+
+int ntb_default_peer_port_count(struct ntb_dev *ntb)
+{
+ return NTB_DEF_PEER_CNT;
+}
+EXPORT_SYMBOL(ntb_default_peer_port_count);
+
+int ntb_default_peer_port_number(struct ntb_dev *ntb, int pidx)
+{
+ if (pidx != NTB_DEF_PEER_IDX)
+ return -EINVAL;
+
+ switch (ntb->topo) {
+ case NTB_TOPO_PRI:
+ case NTB_TOPO_B2B_USD:
+ return NTB_PORT_SEC_DSD;
+ case NTB_TOPO_SEC:
+ case NTB_TOPO_B2B_DSD:
+ return NTB_PORT_PRI_USD;
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+EXPORT_SYMBOL(ntb_default_peer_port_number);
+
+int ntb_default_peer_port_idx(struct ntb_dev *ntb, int port)
+{
+ int peer_port = ntb_default_peer_port_number(ntb, NTB_DEF_PEER_IDX);
+
+ if (peer_port == -EINVAL || port != peer_port)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(ntb_default_peer_port_idx);
+
static int ntb_probe(struct device *dev)
{
struct ntb_dev *ntb;
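/*
 * Illustrative sketch (not part of the patch): with the indexed-ports API a
 * plain two-port driver can leave the port_number()/peer_port_*() callbacks
 * unset and rely on the ntb_default_*() helpers above.  A client resolving
 * its single peer might then look like this; "demo_find_peer" is a
 * hypothetical name used only for this example.
 */
static int demo_find_peer(struct ntb_dev *ntb)
{
	int port, pidx;

	if (ntb_peer_port_count(ntb) != NTB_DEF_PEER_CNT)
		return -EINVAL;		/* this sketch handles two ports only */

	port = ntb_peer_port_number(ntb, NTB_DEF_PEER_IDX);
	if (port < 0)
		return port;

	/* inverse lookup; expected to be 0 for two-port hardware */
	pidx = ntb_peer_port_idx(ntb, port);
	return pidx;
}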
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
index 10e5bf460139..9a03c5871efe 100644
--- a/drivers/ntb/ntb_transport.c
+++ b/drivers/ntb/ntb_transport.c
@@ -95,6 +95,9 @@ MODULE_PARM_DESC(use_dma, "Use DMA engine to perform large data copy");
static struct dentry *nt_debugfs_dir;
+/* Only two-port NTB devices are supported */
+#define PIDX NTB_DEF_PEER_IDX
+
struct ntb_queue_entry {
/* ntb_queue list reference */
struct list_head entry;
@@ -670,7 +673,7 @@ static void ntb_free_mw(struct ntb_transport_ctx *nt, int num_mw)
if (!mw->virt_addr)
return;
- ntb_mw_clear_trans(nt->ndev, num_mw);
+ ntb_mw_clear_trans(nt->ndev, PIDX, num_mw);
dma_free_coherent(&pdev->dev, mw->buff_size,
mw->virt_addr, mw->dma_addr);
mw->xlat_size = 0;
@@ -727,7 +730,8 @@ static int ntb_set_mw(struct ntb_transport_ctx *nt, int num_mw,
}
/* Notify HW the memory location of the receive buffer */
- rc = ntb_mw_set_trans(nt->ndev, num_mw, mw->dma_addr, mw->xlat_size);
+ rc = ntb_mw_set_trans(nt->ndev, PIDX, num_mw, mw->dma_addr,
+ mw->xlat_size);
if (rc) {
dev_err(&pdev->dev, "Unable to set mw%d translation", num_mw);
ntb_free_mw(nt, num_mw);
@@ -858,17 +862,17 @@ static void ntb_transport_link_work(struct work_struct *work)
size = max_mw_size;
spad = MW0_SZ_HIGH + (i * 2);
- ntb_peer_spad_write(ndev, spad, upper_32_bits(size));
+ ntb_peer_spad_write(ndev, PIDX, spad, upper_32_bits(size));
spad = MW0_SZ_LOW + (i * 2);
- ntb_peer_spad_write(ndev, spad, lower_32_bits(size));
+ ntb_peer_spad_write(ndev, PIDX, spad, lower_32_bits(size));
}
- ntb_peer_spad_write(ndev, NUM_MWS, nt->mw_count);
+ ntb_peer_spad_write(ndev, PIDX, NUM_MWS, nt->mw_count);
- ntb_peer_spad_write(ndev, NUM_QPS, nt->qp_count);
+ ntb_peer_spad_write(ndev, PIDX, NUM_QPS, nt->qp_count);
- ntb_peer_spad_write(ndev, VERSION, NTB_TRANSPORT_VERSION);
+ ntb_peer_spad_write(ndev, PIDX, VERSION, NTB_TRANSPORT_VERSION);
/* Query the remote side for its info */
val = ntb_spad_read(ndev, VERSION);
@@ -944,7 +948,7 @@ static void ntb_qp_link_work(struct work_struct *work)
val = ntb_spad_read(nt->ndev, QP_LINKS);
- ntb_peer_spad_write(nt->ndev, QP_LINKS, val | BIT(qp->qp_num));
+ ntb_peer_spad_write(nt->ndev, PIDX, QP_LINKS, val | BIT(qp->qp_num));
/* query remote spad for qp ready bits */
dev_dbg_ratelimited(&pdev->dev, "Remote QP link status = %x\n", val);
@@ -1055,7 +1059,12 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
int node;
int rc, i;
- mw_count = ntb_mw_count(ndev);
+ mw_count = ntb_mw_count(ndev, PIDX);
+
+ if (!ndev->ops->mw_set_trans) {
+ dev_err(&ndev->dev, "Inbound MW based NTB API is required\n");
+ return -EINVAL;
+ }
if (ntb_db_is_unsafe(ndev))
dev_dbg(&ndev->dev,
@@ -1064,6 +1073,9 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
dev_dbg(&ndev->dev,
"scratchpad is unsafe, proceed anyway...\n");
+ if (ntb_peer_port_count(ndev) != NTB_DEF_PEER_CNT)
+ dev_warn(&ndev->dev, "Multi-port NTB devices unsupported\n");
+
node = dev_to_node(&ndev->dev);
nt = kzalloc_node(sizeof(*nt), GFP_KERNEL, node);
@@ -1094,8 +1106,13 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
for (i = 0; i < mw_count; i++) {
mw = &nt->mw_vec[i];
- rc = ntb_mw_get_range(ndev, i, &mw->phys_addr, &mw->phys_size,
- &mw->xlat_align, &mw->xlat_align_size);
+ rc = ntb_mw_get_align(ndev, PIDX, i, &mw->xlat_align,
+ &mw->xlat_align_size, NULL);
+ if (rc)
+ goto err1;
+
+ rc = ntb_peer_mw_get_addr(ndev, i, &mw->phys_addr,
+ &mw->phys_size);
if (rc)
goto err1;
@@ -2091,8 +2108,7 @@ void ntb_transport_link_down(struct ntb_transport_qp *qp)
val = ntb_spad_read(qp->ndev, QP_LINKS);
- ntb_peer_spad_write(qp->ndev, QP_LINKS,
- val & ~BIT(qp->qp_num));
+ ntb_peer_spad_write(qp->ndev, PIDX, QP_LINKS, val & ~BIT(qp->qp_num));
if (qp->link_is_up)
ntb_send_link_down(qp);
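/*
 * Illustrative sketch (not part of the patch) of the two-call setup that
 * replaces ntb_mw_get_range() above: ntb_mw_get_align() reports the inbound
 * restrictions and ntb_mw_set_trans() programs the translation.  The
 * "demo_setup_inbound_mw" name and its argument list are assumptions made
 * for this example only.
 */
static int demo_setup_inbound_mw(struct ntb_dev *ntb, int widx,
				 resource_size_t size)
{
	resource_size_t addr_align, size_align, size_max;
	dma_addr_t dma_addr;
	void *virt;
	int rc;

	rc = ntb_mw_get_align(ntb, NTB_DEF_PEER_IDX, widx,
			      &addr_align, &size_align, &size_max);
	if (rc)
		return rc;

	size = round_up(min(size, size_max), size_align);
	virt = dma_alloc_coherent(&ntb->pdev->dev, size, &dma_addr, GFP_KERNEL);
	if (!virt)
		return -ENOMEM;

	/* expose the buffer to the (only) peer through window widx */
	rc = ntb_mw_set_trans(ntb, NTB_DEF_PEER_IDX, widx, dma_addr, size);
	if (rc)
		dma_free_coherent(&ntb->pdev->dev, size, virt, dma_addr);

	return rc;
}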
diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
index 5cab2831ce99..759f772fa00c 100644
--- a/drivers/ntb/test/ntb_perf.c
+++ b/drivers/ntb/test/ntb_perf.c
@@ -76,6 +76,7 @@
#define DMA_RETRIES 20
#define SZ_4G (1ULL << 32)
#define MAX_SEG_ORDER 20 /* no larger than 1M for kmalloc buffer */
+#define PIDX NTB_DEF_PEER_IDX
MODULE_LICENSE(DRIVER_LICENSE);
MODULE_VERSION(DRIVER_VERSION);
@@ -100,6 +101,10 @@ static bool use_dma; /* default to 0 */
module_param(use_dma, bool, 0644);
MODULE_PARM_DESC(use_dma, "Using DMA engine to measure performance");
+static bool on_node = true; /* default to 1 */
+module_param(on_node, bool, 0644);
+MODULE_PARM_DESC(on_node, "Run threads only on NTB device node (default: true)");
+
struct perf_mw {
phys_addr_t phys_addr;
resource_size_t phys_size;
@@ -135,9 +140,6 @@ struct perf_ctx {
bool link_is_up;
struct delayed_work link_work;
wait_queue_head_t link_wq;
- struct dentry *debugfs_node_dir;
- struct dentry *debugfs_run;
- struct dentry *debugfs_threads;
u8 perf_threads;
/* mutex ensures only one set of threads run at once */
struct mutex run_mutex;
@@ -344,6 +346,10 @@ static int perf_move_data(struct pthr_ctx *pctx, char __iomem *dst, char *src,
static bool perf_dma_filter_fn(struct dma_chan *chan, void *node)
{
+ /* Is the channel required to be on the same node as the device? */
+ if (!on_node)
+ return true;
+
return dev_to_node(&chan->dev->device) == (int)(unsigned long)node;
}
@@ -361,7 +367,7 @@ static int ntb_perf_thread(void *data)
pr_debug("kthread %s starting...\n", current->comm);
- node = dev_to_node(&pdev->dev);
+ node = on_node ? dev_to_node(&pdev->dev) : NUMA_NO_NODE;
if (use_dma && !pctx->dma_chan) {
dma_cap_mask_t dma_mask;
@@ -454,7 +460,7 @@ static void perf_free_mw(struct perf_ctx *perf)
if (!mw->virt_addr)
return;
- ntb_mw_clear_trans(perf->ntb, 0);
+ ntb_mw_clear_trans(perf->ntb, PIDX, 0);
dma_free_coherent(&pdev->dev, mw->buf_size,
mw->virt_addr, mw->dma_addr);
mw->xlat_size = 0;
@@ -490,7 +496,7 @@ static int perf_set_mw(struct perf_ctx *perf, resource_size_t size)
mw->buf_size = 0;
}
- rc = ntb_mw_set_trans(perf->ntb, 0, mw->dma_addr, mw->xlat_size);
+ rc = ntb_mw_set_trans(perf->ntb, PIDX, 0, mw->dma_addr, mw->xlat_size);
if (rc) {
dev_err(&perf->ntb->dev, "Unable to set mw0 translation\n");
perf_free_mw(perf);
@@ -517,9 +523,9 @@ static void perf_link_work(struct work_struct *work)
if (max_mw_size && size > max_mw_size)
size = max_mw_size;
- ntb_peer_spad_write(ndev, MW_SZ_HIGH, upper_32_bits(size));
- ntb_peer_spad_write(ndev, MW_SZ_LOW, lower_32_bits(size));
- ntb_peer_spad_write(ndev, VERSION, PERF_VERSION);
+ ntb_peer_spad_write(ndev, PIDX, MW_SZ_HIGH, upper_32_bits(size));
+ ntb_peer_spad_write(ndev, PIDX, MW_SZ_LOW, lower_32_bits(size));
+ ntb_peer_spad_write(ndev, PIDX, VERSION, PERF_VERSION);
/* now read what peer wrote */
val = ntb_spad_read(ndev, VERSION);
@@ -561,8 +567,12 @@ static int perf_setup_mw(struct ntb_dev *ntb, struct perf_ctx *perf)
mw = &perf->mw;
- rc = ntb_mw_get_range(ntb, 0, &mw->phys_addr, &mw->phys_size,
- &mw->xlat_align, &mw->xlat_align_size);
+ rc = ntb_mw_get_align(ntb, PIDX, 0, &mw->xlat_align,
+ &mw->xlat_align_size, NULL);
+ if (rc)
+ return rc;
+
+ rc = ntb_peer_mw_get_addr(ntb, 0, &mw->phys_addr, &mw->phys_size);
if (rc)
return rc;
@@ -677,7 +687,8 @@ static ssize_t debugfs_run_write(struct file *filp, const char __user *ubuf,
pr_info("Fix run_order to %u\n", run_order);
}
- node = dev_to_node(&perf->ntb->pdev->dev);
+ node = on_node ? dev_to_node(&perf->ntb->pdev->dev)
+ : NUMA_NO_NODE;
atomic_set(&perf->tdone, 0);
/* launch kernel thread */
@@ -723,34 +734,71 @@ static const struct file_operations ntb_perf_debugfs_run = {
static int perf_debugfs_setup(struct perf_ctx *perf)
{
struct pci_dev *pdev = perf->ntb->pdev;
+ struct dentry *debugfs_node_dir;
+ struct dentry *debugfs_run;
+ struct dentry *debugfs_threads;
+ struct dentry *debugfs_seg_order;
+ struct dentry *debugfs_run_order;
+ struct dentry *debugfs_use_dma;
+ struct dentry *debugfs_on_node;
if (!debugfs_initialized())
return -ENODEV;
+	/* Assumption: only one NTB device in the system */
if (!perf_debugfs_dir) {
perf_debugfs_dir = debugfs_create_dir(KBUILD_MODNAME, NULL);
if (!perf_debugfs_dir)
return -ENODEV;
}
- perf->debugfs_node_dir = debugfs_create_dir(pci_name(pdev),
- perf_debugfs_dir);
- if (!perf->debugfs_node_dir)
- return -ENODEV;
+ debugfs_node_dir = debugfs_create_dir(pci_name(pdev),
+ perf_debugfs_dir);
+ if (!debugfs_node_dir)
+ goto err;
- perf->debugfs_run = debugfs_create_file("run", S_IRUSR | S_IWUSR,
- perf->debugfs_node_dir, perf,
- &ntb_perf_debugfs_run);
- if (!perf->debugfs_run)
- return -ENODEV;
+ debugfs_run = debugfs_create_file("run", S_IRUSR | S_IWUSR,
+ debugfs_node_dir, perf,
+ &ntb_perf_debugfs_run);
+ if (!debugfs_run)
+ goto err;
- perf->debugfs_threads = debugfs_create_u8("threads", S_IRUSR | S_IWUSR,
- perf->debugfs_node_dir,
- &perf->perf_threads);
- if (!perf->debugfs_threads)
- return -ENODEV;
+ debugfs_threads = debugfs_create_u8("threads", S_IRUSR | S_IWUSR,
+ debugfs_node_dir,
+ &perf->perf_threads);
+ if (!debugfs_threads)
+ goto err;
+
+ debugfs_seg_order = debugfs_create_u32("seg_order", 0600,
+ debugfs_node_dir,
+ &seg_order);
+ if (!debugfs_seg_order)
+ goto err;
+
+ debugfs_run_order = debugfs_create_u32("run_order", 0600,
+ debugfs_node_dir,
+ &run_order);
+ if (!debugfs_run_order)
+ goto err;
+
+ debugfs_use_dma = debugfs_create_bool("use_dma", 0600,
+ debugfs_node_dir,
+ &use_dma);
+ if (!debugfs_use_dma)
+ goto err;
+
+ debugfs_on_node = debugfs_create_bool("on_node", 0600,
+ debugfs_node_dir,
+ &on_node);
+ if (!debugfs_on_node)
+ goto err;
return 0;
+
+err:
+ debugfs_remove_recursive(perf_debugfs_dir);
+ perf_debugfs_dir = NULL;
+ return -ENODEV;
}
static int perf_probe(struct ntb_client *client, struct ntb_dev *ntb)
@@ -766,8 +814,15 @@ static int perf_probe(struct ntb_client *client, struct ntb_dev *ntb)
return -EIO;
}
- node = dev_to_node(&pdev->dev);
+ if (!ntb->ops->mw_set_trans) {
+ dev_err(&ntb->dev, "Need inbound MW based NTB API\n");
+ return -EINVAL;
+ }
+
+ if (ntb_peer_port_count(ntb) != NTB_DEF_PEER_CNT)
+ dev_warn(&ntb->dev, "Multi-port NTB devices unsupported\n");
+ node = on_node ? dev_to_node(&pdev->dev) : NUMA_NO_NODE;
perf = kzalloc_node(sizeof(*perf), GFP_KERNEL, node);
if (!perf) {
rc = -ENOMEM;
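/*
 * Illustrative sketch (not part of the patch) of how the on_node parameter
 * above interacts with DMA channel selection: when on_node is false the
 * filter gets NUMA_NO_NODE and stops rejecting off-node channels.  The
 * "demo_request_chan" name is hypothetical; perf_dma_filter_fn() is the
 * filter defined earlier in this file.
 */
static struct dma_chan *demo_request_chan(struct pci_dev *pdev, bool on_node)
{
	dma_cap_mask_t mask;
	int node = on_node ? dev_to_node(&pdev->dev) : NUMA_NO_NODE;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	return dma_request_channel(mask, perf_dma_filter_fn,
				   (void *)(unsigned long)node);
}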
diff --git a/drivers/ntb/test/ntb_pingpong.c b/drivers/ntb/test/ntb_pingpong.c
index 435861189d97..938a18bcfc3f 100644
--- a/drivers/ntb/test/ntb_pingpong.c
+++ b/drivers/ntb/test/ntb_pingpong.c
@@ -90,6 +90,9 @@ static unsigned long db_init = 0x7;
module_param(db_init, ulong, 0644);
MODULE_PARM_DESC(db_init, "Initial doorbell bits to ring on the peer");
+/* Only two-port NTB devices are supported */
+#define PIDX NTB_DEF_PEER_IDX
+
struct pp_ctx {
struct ntb_dev *ntb;
u64 db_bits;
@@ -135,7 +138,7 @@ static void pp_ping(unsigned long ctx)
"Ping bits %#llx read %#x write %#x\n",
db_bits, spad_rd, spad_wr);
- ntb_peer_spad_write(pp->ntb, 0, spad_wr);
+ ntb_peer_spad_write(pp->ntb, PIDX, 0, spad_wr);
ntb_peer_db_set(pp->ntb, db_bits);
ntb_db_clear_mask(pp->ntb, db_mask);
@@ -222,6 +225,12 @@ static int pp_probe(struct ntb_client *client,
}
}
+ if (ntb_spad_count(ntb) < 1) {
+		dev_dbg(&ntb->dev, "not enough scratchpads\n");
+ rc = -EINVAL;
+ goto err_pp;
+ }
+
if (ntb_spad_is_unsafe(ntb)) {
dev_dbg(&ntb->dev, "scratchpad is unsafe\n");
if (!unsafe) {
@@ -230,6 +239,9 @@ static int pp_probe(struct ntb_client *client,
}
}
+ if (ntb_peer_port_count(ntb) != NTB_DEF_PEER_CNT)
+ dev_warn(&ntb->dev, "multi-port NTB is unsupported\n");
+
pp = kmalloc(sizeof(*pp), GFP_KERNEL);
if (!pp) {
rc = -ENOMEM;
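/*
 * Illustrative sketch (not part of the patch): the ping path with the
 * per-peer scratchpad API only gains the extra PIDX argument.  The
 * "demo_ping_once" name is hypothetical.
 */
static void demo_ping_once(struct ntb_dev *ntb, u64 db_bits, u32 seq)
{
	ntb_peer_spad_write(ntb, NTB_DEF_PEER_IDX, 0, seq);	/* publish counter */
	ntb_peer_db_set(ntb, db_bits);				/* ring the peer */
}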
diff --git a/drivers/ntb/test/ntb_tool.c b/drivers/ntb/test/ntb_tool.c
index 61bf2ef87e0e..f002bf48a08d 100644
--- a/drivers/ntb/test/ntb_tool.c
+++ b/drivers/ntb/test/ntb_tool.c
@@ -119,7 +119,10 @@ MODULE_VERSION(DRIVER_VERSION);
MODULE_AUTHOR(DRIVER_AUTHOR);
MODULE_DESCRIPTION(DRIVER_DESCRIPTION);
-#define MAX_MWS 16
+/* It is rare to have hardware with more than six MWs */
+#define MAX_MWS 6
+/* Only two-port devices are supported */
+#define PIDX NTB_DEF_PEER_IDX
static struct dentry *tool_dbgfs;
@@ -459,13 +462,22 @@ static TOOL_FOPS_RDWR(tool_spad_fops,
tool_spad_read,
tool_spad_write);
+static u32 ntb_tool_peer_spad_read(struct ntb_dev *ntb, int sidx)
+{
+ return ntb_peer_spad_read(ntb, PIDX, sidx);
+}
+
static ssize_t tool_peer_spad_read(struct file *filep, char __user *ubuf,
size_t size, loff_t *offp)
{
struct tool_ctx *tc = filep->private_data;
- return tool_spadfn_read(tc, ubuf, size, offp,
- tc->ntb->ops->peer_spad_read);
+ return tool_spadfn_read(tc, ubuf, size, offp, ntb_tool_peer_spad_read);
+}
+
+static int ntb_tool_peer_spad_write(struct ntb_dev *ntb, int sidx, u32 val)
+{
+ return ntb_peer_spad_write(ntb, PIDX, sidx, val);
}
static ssize_t tool_peer_spad_write(struct file *filep, const char __user *ubuf,
@@ -474,7 +486,7 @@ static ssize_t tool_peer_spad_write(struct file *filep, const char __user *ubuf,
struct tool_ctx *tc = filep->private_data;
return tool_spadfn_write(tc, ubuf, size, offp,
- tc->ntb->ops->peer_spad_write);
+ ntb_tool_peer_spad_write);
}
static TOOL_FOPS_RDWR(tool_peer_spad_fops,
@@ -668,28 +680,27 @@ static int tool_setup_mw(struct tool_ctx *tc, int idx, size_t req_size)
{
int rc;
struct tool_mw *mw = &tc->mws[idx];
- phys_addr_t base;
- resource_size_t size, align, align_size;
+ resource_size_t size, align_addr, align_size;
char buf[16];
if (mw->peer)
return 0;
- rc = ntb_mw_get_range(tc->ntb, idx, &base, &size, &align,
- &align_size);
+ rc = ntb_mw_get_align(tc->ntb, PIDX, idx, &align_addr,
+ &align_size, &size);
if (rc)
return rc;
mw->size = min_t(resource_size_t, req_size, size);
- mw->size = round_up(mw->size, align);
+ mw->size = round_up(mw->size, align_addr);
mw->size = round_up(mw->size, align_size);
mw->peer = dma_alloc_coherent(&tc->ntb->pdev->dev, mw->size,
&mw->peer_dma, GFP_KERNEL);
- if (!mw->peer)
+ if (!mw->peer || !IS_ALIGNED(mw->peer_dma, align_addr))
return -ENOMEM;
- rc = ntb_mw_set_trans(tc->ntb, idx, mw->peer_dma, mw->size);
+ rc = ntb_mw_set_trans(tc->ntb, PIDX, idx, mw->peer_dma, mw->size);
if (rc)
goto err_free_dma;
@@ -716,7 +727,7 @@ static void tool_free_mw(struct tool_ctx *tc, int idx)
struct tool_mw *mw = &tc->mws[idx];
if (mw->peer) {
- ntb_mw_clear_trans(tc->ntb, idx);
+ ntb_mw_clear_trans(tc->ntb, PIDX, idx);
dma_free_coherent(&tc->ntb->pdev->dev, mw->size,
mw->peer,
mw->peer_dma);
@@ -742,8 +753,9 @@ static ssize_t tool_peer_mw_trans_read(struct file *filep,
phys_addr_t base;
resource_size_t mw_size;
- resource_size_t align;
+ resource_size_t align_addr;
resource_size_t align_size;
+ resource_size_t max_size;
buf_size = min_t(size_t, size, 512);
@@ -751,8 +763,9 @@ static ssize_t tool_peer_mw_trans_read(struct file *filep,
if (!buf)
return -ENOMEM;
- ntb_mw_get_range(mw->tc->ntb, mw->idx,
- &base, &mw_size, &align, &align_size);
+ ntb_mw_get_align(mw->tc->ntb, PIDX, mw->idx,
+ &align_addr, &align_size, &max_size);
+ ntb_peer_mw_get_addr(mw->tc->ntb, mw->idx, &base, &mw_size);
off += scnprintf(buf + off, buf_size - off,
"Peer MW %d Information:\n", mw->idx);
@@ -767,13 +780,17 @@ static ssize_t tool_peer_mw_trans_read(struct file *filep,
off += scnprintf(buf + off, buf_size - off,
"Alignment \t%lld\n",
- (unsigned long long)align);
+ (unsigned long long)align_addr);
off += scnprintf(buf + off, buf_size - off,
"Size Alignment \t%lld\n",
(unsigned long long)align_size);
off += scnprintf(buf + off, buf_size - off,
+ "Size Max \t%lld\n",
+ (unsigned long long)max_size);
+
+ off += scnprintf(buf + off, buf_size - off,
"Ready \t%c\n",
(mw->peer) ? 'Y' : 'N');
@@ -827,8 +844,7 @@ static int tool_init_mw(struct tool_ctx *tc, int idx)
phys_addr_t base;
int rc;
- rc = ntb_mw_get_range(tc->ntb, idx, &base, &mw->win_size,
- NULL, NULL);
+ rc = ntb_peer_mw_get_addr(tc->ntb, idx, &base, &mw->win_size);
if (rc)
return rc;
@@ -913,12 +929,27 @@ static int tool_probe(struct ntb_client *self, struct ntb_dev *ntb)
int rc;
int i;
+ if (!ntb->ops->mw_set_trans) {
+ dev_dbg(&ntb->dev, "need inbound MW based NTB API\n");
+ rc = -EINVAL;
+ goto err_tc;
+ }
+
+ if (ntb_spad_count(ntb) < 1) {
+		dev_dbg(&ntb->dev, "not enough scratchpads\n");
+ rc = -EINVAL;
+ goto err_tc;
+ }
+
if (ntb_db_is_unsafe(ntb))
dev_dbg(&ntb->dev, "doorbell is unsafe\n");
if (ntb_spad_is_unsafe(ntb))
dev_dbg(&ntb->dev, "scratchpad is unsafe\n");
+ if (ntb_peer_port_count(ntb) != NTB_DEF_PEER_CNT)
+ dev_warn(&ntb->dev, "multi-port NTB is unsupported\n");
+
tc = kzalloc(sizeof(*tc), GFP_KERNEL);
if (!tc) {
rc = -ENOMEM;
@@ -928,7 +959,7 @@ static int tool_probe(struct ntb_client *self, struct ntb_dev *ntb)
tc->ntb = ntb;
init_waitqueue_head(&tc->link_wq);
- tc->mw_count = min(ntb_mw_count(tc->ntb), MAX_MWS);
+ tc->mw_count = min(ntb_mw_count(tc->ntb, PIDX), MAX_MWS);
for (i = 0; i < tc->mw_count; i++) {
rc = tool_init_mw(tc, i);
if (rc)
diff --git a/include/linux/ntb.h b/include/linux/ntb.h
index de87ceac110e..609e232c00da 100644
--- a/include/linux/ntb.h
+++ b/include/linux/ntb.h
@@ -5,6 +5,7 @@
* GPL LICENSE SUMMARY
*
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -18,6 +19,7 @@
* BSD LICENSE
*
* Copyright (C) 2015 EMC Corporation. All Rights Reserved.
+ * Copyright (C) 2016 T-Platforms. All Rights Reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -106,6 +108,7 @@ static inline char *ntb_topo_string(enum ntb_topo topo)
* @NTB_SPEED_GEN1: Link is trained to gen1 speed.
* @NTB_SPEED_GEN2: Link is trained to gen2 speed.
* @NTB_SPEED_GEN3: Link is trained to gen3 speed.
+ * @NTB_SPEED_GEN4: Link is trained to gen4 speed.
*/
enum ntb_speed {
NTB_SPEED_AUTO = -1,
@@ -113,6 +116,7 @@ enum ntb_speed {
NTB_SPEED_GEN1 = 1,
NTB_SPEED_GEN2 = 2,
NTB_SPEED_GEN3 = 3,
+ NTB_SPEED_GEN4 = 4
};
/**
@@ -140,6 +144,20 @@ enum ntb_width {
};
/**
+ * enum ntb_default_port - NTB default port number
+ * @NTB_PORT_PRI_USD: Default port of the NTB_TOPO_PRI/NTB_TOPO_B2B_USD
+ * topologies
+ * @NTB_PORT_SEC_DSD: Default port of the NTB_TOPO_SEC/NTB_TOPO_B2B_DSD
+ * topologies
+ */
+enum ntb_default_port {
+ NTB_PORT_PRI_USD,
+ NTB_PORT_SEC_DSD
+};
+#define NTB_DEF_PEER_CNT (1)
+#define NTB_DEF_PEER_IDX (0)
+
+/**
* struct ntb_client_ops - ntb client operations
* @probe: Notify client of a new device.
* @remove: Notify client to remove a device.
@@ -162,10 +180,12 @@ static inline int ntb_client_ops_is_valid(const struct ntb_client_ops *ops)
* struct ntb_ctx_ops - ntb driver context operations
* @link_event: See ntb_link_event().
* @db_event: See ntb_db_event().
+ * @msg_event: See ntb_msg_event().
*/
struct ntb_ctx_ops {
void (*link_event)(void *ctx);
void (*db_event)(void *ctx, int db_vector);
+ void (*msg_event)(void *ctx);
};
static inline int ntb_ctx_ops_is_valid(const struct ntb_ctx_ops *ops)
@@ -174,18 +194,27 @@ static inline int ntb_ctx_ops_is_valid(const struct ntb_ctx_ops *ops)
return
/* ops->link_event && */
/* ops->db_event && */
+ /* ops->msg_event && */
1;
}
/**
* struct ntb_ctx_ops - ntb device operations
- * @mw_count: See ntb_mw_count().
- * @mw_get_range: See ntb_mw_get_range().
- * @mw_set_trans: See ntb_mw_set_trans().
- * @mw_clear_trans: See ntb_mw_clear_trans().
+ * @port_number: See ntb_port_number().
+ * @peer_port_count: See ntb_peer_port_count().
+ * @peer_port_number: See ntb_peer_port_number().
+ * @peer_port_idx: See ntb_peer_port_idx().
* @link_is_up: See ntb_link_is_up().
* @link_enable: See ntb_link_enable().
* @link_disable: See ntb_link_disable().
+ * @mw_count: See ntb_mw_count().
+ * @mw_get_align: See ntb_mw_get_align().
+ * @mw_set_trans: See ntb_mw_set_trans().
+ * @mw_clear_trans: See ntb_mw_clear_trans().
+ * @peer_mw_count: See ntb_peer_mw_count().
+ * @peer_mw_get_addr: See ntb_peer_mw_get_addr().
+ * @peer_mw_set_trans: See ntb_peer_mw_set_trans().
+ * @peer_mw_clear_trans:See ntb_peer_mw_clear_trans().
* @db_is_unsafe: See ntb_db_is_unsafe().
* @db_valid_mask: See ntb_db_valid_mask().
* @db_vector_count: See ntb_db_vector_count().
@@ -210,22 +239,43 @@ static inline int ntb_ctx_ops_is_valid(const struct ntb_ctx_ops *ops)
* @peer_spad_addr: See ntb_peer_spad_addr().
* @peer_spad_read: See ntb_peer_spad_read().
* @peer_spad_write: See ntb_peer_spad_write().
+ * @msg_count: See ntb_msg_count().
+ * @msg_inbits: See ntb_msg_inbits().
+ * @msg_outbits: See ntb_msg_outbits().
+ * @msg_read_sts: See ntb_msg_read_sts().
+ * @msg_clear_sts: See ntb_msg_clear_sts().
+ * @msg_set_mask: See ntb_msg_set_mask().
+ * @msg_clear_mask: See ntb_msg_clear_mask().
+ * @msg_read: See ntb_msg_read().
+ * @msg_write: See ntb_msg_write().
*/
struct ntb_dev_ops {
- int (*mw_count)(struct ntb_dev *ntb);
- int (*mw_get_range)(struct ntb_dev *ntb, int idx,
- phys_addr_t *base, resource_size_t *size,
- resource_size_t *align, resource_size_t *align_size);
- int (*mw_set_trans)(struct ntb_dev *ntb, int idx,
- dma_addr_t addr, resource_size_t size);
- int (*mw_clear_trans)(struct ntb_dev *ntb, int idx);
+ int (*port_number)(struct ntb_dev *ntb);
+ int (*peer_port_count)(struct ntb_dev *ntb);
+ int (*peer_port_number)(struct ntb_dev *ntb, int pidx);
+ int (*peer_port_idx)(struct ntb_dev *ntb, int port);
- int (*link_is_up)(struct ntb_dev *ntb,
+ u64 (*link_is_up)(struct ntb_dev *ntb,
enum ntb_speed *speed, enum ntb_width *width);
int (*link_enable)(struct ntb_dev *ntb,
enum ntb_speed max_speed, enum ntb_width max_width);
int (*link_disable)(struct ntb_dev *ntb);
+ int (*mw_count)(struct ntb_dev *ntb, int pidx);
+ int (*mw_get_align)(struct ntb_dev *ntb, int pidx, int widx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max);
+ int (*mw_set_trans)(struct ntb_dev *ntb, int pidx, int widx,
+ dma_addr_t addr, resource_size_t size);
+ int (*mw_clear_trans)(struct ntb_dev *ntb, int pidx, int widx);
+ int (*peer_mw_count)(struct ntb_dev *ntb);
+ int (*peer_mw_get_addr)(struct ntb_dev *ntb, int widx,
+ phys_addr_t *base, resource_size_t *size);
+ int (*peer_mw_set_trans)(struct ntb_dev *ntb, int pidx, int widx,
+ u64 addr, resource_size_t size);
+ int (*peer_mw_clear_trans)(struct ntb_dev *ntb, int pidx, int widx);
+
int (*db_is_unsafe)(struct ntb_dev *ntb);
u64 (*db_valid_mask)(struct ntb_dev *ntb);
int (*db_vector_count)(struct ntb_dev *ntb);
@@ -252,32 +302,55 @@ struct ntb_dev_ops {
int (*spad_is_unsafe)(struct ntb_dev *ntb);
int (*spad_count)(struct ntb_dev *ntb);
- u32 (*spad_read)(struct ntb_dev *ntb, int idx);
- int (*spad_write)(struct ntb_dev *ntb, int idx, u32 val);
+ u32 (*spad_read)(struct ntb_dev *ntb, int sidx);
+ int (*spad_write)(struct ntb_dev *ntb, int sidx, u32 val);
- int (*peer_spad_addr)(struct ntb_dev *ntb, int idx,
+ int (*peer_spad_addr)(struct ntb_dev *ntb, int pidx, int sidx,
phys_addr_t *spad_addr);
- u32 (*peer_spad_read)(struct ntb_dev *ntb, int idx);
- int (*peer_spad_write)(struct ntb_dev *ntb, int idx, u32 val);
+ u32 (*peer_spad_read)(struct ntb_dev *ntb, int pidx, int sidx);
+ int (*peer_spad_write)(struct ntb_dev *ntb, int pidx, int sidx,
+ u32 val);
+
+ int (*msg_count)(struct ntb_dev *ntb);
+ u64 (*msg_inbits)(struct ntb_dev *ntb);
+ u64 (*msg_outbits)(struct ntb_dev *ntb);
+ u64 (*msg_read_sts)(struct ntb_dev *ntb);
+ int (*msg_clear_sts)(struct ntb_dev *ntb, u64 sts_bits);
+ int (*msg_set_mask)(struct ntb_dev *ntb, u64 mask_bits);
+ int (*msg_clear_mask)(struct ntb_dev *ntb, u64 mask_bits);
+ int (*msg_read)(struct ntb_dev *ntb, int midx, int *pidx, u32 *msg);
+ int (*msg_write)(struct ntb_dev *ntb, int midx, int pidx, u32 msg);
};
static inline int ntb_dev_ops_is_valid(const struct ntb_dev_ops *ops)
{
/* commented callbacks are not required: */
return
- ops->mw_count &&
- ops->mw_get_range &&
- ops->mw_set_trans &&
- /* ops->mw_clear_trans && */
+ /* Port operations are required for multiport devices */
+ !ops->peer_port_count == !ops->port_number &&
+ !ops->peer_port_number == !ops->port_number &&
+ !ops->peer_port_idx == !ops->port_number &&
+
+ /* Link operations are required */
ops->link_is_up &&
ops->link_enable &&
ops->link_disable &&
+
+	/* One or both MW interfaces should be implemented */
+ ops->mw_count &&
+ ops->mw_get_align &&
+ (ops->mw_set_trans ||
+ ops->peer_mw_set_trans) &&
+ /* ops->mw_clear_trans && */
+ ops->peer_mw_count &&
+ ops->peer_mw_get_addr &&
+ /* ops->peer_mw_clear_trans && */
+
+ /* Doorbell operations are mostly required */
/* ops->db_is_unsafe && */
ops->db_valid_mask &&
-
/* both set, or both unset */
- (!ops->db_vector_count == !ops->db_vector_mask) &&
-
+ (!ops->db_vector_count == !ops->db_vector_mask) &&
ops->db_read &&
/* ops->db_set && */
ops->db_clear &&
@@ -291,13 +364,24 @@ static inline int ntb_dev_ops_is_valid(const struct ntb_dev_ops *ops)
/* ops->peer_db_read_mask && */
/* ops->peer_db_set_mask && */
/* ops->peer_db_clear_mask && */
- /* ops->spad_is_unsafe && */
- ops->spad_count &&
- ops->spad_read &&
- ops->spad_write &&
- /* ops->peer_spad_addr && */
- /* ops->peer_spad_read && */
- ops->peer_spad_write &&
+
+	/* Scratchpads interface is optional */
+ /* !ops->spad_is_unsafe == !ops->spad_count && */
+ !ops->spad_read == !ops->spad_count &&
+ !ops->spad_write == !ops->spad_count &&
+ /* !ops->peer_spad_addr == !ops->spad_count && */
+ /* !ops->peer_spad_read == !ops->spad_count && */
+ !ops->peer_spad_write == !ops->spad_count &&
+
+ /* Messaging interface is optional */
+ !ops->msg_inbits == !ops->msg_count &&
+ !ops->msg_outbits == !ops->msg_count &&
+ !ops->msg_read_sts == !ops->msg_count &&
+ !ops->msg_clear_sts == !ops->msg_count &&
+ /* !ops->msg_set_mask == !ops->msg_count && */
+ /* !ops->msg_clear_mask == !ops->msg_count && */
+ !ops->msg_read == !ops->msg_count &&
+ !ops->msg_write == !ops->msg_count &&
1;
}
@@ -310,13 +394,12 @@ struct ntb_client {
struct device_driver drv;
const struct ntb_client_ops ops;
};
-
#define drv_ntb_client(__drv) container_of((__drv), struct ntb_client, drv)
/**
* struct ntb_device - ntb device
* @dev: Linux device object.
- * @pdev: Pci device entry of the ntb.
+ * @pdev: PCI device entry of the ntb.
* @topo: Detected topology of the ntb.
* @ops: See &ntb_dev_ops.
* @ctx: See &ntb_ctx_ops.
@@ -337,7 +420,6 @@ struct ntb_dev {
/* block unregister until device is fully released */
struct completion released;
};
-
#define dev_ntb(__dev) container_of((__dev), struct ntb_dev, dev)
/**
@@ -434,86 +516,152 @@ void ntb_link_event(struct ntb_dev *ntb);
* multiple interrupt vectors for doorbells, the vector number indicates which
* vector received the interrupt. The vector number is relative to the first
* vector used for doorbells, starting at zero, and must be less than
- ** ntb_db_vector_count(). The driver may call ntb_db_read() to check which
+ * ntb_db_vector_count(). The driver may call ntb_db_read() to check which
* doorbell bits need service, and ntb_db_vector_mask() to determine which of
* those bits are associated with the vector number.
*/
void ntb_db_event(struct ntb_dev *ntb, int vector);
/**
- * ntb_mw_count() - get the number of memory windows
+ * ntb_msg_event() - notify driver context of a message event
* @ntb: NTB device context.
*
- * Hardware and topology may support a different number of memory windows.
+ * Notify the driver context of a message event. If hardware supports
+ * message registers, this event indicates that a new message has arrived in
+ * some incoming message register or that the last sent message couldn't be
+ * delivered.
+ * The events can be masked/unmasked by the methods ntb_msg_set_mask() and
+ * ntb_msg_clear_mask().
+ */
+void ntb_msg_event(struct ntb_dev *ntb);
+
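/*
 * Illustrative client-side sketch (not part of the patch) for the message
 * event above: a context msg_event() handler typically reads and acknowledges
 * the outstanding status bits.  The "demo_" names are hypothetical; the
 * ntb_msg_*() accessors are assumed to be the wrappers declared later in
 * this header.
 */
struct demo_client {
	struct ntb_dev *ntb;
};

static void demo_msg_event(void *ctx)
{
	struct demo_client *dc = ctx;
	u64 sts = ntb_msg_read_sts(dc->ntb);	/* which message registers fired */

	ntb_msg_clear_sts(dc->ntb, sts);	/* acknowledge before handling */
}

static const struct ntb_ctx_ops demo_ctx_ops = {
	.msg_event = demo_msg_event,
};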
+/**
+ * ntb_default_port_number() - get the default local port number
+ * @ntb: NTB device context.
*
- * Return: the number of memory windows.
+ * If the hardware driver doesn't specify the port_number() callback method,
+ * the NTB device is assumed to have just two ports. This method then returns
+ * the default local port number according to the topology.
+ *
+ * NOTE Don't call this method directly. The ntb_port_number() function should
+ * be used instead.
+ *
+ * Return: the default local port number
+ */
+int ntb_default_port_number(struct ntb_dev *ntb);
+
+/**
+ * ntb_default_peer_port_count() - get the default number of peer device ports
+ * @ntb: NTB device context.
+ *
+ * By default the hardware driver supports just one peer device.
+ *
+ * NOTE Don't call this method directly. The ntb_peer_port_count() function
+ * should be used instead.
+ *
+ * Return: the default number of peer ports
+ */
+int ntb_default_peer_port_count(struct ntb_dev *ntb);
+
+/**
+ * ntb_default_peer_port_number() - get the default peer port by given index
+ * @ntb: NTB device context.
+ * @pidx: Peer port index (must be zero for the default implementation).
+ *
+ * By default the hardware driver supports just one peer device, so this method
+ * shall return the corresponding value from enum ntb_default_port.
+ *
+ * NOTE Don't call this method directly. The ntb_peer_port_number() function
+ * should be used instead.
+ *
+ * Return: the peer device port or negative value indicating an error
+ */
+int ntb_default_peer_port_number(struct ntb_dev *ntb, int pidx);
+
+/**
+ * ntb_default_peer_port_idx() - get the default peer device port index by
+ * given port number
+ * @ntb: NTB device context.
+ * @port: Peer port number (should be one of enum ntb_default_port).
+ *
+ * By default the hardware driver supports just one peer device, so as long as
+ * the specified port argument matches the peer port from enum ntb_default_port,
+ * the return value shall be zero.
+ *
+ * NOTE Don't call this method directly. The ntb_peer_port_idx() function
+ * should be used instead.
+ *
+ * Return: the peer port index or negative value indicating an error
+ */
+int ntb_default_peer_port_idx(struct ntb_dev *ntb, int port);
+
+/**
+ * ntb_port_number() - get the local port number
+ * @ntb: NTB device context.
+ *
+ * Hardware must support at least a simple two-port NTB connection.
+ *
+ * Return: the local port number
*/
-static inline int ntb_mw_count(struct ntb_dev *ntb)
+static inline int ntb_port_number(struct ntb_dev *ntb)
{
- return ntb->ops->mw_count(ntb);
+ if (!ntb->ops->port_number)
+ return ntb_default_port_number(ntb);
+
+ return ntb->ops->port_number(ntb);
}
/**
- * ntb_mw_get_range() - get the range of a memory window
+ * ntb_peer_port_count() - get the number of peer device ports
* @ntb: NTB device context.
- * @idx: Memory window number.
- * @base: OUT - the base address for mapping the memory window
- * @size: OUT - the size for mapping the memory window
- * @align: OUT - the base alignment for translating the memory window
- * @align_size: OUT - the size alignment for translating the memory window
*
- * Get the range of a memory window. NULL may be given for any output
- * parameter if the value is not needed. The base and size may be used for
- * mapping the memory window, to access the peer memory. The alignment and
- * size may be used for translating the memory window, for the peer to access
- * memory on the local system.
+ * Hardware may support access to the memory of several remote domains over
+ * multi-port NTB devices. This method returns the number of peers the local
+ * device can share memory with.
*
- * Return: Zero on success, otherwise an error number.
+ * Return: the number of peer ports
*/
-static inline int ntb_mw_get_range(struct ntb_dev *ntb, int idx,
- phys_addr_t *base, resource_size_t *size,
- resource_size_t *align, resource_size_t *align_size)
+static inline int ntb_peer_port_count(struct ntb_dev *ntb)
{
- return ntb->ops->mw_get_range(ntb, idx, base, size,
- align, align_size);
+ if (!ntb->ops->peer_port_count)
+ return ntb_default_peer_port_count(ntb);
+
+ return ntb->ops->peer_port_count(ntb);
}
/**
- * ntb_mw_set_trans() - set the translation of a memory window
+ * ntb_peer_port_number() - get the peer port by given index
* @ntb: NTB device context.
- * @idx: Memory window number.
- * @addr: The dma address local memory to expose to the peer.
- * @size: The size of the local memory to expose to the peer.
+ * @pidx: Peer port index.
*
- * Set the translation of a memory window. The peer may access local memory
- * through the window starting at the address, up to the size. The address
- * must be aligned to the alignment specified by ntb_mw_get_range(). The size
- * must be aligned to the size alignment specified by ntb_mw_get_range().
+ * Peer ports are contiguously enumerated by the NTB API logic, so this method
+ * retrieves the real port number for the given port index.
*
- * Return: Zero on success, otherwise an error number.
+ * Return: the peer device port or negative value indicating an error
*/
-static inline int ntb_mw_set_trans(struct ntb_dev *ntb, int idx,
- dma_addr_t addr, resource_size_t size)
+static inline int ntb_peer_port_number(struct ntb_dev *ntb, int pidx)
{
- return ntb->ops->mw_set_trans(ntb, idx, addr, size);
+ if (!ntb->ops->peer_port_number)
+ return ntb_default_peer_port_number(ntb, pidx);
+
+ return ntb->ops->peer_port_number(ntb, pidx);
}
/**
- * ntb_mw_clear_trans() - clear the translation of a memory window
+ * ntb_peer_port_idx() - get the peer device port index by given port number
* @ntb: NTB device context.
- * @idx: Memory window number.
+ * @port: Peer port number.
*
- * Clear the translation of a memory window. The peer may no longer access
- * local memory through the window.
+ * Inverse operation of ntb_peer_port_number(): retrieve the port index for a
+ * given port number.
*
- * Return: Zero on success, otherwise an error number.
+ * Return: the peer port index or negative value indicating an error
*/
-static inline int ntb_mw_clear_trans(struct ntb_dev *ntb, int idx)
+static inline int ntb_peer_port_idx(struct ntb_dev *ntb, int port)
{
- if (!ntb->ops->mw_clear_trans)
- return ntb->ops->mw_set_trans(ntb, idx, 0, 0);
+ if (!ntb->ops->peer_port_idx)
+ return ntb_default_peer_port_idx(ntb, port);
- return ntb->ops->mw_clear_trans(ntb, idx);
+ return ntb->ops->peer_port_idx(ntb, port);
}
/**
@@ -526,25 +674,26 @@ static inline int ntb_mw_clear_trans(struct ntb_dev *ntb, int idx)
* state once after every link event. It is safe to query the link state in
* the context of the link event callback.
*
- * Return: One if the link is up, zero if the link is down, otherwise a
- * negative value indicating the error number.
+ * Return: bitfield of the indexed ports' link states: a bit is set if the
+ * corresponding link is up and cleared if it is down.
*/
-static inline int ntb_link_is_up(struct ntb_dev *ntb,
+static inline u64 ntb_link_is_up(struct ntb_dev *ntb,
enum ntb_speed *speed, enum ntb_width *width)
{
return ntb->ops->link_is_up(ntb, speed, width);
}
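/*
 * Illustrative sketch (not part of the patch) of consuming the new bitfield
 * return value: bit N reflects the link state of the peer with port index N,
 * so two-port hardware only ever sets bit 0.  "demo_peer_link_is_up" is a
 * hypothetical name.
 */
static bool demo_peer_link_is_up(struct ntb_dev *ntb, int pidx)
{
	enum ntb_speed speed;
	enum ntb_width width;

	return ntb_link_is_up(ntb, &speed, &width) & BIT_ULL(pidx);
}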
/**
- * ntb_link_enable() - enable the link on the secondary side of the ntb
+ * ntb_link_enable() - enable the local port ntb connection
* @ntb: NTB device context.
* @max_speed: The maximum link speed expressed as PCIe generation number.
* @max_width: The maximum link width expressed as the number of PCIe lanes.
*
- * Enable the link on the secondary side of the ntb. This can only be done
- * from the primary side of the ntb in primary or b2b topology. The ntb device
- * should train the link to its maximum speed and width, or the requested speed
- * and width, whichever is smaller, if supported.
+ * Enable the NTB/PCIe link on the local or remote (for bridge-to-bridge
+ * topology) side of the bridge. If supported, the ntb device should train the
+ * link to its maximum speed and width, or the requested speed and width,
+ * whichever is smaller. Some hardware doesn't support PCIe link training, in
+ * which case the last two arguments are ignored.
*
* Return: Zero on success, otherwise an error number.
*/
@@ -556,14 +705,14 @@ static inline int ntb_link_enable(struct ntb_dev *ntb,
}
/**
- * ntb_link_disable() - disable the link on the secondary side of the ntb
+ * ntb_link_disable() - disable the local port ntb connection
* @ntb: NTB device context.
*
- * Disable the link on the secondary side of the ntb. This can only be
- * done from the primary side of the ntb in primary or b2b topology. The ntb
- * device should disable the link. Returning from this call must indicate that
- * a barrier has passed, though with no more writes may pass in either
- * direction across the link, except if this call returns an error number.
+ * Disable the link on the local or remote (for b2b topology) side of the ntb.
+ * The ntb device should disable the link. Returning from this call must
+ * indicate that a barrier has passed: no more writes may pass in either
+ * direction across the link, except if this call returns an error number.
*
* Return: Zero on success, otherwise an error number.
*/
@@ -573,6 +722,183 @@ static inline int ntb_link_disable(struct ntb_dev *ntb)
}
/**
+ * ntb_mw_count() - get the number of inbound memory windows, which could
+ * be created for a specified peer device
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ *
+ * Hardware and topology may support a different number of memory windows.
+ * Moreover, different peer devices can support different numbers of memory
+ * windows. Simply speaking, this method returns the number of possible inbound
+ * memory windows that can be shared with the specified peer device.
+ *
+ * Return: the number of memory windows.
+ */
+static inline int ntb_mw_count(struct ntb_dev *ntb, int pidx)
+{
+ return ntb->ops->mw_count(ntb, pidx);
+}
+
+/**
+ * ntb_mw_get_align() - get the restriction parameters of inbound memory window
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ * @addr_align: OUT - the base alignment for translating the memory window
+ * @size_align: OUT - the size alignment for translating the memory window
+ * @size_max: OUT - the maximum size of the memory window
+ *
+ * Get the alignment parameters of an inbound memory window with the specified
+ * index. NULL may be given for any output parameter if the value is not
+ * needed. The alignment and size parameters may be used to allocate properly
+ * aligned shared memory.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_mw_get_align(struct ntb_dev *ntb, int pidx, int widx,
+ resource_size_t *addr_align,
+ resource_size_t *size_align,
+ resource_size_t *size_max)
+{
+ return ntb->ops->mw_get_align(ntb, pidx, widx, addr_align, size_align,
+ size_max);
+}
+
+/**
+ * ntb_mw_set_trans() - set the translation of an inbound memory window
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ * @addr: The dma address of local memory to expose to the peer.
+ * @size: The size of the local memory to expose to the peer.
+ *
+ * Set the translation of a memory window. The peer may access local memory
+ * through the window, starting at the address and up to the size. The address
+ * and size must be aligned in compliance with the restrictions reported by
+ * ntb_mw_get_align(), and the region size should not exceed the size_max
+ * parameter of that method.
+ *
+ * This method may not be implemented due to the hardware-specific memory
+ * windows interface.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static inline int ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx,
+ dma_addr_t addr, resource_size_t size)
+{
+ if (!ntb->ops->mw_set_trans)
+ return 0;
+
+ return ntb->ops->mw_set_trans(ntb, pidx, widx, addr, size);
+}
+
+/**
+ * ntb_mw_clear_trans() - clear the translation address of an inbound memory
+ * window
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ *
+ * Clear the translation of an inbound memory window. The peer may no longer
+ * access local memory through the window.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static inline int ntb_mw_clear_trans(struct ntb_dev *ntb, int pidx, int widx)
+{
+ if (!ntb->ops->mw_clear_trans)
+ return ntb_mw_set_trans(ntb, pidx, widx, 0, 0);
+
+ return ntb->ops->mw_clear_trans(ntb, pidx, widx);
+}
+
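
As an illustration of the inbound memory window flow above (a sketch only, not
part of the patch; my_client_setup_inbound_mw and the fixed 1 MiB size are
assumptions), a client on the side that owns the shared buffer might do:

    static int my_client_setup_inbound_mw(struct ntb_dev *ntb)
    {
        resource_size_t addr_align, size_align, size_max;
        resource_size_t size = SZ_1M;
        dma_addr_t dma_addr;
        void *buf;
        int rc;

        if (ntb_mw_count(ntb, 0) < 1)
            return -ENODEV;

        rc = ntb_mw_get_align(ntb, 0, 0, &addr_align, &size_align, &size_max);
        if (rc)
            return rc;

        size = min_t(resource_size_t, round_up(size, size_align), size_max);

        buf = dma_alloc_coherent(&ntb->pdev->dev, size, &dma_addr, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        if (!IS_ALIGNED(dma_addr, addr_align)) {
            /* a real client would over-allocate and align here */
            dma_free_coherent(&ntb->pdev->dev, size, buf, dma_addr);
            return -EINVAL;
        }

        return ntb_mw_set_trans(ntb, 0, 0, dma_addr, size);
    }

On teardown, ntb_mw_clear_trans() (or, as the inline above shows, a zero-sized
ntb_mw_set_trans()) revokes the peer's access again.
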
+/**
+ * ntb_peer_mw_count() - get the number of outbound memory windows, which can
+ * be mapped to access shared memory
+ * @ntb: NTB device context.
+ *
+ * Hardware and topology may support a different number of memory windows.
+ * This method returns the number of outbound memory windows supported by
+ * the local device.
+ *
+ * Return: the number of memory windows.
+ */
+static inline int ntb_peer_mw_count(struct ntb_dev *ntb)
+{
+ return ntb->ops->peer_mw_count(ntb);
+}
+
+/**
+ * ntb_peer_mw_get_addr() - get map address of an outbound memory window
+ * @ntb: NTB device context.
+ * @widx: Memory window index (within ntb_peer_mw_count() return value).
+ * @base: OUT - the base address of mapping region.
+ * @size: OUT - the size of mapping region.
+ *
+ * Get the base address and size of the memory region to map. NULL may be given
+ * for any output parameter if the value is not needed. The base and size may
+ * be used to map the memory window and access the peer memory.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_peer_mw_get_addr(struct ntb_dev *ntb, int widx,
+ phys_addr_t *base, resource_size_t *size)
+{
+ return ntb->ops->peer_mw_get_addr(ntb, widx, base, size);
+}
+
+/**
+ * ntb_peer_mw_set_trans() - set a translation address of a memory window
+ * retrieved from a peer device
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device the translation address received from.
+ * @widx: Memory window index.
+ * @addr: The dma address of the shared memory to access.
+ * @size: The size of the shared memory to access.
+ *
+ * Set the translation of an outbound memory window. The local device may then
+ * access shared memory allocated by the peer device that sent the address.
+ *
+ * This method may not be implemented due to the hardware-specific memory
+ * windows interface; in that case a translation address can only be set on
+ * the side where the shared memory (inbound memory window) is allocated.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static inline int ntb_peer_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx,
+ u64 addr, resource_size_t size)
+{
+ if (!ntb->ops->peer_mw_set_trans)
+ return 0;
+
+ return ntb->ops->peer_mw_set_trans(ntb, pidx, widx, addr, size);
+}
+
+/**
+ * ntb_peer_mw_clear_trans() - clear the translation address of an outbound
+ * memory window
+ * @ntb: NTB device context.
+ * @pidx: Port index of peer device.
+ * @widx: Memory window index.
+ *
+ * Clear the translation of an outbound memory window. The local device may no
+ * longer access the shared memory through the window.
+ *
+ * This method may not be implemented due to the hardware-specific memory
+ * windows interface.
+ *
+ * Return: Zero on success, otherwise an error number.
+ */
+static inline int ntb_peer_mw_clear_trans(struct ntb_dev *ntb, int pidx,
+ int widx)
+{
+ if (!ntb->ops->peer_mw_clear_trans)
+ return ntb_peer_mw_set_trans(ntb, pidx, widx, 0, 0);
+
+ return ntb->ops->peer_mw_clear_trans(ntb, pidx, widx);
+}
+
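
The outbound counterpart, again only a sketch with hypothetical names, maps
the region reported by ntb_peer_mw_get_addr() and programs the peer
translation where the hardware requires it (how peer_addr and peer_size were
exchanged, e.g. via scratchpads or message registers, is left to the client):

    static void __iomem *my_client_map_peer_mw(struct ntb_dev *ntb, int pidx,
                                               dma_addr_t peer_addr,
                                               resource_size_t peer_size)
    {
        phys_addr_t base;
        resource_size_t size;
        void __iomem *vbase;
        int rc;

        rc = ntb_peer_mw_get_addr(ntb, 0, &base, &size);
        if (rc)
            return NULL;

        vbase = ioremap_wc(base, size);
        if (!vbase)
            return NULL;

        /* becomes a no-op when the driver doesn't implement the op */
        rc = ntb_peer_mw_set_trans(ntb, pidx, 0, peer_addr, peer_size);
        if (rc) {
            iounmap(vbase);
            return NULL;
        }

        return vbase;
    }
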
+/**
* ntb_db_is_unsafe() - check if it is safe to use hardware doorbell
* @ntb: NTB device context.
*
@@ -900,47 +1226,58 @@ static inline int ntb_spad_is_unsafe(struct ntb_dev *ntb)
* @ntb: NTB device context.
*
* Hardware and topology may support a different number of scratchpads.
+ * However, it must be the same for all ports of a given NTB device.
*
* Return: the number of scratchpads.
*/
static inline int ntb_spad_count(struct ntb_dev *ntb)
{
+ if (!ntb->ops->spad_count)
+ return 0;
+
return ntb->ops->spad_count(ntb);
}
/**
* ntb_spad_read() - read the local scratchpad register
* @ntb: NTB device context.
- * @idx: Scratchpad index.
+ * @sidx: Scratchpad index.
*
* Read the local scratchpad register, and return the value.
*
* Return: The value of the local scratchpad register.
*/
-static inline u32 ntb_spad_read(struct ntb_dev *ntb, int idx)
+static inline u32 ntb_spad_read(struct ntb_dev *ntb, int sidx)
{
- return ntb->ops->spad_read(ntb, idx);
+ if (!ntb->ops->spad_read)
+ return ~(u32)0;
+
+ return ntb->ops->spad_read(ntb, sidx);
}
/**
* ntb_spad_write() - write the local scratchpad register
* @ntb: NTB device context.
- * @idx: Scratchpad index.
+ * @sidx: Scratchpad index.
* @val: Scratchpad value.
*
* Write the value to the local scratchpad register.
*
* Return: Zero on success, otherwise an error number.
*/
-static inline int ntb_spad_write(struct ntb_dev *ntb, int idx, u32 val)
+static inline int ntb_spad_write(struct ntb_dev *ntb, int sidx, u32 val)
{
- return ntb->ops->spad_write(ntb, idx, val);
+ if (!ntb->ops->spad_write)
+ return -EINVAL;
+
+ return ntb->ops->spad_write(ntb, sidx, val);
}
/**
* ntb_peer_spad_addr() - address of the peer scratchpad register
* @ntb: NTB device context.
- * @idx: Scratchpad index.
+ * @pidx: Port index of peer device.
+ * @sidx: Scratchpad index.
* @spad_addr: OUT - The address of the peer scratchpad register.
*
* Return the address of the peer doorbell register. This may be used, for
@@ -948,45 +1285,213 @@ static inline int ntb_spad_write(struct ntb_dev *ntb, int idx, u32 val)
*
* Return: Zero on success, otherwise an error number.
*/
-static inline int ntb_peer_spad_addr(struct ntb_dev *ntb, int idx,
+static inline int ntb_peer_spad_addr(struct ntb_dev *ntb, int pidx, int sidx,
phys_addr_t *spad_addr)
{
if (!ntb->ops->peer_spad_addr)
return -EINVAL;
- return ntb->ops->peer_spad_addr(ntb, idx, spad_addr);
+ return ntb->ops->peer_spad_addr(ntb, pidx, sidx, spad_addr);
}
/**
* ntb_peer_spad_read() - read the peer scratchpad register
* @ntb: NTB device context.
- * @idx: Scratchpad index.
+ * @pidx: Port index of peer device.
+ * @sidx: Scratchpad index.
*
* Read the peer scratchpad register, and return the value.
*
* Return: The value of the local scratchpad register.
*/
-static inline u32 ntb_peer_spad_read(struct ntb_dev *ntb, int idx)
+static inline u32 ntb_peer_spad_read(struct ntb_dev *ntb, int pidx, int sidx)
{
if (!ntb->ops->peer_spad_read)
- return 0;
+ return ~(u32)0;
- return ntb->ops->peer_spad_read(ntb, idx);
+ return ntb->ops->peer_spad_read(ntb, pidx, sidx);
}
/**
* ntb_peer_spad_write() - write the peer scratchpad register
* @ntb: NTB device context.
- * @idx: Scratchpad index.
+ * @pidx: Port index of peer device.
+ * @sidx: Scratchpad index.
* @val: Scratchpad value.
*
* Write the value to the peer scratchpad register.
*
* Return: Zero on success, otherwise an error number.
*/
-static inline int ntb_peer_spad_write(struct ntb_dev *ntb, int idx, u32 val)
+static inline int ntb_peer_spad_write(struct ntb_dev *ntb, int pidx, int sidx,
+ u32 val)
+{
+ if (!ntb->ops->peer_spad_write)
+ return -EINVAL;
+
+ return ntb->ops->peer_spad_write(ntb, pidx, sidx, val);
+}
+
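
A small, illustrative sketch of the per-port scratchpad calls (hypothetical
helper; peer port index and scratchpad index are both 0 here):

    static u32 my_client_spad_exchange(struct ntb_dev *ntb, u32 val)
    {
        if (!ntb_spad_count(ntb))   /* now 0 when scratchpads are unsupported */
            return ~(u32)0;

        ntb_spad_write(ntb, 0, val);            /* local: no port index */

        return ntb_peer_spad_read(ntb, 0, 0);   /* peer: pidx first, then sidx */
    }
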
+/**
+ * ntb_msg_count() - get the number of message registers
+ * @ntb: NTB device context.
+ *
+ * Hardware may support a different number of message registers.
+ *
+ * Return: the number of message registers.
+ */
+static inline int ntb_msg_count(struct ntb_dev *ntb)
+{
+ if (!ntb->ops->msg_count)
+ return 0;
+
+ return ntb->ops->msg_count(ntb);
+}
+
+/**
+ * ntb_msg_inbits() - get a bitfield of inbound message registers status
+ * @ntb: NTB device context.
+ *
+ * The method returns the bitfield of status and mask registers related to
+ * the inbound message registers.
+ *
+ * Return: bitfield of inbound message registers.
+ */
+static inline u64 ntb_msg_inbits(struct ntb_dev *ntb)
{
- return ntb->ops->peer_spad_write(ntb, idx, val);
+ if (!ntb->ops->msg_inbits)
+ return 0;
+
+ return ntb->ops->msg_inbits(ntb);
+}
+
+/**
+ * ntb_msg_outbits() - get a bitfield of outbound message registers status
+ * @ntb: NTB device context.
+ *
+ * The method returns the bitfield of status and mask registers related to
+ * the outbound message registers.
+ *
+ * Return: bitfield of outbound message registers.
+ */
+static inline u64 ntb_msg_outbits(struct ntb_dev *ntb)
+{
+ if (!ntb->ops->msg_outbits)
+ return 0;
+
+ return ntb->ops->msg_outbits(ntb);
+}
+
+/**
+ * ntb_msg_read_sts() - read the message registers status
+ * @ntb: NTB device context.
+ *
+ * Read the status of the message registers. Bits related to the inbound and
+ * outbound message registers can be filtered using the masks retrieved from
+ * ntb_msg_inbits() and ntb_msg_outbits().
+ *
+ * Return: status bits of message registers
+ */
+static inline u64 ntb_msg_read_sts(struct ntb_dev *ntb)
+{
+ if (!ntb->ops->msg_read_sts)
+ return 0;
+
+ return ntb->ops->msg_read_sts(ntb);
+}
+
+/**
+ * ntb_msg_clear_sts() - clear status bits of message registers
+ * @ntb: NTB device context.
+ * @sts_bits: Status bits to clear.
+ *
+ * Clear bits in the status register.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_msg_clear_sts(struct ntb_dev *ntb, u64 sts_bits)
+{
+ if (!ntb->ops->msg_clear_sts)
+ return -EINVAL;
+
+ return ntb->ops->msg_clear_sts(ntb, sts_bits);
+}
+
+/**
+ * ntb_msg_set_mask() - set mask of message register status bits
+ * @ntb: NTB device context.
+ * @mask_bits: Mask bits.
+ *
+ * Mask the message register status bits from raising the message event.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_msg_set_mask(struct ntb_dev *ntb, u64 mask_bits)
+{
+ if (!ntb->ops->msg_set_mask)
+ return -EINVAL;
+
+ return ntb->ops->msg_set_mask(ntb, mask_bits);
+}
+
+/**
+ * ntb_msg_clear_mask() - clear message registers mask
+ * @ntb: NTB device context.
+ * @mask_bits: Mask bits to clear.
+ *
+ * Clear bits in the message event mask register.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_msg_clear_mask(struct ntb_dev *ntb, u64 mask_bits)
+{
+ if (!ntb->ops->msg_clear_mask)
+ return -EINVAL;
+
+ return ntb->ops->msg_clear_mask(ntb, mask_bits);
+}
+
+/**
+ * ntb_msg_read() - read message register with specified index
+ * @ntb: NTB device context.
+ * @midx: Message register index
+ * @pidx: OUT - Port index of peer device a message retrieved from
+ * @msg: OUT - Data
+ *
+ * Read data from the specified message register. The source port index of
+ * the message is retrieved as well.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_msg_read(struct ntb_dev *ntb, int midx, int *pidx,
+ u32 *msg)
+{
+ if (!ntb->ops->msg_read)
+ return -EINVAL;
+
+ return ntb->ops->msg_read(ntb, midx, pidx, msg);
+}
+
+/**
+ * ntb_msg_write() - write data to the specified message register
+ * @ntb: NTB device context.
+ * @midx: Message register index
+ * @pidx: Port index of peer device a message being sent to
+ * @msg: Data to send
+ *
+ * Send data to the specified peer device using the given message register.
+ * A message event can be raised if the midx register isn't empty when this
+ * method is called and the corresponding interrupt isn't masked.
+ *
+ * Return: Zero on success, otherwise a negative error number.
+ */
+static inline int ntb_msg_write(struct ntb_dev *ntb, int midx, int pidx,
+ u32 msg)
+{
+ if (!ntb->ops->msg_write)
+ return -EINVAL;
+
+ return ntb->ops->msg_write(ntb, midx, pidx, msg);
}
#endif
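
To round out the message API, here is a hypothetical helper a client could
call from its msg_event() handler (illustrative only, not part of the patch);
it assumes, as the comment notes, that status bit n corresponds to message
register n:

    /* Drain the inbound message registers and acknowledge their status.
     * Assumes status bit n maps to message register n, as on the IDT
     * hardware this API was introduced for.
     */
    static void my_client_drain_msgs(struct ntb_dev *ntb)
    {
        u64 sts = ntb_msg_read_sts(ntb) & ntb_msg_inbits(ntb);
        int midx, pidx;
        u32 data;

        for (midx = 0; midx < ntb_msg_count(ntb); midx++) {
            if (!(sts & BIT_ULL(midx)))
                continue;

            if (!ntb_msg_read(ntb, midx, &pidx, &data))
                dev_dbg(&ntb->dev, "msg %d from port %d: %#x\n",
                        midx, pidx, data);
        }

        ntb_msg_clear_sts(ntb, sts);
    }
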
diff --git a/tools/testing/selftests/ntb/ntb_test.sh b/tools/testing/selftests/ntb/ntb_test.sh
index 13f5198ba0ee..1c12b5855e4f 100755
--- a/tools/testing/selftests/ntb/ntb_test.sh
+++ b/tools/testing/selftests/ntb/ntb_test.sh
@@ -18,6 +18,7 @@ LIST_DEVS=FALSE
DEBUGFS=${DEBUGFS-/sys/kernel/debug}
+DB_BITMASK=0x7FFF
PERF_RUN_ORDER=32
MAX_MW_SIZE=0
RUN_DMA_TESTS=
@@ -38,6 +39,7 @@ function show_help()
echo "be highly recommended."
echo
echo "Options:"
+ echo " -b BITMASK doorbell clear bitmask for ntb_tool"
echo " -C don't cleanup ntb modules on exit"
echo " -d run dma tests"
echo " -h show this help message"
@@ -52,8 +54,9 @@ function show_help()
function parse_args()
{
OPTIND=0
- while getopts "Cdhlm:r:p:w:" opt; do
+ while getopts "b:Cdhlm:r:p:w:" opt; do
case "$opt" in
+ b) DB_BITMASK=${OPTARG} ;;
C) DONT_CLEANUP=1 ;;
d) RUN_DMA_TESTS=1 ;;
h) show_help; exit 0 ;;
@@ -85,6 +88,10 @@ set -e
function _modprobe()
{
modprobe "$@"
+
+ if [[ "$REMOTE_HOST" != "" ]]; then
+ ssh "$REMOTE_HOST" modprobe "$@"
+ fi
}
function split_remote()
@@ -154,7 +161,7 @@ function doorbell_test()
echo "Running db tests on: $(basename $LOC) / $(basename $REM)"
- write_file "c 0xFFFFFFFF" "$REM/db"
+ write_file "c $DB_BITMASK" "$REM/db"
for ((i=1; i <= 8; i++)); do
let DB=$(read_file "$REM/db") || true