|
When masking/unmasking a doorbell interrupt, it is necessary
to issue an invalidation to the corresponding redistributor.
We use the DirectLPI feature by writing directly to the corresponding
redistributor.
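As a rough illustration only (not the driver code itself: the GICR_INVLPIR
offset follows the GICv3/v4 spec, while the vpe_doorbell structure and helper
are made up for the example), the invalidation boils down to one
redistributor write:

  #include <linux/io.h>
  #include <linux/types.h>

  #define GICR_INVLPIR 0x00A0    /* DirectLPI: invalidate one LPI */

  /* Hypothetical per-VPE bookkeeping, for illustration only. */
  struct vpe_doorbell {
          void __iomem *rdbase;  /* redistributor of the hosting CPU */
          u32 db_intid;          /* doorbell LPI number */
  };

  static void vpe_doorbell_invalidate(struct vpe_doorbell *db)
  {
          /* After flipping the doorbell's enable bit in the property
           * table, make the redistributor drop its cached copy. */
          writeq_relaxed(db->db_intid, db->rdbase + GICR_INVLPIR);
  }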
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
When we're about to run a vcpu, it is crucial that the redistributor
associated with the physical CPU is being told about the new residency.
This is abstracted by hijacking the irq_set_affinity method for the
doorbell interrupt associated with the VPE. It is expected that the
hypervisor will call this method before scheduling the VPE.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
When a guest issues an INVALL command targeting a collection, it must
be translated into a VINVALL for the VPE that has this collection.
This patch implements a hook that offers this functionality to the
hypervisor.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
When a VPE is scheduled to run, the corresponding redistributor must
be told so, by setting VPROPBASER to the VM's property table, and
VPENDBASER to the vcpu's pending table.
When scheduled out, we preserve the IDAI and PendingLast bits. The
latter is especially important, as it tells the hypervisor that
there are pending interrupts for this vcpu.
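A minimal sketch of the two paths, assuming register offsets and bit
positions as described by the GICv4 spec; vlpi_base, the table values and
the function names are placeholders, and the real code additionally has to
wait for the writes to take effect:

  #include <linux/io.h>
  #include <linux/types.h>

  #define GICR_VPROPBASER              0x0070
  #define GICR_VPENDBASER              0x0078
  #define GICR_VPENDBASER_Valid        (1ULL << 63)
  #define GICR_VPENDBASER_IDAI         (1ULL << 62)
  #define GICR_VPENDBASER_PendingLast  (1ULL << 61)

  static void vpe_schedule(void __iomem *vlpi_base, u64 vprop, u64 vpend)
  {
          /* Point the redistributor at the VM's property table and the
           * vcpu's pending table, then mark the VPE as resident. */
          writeq_relaxed(vprop, vlpi_base + GICR_VPROPBASER);
          writeq_relaxed(vpend | GICR_VPENDBASER_Valid,
                         vlpi_base + GICR_VPENDBASER);
  }

  static u64 vpe_deschedule(void __iomem *vlpi_base)
  {
          u64 val = readq_relaxed(vlpi_base + GICR_VPENDBASER);

          /* Drop Valid, but keep IDAI and PendingLast so the hypervisor
           * can tell whether interrupts are pending for this vcpu. */
          writeq_relaxed(val & ~GICR_VPENDBASER_Valid,
                         vlpi_base + GICR_VPENDBASER);
          return val & (GICR_VPENDBASER_IDAI | GICR_VPENDBASER_PendingLast);
  }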
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
On activation, a VPE is mapped using the VMAPP command, followed
by a VINVALL for good measure. On deactivation, the VPE is
simply unmapped.
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
When creating a VM, the low level GICv4 code is responsible for:
- allocating each VPE a unique VPEID
- allocating a doorbell interrupt for each VPE
- allocating the pending tables for each VPE
- allocating the property table for the VM
This of course has to be reversed when the VM is brought down.
All of this is wired into the irq domain alloc/free methods.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
Add the basic GICv4 VPE (vcpu in GICv4 parlance) infrastructure
(irqchip, irq domain) that is going to be populated in the following
patches.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
When a VLPI is reconfigured (enabled, disabled, change in priority),
the full configuration byte must be written, and the caches invalidated.
Also, when using the irq_mask/irq_unmask methods, it is necessary
to disable the doorbell for that particular interrupt (by mapping it
to 1023) on top of clearing the Enable bit.
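For illustration only (the property byte layout follows the GIC
architecture, with the enable bit in bit 0 and the priority in bits [7:2];
the helper name and the caller-provided pointer are assumptions):

  #include <linux/compiler.h>
  #include <linux/types.h>

  #define LPI_PROP_ENABLED     (1 << 0)
  #define LPI_PROP_PRIO_MASK   0xfc    /* priority lives in bits [7:2] */

  /* cfg points at this VLPI's byte in the VM's property table. */
  static void vlpi_write_config(u8 *cfg, u8 prio, bool enable)
  {
          u8 val = prio & LPI_PROP_PRIO_MASK;

          if (enable)
                  val |= LPI_PROP_ENABLED;
          WRITE_ONCE(*cfg, val);
          /* ...then issue INV (or a DirectLPI invalidate) so the
           * redistributor refreshes its cached configuration. */
  }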
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
In order to let a VLPI be injected into a guest, the VLPI must
be mapped using the VMAPTI command. When moved to a different vcpu,
it must be moved with the VMOVI command.
These commands are issued via the irq_set_vcpu_affinity method,
making sure we unmap the corresponding host LPI first.
The reverse is also done when the VLPI is unmapped from the guest.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
Add the skeleton irq_set_vcpu_affinity method that will be used
to configure VLPIs.
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
Add the new GICv4 ITS command definitions, most of them being
defined in terms of their physical counterparts.
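The pattern is roughly the following (the 0x2x opcode space and the macro
names are assumptions here; the physical GITS_CMD_* opcodes are taken as
already defined):

  #define GITS_CMD_GICv4(x)    ((x) | 0x20)
  #define GITS_CMD_VINVALL     GITS_CMD_GICv4(GITS_CMD_INVALL)
  #define GITS_CMD_VMAPTI      GITS_CMD_GICv4(GITS_CMD_MAPTI)
  #define GITS_CMD_VMOVI       GITS_CMD_GICv4(GITS_CMD_MOVI)
  #define GITS_CMD_VSYNC       GITS_CMD_GICv4(GITS_CMD_SYNC)
  /* VMAPP and VMOVP are defined separately, with their own opcodes. */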
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
|
|
Now that we have a custom printf format specifier, convert users of
full_name to use %pOF instead. This is preparation to remove storing
of the full path string for each node.
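A typical call-site conversion looks like this (message text is
illustrative):

  #include <linux/of.h>
  #include <linux/printk.h>

  static void report_timer_node(struct device_node *np)
  {
          /* was: pr_info("probing %s\n", np->full_name); */
          pr_info("probing %pOF\n", np);
  }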
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Alexandre Torgue <alexandre.torgue@st.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Marc Gonzalez <marc_gonzalez@sigmadesigns.com>
Acked-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
|
|
This reverts commit 2154d94b40ea2a5de05245521371d0461bb0d669.
The original patch was intended to avoid some issues with the sunxi
gpio rework and was supposed to be reverted after all the required
DT bits had been merged around v4.10.
Signed-off-by: Priit Laes <plaes@plaes.org>
Acked-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The ether_rmii_groups array should have "ether_rmii" and "ether_rmiib"
as members. This patch replaces its current members with them.
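So the group list ends up as follows (array naming assumed to follow the
driver's other *_groups tables; shown for illustration):

  static const char * const ether_rmii_groups[] = {
          "ether_rmii", "ether_rmiib",
  };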
Fixes: 1e359ab1285e ("pinctrl: uniphier: add Ethernet pin-mux settings")
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Commit aba831a69632 ("xen: remove tests for pvh mode in pure pv paths")
removed XENFEAT_auto_translated_physmap test in xen_alloc_p2m_entry()
since it is assumed that the routine is never called by non-PV guests.
However, alloc_xenballooned_pages() may make this call on a PVH guest.
Prevent this from happening by adding a XENFEAT_auto_translated_physmap
check there.
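The guard is essentially an early-out at the top of xen_alloc_p2m_entry();
a sketch, with the exact placement and return value assumed:

  #include <linux/types.h>
  #include <xen/features.h>

  bool xen_alloc_p2m_entry(unsigned long pfn)
  {
          /* Auto-translated (PVH/HVM) guests have no PV p2m to grow. */
          if (xen_feature(XENFEAT_auto_translated_physmap))
                  return true;

          /* ...the PV-only p2m allocation continues below... */
          return true;
  }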
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Fixes: aba831a69632 ("xen: remove tests for pvh mode in pure pv paths")
|
|
When booting Linux as a Xen guest with CONFIG_DEBUG_ATOMIC, the following
splat appears:
[ 0.002323] Mountpoint-cache hash table entries: 1024 (order: 1, 8192 bytes)
[ 0.019717] ASID allocator initialised with 65536 entries
[ 0.020019] xen:grant_table: Grant tables using version 1 layout
[ 0.020051] Grant table initialized
[ 0.020069] BUG: sleeping function called from invalid context at /data/src/linux/mm/page_alloc.c:4046
[ 0.020100] in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: swapper/0
[ 0.020123] no locks held by swapper/0/1.
[ 0.020143] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.13.0-rc5 #598
[ 0.020166] Hardware name: FVP Base (DT)
[ 0.020182] Call trace:
[ 0.020199] [<ffff00000808a5c0>] dump_backtrace+0x0/0x270
[ 0.020222] [<ffff00000808a95c>] show_stack+0x24/0x30
[ 0.020244] [<ffff000008c1ef20>] dump_stack+0xb8/0xf0
[ 0.020267] [<ffff0000081128c0>] ___might_sleep+0x1c8/0x1f8
[ 0.020291] [<ffff000008112948>] __might_sleep+0x58/0x90
[ 0.020313] [<ffff0000082171b8>] __alloc_pages_nodemask+0x1c0/0x12e8
[ 0.020338] [<ffff00000827a110>] alloc_page_interleave+0x38/0x88
[ 0.020363] [<ffff00000827a904>] alloc_pages_current+0xdc/0xf0
[ 0.020387] [<ffff000008211f38>] __get_free_pages+0x28/0x50
[ 0.020411] [<ffff0000086566a4>] evtchn_fifo_alloc_control_block+0x2c/0xa0
[ 0.020437] [<ffff0000091747b0>] xen_evtchn_fifo_init+0x38/0xb4
[ 0.020461] [<ffff0000091746c0>] xen_init_IRQ+0x44/0xc8
[ 0.020484] [<ffff000009128adc>] xen_guest_init+0x250/0x300
[ 0.020507] [<ffff000008083974>] do_one_initcall+0x44/0x130
[ 0.020531] [<ffff000009120df8>] kernel_init_freeable+0x120/0x288
[ 0.020556] [<ffff000008c31ca8>] kernel_init+0x18/0x110
[ 0.020578] [<ffff000008083710>] ret_from_fork+0x10/0x40
[ 0.020606] xen:events: Using FIFO-based ABI
[ 0.020658] Xen: initializing cpu0
[ 0.027727] Hierarchical SRCU implementation.
[ 0.036235] EFI services will not be available.
[ 0.043810] smp: Bringing up secondary CPUs ...
This is because get_cpu() in xen_evtchn_fifo_init() will disable
preemption, but __get_free_page() might sleep (GFP_ATOMIC is not set).
xen_evtchn_fifo_init() will always be called before SMP is initialized,
so {get,put}_cpu() could be replaced by a simple smp_processor_id().
This also avoids modifying evtchn_fifo_alloc_control_block(), which will
be called in other contexts.
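Within drivers/xen/events/events_fifo.c the resulting init path is roughly
the following (sketch only; error handling and the rest of the function are
elided):

  int __init xen_evtchn_fifo_init(void)
  {
          int cpu = smp_processor_id();   /* was: cpu = get_cpu(); */
          int ret;

          ret = evtchn_fifo_alloc_control_block(cpu);
          if (ret < 0)
                  return ret;
          /* ...register the FIFO evtchn_ops, CPU hotplug callback, etc.;
           * no matching put_cpu() is needed any more. */
          return ret;
  }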
Signed-off-by: Julien Grall <julien.grall@arm.com>
Reported-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Fixes: 1fe565517b57 ("xen/events: use the FIFO-based ABI if available")
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
__WARN() is an internal helper that is only available on
some architectures, but causes a build error e.g. on ARM64
in some configurations:
drivers/xen/pvcalls-back.c: In function 'set_backend_state':
drivers/xen/pvcalls-back.c:1097:5: error: implicit declaration of function '__WARN' [-Werror=implicit-function-declaration]
Unfortunately, there is no equivalent of BUG() that takes no
arguments, but WARN_ON(1) is commonly used in other drivers
and works on all configurations.
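The substitution itself is a one-liner, e.g. (surrounding switch reduced to
a stub for illustration):

  #include <linux/bug.h>

  static void set_backend_state_stub(int state)
  {
          switch (state) {
          default:
                  WARN_ON(1);     /* was: __WARN() */
          }
  }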
Fixes: 7160378206b2 ("xen/pvcalls: xenbus state handling")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
pci_device_id are not supposed to change at runtime. All functions
working with pci_device_id provided by <linux/pci.h> work with
const pci_device_id. So mark the non-const structs as const.
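I.e. (the device IDs shown are just an example):

  #include <linux/module.h>
  #include <linux/pci.h>

  /* was: static struct pci_device_id example_pci_ids[] = { ... }; */
  static const struct pci_device_id example_pci_ids[] = {
          { PCI_DEVICE(0x5853, 0x0001) },  /* Xen platform device, e.g. */
          { 0, },
  };
  MODULE_DEVICE_TABLE(pci, example_pci_ids);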
Signed-off-by: Arvind Yadav <arvind.yadav.cs@gmail.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Also add pvcalls-back to the Makefile.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
When the other end notifies us that there is data to be written
(pvcalls_back_conn_event), increment the io and write counters, and
schedule the ioworker.
Implement the write function called by the ioworker: read the data from
the data ring and write it to the socket by calling inet_sendmsg.
Set out_error on error.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
When an active socket has data available, increment the io and read
counters, and schedule the ioworker.
Implement the read function by reading from the socket and writing the
data to the data ring.
Set in_error on error.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
We have one ioworker per socket. Each ioworker goes through the list of
outstanding read/write requests. Once all requests have been dealt with,
it returns.
We use one atomic counter per socket for "read" operations and one
for "write" operations to keep track of the reads/writes to do.
We also use one atomic counter ("io") per ioworker to keep track of how
many outstanding requests we have in total assigned to the ioworker. The
ioworker finishes when there are none.
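A sketch of the counter scheme described above (structure layout and helper
names are assumptions, not the actual driver code):

  #include <linux/atomic.h>
  #include <linux/workqueue.h>

  struct sock_mapping_sketch {
          struct work_struct ioworker;
          atomic_t read;    /* reads still to be performed */
          atomic_t write;   /* writes still to be performed */
          atomic_t io;      /* total outstanding requests */
  };

  static void conn_back_read(struct sock_mapping_sketch *map)
  {
          /* ...read from the socket into the data ring... */
          atomic_dec(&map->read);
  }

  static void conn_back_write(struct sock_mapping_sketch *map)
  {
          /* ...read from the data ring and write to the socket... */
          atomic_dec(&map->write);
  }

  static void pvcalls_back_ioworker_sketch(struct work_struct *work)
  {
          struct sock_mapping_sketch *map =
                  container_of(work, struct sock_mapping_sketch, ioworker);

          while (atomic_read(&map->io) > 0) {
                  if (atomic_read(&map->read) > 0)
                          conn_back_read(map);
                  if (atomic_read(&map->write) > 0)
                          conn_back_write(map);
                  atomic_dec(&map->io);
          }
  }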
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Implement backend_disconnect. Call pvcalls_back_release_active on active
sockets and pvcalls_back_release_passive on passive sockets.
Implement module_exit by calling backend_disconnect on frontend
connections.
[ boris: fixed long lines ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Release both active and passive sockets. For active sockets, make sure
to avoid possible conflicts with the ioworker reading/writing to those
sockets concurrently. Set map->release to let the ioworker know
atomically that the socket will be released soon, then wait until the
ioworker finishes (flush_work).
Unmap indexes pages and data rings.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Implement poll on passive sockets by requesting a delayed response with
mappass->reqcopy, and reply back when there is data on the passive
socket.
Poll on an active socket is not implemented, as per the spec: the frontend
should just wait for events and check the indexes on the indexes page.
Only support one outstanding poll (or accept) request for every passive
socket at any given time.
[ boris: fixed long lines ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Implement the accept command by calling inet_accept. To avoid blocking
in the kernel, call inet_accept(O_NONBLOCK) from a workqueue, which gets
scheduled on sk_data_ready (for a passive socket, it means that there
are connections to accept).
Use the reqcopy field to store the request. Accept the new socket from
the delayed work function, create a new sock_mapping for it, map
the indexes page and data ring, and reply to the other end. Allocate an
ioworker for the socket.
Only support one outstanding blocking accept request for every socket at
any time.
Add a field to sock_mapping to remember the passive socket from which an
active socket was created.
[ boris: fixed whitespaces ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Call inet_listen to implement the listen command.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Allocate a socket. Track the allocated passive sockets with a new data
structure named sockpass_mapping. It contains an unbound workqueue to
schedule delayed work for the accept and poll commands. It also has a
reqcopy field to be used to store a copy of a request for delayed work.
Reads/writes to it are protected by a lock (the "copy_lock" spinlock).
Initialize the workqueue in pvcalls_back_bind.
Implement the bind command with inet_bind.
The pass_sk_data_ready event handler will be added later.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Allocate a socket. Keep track of socket <-> ring mappings with a new data
structure, called sock_mapping. Implement the connect command by calling
inet_stream_connect, and mapping the new indexes page and data ring.
Allocate a workqueue and a work_struct, called ioworker, to perform
reads and writes to the socket.
When an active socket is closed (sk_state_change), set in_error to
-ENOTCONN and notify the other end, as specified by the protocol.
sk_data_ready and pvcalls_back_ioworker will be implemented later.
[ boris: fixed whitespaces ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Just reply with success to the other end for now. Delay the allocation
of the actual socket to bind and/or connect.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
When the other end notifies us that there are commands to be read
(pvcalls_back_event), wake up the backend thread to parse the command.
The command ring works like most other Xen rings, so use the usual
ring macros to read and write to it. The functions implementing the
commands are empty stubs for now.
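The consumer loop follows the standard Xen ring pattern, roughly as below
(ring and request type names are assumed from the pvcalls protocol header;
the dispatch is omitted):

  #include <xen/interface/io/ring.h>
  #include <xen/interface/io/pvcalls.h>

  static void consume_commands(struct xen_pvcalls_back_ring *ring)
  {
          struct xen_pvcalls_request req;
          int more;

  again:
          while (RING_HAS_UNCONSUMED_REQUESTS(ring)) {
                  RING_COPY_REQUEST(ring, ring->req_cons++, &req);
                  /* ...dispatch req.cmd to the (stub) command handlers... */
          }
          RING_FINAL_CHECK_FOR_REQUESTS(ring, more);
          if (more)
                  goto again;
  }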
[ boris: fixed whitespaces ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Introduce a per-frontend data structure named pvcalls_fedata. It
contains pointers to the command ring, its event channel, a list of
active sockets and a tree of passive sockets (passive sockets need to be
looked up from the id on listen, accept and poll commands, while active
sockets only on release).
It also has an unbound workqueue to schedule the work of parsing and
executing commands on the command ring. socket_lock protects the two
lists. In pvcalls_back_global, keep a list of connected frontends.
[ boris: fixed whitespaces/long lines ]
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Introduce the code to handle xenbus state changes.
Implement the probe function for the pvcalls backend. Write the
supported versions, max-page-order and function-calls nodes to xenstore,
as required by the protocol.
Introduce stub functions for disconnecting/connecting to a frontend.
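The xenstore writes could look roughly like this (the node names are the
ones required by the protocol; the values written are illustrative):

  #include <xen/xenbus.h>

  static int write_backend_nodes(struct xenbus_device *dev)
  {
          int err;

          err = xenbus_printf(XBT_NIL, dev->nodename, "versions", "%s", "1");
          if (err)
                  return err;
          err = xenbus_printf(XBT_NIL, dev->nodename, "max-page-order",
                              "%u", 2 /* illustrative ring order */);
          if (err)
                  return err;
          return xenbus_printf(XBT_NIL, dev->nodename,
                               "function-calls", "%u", 1);
  }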
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Keep a list of connected frontends. Use a semaphore to protect list
accesses.
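In other words, something along these lines (structure and field names
assumed for the sketch):

  #include <linux/list.h>
  #include <linux/semaphore.h>

  static struct pvcalls_back_global_sketch {
          struct list_head frontends;
          struct semaphore frontends_lock;
  } pvcalls_global = {
          .frontends = LIST_HEAD_INIT(pvcalls_global.frontends),
          .frontends_lock =
                  __SEMAPHORE_INITIALIZER(pvcalls_global.frontends_lock, 1),
  };

  static void add_frontend(struct list_head *entry)
  {
          /* Every list traversal or update takes the semaphore. */
          down(&pvcalls_global.frontends_lock);
          list_add_tail(entry, &pvcalls_global.frontends);
          up(&pvcalls_global.frontends_lock);
  }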
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Introduce a xenbus backend for the pvcalls protocol, as defined by
https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.
This patch only adds the stubs, the code will be added by the following
patches.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
|
|
Omit an extra message for a memory allocation failure in these functions.
This issue was detected by using the Coccinelle software.
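The pattern being cleaned up is simply the following (illustrative code;
the allocator already logs on failure, so the extra message goes away):

  #include <linux/device.h>
  #include <linux/slab.h>

  static int alloc_driver_state(struct device *dev)
  {
          void *state = devm_kzalloc(dev, 64, GFP_KERNEL);

          if (!state)     /* was followed by an extra dev_err() here */
                  return -ENOMEM;
          return 0;
  }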
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Omit an extra message for a memory allocation failure in this function.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
In the same way as it's done in pinctrl-cherryview.c, provide a readback
of the TX buffer state.
Fixes: 17fab473693 ("pinctrl: intel: Set pin direction properly")
Reported-by: "Bourque, Francis" <francis.bourque@intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Tested-by: "Bourque, Francis" <francis.bourque@intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The pins from GPIO1A0 to GPIO1B1 are special and need their iomux
recalculated. In addition, the register offset is larger than the u8
range, so change it to u32.
Signed-off-by: David Wu <david.wu@rock-chips.com>
Reviewed-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
The variable gc is assigned but never read and is redundant. Remove it.
Cleans up clang warning:
drivers/gpio/gpio-mockup.c:169:2: warning: Value stored to 'gc' is never read
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
|
|
Move the independent thermal module reset to the beginning.
Signed-off-by: Louis Yu <louis.yu@mediatek.com>
Reviewed-by: Dawei Chien <dawei.chien@mediatek.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
|
|
This patch adds support for mt2712 chip thermal calibration data
and calculation, and is compatible with the existing chips.
Signed-off-by: Louis Yu <louis.yu@mediatek.com>
Reviewed-by: Dawei Chien <dawei.chien@mediatek.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
|
|
This patch adds support for mt2712 chip to mtk_thermal,
and integrate mt2712 into the same mediatek thermal driver.
MT2712 has only 1 bank and 4 sensors.
Signed-off-by: Louis Yu <louis.yu@mediatek.com>
Reviewed-by: Dawei Chien <dawei.chien@mediatek.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
|
|
Refactor code in order to avoid identical code for different branches.
Addresses-Coverity-ID: 1248728
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Acked-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
For debugging some problems, it is useful to know the chip revision;
add a brcmf_info message logging this.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
Use 64-bit dma for hosts with CONFIG_ARCH_DMA_ADDR_T_64BIT enabled.
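The standard DMA API pattern for this looks roughly as follows (whether the
driver gates it exactly on this Kconfig symbol is as described above; the
helper itself is illustrative):

  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  static int setup_dma_mask(struct pci_dev *pdev)
  {
  #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
          if (!dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
                  return 0;
  #endif
          /* fall back to 32-bit addressing */
          return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
  }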
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
Check if the skb tracking arrays have already been allocated. This
additional check handles the case when init partially failed.
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
NULL is not a special kind of success here but an error pointer,
so it makes sense to check against NULL in qtnf_map_bar
and return an error code.
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
The Linux built-in circ_buf implementation assumes that the
circular buffer length is a power of 2. Make sure that the
rx and tx descriptor queue lengths are powers of 2.
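One way to enforce this at init time (the is_power_of_2() helper comes from
linux/log2.h; the function name and the reject-vs-round-up policy are
illustrative):

  #include <linux/errno.h>
  #include <linux/log2.h>
  #include <linux/types.h>

  static int validate_ring_sizes(u32 tx_num, u32 rx_num)
  {
          /* circ_buf index arithmetic relies on the size being 2^n */
          if (!is_power_of_2(tx_num) || !is_power_of_2(rx_num))
                  return -EINVAL;
          return 0;
  }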
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|
|
Flag -D__CHECK_ENDIAN was wrong: it should be -D__CHECK_ENDIAN__ instead.
However, this flag is now enabled by default, so it can be removed.
Signed-off-by: Sergey Matyukevich <sergey.matyukevich.os@quantenna.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
|