Age | Commit message | Author |
|
Previously this was only happening in ef10-specific code.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
_down() merely removes all our filters and VLANs; it doesn't free
efx->filter_state itself.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
EF100 doesn't need to split up large DMAs.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Since we only allocate VIs for the number of TXQs we actually need, we
cannot naively use "channel * TXQ_TYPES + txq" for the TXQ number, as
this has gaps (when efx->tx_queues_per_channel < EFX_TXQ_TYPES) and
thus overruns the driver's VI allocations, causing the firmware to
reject the MC_CMD_INIT_TXQ based on INSTANCE.
Thus, we distinguish INSTANCE (stored in tx_queue->queue) from LABEL
(tx_queue->label); the former is allocated starting from 0 in
efx_set_channels(), while the latter is simply the txq type (index in
channel->tx_queue array).
To simplify things, rather than changing tx_queues_per_channel after
setting up TXQs, make Siena always probe its HIGHPRI queues at start
of day instead of deferring that until tc mqprio enables them.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
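A minimal sketch of the two numbering schemes described above, with invented channel counts (EFX_TXQ_TYPES is used only as an illustrative constant): the naive formula leaves gaps whenever a channel uses fewer than EFX_TXQ_TYPES queues, while handing out instances sequentially does not.
  #include <stdio.h>
  #define EFX_TXQ_TYPES 4              /* max TX queue types per channel */
  int main(void)
  {
      int tx_queues_per_channel = 2;   /* e.g. EF10: checksum on/off only */
      int n_channels = 3;
      int instance = 0;                /* dense numbering, as done in efx_set_channels() */
      for (int ch = 0; ch < n_channels; ch++) {
          for (int label = 0; label < tx_queues_per_channel; label++) {
              int naive = ch * EFX_TXQ_TYPES + label;  /* gappy, overruns the VI range */
              printf("channel %d label %d -> naive %2d, instance %2d\n",
                     ch, label, naive, instance++);
          }
      }
      return 0;
  }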
|
|
While we're at it, also check them for failure.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Siena needs four TX queues (csum * highpri), EF10 needs two (csum),
and EF100 only needs one (as checksumming is controlled entirely by
the transmit descriptor). Rather than having various bits of ad-hoc
code to decide which queues to set up etc., put the knowledge of how
many TXQs a channel has in one place.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
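A hedged sketch of what "one place" can look like; the helper name and NIC enum below are hypothetical, the real driver records the value in efx->tx_queues_per_channel.
  /* Illustrative only: a single function that knows how many TXQs a channel needs. */
  enum nic_flavour { NIC_SIENA, NIC_EF10, NIC_EF100 };
  int tx_queues_per_channel(enum nic_flavour nic)
  {
      switch (nic) {
      case NIC_SIENA:
          return 4;        /* csum on/off * highpri on/off */
      case NIC_EF10:
          return 2;        /* csum on/off */
      case NIC_EF100:
      default:
          return 1;        /* checksumming decided per transmit descriptor */
      }
  }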
|
|
Instead of exposing this old module parameter on the new driver (thus
having to keep it forever after for compatibility), let's confine it
to the old one; if we find later that we need the feature, we ought
to support it properly, with ethtool set-channels.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
EF100 only supports MSI-X, so there's no need for the new driver to
expose this old module parameter.
Since it's now visible to the linker, we have to rename it internally
to efx_interrupt_mode to avoid symbol collisions in non-modular
builds.
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
All NICs supported by this driver are capable of MSI-X interrupts (only
Falcon A1 wasn't, and that's now hived off into its own driver), so no
need for a nic-type parameter. Besides, the code that checked it was
buggy anyway (the following assignment that checked min_interrupt_mode
overrode it).
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Unprivileged functions (such as VFs) may set their MTU by use of the
'control' field of MC_CMD_SET_MAC_EXT, as used in efx_mcdi_set_mtu().
When efx_ef10_mac_reconfigure() is called from efx_change_mtu() and the
NIC supports the above (the SET_MAC_ENHANCED capability), use that
rather than efx_mcdi_set_mac().
Signed-off-by: Edward Cree <ecree@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The variable act is initialized with a value that is never read; it
is updated later with a new value. The initialization is therefore
redundant and can be removed.
Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
I would like Claudiu to become co-maintainer of the Cadence macb
driver. He's already participating in lots of reviews and enhancements
to this driver and knows the different versions of this controller.
Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Alex Elder says:
====================
net: ipa: simplify endpoint programming
Add tests to functions so they don't update undefined endpoint
registers, rather than requiring the caller to avoid calling them.
Move the call to a workaround function required when suspending
inside the function that puts an endpoint into suspend mode. This
requires moving a few functions (which are otherwise unchanged).
Then simplify ipa_endpoint_program() to call essentially all
endpoint register update functions unconditionally.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Have functions that write endpoint configuration registers return
immediately if the register is not valid for the endpoint's direction
of transfer. This allows most of the calls in ipa_endpoint_program()
to be made unconditionally. Reorder the register writes to match
the order of their definition (based on offset).
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
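The pattern being described, as a self-contained sketch; the struct and function names are made up, but the idea matches the text: the register writer itself checks the endpoint direction, so the programming path can call it unconditionally.
  #include <stdbool.h>
  #include <stdio.h>
  struct endpoint_sketch {            /* hypothetical stand-in for struct ipa_endpoint */
      bool toward_ipa;                /* true for TX (AP -> IPA) endpoints */
  };
  void program_tx_only_register(struct endpoint_sketch *ep)
  {
      if (!ep->toward_ipa)            /* register is not valid for RX endpoints */
          return;                     /* caller no longer needs to know that */
      printf("writing a TX-only endpoint register\n");   /* stands in for the iowrite */
  }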
|
|
IPA version 4.0+ does not support endpoint suspend. Put a test at
the top of ipa_endpoint_program_suspend() that returns immediately
if suspend is not supported rather than making that check in the caller.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
IPA version 3.5.1 has a hardware quirk that requires special
handling if an RX endpoint is suspended while aggregation is active.
This handling is implemented by ipa_endpoint_suspend_aggr().
Have ipa_endpoint_program_suspend() be responsible for calling
ipa_endpoint_suspend_aggr() if suspend mode is being enabled on
an endpoint. If the endpoint does not support aggregation, or if
aggregation isn't active, this call will continue to have no effect.
Move the definition of ipa_endpoint_suspend_aggr() up in the file so
its definition precedes the new earlier reference to it. This
requires ipa_endpoint_aggr_active() and ipa_endpoint_force_close()
to be moved as well.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
IPA version 4.2 has a hardware quirk that affects endpoint delay
mode, so it isn't used there. Isolate the test that avoids using
delay mode for that version inside ipa_endpoint_program_delay(),
rather than making that check in the caller.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The number of ports is incorrectly set to the maximum available for a DSA
switch. Even if the extra ports are not used, this causes some functions
to be called later, like port_disable() and port_stp_state_set(). If the
driver doesn't check the port index, it will end up modifying unknown
registers.
Fixes: b987e98e50ab ("dsa: add DSA switch driver for Microchip KSZ9477")
Signed-off-by: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
In certain configurations without power management support, the
following warnings happen:
drivers/net/ethernet/mellanox/mlx4/main.c:4388:12:
warning: 'mlx4_resume' defined but not used [-Wunused-function]
4388 | static int mlx4_resume(struct device *dev_d)
| ^~~~~~~~~~~
drivers/net/ethernet/mellanox/mlx4/main.c:4373:12: warning:
'mlx4_suspend' defined but not used [-Wunused-function]
4373 | static int mlx4_suspend(struct device *dev_d)
| ^~~~~~~~~~~~
Mark these functions as __maybe_unused to make it clear to the
compiler that this is going to happen based on the configuration,
which is the standard for these types of functions.
Fixes: 0e3e206a3e12 ("mlx4: use generic power management")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
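For reference, the annotation itself is tiny; in the kernel __maybe_unused comes from <linux/compiler_attributes.h>, and it is spelled out here only so the sketch stands alone (the function name is changed to mark it as an example).
  #define __maybe_unused __attribute__((__unused__))
  struct device;                       /* opaque here */
  /* Only referenced from the PM ops when CONFIG_PM is set, so the compiler
   * may otherwise consider it unused; the attribute suppresses the warning. */
  static int __maybe_unused mlx4_suspend_sketch(struct device *dev_d)
  {
      return 0;
  }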
|
|
In certain configurations without power management support, gcc reports
the following warning:
drivers/net/ethernet/micrel/ksz884x.c:7182:12: warning:
'pcidev_suspend' defined but not used [-Wunused-function]
7182 | static int pcidev_suspend(struct device *dev_d)
| ^~~~~~~~~~~~~~
Mark pcidev_suspend() as __maybe_unused to make this clear to the compiler.
Fixes: 64120615d140 ("ksz884x: use generic power management")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Claudiu Beznea says:
====================
net: macb: few code cleanups
Patches in this series clean up the macb code a bit.
Changes in v2:
- in patch 2/4 use hweight32() instead of hweight_long()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove the is_udp variable, which is used in only one place, and use the
ip_hdr(skb)->protocol == IPPROTO_UDP check directly instead.
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Do not initialize the queue variable; it is already initialized in the for loops.
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Use hweight32() to count set bits in queue_mask.
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
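hweight32() is the kernel's 32-bit population count; a user-space equivalent of what the commit switches to, with an invented mask value:
  #include <stdio.h>
  static unsigned int popcount32(unsigned int x)   /* hweight32() counterpart */
  {
      return (unsigned int)__builtin_popcount(x);
  }
  int main(void)
  {
      unsigned int queue_mask = 0x0000000f;                 /* queues 0..3 present */
      printf("num_queues = %u\n", popcount32(queue_mask));  /* prints 4 */
      return 0;
  }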
|
|
Bit 0 of queue_mask is set at the beginning of the
macb_probe_queues() function. Do not set it again after reading
DCFG6; instead use the "|=" operator.
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
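A tiny sketch of the idea; the DCFG6 value is a made-up stand-in for whatever the hardware reports.
  unsigned int probe_queue_mask(unsigned int dcfg6_queues)
  {
      unsigned int queue_mask = 0x1;   /* bit 0: queue 0 always exists */
      queue_mask |= dcfg6_queues;      /* OR in the rest, don't rewrite bit 0 */
      return queue_mask;
  }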
|
|
Today xenbus_map_ring_valloc() can return either a negative errno
value (-ENOMEM or -EINVAL) or a grant status value. This is a mess as
e.g. -ENOMEM and GNTST_eagain have the same numeric value.
Fix that by turning all grant mapping errors into -ENOENT. This is
no problem as all callers of xenbus_map_ring_valloc() only use the
return value to print an error message, and in case of mapping errors
the grant status value has already been printed by __xenbus_map_ring()
before.
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20200701121638.19840-3-jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
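The normalisation amounts to collapsing any non-zero grant status into one errno; a hedged sketch with an illustrative helper (the real logic sits inside drivers/xen/xenbus):
  #include <errno.h>
  #define GNTST_okay 0                  /* Xen grant status: 0 on success, negative on error */
  int map_status_to_errno(int gnt_status)
  {
      if (gnt_status != GNTST_okay)
          return -ENOENT;               /* details already logged by __xenbus_map_ring() */
      return 0;
  }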
|
|
xenbus_map_ring_valloc() and its sub-functions are putting quite large
structs and arrays on the stack. This is problematic at runtime, but
might also result in build failures (e.g. with clang due to the option
-Werror,-Wframe-larger-than=... used).
Fix that by moving most of the data from the stack into a dynamically
allocated struct. Performance is no issue here, as
xenbus_map_ring_valloc() is used only when adding a new PV device to
a backend driver.
While at it move some duplicated code from pv/hvm specific mapping
functions to the single caller.
Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Link: https://lore.kernel.org/r/20200701121638.19840-2-jgross@suse.com
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
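The shape of the fix, as a hedged sketch; the struct and array sizes are invented, and the kernel code uses kzalloc()/kfree() where this stand-alone version uses calloc()/free().
  #include <stdlib.h>
  struct map_ring_info {                /* hypothetical bundle of the large locals */
      unsigned long handles[16];
      unsigned long addrs[16];
      char payload[1024];
  };
  int map_ring_sketch(void)
  {
      /* heap allocation instead of putting ~1 KiB+ on the stack of a deep call chain */
      struct map_ring_info *info = calloc(1, sizeof(*info));
      if (!info)
          return -1;
      /* ... fill and use info->handles / info->addrs ... */
      free(info);
      return 0;
  }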
|
|
Horatiu Vultur says:
====================
bridge: mrp: Add support for getting the status
This patch series extends the MRP netlink interface to allow the userspace
daemon to get the status of the MRP instances in the kernel.
v3:
- remove misleading comment
- fix to use correctly the RCU
v2:
- fix sparse warnings
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
This patch extends the function br_fill_ifinfo to also return the MRP
status for each instance on a bridge. It also adds a new filter,
RTEXT_FILTER_MRP, to return the MRP status only when this is set, so as
not to interfere with the vlans. The MRP status is returned only on
bridge interfaces.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add the function br_mrp_fill_info which populates the MRP attributes
regarding the status of each MRP instance.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Add MRP attribute IFLA_BRIDGE_MRP_INFO to allow the userspace to get the
current state of the MRP instances. This is a nested attribute that
contains other attributes like ring ID, index of the primary and secondary
ports, priority, ring state and ring role.
Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
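Roughly how such a nested attribute is filled with the standard netlink helpers. Only IFLA_BRIDGE_MRP_INFO is taken from the text above; the inner attribute IDs and the function name are placeholders for illustration.
  #include <linux/errno.h>
  #include <linux/if_bridge.h>
  #include <net/netlink.h>
  static int mrp_fill_info_sketch(struct sk_buff *skb, u32 ring_id, u32 ring_state)
  {
      struct nlattr *info = nla_nest_start(skb, IFLA_BRIDGE_MRP_INFO);
      if (!info)
          return -EMSGSIZE;
      if (nla_put_u32(skb, 1 /* placeholder: ring id attr */, ring_id) ||
          nla_put_u32(skb, 2 /* placeholder: ring state attr */, ring_state)) {
          nla_nest_cancel(skb, info);
          return -EMSGSIZE;
      }
      nla_nest_end(skb, info);
      return 0;
  }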
|
|
This essentially reverts commit 721230326891 ("tcp: md5: reject TCP_MD5SIG
or TCP_MD5SIG_EXT on established sockets")
Mathieu reported that many vendors' BGP implementations can
actually switch TCP MD5 on established flows.
Quoting Mathieu:
Here is a list of a few network vendors along with their behavior
with respect to TCP MD5:
- Cisco: Allows for password to be changed, but within the hold-down
timer (~180 seconds).
- Juniper: When password is initially set on active connection it will
reset, but after that any subsequent password changes no network
resets.
- Nokia: No notes on if they flap the tcp connection or not.
- Ericsson/RedBack: Allows for 2 password (old/new) to co-exist until
both sides are ok with new passwords.
- Meta-Switch: Expects the password to be set before a connection is
attempted, but no further info on whether they reset the TCP
connection on a change.
- Avaya: Disable the neighbor, then set password, then re-enable.
- Zebos: Would normally allow the change when socket connected.
We can revert my prior change because commit 9424e2e7ad93 ("tcp: md5: fix potential
overestimation of TCP option space") removed the leak of 4 kernel bytes to
the wire that was the main reason for my patch.
While doing my investigations, I found a bug when an MD5 key is changed,
leading to these commits that stable teams want to consider before
backporting this revert:
Commit 6a2febec338d ("tcp: md5: add missing memory barriers in tcp_md5_do_add()/tcp_md5_hash_key()")
Commit e6ced831ef11 ("tcp: md5: refine tcp_md5_do_add()/tcp_md5_hash_key() barriers")
Fixes: 721230326891 ("tcp: md5: reject TCP_MD5SIG or TCP_MD5SIG_EXT on established sockets")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
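For context, the key is set with the same setsockopt() before or (again, after this revert) after the connection is established; a rough user-space sketch using the uapi struct from <linux/tcp.h>, with error handling trimmed and the helper name invented:
  #include <string.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <linux/tcp.h>       /* TCP_MD5SIG, struct tcp_md5sig */
  int set_md5_key(int sk, const struct sockaddr_storage *peer,
                  const char *key, unsigned short keylen)
  {
      struct tcp_md5sig md5;
      memset(&md5, 0, sizeof(md5));
      memcpy(&md5.tcpm_addr, peer, sizeof(*peer));
      md5.tcpm_keylen = keylen;
      memcpy(md5.tcpm_key, key, keylen);
      /* accepted on an established socket again after this revert */
      return setsockopt(sk, IPPROTO_TCP, TCP_MD5SIG, &md5, sizeof(md5));
  }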
|
|
There is a race condition where ib_nl_make_request() inserts the request
data into the linked list but the timer in ib_nl_request_timeout() can see
it and destroy it before ib_nl_send_msg() is done touching it. This could
happen, for instance, if there is a long delay allocating memory during
nlmsg_new().
This causes a use-after-free in the send_mad() thread:
[<ffffffffa02f43cb>] ? ib_pack+0x17b/0x240 [ib_core]
[<ffffffffa032aef1>] ib_sa_path_rec_get+0x181/0x200 [ib_sa]
[<ffffffffa0379db0>] rdma_resolve_route+0x3c0/0x8d0 [rdma_cm]
[<ffffffffa0374450>] ? cma_bind_port+0xa0/0xa0 [rdma_cm]
[<ffffffffa040f850>] ? rds_rdma_cm_event_handler_cmn+0x850/0x850 [rds_rdma]
[<ffffffffa040f22c>] rds_rdma_cm_event_handler_cmn+0x22c/0x850 [rds_rdma]
[<ffffffffa040f860>] rds_rdma_cm_event_handler+0x10/0x20 [rds_rdma]
[<ffffffffa037778e>] addr_handler+0x9e/0x140 [rdma_cm]
[<ffffffffa026cdb4>] process_req+0x134/0x190 [ib_addr]
[<ffffffff810a02f9>] process_one_work+0x169/0x4a0
[<ffffffff810a0b2b>] worker_thread+0x5b/0x560
[<ffffffff810a0ad0>] ? flush_delayed_work+0x50/0x50
[<ffffffff810a68fb>] kthread+0xcb/0xf0
[<ffffffff816ec49a>] ? __schedule+0x24a/0x810
[<ffffffff816ec49a>] ? __schedule+0x24a/0x810
[<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
[<ffffffff816f25a7>] ret_from_fork+0x47/0x90
[<ffffffff810a6830>] ? kthread_create_on_node+0x180/0x180
The ownership rule is that once the request is on the list, ownership
transfers to the list and the local thread can't touch it any more, just
like for the normal MAD case in send_mad().
Thus, instead of adding before send and then trying to delete after on
errors, move the entire thing under the spinlock so that the send and
update of the lists are atomic to the concurrent threads. Lightly
reorganize things so spinlock-safe memory allocations are done in the
final NL send path and the rest of the setup work is done before and
outside the lock.
Fixes: 3ebd2fd0d011 ("IB/sa: Put netlink request into the request list before sending")
Link: https://lore.kernel.org/r/1592964789-14533-1-git-send-email-divya.indi@oracle.com
Signed-off-by: Divya Indi <divya.indi@oracle.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
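A hedged kernel-style sketch of the resulting ordering, with made-up types: everything that can sleep happens before the lock, and the list insertion plus the actual send happen under one spinlock so the timeout path cannot free the request mid-send.
  #include <linux/list.h>
  #include <linux/spinlock.h>
  struct nl_req_sketch {                     /* hypothetical request */
      struct list_head list;
  };
  static LIST_HEAD(req_list);
  static DEFINE_SPINLOCK(req_lock);
  static int queue_and_send(struct nl_req_sketch *req,
                            int (*send_msg)(void *), void *msg)
  {
      unsigned long flags;
      int ret;
      /* msg was built (with any sleeping allocation) before this point */
      spin_lock_irqsave(&req_lock, flags);
      list_add_tail(&req->list, &req_list);  /* ownership now belongs to the list */
      ret = send_msg(msg);                   /* the timer can't free req under us */
      if (ret)
          list_del(&req->list);              /* failure: still ours, so unwind */
      spin_unlock_irqrestore(&req_lock, flags);
      return ret;
  }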
|
|
Fix sparse build warning:
block/bio-integrity.c:27:6: warning:
symbol '__bio_integrity_free' was not declared. Should it be static?
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
Pull NVMe fixes from Christoph.
* 'nvme-5.8' of git://git.infradead.org/nvme:
nvme: fix a crash in nvme_mpath_add_disk
nvme: fix identify error status silent ignore
|
|
The workqueue link_wq should only be destroyed when the hfi1 driver is
unloaded, not when the device is shut down.
Fixes: 71d47008ca1b ("IB/hfi1: Create workqueue for link events")
Link: https://lore.kernel.org/r/20200623204053.107638.70315.stgit@awfm-01.aw.intel.com
Cc: <stable@vger.kernel.org>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
The workqueue hfi1_wq is destroyed in function shutdown_device(), which is
called by either shutdown_one() or remove_one(). The function
shutdown_one() is called when the kernel is rebooted while remove_one() is
called when the hfi1 driver is unloaded. When the kernel is rebooted,
hfi1_wq is destroyed while all qps are still active, leading to a kernel
crash:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000102
IP: [<ffffffff94cb7b02>] __queue_work+0x32/0x3e0
PGD 0
Oops: 0000 [#1] SMP
Modules linked in: dm_round_robin nvme_rdma(OE) nvme_fabrics(OE) nvme_core(OE) ib_isert iscsi_target_mod target_core_mod ib_ucm mlx4_ib iTCO_wdt iTCO_vendor_support mxm_wmi sb_edac intel_powerclamp coretemp intel_rapl iosf_mbi kvm rpcrdma sunrpc irqbypass crc32_pclmul ghash_clmulni_intel rdma_ucm aesni_intel ib_uverbs lrw gf128mul opa_vnic glue_helper ablk_helper ib_iser cryptd ib_umad rdma_cm iw_cm ses enclosure libiscsi scsi_transport_sas pcspkr joydev ib_ipoib(OE) scsi_transport_iscsi ib_cm sg ipmi_ssif mei_me lpc_ich i2c_i801 mei ioatdma ipmi_si dm_multipath ipmi_devintf ipmi_msghandler wmi acpi_pad acpi_power_meter hangcheck_timer ip_tables ext4 mbcache jbd2 mlx4_en sd_mod crc_t10dif crct10dif_generic mgag200 drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm hfi1(OE)
crct10dif_pclmul crct10dif_common crc32c_intel drm ahci mlx4_core libahci rdmavt(OE) igb megaraid_sas ib_core libata drm_panel_orientation_quirks ptp pps_core devlink dca i2c_algo_bit dm_mirror dm_region_hash dm_log dm_mod
CPU: 19 PID: 0 Comm: swapper/19 Kdump: loaded Tainted: G OE ------------ 3.10.0-957.el7.x86_64 #1
Hardware name: Phegda X2226A/S2600CW, BIOS SE5C610.86B.01.01.0024.021320181901 02/13/2018
task: ffff8a799ba0d140 ti: ffff8a799bad8000 task.ti: ffff8a799bad8000
RIP: 0010:[<ffffffff94cb7b02>] [<ffffffff94cb7b02>] __queue_work+0x32/0x3e0
RSP: 0018:ffff8a90dde43d80 EFLAGS: 00010046
RAX: 0000000000000082 RBX: 0000000000000086 RCX: 0000000000000000
RDX: ffff8a90b924fcb8 RSI: 0000000000000000 RDI: 000000000000001b
RBP: ffff8a90dde43db8 R08: ffff8a799ba0d6d8 R09: ffff8a90dde53900
R10: 0000000000000002 R11: ffff8a90dde43de8 R12: ffff8a90b924fcb8
R13: 000000000000001b R14: 0000000000000000 R15: ffff8a90d2890000
FS: 0000000000000000(0000) GS:ffff8a90dde40000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000102 CR3: 0000001a70410000 CR4: 00000000001607e0
Call Trace:
[<ffffffff94cb8105>] queue_work_on+0x45/0x50
[<ffffffffc03f781e>] _hfi1_schedule_send+0x6e/0xc0 [hfi1]
[<ffffffffc03f78a2>] hfi1_schedule_send+0x32/0x70 [hfi1]
[<ffffffffc02cf2d9>] rvt_rc_timeout+0xe9/0x130 [rdmavt]
[<ffffffff94ce563a>] ? trigger_load_balance+0x6a/0x280
[<ffffffffc02cf1f0>] ? rvt_free_qpn+0x40/0x40 [rdmavt]
[<ffffffff94ca7f58>] call_timer_fn+0x38/0x110
[<ffffffffc02cf1f0>] ? rvt_free_qpn+0x40/0x40 [rdmavt]
[<ffffffff94caa3bd>] run_timer_softirq+0x24d/0x300
[<ffffffff94ca0f05>] __do_softirq+0xf5/0x280
[<ffffffff9537832c>] call_softirq+0x1c/0x30
[<ffffffff94c2e675>] do_softirq+0x65/0xa0
[<ffffffff94ca1285>] irq_exit+0x105/0x110
[<ffffffff953796c8>] smp_apic_timer_interrupt+0x48/0x60
[<ffffffff95375df2>] apic_timer_interrupt+0x162/0x170
<EOI>
[<ffffffff951adfb7>] ? cpuidle_enter_state+0x57/0xd0
[<ffffffff951ae10e>] cpuidle_idle_call+0xde/0x230
[<ffffffff94c366de>] arch_cpu_idle+0xe/0xc0
[<ffffffff94cfc3ba>] cpu_startup_entry+0x14a/0x1e0
[<ffffffff94c57db7>] start_secondary+0x1f7/0x270
[<ffffffff94c000d5>] start_cpu+0x5/0x14
The solution is to destroy the workqueue only when the hfi1 driver is
unloaded, not when the device is shut down. In addition, when the device
is shut down, no more work should be scheduled on the workqueues and the
workqueues are flushed.
Fixes: 8d3e71136a08 ("IB/{hfi1, qib}: Add handling of kernel restart")
Link: https://lore.kernel.org/r/20200623204047.107638.77646.stgit@awfm-01.aw.intel.com
Cc: <stable@vger.kernel.org>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
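The resulting lifetime rule, as a hedged sketch with invented function names (the real driver keeps the workqueue in its per-device structures):
  #include <linux/errno.h>
  #include <linux/workqueue.h>
  static struct workqueue_struct *hfi1_wq_sketch;
  static int driver_probe_sketch(void)
  {
      hfi1_wq_sketch = alloc_workqueue("hfi1_sketch", 0, 0);
      return hfi1_wq_sketch ? 0 : -ENOMEM;
  }
  static void device_shutdown_sketch(void)
  {
      /* reboot path: stop scheduling new work and drain, but keep the wq alive */
      flush_workqueue(hfi1_wq_sketch);
  }
  static void driver_remove_sketch(void)
  {
      /* module unload: only now is it safe to tear the workqueue down */
      destroy_workqueue(hfi1_wq_sketch);
  }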
|
|
Acer C720 running Linux v5.3 reports this in klog:
tpm_tis: 1.2 TPM (device-id 0xB, rev-id 16)
tpm tpm0: tpm_try_transmit: send(): error -5
tpm tpm0: A TPM error (-5) occurred attempting to determine the timeouts
tpm_tis tpm_tis: Could not get TPM timeouts and durations
tpm_tis 00:08: 1.2 TPM (device-id 0xB, rev-id 16)
tpm tpm0: tpm_try_transmit: send(): error -5
tpm tpm0: A TPM error (-5) occurred attempting to determine the timeouts
tpm_tis 00:08: Could not get TPM timeouts and durations
ima: No TPM chip found, activating TPM-bypass!
tpm_inf_pnp 00:08: Found TPM with ID IFX0102
% git --no-pager grep IFX0102 drivers/char/tpm
drivers/char/tpm/tpm_infineon.c: {"IFX0102", 0},
drivers/char/tpm/tpm_tis.c: {"IFX0102", 0}, /* Infineon */
Obviously IFX0102 was added to the HID table for the TCG TIS driver by
mistake.
Fixes: 93e1b7d42e1e ("[PATCH] tpm: add HID module parameter")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=203877
Cc: stable@vger.kernel.org
Cc: Kylene Jo Hall <kjhall@us.ibm.com>
Reported-by: Ferry Toth <ferry.toth@elsinga.info>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
|
|
On a Chromebook I'm working on I noticed a big (~1 second) delay
during bootup where nothing was happening. Right around this big
delay there were messages about the TPM:
[ 2.311352] tpm_tis_spi spi0.0: TPM ready IRQ confirmed on attempt 2
[ 3.332790] tpm_tis_spi spi0.0: Cr50 firmware version: ...
I put a few printouts in and saw that tpm_tis_spi_init() (specifically
tpm_chip_register() in that function) was taking the lion's share of
this time, though ~115 ms of the time was in cr50_print_fw_version().
Let's make a one-line change to prefer async probe for tpm_tis_spi.
There's no reason we need to block other drivers from probing while we
load.
NOTES:
* It's possible that other hardware runs through the init sequence
faster than Cr50 and this isn't such a big problem for them.
However, even if they are faster they are still doing _some_
transfers over a SPI bus so this should benefit everyone even if to
a lesser extent.
* It's possible that there are extra delays in the code that could be
optimized out. I didn't dig since once I enabled async probe they
no longer impacted me.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
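The one-line change being described is the probe_type hint on the driver; a hedged sketch of what that looks like for an SPI driver (names suffixed to mark this as an example, not the actual tpm_tis_spi code):
  #include <linux/module.h>
  #include <linux/spi/spi.h>
  static int tpm_tis_spi_probe_sketch(struct spi_device *spi)
  {
      return 0;
  }
  static struct spi_driver tpm_tis_spi_sketch_driver = {
      .probe = tpm_tis_spi_probe_sketch,
      .driver = {
          .name = "tpm_tis_spi_sketch",
          /* let this driver probe in parallel with the rest of boot */
          .probe_type = PROBE_PREFER_ASYNCHRONOUS,
      },
  };
  module_spi_driver(tpm_tis_spi_sketch_driver);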
|
|
The tpm2_get_cc_attrs_tbl() call will result in TPM commands being issued,
which will need the use of the internal command/response buffer. But,
we're issuing this *before* we've waited to make sure that buffer is
allocated.
This can result in intermittent failures to probe if the hypervisor / TPM
implementation doesn't respond quickly enough. I find it fails almost
every time with an 8 vcpu guest under KVM with software emulated TPM.
To fix it, just move the tpm2_get_cc_attrs_tbl() call after the
existing code to wait for initialization, which will ensure the buffer
is allocated.
Fixes: 18b3670d79ae9 ("tpm: ibmvtpm: Add support for TPM2")
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
|
|
Trivial fix: the spelling of "drescription" is incorrect in a
function comment.
Fix this.
Signed-off-by: Binbin Zhou <zhoubinbin@uniontech.com>
Acked-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
|
|
Found by smatch:
drivers/char/tpm/tpm_tis_core.c:1088 tpm_tis_core_init() warn:
variable dereferenced before check 'chip->ops' (see line 979)
'chip->ops' is assigned at the beginning of the function in
tpmm_chip_alloc->tpm_chip_alloc
and is used before the first possible goto to the error path.
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
|
|
During flow control we are just reading from the TPM, yet our spi_xfer
has the tx_buf and rx_buf both non-NULL, which means we're requesting a
full duplex transfer.
SPI is always somewhat of a full duplex protocol anyway and in theory
the other side shouldn't really be looking at what we're sending it
during flow control, but it's still a bit ugly to be sending some
"random" data when we shouldn't.
The default tpm_tis_spi_flow_control() tries to address this by
setting 'phy->iobuf[0] = 0'. This partially avoids the problem of
sending "random" data, but since our tx_buf and rx_buf both point to
the same place I believe there is the potential of us sending the
TPM's previous byte back to it if we hit the retry loop.
Another flow control implementation, cr50_spi_flow_control(), doesn't
address this at all.
Let's clean this up and just make the tx_buf NULL before we call
flow_control(). Not only does this ensure that we're not sending any
"random" bytes but it also possibly could make the SPI controller
behave in a slightly more optimal way.
NOTE: no actual observed problems are fixed by this patch--it was
just made based on code inspection.
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
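In SPI core terms the change makes the flow-control transfer rx-only; a hedged sketch of such a transfer (the helper name is invented):
  #include <linux/spi/spi.h>
  /* Read one status byte without driving "random" data on MOSI. */
  static int read_flow_control_byte(struct spi_device *spi, u8 *out)
  {
      struct spi_transfer xfer = {
          .tx_buf = NULL,         /* rx-only: the controller shifts out its idle filler */
          .rx_buf = out,
          .len    = 1,
      };
      return spi_sync_transfer(spi, &xfer, 1);
  }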
|
|
It has been reported that some TIS based TPMs are giving unexpected
errors when using the O_NONBLOCK path of the TPM device. The problem
is that some TPMs don't like it when you get and then relinquish a
locality (as the tpm_try_get_ops()/tpm_put_ops() pair does) without
sending a command. This currently happens all the time in the
O_NONBLOCK write path. Fix this by moving the tpm_try_get_ops()
further down the code to after the O_NONBLOCK determination is made.
This is safe because the priv->buffer_mutex still protects the priv
state being modified.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=206275
Fixes: d23d12484307 ("tpm: fix invalid locking in NONBLOCKING mode")
Reported-by: Mario Limonciello <Mario.Limonciello@dell.com>
Tested-by: Alex Guzman <alex@guzman.io>
Cc: stable@vger.kernel.org
Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
|
|
Legacy IPoIB sets the IB_QP_CREATE_NETIF_QP QP create flag, and because
mlx5 doesn't use this flag, process_create_flags() failed to create
IPoIB QPs.
Fixes: 2978975ce7f1 ("RDMA/mlx5: Process create QP flags in one place")
Link: https://lore.kernel.org/r/20200630122147.445847-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
|
|
Clang warns:
drivers/infiniband/hw/hfi1/qp.c:198:9: warning: implicit conversion from enumeration type 'enum opa_mtu' to different enumeration type 'enum ib_mtu' [-Wenum-conversion]
mtu = OPA_MTU_8192;
~ ^~~~~~~~~~~~
enum opa_mtu extends enum ib_mtu. There are typically two ways to deal
with this:
* Remove the expected types and just use 'int' for all parameters and
types.
* Explicitly cast the enums between each other.
This driver chooses to do the latter, so do the same thing here.
Fixes: 6d72344cf6c4 ("IB/ipoib: Increase ipoib Datagram mode MTU's upper limit")
Link: https://lore.kernel.org/r/20200623005224.492239-1-natechancellor@gmail.com
Link: https://github.com/ClangBuiltLinux/linux/issues/1062
Link: https://lore.kernel.org/linux-rdma/20200527040350.GA3118979@ubuntu-s3-xlarge-x86/
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
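A self-contained illustration of the explicit-cast approach; the enum members mirror the IB/OPA naming, but the snippet is illustrative rather than the driver code.
  #include <stdio.h>
  enum ib_mtu  { IB_MTU_256 = 1, IB_MTU_512, IB_MTU_1024, IB_MTU_2048, IB_MTU_4096 };
  enum opa_mtu { OPA_MTU_8192 = 6, OPA_MTU_10240 };   /* extends enum ib_mtu */
  int main(void)
  {
      enum ib_mtu mtu;
      /* the explicit cast documents the intentional mixing and silences -Wenum-conversion */
      mtu = (enum ib_mtu)OPA_MTU_8192;
      printf("mtu = %d\n", (int)mtu);
      return 0;
  }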
|
|
It is common for a networking test to create its own netns and apply
its own settings under this new netns (e.g. changing a tcp sysctl). If
the test forgets to restore the original netns, it can affect the
results of other tests.
This patch saves the original netns at the beginning and then restores it
after every test. Since the restoring "setns()" is not expensive, it is
done for all tests without tracking whether a test has created a new
netns or not.
The new restore_netns() could also be done in test__end_subtest() such
that each subtest will get an automatic netns reset. However,
the individual test would lose the flexibility to have total control
over the netns for its own subtests. In some cases, forcing a test to do
an unnecessary netns re-configuration for each subtest is time consuming.
e.g. In my vm, forcing netns re-configure on each subtest in sk_assign.c
increased the runtime from 1s to 8s. On top of that, test_progs.c
is also doing per-test (instead of per-subtest) cleanup for cgroup.
Thus, this patch also does per-test restore_netns(). The only existing
per-subtest cleanup is reset_affinity() and no test is depending on this.
Thus, it is removed from test__end_subtest() to give a consistent
expectation to the individual tests. test_progs.c only ensures
any affinity/netns/cgroup change made by an earlier test does not
affect the following tests.
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20200702004858.2103728-1-kafai@fb.com
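The mechanism itself is small; a hedged user-space sketch of the save/restore (helper names are illustrative, not necessarily those used in test_progs.c):
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  static int saved_netns_fd = -1;
  int save_netns(void)
  {
      saved_netns_fd = open("/proc/self/ns/net", O_RDONLY);
      return saved_netns_fd < 0 ? -1 : 0;
  }
  int restore_netns(void)
  {
      /* cheap enough to run after every test, whether or not it changed netns */
      return setns(saved_netns_fd, CLONE_NEWNET);
  }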
|
|
This patch makes a few changes to network_helpers.c:
1) Enforce SO_RCVTIMEO and SO_SNDTIMEO
This patch enforces timeout to the network fds through setsockopt
SO_RCVTIMEO and SO_SNDTIMEO.
It will remove the need for SOCK_NONBLOCK, which requires more demanding
timeout logic with epoll/select, e.g. epoll_create, epoll_ctl, and
then epoll_wait for the timeout.
That removes the need for connect_wait() from the
cgroup_skb_sk_lookup.c. The needed change is made in
cgroup_skb_sk_lookup.c.
2) start_server():
Add optional addr_str and port to start_server().
That removes the need for start_server_with_port(). The caller
can pass addr_str==NULL and/or port==0.
I have a future tcp-hdr-opt test that will pass a non-NULL addr_str
and it is in general useful for other future tests.
"int timeout_ms" is also added to control the timeout
on the "accept(listen_fd)".
3) connect_to_fd(): Fully use the server_fd.
The server sock address has already been obtained from
getsockname(server_fd). The sockaddr includes the family,
so the "int family" arg is redundant.
Since the server address is obtained from server_fd, there
is little reason not to get the server's socket type from the
server_fd also. getsockopt(server_fd) can be used to do that,
so "int type" arg is also removed.
"int timeout_ms" is added.
4) connect_fd_to_fd():
"int timeout_ms" is added.
Some code is also refactored to connect_fd_to_addr() which is
shared with connect_to_fd().
5) Preserve errno:
Some callers need to check errno, e.g. cgroup_skb_sk_lookup.c.
Make changes to do it more consistently in save_errno_close()
and log_err().
Signed-off-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200702004852.2103003-1-kafai@fb.com
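The first change (enforcing SO_RCVTIMEO/SO_SNDTIMEO) boils down to two setsockopt() calls; a minimal sketch with an illustrative helper name:
  #include <sys/socket.h>
  #include <sys/time.h>
  int set_io_timeouts(int fd, int timeout_ms)
  {
      struct timeval tv = {
          .tv_sec  = timeout_ms / 1000,
          .tv_usec = (timeout_ms % 1000) * 1000,
      };
      if (setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)))
          return -1;
      return setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv));
  }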
|
|
When building very large kernels, the logic that emits replacement
sequences for alternatives fails when relative branches are present
in the code that is emitted into the .altinstr_replacement section
and patched in at the original site and fixed up. The reason is that
the linker will insert veneers if relative branches go out of range,
and due to the relative distance of the .altinstr_replacement from
the .text section where its branch targets usually live, veneers
may be emitted at the end of the .altinstr_replacement section, with
the relative branches in the sequence pointed at the veneers instead
of the actual target.
The alternatives patching logic will attempt to fix up the branch to
point to its original target, which will be the veneer in this case,
but given that the patch site is likely to be far away as well, it
will be out of range and so patching will fail. There are other cases
where these veneers are problematic, e.g., when the target of the
branch is in .text while the patch site is in .init.text, in which
case putting the replacement sequence inside .text may not help either.
So let's use subsections to emit the replacement code as closely as
possible to the patch site, to ensure that veneers are only likely to
be emitted if they are required at the patch site as well, in which
case they will be in range for the replacement sequence both before
and after it is transported to the patch site.
This will prevent alternative sequences in non-init code from being
released from memory after boot, but this is tolerable given that the
entire section is only 512 KB on an allyesconfig build (which weighs in
at 500+ MB for the entire Image). Also, note that modules today carry
the replacement sequences in non-init sections as well, and any of
those that target init code will be emitted into init sections after
this change.
This fixes an early crash when booting an allyesconfig kernel on a
system where any of the alternatives sequences containing relative
branches are activated at boot (e.g., ARM64_HAS_PAN on TX2).
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Link: https://lore.kernel.org/r/20200630081921.13443-1-ardb@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
|