Age | Commit message | Author |
|
Merge series from Abel Vesa <abel.vesa@linaro.org>:
This patchset adds regulator support for the new Qualcomm PM8550 PMIC.
|
|
Free 'info' upon remapping error to avoid a memory leak.
Fixes: e644f7d62894 ("[MTD] MAPS: Merge Lubbock and Mainstone drivers into common PXA2xx driver")
Signed-off-by: Zheng Yongjun <zhengyongjun3@huawei.com>
[<miquel.raynal@bootlin.com>: Reword the commit log]
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20221119073307.22929-1-zhengyongjun3@huawei.com
|
|
del_mtd_device() calls of_node_put() on mtd_get_of_node(mtd), which
is mtd->dev.of_node. However, memset(&mtd->dev, 0, ...) is called before
of_node_put(). As a result, of_node_put() effectively does nothing in
del_mtd_device(), which causes the refcount leak.
del_mtd_device()
  memset(&mtd->dev, 0, sizeof(mtd->dev))  # clear mtd->dev
  of_node_put()
    mtd_get_of_node(mtd)  # mtd->dev is cleared, can't locate the of_node
                          # so of_node_put(NULL) won't do anything
Fix the error by caching the device_node pointer before mtd->dev is cleared.
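An illustrative sketch of the idea (simplified, not the exact mtdcore code):
    static int del_mtd_device(struct mtd_info *mtd)
    {
            /* cache the OF node before mtd->dev is wiped below */
            struct device_node *mtd_of_node = mtd_get_of_node(mtd);

            device_unregister(&mtd->dev);

            /* clear out the registration info */
            memset(&mtd->dev, 0, sizeof(mtd->dev));
            of_node_put(mtd_of_node);  /* drops the reference on the cached node */

            return 0;
    }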
OF: ERROR: memory leak, expected refcount 1 instead of 2,
of_node_get()/of_node_put() unbalanced - destroy cset entry: attach
overlay node /spi/spi-sram@0
CPU: 3 PID: 275 Comm: python3 Tainted: G N 6.1.0-rc3+ #54
0d8a1edddf51f172ff5226989a7565c6313b08e2
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.15.0-0-g2dd4b9b3f840-prebuilt.qemu.org 04/01/2014
Call Trace:
<TASK>
dump_stack_lvl+0x67/0x83
kobject_get+0x155/0x160
of_node_get+0x1f/0x30
of_fwnode_get+0x43/0x70
fwnode_handle_get+0x54/0x80
fwnode_get_nth_parent+0xc9/0xe0
fwnode_full_name_string+0x3f/0xa0
device_node_string+0x30f/0x750
pointer+0x598/0x7a0
vsnprintf+0x62d/0x9b0
...
cfs_overlay_release+0x30/0x90
config_item_release+0xbe/0x1a0
config_item_put+0x5e/0x80
configfs_rmdir+0x3bd/0x540
vfs_rmdir+0x18c/0x320
do_rmdir+0x198/0x330
__x64_sys_rmdir+0x2c/0x40
do_syscall_64+0x37/0x90
entry_SYSCALL_64_after_hwframe+0x63/0xcd
Fixes: 00596576a051 ("mtd: core: clear out unregistered devices a bit more")
Signed-off-by: Shang XiaoJing <shangxiaojing@huawei.com>
[<miquel.raynal@bootlin.com>: Light reword of the commit log]
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Link: https://lore.kernel.org/linux-mtd/20221119063915.11108-1-shangxiaojing@huawei.com
|
|
Introduce the Socionext F_OSPI controller driver. This controller is used
to communicate with slave devices such as SPI flash memories. It supports
4 slave devices and an up to 8-bit wide bus, but supports master mode only.
The driver uses the spi-mem framework for SPI flash memory access and
operates only in indirect access mode and single data rate mode.
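A rough sketch of how the driver plugs into the spi-mem framework
(identifiers below are illustrative, not necessarily the driver's names):
    static const struct spi_controller_mem_ops f_ospi_mem_ops = {
            .supports_op = f_ospi_supports_op, /* filter unsupported widths/rates */
            .exec_op     = f_ospi_exec_op,     /* run one op in indirect access mode */
    };

    /* in probe(): */
    ctlr->mem_ops        = &f_ospi_mem_ops;
    ctlr->num_chipselect = 4;
    ctlr->mode_bits      = SPI_TX_DUAL | SPI_TX_QUAD | SPI_TX_OCTAL |
                           SPI_RX_DUAL | SPI_RX_QUAD | SPI_RX_OCTAL;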
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Link: https://lore.kernel.org/r/20221124003351.7792-3-hayashi.kunihiko@socionext.com
Signed-off-by: Mark Brown <broonie@kernel.org>
|
|
The ACPI buffer memory (string.pointer) is no longer used after
bgx_acpi_match_id() returns, so free it to prevent a memory leak.
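The general pattern being fixed, in sketch form (the method name is
illustrative):
    struct acpi_buffer string = { ACPI_ALLOCATE_BUFFER, NULL };

    if (ACPI_SUCCESS(acpi_evaluate_object(handle, "_HID", NULL, &string))) {
            /* ... compare/copy what is needed from string.pointer ... */
            kfree(string.pointer);  /* ACPICA allocated it; the caller must free it */
    }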
Fixes: 46b903a01c05 ("net, thunder, bgx: Add support to get MAC address from ACPI.")
Signed-off-by: Yu Liao <liaoyu15@huawei.com>
Link: https://lore.kernel.org/r/20221123082237.1220521-1-liaoyu15@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
pci_get_device() decreases the reference count of its *from*
parameter, so we don't need to call put_device() to drop that
reference ourselves. Remove the put_device() in the loop and only
drop the reference count of the 'pdev' returned by the last iteration,
because it is never passed back to pci_get_device() as an input
parameter. There is no need to check whether 'pdev' is NULL because
pci_dev_put() already does that. Also add pci_dev_put() to the error path.
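The resulting reference-count pattern, roughly (IDs and the loop body are
placeholders):
    struct pci_dev *pdev = NULL;

    while ((pdev = pci_get_device(vendor_id, device_id, pdev))) {
            if (is_the_device_we_want(pdev))
                    break;  /* keep the reference on the match */
            /*
             * No put here: the next pci_get_device() call drops the
             * reference it was handed via the 'from' argument.
             */
    }

    /* drop the reference on the device we stopped at; pci_dev_put(NULL) is a no-op */
    pci_dev_put(pdev);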
Fixes: fe1939bb2340 ("octeontx2-af: Add SDP interface support")
Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Saeed Mahameed <saeed@kernel.org>
Link: https://lore.kernel.org/r/20221123065919.31499-1-wangxiongfeng2@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Call phylink_disconnect_phy() in tse_shutdown() to release the
resources acquired by phylink_of_phy_connect() in tse_open().
Fixes: fef2998203e1 ("net: altera: tse: convert to phylink")
Signed-off-by: Liu Jian <liujian56@huawei.com>
Link: https://lore.kernel.org/r/20221123011617.332302-1-liujian56@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
When doing the following test steps, an error was found:
step 1: modprobe virtio_net succeeded
# modprobe virtio_net <-- OK
step 2: fault injection in register_netdevice()
# modprobe -r virtio_net <-- OK
# ...
FAULT_INJECTION: forcing a failure.
name failslab, interval 1, probability 0, space 0, times 0
CPU: 0 PID: 3521 Comm: modprobe
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
Call Trace:
<TASK>
...
should_failslab+0xa/0x20
...
dev_set_name+0xc0/0x100
netdev_register_kobject+0xc2/0x340
register_netdevice+0xbb9/0x1320
virtnet_probe+0x1d72/0x2658 [virtio_net]
...
</TASK>
virtio_net: probe of virtio0 failed with error -22
step 3: modprobe virtio_net failed
# modprobe virtio_net <-- failed
virtio_net: probe of virtio0 failed with error -2
The root cause of the problem is that the queues are not
disabled on the error handling path when register_netdevice()
fails in virtnet_probe(), which results in "-ENOENT" being
returned by setup_vq() on the next modprobe call.
virtio_pci_modern_device uses virtqueues to send and
receive messages, and "queue_enable" records whether the
queues are available. In vp_modern_find_vqs(), all queues
will be selected and activated, but once queues are enabled
there is no way to go back except reset.
Fix it by resetting the virtio device on the error handling path. This
makes error handling follow the same order as normal device
cleanup in virtnet_remove(), which does: unregister, destroy
failover, then reset. And that flow is better tested than
error handling, so we can be reasonably sure it works well.
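Sketch of the intended teardown on the probe error path (simplified; helper
and field names are taken from the driver context and meant only as
illustration):
    net_failover_destroy(vi->failover);
    virtio_reset_device(vdev);  /* queues can only be disabled again by a reset */
    /* ... then free the vqs and the netdev as before ... */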
Fixes: 024655555021 ("virtio_net: fix use after free on allocation failure")
Signed-off-by: Li Zetao <lizetao1@huawei.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Link: https://lore.kernel.org/r/20221122150046.3910638-1-lizetao1@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Currently, offloading MACsec with authentication only (the encrypt
property set to off) is not supported, so block such requests
when adding or updating a MACsec device.
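Conceptually (a sketch against the generic MACsec offload context; the
exact placement in the mlx5 driver may differ):
    /* reject offload of authentication-only SecYs */
    if (!ctx->secy->tx_sc.encrypt) {
            netdev_err(netdev, "MACsec offload: authentication only (encrypt off) not supported\n");
            return -EOPNOTSUPP;
    }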
Fixes: 8ff0ac5be144 ("net/mlx5: Add MACsec offload Tx command support")
Signed-off-by: Emeel Hakim <ehakim@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Currently, during the update Tx security association (SA) flow, the Tx SA
active state is updated only if the Tx SA in question is the same SA
that the MACsec interface is currently using for Tx. Consequently, when
the MACsec interface later chooses to work with this Tx SA, and the SA
for example should have been updated to the active state but was not,
the relevant Tx SA HW context won't be installed, so the MACsec
flow won't be offloaded.
Fix by updating the Tx SA active state as part of the update flow
regardless of whether the SA in question is the same Tx SA used by the
MACsec interface.
Fixes: 8ff0ac5be144 ("net/mlx5: Add MACsec offload Tx command support")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Currently the offload path limits the replay window size to 32/64/128/256
bits. Such a limitation should not exist, since the software path allows
other sizes. Remove the limitation.
Fixes: eb43846b43c3 ("net/mlx5e: Support MACsec offload replay window")
Signed-off-by: Emeel Hakim <ehakim@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Currently MACsec's add Rx SA flow steering (fs) rule routine
uses a dynamically allocated spec object and does not free it
before returning, which leads to a memory leak.
Fix by freeing the dynamically allocated object on all paths.
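The pattern being fixed, in sketch form (the surrounding arguments come
from the calling context):
    struct mlx5_flow_handle *rule;
    struct mlx5_flow_spec *spec;

    spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
    if (!spec)
            return ERR_PTR(-ENOMEM);

    /* ... fill the match criteria ... */
    rule = mlx5_add_flow_rules(ft, spec, &flow_act, dest, 1);

    kvfree(spec);  /* free on the success path too, not only on errors */
    return rule;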
Fixes: 3b20949cb21b ("net/mlx5e: Add MACsec RX steering rules")
Signed-off-by: Emeel Hakim <ehakim@nvidia.com>
Reviewed-by: Raed Salem <raeds@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Fix the wrong bail-out condition in the update Rx SA flow. An update
naturally needs to check whether something has actually changed and bail
out otherwise; the current active state check does just the opposite.
Furthermore, unlike the deactivate path, which removes the MACsec rules
to deactivate the offload, the activation path does not install the
counterpart MACsec rules.
Fix by using the correct bail-out condition and, when an Rx SA changes
state to active, adding the relevant MACsec rules.
While at it, refine the function name to reflect its role more precisely.
Fixes: aae3454e4d4c ("net/mlx5e: Add MACsec offload Rx command support")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The main functionality of this operation is to update the
active state of the Rx security channel (SC) if the new
active setting differs from the current active state
of this Rx SC. However, the relevant active state check is
done after the current active state has already been updated to
match the new one, which effectively blocks any offload state
update for the Rx SC in question.
Fix by delaying the assignment until after the relevant check.
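In pseudo-code, the fix is just an ordering change (names are illustrative):
    /* before: the comparison can never see a difference */
    rx_sc->active = new_active;
    if (rx_sc->active == new_active)
            return 0;               /* always taken */

    /* after: compare against the old state first, then update it */
    if (rx_sc->active == new_active)
            return 0;               /* nothing changed */
    rx_sc->active = new_active;
    /* ... update the offload state ... */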
Fixes: aae3454e4d4c ("net/mlx5e: Add MACsec offload Rx command support")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
When the MACsec netdevice is deleted, all related Rx/Tx HW/SW
state should be released/deallocated. However, currently part
of the Rx security channel association data is not cleaned up
properly, hence the memory leaks.
Fix by making sure all related Rx SC resources are cleaned up/freed.
While at it, improve the code by grouping the SC context release into
a function so it can be used by both the delete MACsec device and
delete Rx SC operations.
Fixes: 5a39816a75e5 ("net/mlx5e: Add MACsec offload SecY support")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Currently the data path metadata flow id mask wrongly limits the
number of different Rx security channels (SCs) to 16, whereas when
adding an Rx SC the limit is 2^16 - 1. This causes an overlap in
metadata flow ids once more than 16 Rx SCs are added, which corrupts
the handling of MACsec Rx offloaded flows.
Fix by using the correct mask. While at it, improve the code to use this
mask when adding the Rx rule, and improve the visibility of such errors
by adding a debug message.
Fixes: b7c9400cbc48 ("net/mlx5e: Implement MACsec Rx data path using MACsec skb_metadata_dst")
Signed-off-by: Raed Salem <raeds@nvidia.com>
Reviewed-by: Emeel Hakim <ehakim@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
'accel_tcp' is allocated by kvzalloc(), so it should be freed by kvfree().
Fixes: f52f2faee581 ("net/mlx5e: Introduce flow steering API")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
If kvzalloc() fails then return -ENOMEM. Don't return success.
Fixes: 3b20949cb21b ("net/mlx5e: Add MACsec RX steering rules")
Fixes: e467b283ffd5 ("net/mlx5e: Add MACsec TX steering rules")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
When there are multiple dests with termination tables and the second one
or a later one fails, the driver reverts the usage of the term tables but
doesn't reset the assignment in attr->dests[num_vport_dests].termtbl,
which causes a use-after-free when releasing the rule.
Fix by resetting the termtbl assignment to NULL.
Fixes: 10caabdaad5a ("net/mlx5e: Use termination table for VLAN push actions")
Signed-off-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
If sscanf() returns 0, 'outlen' is uninitialized yet still used in
kzalloc(), which is unexpected. Return -EINVAL if the string is invalid.
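Sketch of the intended check:
    u32 outlen;
    void *ptr;

    if (sscanf(buf, "%u", &outlen) != 1)
            return -EINVAL;  /* reject invalid input instead of using garbage */

    ptr = kzalloc(outlen, GFP_KERNEL);
    if (!ptr)
            return -ENOMEM;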
Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
If a bond is created first and SR-IOV is then enabled in switchdev mode,
the following syndrome is hit:
mlx5_core 0000:08:00.0: mlx5_cmd_out_err:778:(pid 25543): CREATE_LAG(0x840) op_mod(0x0) failed, status bad parameter(0x3), syndrome (0x7d49cb), err(-22)
The reason is that the offending patch removes the eswitch mode
'none'. In VF lag, the check for eswitch mode 'none' is replaced
by a check of whether SR-IOV is enabled. But when the driver enables
SR-IOV, it triggers the bond workqueue task first and only then sets
the SR-IOV VF count in pci_enable_sriov(), so the check fails.
Fix it by checking whether SR-IOV is enabled via an eswitch internal
counter that is set before the bond workqueue task is triggered.
Fixes: f019679ea5f2 ("net/mlx5: E-switch, Remove dependency between sriov and eswitch mode")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The cited commit removes the eswitch mode 'none'. But when disabling
SR-IOV in legacy mode, or when changing from switchdev to legacy mode
without SR-IOV enabled, the legacy FDB table is not destroyed.
That is not the right behavior; destroy the legacy FDB table in the
above two cases.
Fixes: f019679ea5f2 ("net/mlx5: E-switch, Remove dependency between sriov and eswitch mode")
Signed-off-by: Chris Mi <cmi@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Eli Cohen <elic@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Reviewed-by: Vlad Buslov <vladbu@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Smatch warns:
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_table.c:81
mlx5dr_table_set_miss_action() error: uninitialized symbol 'ret'.
Initialize 'ret' to -EOPNOTSUPP and fix the missing action case.
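Sketch of the pattern (the case labels are placeholders):
    int ret = -EOPNOTSUPP;  /* safe default for unhandled action types */

    switch (miss_action_type) {
    case MISS_ACTION_GOTO_TABLE:
            ret = set_goto_miss_action(tbl, miss_action);
            break;
    case MISS_ACTION_DEFAULT:
            ret = set_default_miss_action(tbl);
            break;
    }

    return ret;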
Fixes: 7838e1725394 ("net/mlx5: DR, Expose steering table functionality")
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The ACPI buffer memory (buffer.pointer) is no longer used after
acpi_evaluate_object() returns, so free it to prevent a memory leak.
Fixes: 13e920d93e37 ("net: wwan: t7xx: Add core components")
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Link: https://lore.kernel.org/r/1669119580-28977-1-git-send-email-guohanjun@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Since devm_kcalloc() may return a NULL pointer, add a check for the
return value, the same as for the other allocations.
Fixes: e8e095b3b370 ("octeontx2-af: cn10k: Bandwidth profiles config support")
Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/20221122055449.31247-1-jiasheng@iscas.ac.cn
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
|
|
Qcom CPUFreq hardware (EPSS/OSM) controls the clock and voltage of the
CPU cores, but this relationship is not represented in the clk framework
so far.
So, let's make the qcom-cpufreq-hw driver a clock provider. This makes the
clock producer/consumer relationship cleaner and is also useful for CPU
related frameworks like OPP to know the frequency at which the CPUs are
running.
The clock frequency provided by the driver is for each frequency domain.
We cannot report the frequency of each individual CPU core because not
all platforms support the per-core DCVS feature.
Also, the frequency supplied by the driver is the actual frequency that
comes out of the EPSS/OSM block after the DCVS operation. This frequency
is not the same as what the CPUFreq framework has set, but it is the one
that gets supplied to the CPUs after throttling by LMh.
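A rough sketch of registering such a read-only clock per frequency domain
(struct and helper names are illustrative placeholders):
    static unsigned long cpu_clk_recalc_rate(struct clk_hw *hw,
                                             unsigned long parent_rate)
    {
            struct qcom_cpufreq_data *data =
                    container_of(hw, struct qcom_cpufreq_data, cpu_clk);

            /* frequency actually delivered by EPSS/OSM after LMh throttling */
            return qcom_cpufreq_read_lut_freq_hz(data);
    }

    static const struct clk_ops cpu_clk_ops = {
            .recalc_rate = cpu_clk_recalc_rate,
    };

    /* in probe(), once per frequency domain: */
    data->cpu_clk.init = &init;  /* struct clk_init_data with .ops = &cpu_clk_ops */
    ret = devm_clk_hw_register(dev, &data->cpu_clk);
    if (!ret)
            ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_simple_get,
                                              &data->cpu_clk);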
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
[ Xiu: Fixed memleak. ]
Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
In the blamed commit, a rudimentary reallocation procedure for RX buffer
descriptors was implemented, for the situation when their format changes
between normal (no PTP) and extended (PTP).
enetc_hwtstamp_set() calls enetc_close() and enetc_open() in a sequence,
and this sequence loses information which was previously configured in
the TX BDR Mode Register, specifically via the enetc_set_bdr_prio() call.
The TX ring priority is configured by tc-mqprio and tc-taprio, and
affects important things for TSN such as the TX time of packets. The
issue manifests itself most visibly by the fact that isochron --txtime
reports premature packet transmissions when PTP is first enabled on an
enetc interface.
Save the TX ring priority in a new field in struct enetc_bdr (occupies a
2 byte hole on arm64) in order to make this survive a ring reconfiguration.
Fixes: 434cebabd3a2 ("enetc: Add dynamic allocation of extended Rx BD rings")
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Link: https://lore.kernel.org/r/20221122130936.1704151-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
In prestera_port_create(), if prestera_port_sfp_bind() fails,
unregister_netdev() should be called in the error handling path.
Compile tested only.
Fixes: 52323ef75414 ("net: marvell: prestera: add phylink support")
Signed-off-by: Zhang Changzhong <zhangchangzhong@huawei.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Link: https://lore.kernel.org/r/1669115432-36841-1-git-send-email-zhangchangzhong@huawei.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The transaction buffer is allocated using the size of the packet buffer,
minus two, which seems intended to account for the two tags which are not
present in the target structure. This calculation leads to undercounting
memory because of differences between the packet contents and the target
structure. The aid_len field is a u8 in the packet, but a u32 in the
structure, resulting in at least 3 bytes always being undercounted.
Further, the aid data is a variable length field in the packet, but fixed
in the structure, so if this field is less than the max, the difference
adds to the undercounting.
To fix, perform validation checks progressively to safely reach the
next field, to determine the size of both buffers and verify both tags.
Once all validation checks pass, allocate the buffer and copy the data.
This eliminates freeing memory on the error path, as validation checks are
moved ahead of memory allocation.
Reported-by: Denis Efremov <denis.e.efremov@oracle.com>
Reviewed-by: Guenter Roeck <groeck@google.com>
Fixes: 5d1ceb7f5e56 ("NFC: st21nfcb: Add HCI transaction event support")
Signed-off-by: Martin Faltesek <mfaltesek@google.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The error path does not free previously allocated memory. Add devm_kfree() to
the failure path.
Reported-by: Denis Efremov <denis.e.efremov@oracle.com>
Reviewed-by: Guenter Roeck <groeck@google.com>
Fixes: 5d1ceb7f5e56 ("NFC: st21nfcb: Add HCI transaction event support")
Signed-off-by: Martin Faltesek <mfaltesek@google.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
The first validation check for EVT_TRANSACTION has two different checks
tied together with logical AND. One is a check for minimum packet length,
and the other is for a valid aid_tag. If either condition is true (fails),
then an error should be triggered. The fix is to change && to ||.
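In sketch form (the identifiers are illustrative):
    /* before: both conditions had to be true before bailing out */
    if (skb->len < EVT_TRANSACTION_MIN_LEN &&
        skb->data[0] != EVT_TRANSACTION_AID_TAG)
            return -EPROTO;

    /* after: a short packet OR a bad tag is an error on its own */
    if (skb->len < EVT_TRANSACTION_MIN_LEN ||
        skb->data[0] != EVT_TRANSACTION_AID_TAG)
            return -EPROTO;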
Reported-by: Denis Efremov <denis.e.efremov@oracle.com>
Reviewed-by: Guenter Roeck <groeck@google.com>
Fixes: 5d1ceb7f5e56 ("NFC: st21nfcb: Add HCI transaction event support")
Signed-off-by: Martin Faltesek <mfaltesek@google.com>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Whether ublk_can_use_task_work() is true or not, io commands are
forwarded to the ublk server in reverse order, since llist_add()
always adds the new element to the head of the list.
Even though the block layer doesn't guarantee request dispatch order,
requests should be sent to the hardware in the order generated by the
io scheduler, which usually considers the request's LBA; that order is
often important for HDDs.
So forward io commands in the order produced by the io scheduler by
aligning task work with the current io_uring command's batch handling.
It has been observed that both approaches achieve similar performance
when IORING_SETUP_COOP_TASKRUN is set by the ublk server.
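The llist property in question, for reference (sketch):
    /* llist_add() always pushes to the head, so insertion order A, B, C ... */
    llist_add(&a->node, &list_head);  /* list: A */
    llist_add(&b->node, &list_head);  /* list: B -> A */
    llist_add(&c->node, &list_head);  /* list: C -> B -> A */
    /* ... is drained in reverse (C, B, A) unless the consumer re-orders it */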
Reported-by: Andreas Hindborg <andreas.hindborg@wdc.com>
Cc: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Reviewed-by: ZiyangZhang <ZiyangZhang@linux.alibaba.com>
Link: https://lore.kernel.org/r/20221121155645.396272-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
|
|
https://git.kernel.org/pub/scm/linux/kernel/git/chunkuang.hu/linux into drm-next
Mediatek DRM Next for Linux 6.2
1. Fixup of dpi and hdmi
2. Move panel connector to head
3. Add MT8188 dpi support
4. Add MT8195 AFBC support
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Chun-Kuang Hu <chunkuang.hu@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20221123234855.2485-1-chunkuang.hu@kernel.org
|
|
Kconfig fix for RZ/G2L DSI
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Link: https://patchwork.freedesktop.org/patch/msgid/Y3wYk/Bn/qVa9ha0@pendragon.ideasonboard.com
|
|
Linux 6.1-rc6
This is needed for drm-misc-next and tegra.
Signed-off-by: Dave Airlie <airlied@redhat.com>
|
|
Nothing in this file needs anything from linux/msi.h
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20221113202428.889624434@linutronix.de
|
|
Nothing in this file needs anything from linux/msi.h
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Link: https://lore.kernel.org/r/20221113202428.826924043@linutronix.de
|
|
Nothing in this file needs anything from linux/msi.h
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221113202428.760225831@linutronix.de
|
|
Neither dprc-driver.c nor fsl-mc-bus.c need anything from linux/msi.h.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20221113202428.511591041@linutronix.de
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Pull spi fixes from Mark Brown:
"A few fixes, all device specific.
The most important ones are for the i.MX driver which had a couple of
nasty data corruption inducing errors appear after the change to
support PIO mode in the last merge window (one introduced by the
change and one latent one which the PIO changes exposed).
Thanks to Frieder, Fabio, Marc and Marek for jumping on that and
resolving the issues quickly once they were found"
* tag 'spi-fix-v6.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
spi: spi-imx: spi_imx_transfer_one(): check for DMA transfer first
spi: tegra210-quad: Fix duplicate resource error
spi: dw-dma: decrease reference count in dw_spi_dma_init_mfld()
spi: spi-imx: Fix spi_bus_clk if requested clock is higher than input clock
spi: mediatek: Fix DEVAPC Violation at KO Remove
|
|
Some processors issue more than one HFI interrupt with the same
timestamp. Each interrupt must be acknowledged to let the hardware issue
new HFI interrupts. But this can't be done without some additional flow
modification in the existing interrupt handling.
For background, the HFI interrupt is a package-level thermal interrupt
delivered via an LVT. This LVT is common to both the CPU- and
package-level interrupts. Hence, all CPUs receive the HFI interrupts,
but only one CPU should process the interrupt; the others simply exit
by issuing an EOI to the LAPIC.
The current HFI interrupt processing flow:
1. Receive Thermal interrupt
2. Check if there is an active HFI status in MSR_IA32_THERM_STATUS
3. Try and get spinlock, one CPU will enter spinlock and others
will simply return from here to issue EOI.
(Let's assume CPU 4 is processing interrupt)
4. Check the stored time-stamp from the HFI memory time-stamp
5. if same
6. ignore interrupt, unlock and return
7. Copy the HFI message to local buffer
8. unlock spinlock
9. ACK HFI interrupt
10. Queue the message for processing in a work-queue
It is tempting to simply acknowledge all the interrupts even if they
have the same timestamp. This may cause some interrupts to not be
processed.
Let's say CPU5 is slightly late and reaches step 4 while CPU4 is
between steps 8 and 9.
Currently we simply ignore interrupts with the same timestamp. No
issue here for CPU5. When CPU4 acknowledges the interrupt, the next
HFI interrupt can be delivered.
If we acknowledge interrupts with the same timestamp (at step 6), there
is a race condition. Under the same scenario, CPU 5 will acknowledge
the HFI interrupt. This lets the hardware generate another HFI interrupt
before CPU 4 starts executing step 9. Once CPU 4 completes step 9, it
will acknowledge the newly arrived HFI interrupt without actually
processing it.
Acknowledge the interrupt while holding the spinlock. This avoids
contention over the interrupt acknowledgment.
Updated flow:
1. Receive HFI Thermal interrupt
2. Check if there is an active HFI status in MSR_IA32_THERM_STATUS
3. Try and get spin-lock
Let's assume CPU 4 is processing interrupt
4.1 Read MSR_IA32_PACKAGE_THERM_STATUS and check HFI status bit
4.2 If hfi status is 0
4.3 unlock spinlock
4.4 return
4.5 Check the stored time-stamp from the HFI memory time-stamp
5. if same
6.1 ACK HFI Interrupt,
6.2 unlock spinlock
6.3 return
7. Copy the HFI message to local buffer
8. ACK HFI interrupt
9. unlock spinlock
10. Queue the message for processing in a work-queue
To avoid taking the lock unnecessarily, intel_hfi_process_event() checks
the status of the HFI interrupt before taking the lock. If CPU5 is late,
when it starts processing the interrupt there are two scenarios:
a) CPU4 acknowledged the HFI interrupt before CPU5 read
MSR_IA32_THERM_STATUS. CPU5 exits.
b) CPU5 reads MSR_IA32_THERM_STATUS before CPU4 has acknowledged the
interrupt. CPU5 will take the lock if CPU4 has released it. It then
re-reads MSR_IA32_THERM_STATUS. If there is no new interrupt,
the HFI status bit is clear and CPU5 exits. If a new HFI interrupt
was generated, CPU5 will find that the status bit is set and will
continue to process the interrupt. In this case, even if the timestamp
has not changed, the ACK can be issued because this is a new interrupt.
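A condensed sketch of the locked section (helper names are illustrative,
not the exact driver code):
    u64 msr;

    if (!raw_spin_trylock(&hfi_instance->event_lock))
            return;                 /* another CPU owns this event */

    rdmsrl(MSR_IA32_PACKAGE_THERM_STATUS, msr);
    if (!(msr & PACKAGE_THERM_STATUS_HFI_UPDATED)) {
            raw_spin_unlock(&hfi_instance->event_lock);
            return;                 /* already acknowledged elsewhere */
    }

    if (hfi_timestamp_unchanged(hfi_instance)) {
            ack_hfi_interrupt();    /* ack while still holding the lock */
            raw_spin_unlock(&hfi_instance->event_lock);
            return;
    }

    copy_hfi_table_to_local_buffer(hfi_instance);
    ack_hfi_interrupt();            /* ack before dropping the lock */
    raw_spin_unlock(&hfi_instance->event_lock);
    queue_delayed_work(hfi_updates_wq, &hfi_instance->update_work, 0);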
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Tested-by: Arshad, Adeel<adeel.arshad@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
The clearing of the package thermal status is done with a
Read-Modify-Write operation. This may result in clearing some new status
bits which are being, or are about to be, processed.
For example, while clearing the HFI status, after the thermal status
register has been read, a new thermal status bit may be set by the
hardware. During the write back, that newly set status bit will then be
written as 0, i.e. cleared. So it is not safe to do a read-modify-write.
Since the thermal status Read-Write bits can only be set to 0, not 1, by
software, it is safe to write 1 to all the other bits which are not being
cleared.
Create a common interface for clearing package thermal status bits. Use
this interface to replace the existing code that clears package thermal
status bits.
It is safe to call from different CPUs without protection as there is no
read-modify-write, and wrmsrl results in just a single instruction. For
example, suppose CPU 0 and CPU 3 are clearing bit 1 and bit 3,
respectively. If CPU 3 wins the race, it will write 0x4000aa2, then CPU 0
will write 0x4000aa8. The bits which are not part of the clear are set
to 1. The default mask of bits which can be written here is 0x4000aaa.
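A sketch of such an interface, simplified to the package-level case and
using the mask quoted above:
    /* status bits software may clear: writing 0 clears, writing 1 leaves the bit alone */
    #define THERM_STATUS_CLEAR_PKG_MASK 0x04000aaa

    static void thermal_clear_package_intr_status(u64 bit_mask)
    {
            /* write 1 to every writable bit we are NOT clearing; no read needed */
            u64 msr_val = THERM_STATUS_CLEAR_PKG_MASK & ~bit_mask;

            wrmsrl(MSR_IA32_PACKAGE_THERM_STATUS, msr_val);
    }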
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
When there is a package thermal interrupt with the PROCHOT log bit set,
it will be processed and cleared. It is possible that there is also an
active HFI event status which is about to be, or is being, processed.
Clearing the PROCHOT log bit will then also clear the HFI status bit,
which means that the hardware is free to update the HFI memory.
When clearing a package thermal interrupt, some processors will generate
a "general protection fault" when any read-only bit is written as 1.
The driver maintains a mask of all read-write bits which may be set.
This mask doesn't include the HFI status bit, so that bit is treated as
read-only and gets cleared as well. So, add HFI status bit 26 to the
mask.
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Reviewed-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Silence the following warnings when make W=1:
| CC drivers/acpi/apei/apei-base.c
| warning: no previous prototype for 'arch_apei_enable_cmcff' [-Wmissing-prototypes]
| int __weak arch_apei_enable_cmcff(struct acpi_hest_header *hest_hdr,
| ^
| CC drivers/acpi/apei/apei-base.c
| warning: no previous prototype for 'arch_apei_report_mem_error' [-Wmissing-prototypes]
| void __weak arch_apei_report_mem_error(int sev,
| ^
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Bail out if extracting the _FIF package fails; otherwise we end up
referencing garbage in fields[] and the fan control gets into a mess.
Fix it.
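Sketch of the check (the surrounding buffer setup is omitted; names are
taken from the fan driver context as illustration):
    status = acpi_extract_package(obj, &format, &fif);
    if (ACPI_FAILURE(status)) {
            dev_err(&device->dev, "Invalid _FIF element\n");
            return -EINVAL;  /* don't touch fields[] when extraction failed */
    }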
Fixes: d445571fa369 ("ACPI: fan: Optimize struct acpi_fan_fif")
Signed-off-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
acpi_evaluate_dsm_typed()/acpi_evaluate_dsm() should be coupled with
ACPI_FREE() to free the ACPI memory, because we need to track the
allocation of the acpi_object when ACPI_DBG_TRACK_ALLOCATIONS is
enabled, so use ACPI_FREE() instead of kfree().
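The pattern, in sketch form (arguments come from the calling context):
    union acpi_object *out_obj;

    out_obj = acpi_evaluate_dsm_typed(handle, &uuid, rev, func, NULL,
                                      ACPI_TYPE_PACKAGE);
    if (!out_obj)
            return -EINVAL;

    /* ... consume out_obj ... */

    ACPI_FREE(out_obj);  /* not kfree(): keeps ACPI_DBG_TRACK_ALLOCATIONS balanced */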
Fixes: 0db89fa243e5 ("ACPI: Introduce Platform Firmware Runtime Update device driver")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
acpi_evaluate_dsm_typed()/acpi_evaluate_dsm() should be coupled
with ACPI_FREE() to free the ACPI memory, because we need to
track the allocation of the acpi_object when ACPI_DBG_TRACK_ALLOCATIONS
is enabled, so use ACPI_FREE() instead of kfree().
Fixes: b0013e037a8b ("ACPI: Introduce Platform Firmware Runtime Telemetry driver")
Signed-off-by: Wang ShaoBo <bobo.shaobowang@huawei.com>
Reviewed-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
For a bus-based driver, device removal is implemented as:
1 device_remove()->
2 bus->remove()->
3 driver->remove()
The driver core needs no feedback from the callee (bus driver) about the
result of the remove callback. For that reason, commit fc7a6209d571
("bus: Make remove callback return void") forces bus_type::remove to
be void-returned.
Now we have the situation that both 1 & 2 of the calling chain are
void-returned, so it does not make much sense for 3 (driver->remove)
to return non-void to its caller.
So the basic idea behind this change is to make the remove() callback of
any bus-based driver void-returned.
This particular change covers the device drivers based on the ACPI bus.
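For an individual ACPI-bus driver, the change boils down to something
like this (an illustrative 'foo' driver, not real code):
    /* before: static int foo_remove(struct acpi_device *adev) { ...; return 0; } */
    static void foo_remove(struct acpi_device *adev)
    {
            struct foo_data *data = acpi_driver_data(adev);

            foo_teardown(data);  /* nothing useful to report back to the bus core */
    }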
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Acked-by: Lee Jones <lee@kernel.org>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Reviewed-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Dawei Li <set_pte_at@outlook.com>
Reviewed-by: Maximilian Luz <luzmaximilian@gmail.com> # for drivers/platform/surface/*
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Currently, 'pcc_chan_count' remains set to a non-zero value if the PCC
subspaces are parsed successfully but something else fails later during
the initial PCC probing phase. This will result in pcc_mbox_request_channel()
trying to access resources that are not initialised or allocated, and
may end up in a system crash.
Reset pcc_chan_count to 0 when the PCC probe fails in order to prevent
the possible issue described above.
Fixes: ce028702ddbc ("mailbox: pcc: Move bulk of PCCT parsing into pcc_mbox_probe")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
Currently, the PCC OpRegion handler depends on the availability of a
platform interrupt to be functional. If the interrupt is not available,
the OpRegion can't be executed successfully, or the desired outcome won't
be possible.
So let's reject setting up the PCC OpRegion handler if the platform
doesn't support or provide a platform interrupt.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|