Remove the extra space in nvme_free_cels() when calling the xa_for_each
loop, which is not a common practice
(except in drivers/infiniband/core/, not sure why).
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
When support for the NVMe ZNS commands was merged, tracing of these
commands was omitted.
Add nvme_cmd_zone_mgmt_send, nvme_cmd_zone_mgmt_recv as well as
nvme_cmd_zone_append to the nvme driver's tracing facility.
Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Add detailed parsing of the Format NVM admin command to make the
trace log more consistent and human-readable.
Signed-off-by: Michal Krakowiak <michal.krakowiak@intel.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
In this preparation patch, we add helpers to convert LBAs to sectors and
sectors to LBAs. This is needed to eliminate code duplication in the ZBD
backend.
Use these helpers in the block device backend.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
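A sketch of what such helpers can look like, assuming the namespace keeps
a block-size shift (names mirror the nvmet convention; treat the exact
signatures as illustrative):

    static inline sector_t nvmet_lba_to_sect(struct nvmet_ns *ns, __le64 lba)
    {
            return le64_to_cpu(lba) << (ns->blksize_shift - SECTOR_SHIFT);
    }

    static inline __le64 nvmet_sect_to_lba(struct nvmet_ns *ns, sector_t sect)
    {
            return cpu_to_le64(sect >> (ns->blksize_shift - SECTOR_SHIFT));
    }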
|
|
We remove the extra local variable struct nvmet_ns in
nvmet_execute_identify_ns(), since req already has an ns member that can
be reused. This also eliminates the explicit call to nvmet_put_namespace()
which is already present in the request completion path.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
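An illustrative before/after of the pattern, assuming req->ns is dropped
by the completion path (the lookup helper name is indicative only):

    /* before: local variable with an explicit put */
    struct nvmet_ns *ns;

    ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
    if (!ns)
            return;
    /* ... use ns ... */
    nvmet_put_namespace(ns);

    /* after: reuse req->ns; the completion path puts the namespace */
    req->ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid);
    if (!req->ns)
            return;
    /* ... use req->ns, no explicit put needed ... */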
|
|
We remove the extra local variable struct nvmet_ns in
nvmet_execute_identify_desclist(), since req already has an ns member
that can be reused. This also eliminates the explicit call to
nvmet_put_namespace() which is already present in the request
completion path.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
We remove the extra local variable struct nvmet_ns in
nvmet_get_smart_log_nsid(), since req already has an ns member that can
be reused. This also eliminates the explicit call to nvmet_put_namespace()
which is already present in the request completion path.
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
For the current code in nvme_cleanup_cmd(), we don't need the namespace
instance, only the controller instance. The controller instance can be
retrieved through the namespace, but it can also be accessed directly
from the nvme_request embedded in the request:
ctrl = nvme_req(req)->ctrl;
so there is no need to go from the request to the namespace through the
gendisk.
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
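An illustrative before/after, assuming the gendisk's private data points
at the namespace (as is the convention in this driver):

    /* before: request -> gendisk -> namespace -> controller */
    struct nvme_ns *ns = req->rq_disk->private_data;
    struct nvme_ctrl *ctrl = ns->ctrl;

    /* after: controller taken straight from the nvme_request */
    struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;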
|
|
iov_iter uses the right helpers, so we should be able
to pass in a multipage bvec. Right now the iov_iter is
initialized with more segments than it needs, which doesn't
fail because the iov_iter is capped by byte count, but it
is better to use a full multipage bvec iter.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
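A sketch of the resulting initialization, with page, offset and length
standing in for the request's data (a single bvec segment may now cover
several pages):

    struct bio_vec bv = {
            .bv_page   = page,
            .bv_offset = offset,
            .bv_len    = length,    /* may span multiple pages */
    };
    struct iov_iter iter;

    /* one multipage segment; the iterator is still capped by byte count */
    iov_iter_bvec(&iter, READ, &bv, 1, length);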
|
|
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
We might set the iov_iter direction wrong, which is harmless for this
use-case, but we should still get it right. This also makes the code
slightly cleaner.
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
The controller can request a delay before retrying a failed command by
setting the Command Retry Delay (CRD) field in the Completion Queue Entry.
Currently this feature is only applied to commands on the I/O queue, but
not to commands on the admin queue. Retrieve the nvme_ctrl from the
request so that no namespace is required and apply the feature to all
commands.
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
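A sketch of the resulting delay computation, assuming the CQE status word
is kept in nvme_req(req)->status and the CRDT values on the controller
(the mask name and shift follow the spec layout but are illustrative here):

    static unsigned long nvme_retry_delay_ms(struct request *req)
    {
            struct nvme_ctrl *ctrl = nvme_req(req)->ctrl; /* no ns needed */
            u16 crd = (nvme_req(req)->status & NVME_SC_CRD) >> 11;

            /* CRDT values are in units of 100 milliseconds */
            return crd ? ctrl->crdt[crd - 1] * 100UL : 0;
    }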
|
|
The only usage of these is to put their addresses in arrays of pointers
to const attribute_groups. Make them const to allow the compiler to put
them in read-only memory.
Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
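A minimal sketch of the pattern (identifiers are illustrative): because
the group is only ever referenced through an array of pointers to const,
the group itself can be const and be placed in read-only memory.

    static const struct attribute_group nvme_dev_attrs_group = {
            .attrs = nvme_dev_attrs,
    };

    static const struct attribute_group *nvme_dev_attr_groups[] = {
            &nvme_dev_attrs_group,
            NULL,
    };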
|
|
Searching assoc_list is protected by rcu_read_lock, provided the list is
not changed inline, in accordance with the RCU list rules.
The queue array embedded in nvmet_fc_tgt_assoc is protected by
rcu_read_lock according to the RCU dereference/assign rules.
Queue and assoc objects are freed after a grace period via call_rcu,
and the tgtport lock is taken for changing assoc_list.
Reviewed-by: Eldad Zinger <Eldad.Zinger@dell.com>
Reviewed-by: Elad Grupi <Elad.Grupi@dell.com>
Reviewed-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Leonid Ravich <Leonid.Ravich@emc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
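A sketch of the RCU-protected lookup this describes (field names are
indicative):

    struct nvmet_fc_tgt_assoc *assoc;

    rcu_read_lock();
    list_for_each_entry_rcu(assoc, &tgtport->assoc_list, a_list) {
            if (assoc->association_id == association_id) {
                    /* a reference must be taken here, before
                     * rcu_read_unlock(), so the object cannot be
                     * freed from under us */
                    break;
            }
    }
    rcu_read_unlock();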
|
|
Remove extra tab.
Signed-off-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
Remove code duplication.
Signed-off-by: Israel Rukshin <israelr@nvidia.com>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
|
|
When SOP Discovery is done, set the opmode to PD if status indicates
SOP is connected.
SOP connected indicates a PD contract is in place, and is a solid
indication we have transitioned to PD power negotiation, either as
source or sink.
Signed-off-by: Benson Leung <bleung@chromium.org>
Acked-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Reviewed-by: Prashant Malani <pmalani@chromium.org>
Link: https://lore.kernel.org/r/20210129061406.2680146-7-bleung@chromium.org
Signed-off-by: Benson Leung <bleung@chromium.org>
|
|
Status provides sop_revision. Process it, and set it using the new
setter in the typec class.
Signed-off-by: Benson Leung <bleung@chromium.org>
Acked-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Reviewed-by: Prashant Malani <pmalani@chromium.org>
Link: https://lore.kernel.org/r/20210129061406.2680146-6-bleung@chromium.org
Signed-off-by: Benson Leung <bleung@chromium.org>
|
|
cros_typec_handle_sop_prime_disc now takes the PD revision provided
by the EC_CMD_TYPEC_STATUS command response for the SOP'.
Attach the properly formatted pd_revision to the cable desc before
registering the cable.
Signed-off-by: Benson Leung <bleung@chromium.org>
Acked-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Acked-by: Enric Balletbo i Serra <enric.balletbo@collabora.com>
Reviewed-by: Prashant Malani <pmalani@chromium.org>
Link: https://lore.kernel.org/r/20210129061406.2680146-5-bleung@chromium.org
Signed-off-by: Benson Leung <bleung@chromium.org>
|
|
ib-usb-typec-chrome-platform-cros-ec-typec-changes-for-5.12
|
|
Eliminate the following coccicheck warning:
./drivers/devfreq/rk3399_dmc.c:403:2-3: Unneeded semicolon
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com>
|
|
Currently we only explicitly power up the combo PHY lanes
for DP. The spec says we should do it for HDMI as well.
Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128155948.13678-3-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit 1e0cb7bef35f0d1aed383bf69a209df218b807c9)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
Reduce the copypasta by pulling the combo PHY lane
power up stuff into a helper. We'll have a third user soon.
Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128155948.13678-2-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit 5cdf706fb91a6e4e6af799bb957c4d598e6a067b)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
In thunderbolt mode the PHY is owned by the thunderbolt controller.
We are not supposed to touch it. So skip the vswing programming
as well (we already skipped the other steps not applicable to TBT).
Touching this stuff could supposedly interfere with the PHY
programming done by the thunderbolt controller.
Cc: stable@vger.kernel.org
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210128155948.13678-1-ville.syrjala@linux.intel.com
Reviewed-by: Imre Deak <imre.deak@intel.com>
(cherry picked from commit f8c6b615b921d8a1bcd74870f9105e62b0bceff3)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
|
|
In case of a failure in tc update skb, the packet is dropped
without freeing the skb.
Fix it by freeing the skb when tc update skb fails.
Fixes: d6d27782864f ("net/mlx5: E-Switch, Restore chain id on miss")
Fixes: c75690972228 ("net/mlx5e: Add tc chains offload support for nic flows")
Signed-off-by: Maor Dickman <maord@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
max_opened_tc is used for stats, so that potentially non-zero stats
won't disappear when num_tc decreases. However, mlx5e_setup_tc_mqprio
fails to update it in the flow where channels are closed.
This commit fixes it. The new value of priv->channels.params.num_tc is
always checked on exit. In case of errors it will just be the old value,
and in case of success it will be the updated value.
Fixes: 05909babce53 ("net/mlx5e: Avoid reset netdev stats on configuration changes")
Signed-off-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
When creation of a new rule that requires allocation of an FTE fails,
we need to call tree_put_node() on the FTE in order to release its
resource.
Fixes: cefc23554fc2 ("net/mlx5: Fix FTE cleanup")
Signed-off-by: Maor Gottlieb <maorg@nvidia.com>
Reviewed-by: Alaa Hleihel <alaa@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The function calculation always results in a value of 0. This works
generally, but when the release all pages feature is enabled it will
result in crashes.
Fixes: 0aa128475d33 ("net/mlx5: Maintain separate page trees for ECPF and PF functions")
Signed-off-by: Daniel Jurgens <danielj@nvidia.com>
Reported-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
If the TTL of the packet is modified as part of the actions, the packet's
checksum needs to be recalculated. ConnectX-6 Dx can handle this checksum
recalculation natively; older devices require an additional recalculation.
Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Just return the ptr directly.
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
The TLS RX workqueue is needed only when kTLS RX device offload
is supported.
Move its creation from the general TLS init function to the
kTLS RX init.
Create it once at init time if supported, avoiding creation/destruction
every time the feature bit is toggled.
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
This change fixes the checkpatch warning described in
commit cbacb5ab0aa0 ("docs: printk-formats: Stop encouraging use of unnecessary %h[xudi] and %hh[xudi]").
Standard integer promotion is already done, and %hx and %hhx are useless,
so do not encourage the use of %hh[xudi] or %h[xudi].
Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Increasing the indirection RQ table size from 128 to 256 improves the
packet distribution over the NIC HW queues for various cases.
Let's take a look at the following scenario:
Assuming RSS result distributed uniformly and indirection table is filled
with queues in a cyclic manner.
Let N be the number of queues on a given setup.
If 256%N = 128%N = 0, then all queues have the same probability of being
chosen for a given RSS result.
This case is neither improved nor degraded by the change.
If 256%N != 0 and 128%N != 0, there is a remainder which will favor some
queues. Increasing the indirection RQ table size to 256 reduces the ratio
between the probability of the favored queues being selected and that of
the rest, and improves the distribution.
For example, let's assume the number of queues is 56.
For a table size of 128, we have 128%56=16 queues with a 3/128
probability of being chosen and 2/128 for the remaining 40;
those 16 queues are 1.5 times as likely to be chosen as the other 40.
For a table size of 256, we have 256%56=32 queues with a 5/256
probability of being chosen and 4/256 for the remaining 24 queues.
Here 32 queues are only 1.25 times as likely to be chosen as the other 24.
This shows that the larger indirection table size is more likely to yield
an even distribution.
This change also aligns our mlx5 driver's indirection table size with
other vendors.
Signed-off-by: Noam Stolero <noams@nvidia.com>
Reviewed-by: Tal Gilboa <talgi@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
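The arithmetic above can be checked with a small standalone program
(purely illustrative; it only models cyclic table filling):

    #include <stdio.h>

    static void distribution(int table_size, int num_queues)
    {
            int favored = table_size % num_queues;
            int base = table_size / num_queues;

            if (favored == 0)
                    printf("T=%d N=%d: uniform, %d entries each\n",
                           table_size, num_queues, base);
            else
                    printf("T=%d N=%d: %d queues get %d entries, %d get %d\n",
                           table_size, num_queues, favored, base + 1,
                           num_queues - favored, base);
    }

    int main(void)
    {
            distribution(128, 56); /* 16 queues x3, 40 queues x2 -> 1.5x  */
            distribution(256, 56); /* 32 queues x5, 24 queues x4 -> 1.25x */
            return 0;
    }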
|
|
The channel's napi is first needed upon activation, not creation.
Minimize its enabled scope by moving it from the channel's open/close
stage into the activate/deactivate stage.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
Reviewed-by: Maxim Mikityanskiy <maximmi@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Also cleanup neigh in profile disable.
This is for logical separation.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
To avoid a false lock dependency warning, set the tc_ht lock class
different from the lock class of the other hashtables in use: when
deleting the last flow from a group and then deleting the group, we get
into del_sw_flow_group(), which calls rhashtable_destroy() on
fg->ftes_hash; that takes ht->mutex, but it is a different ht than the
one here.
======================================================
WARNING: possible circular locking dependency detected
5.11.0-rc4_net_next_mlx5_949fdcc #1 Not tainted
------------------------------------------------------
modprobe/12950 is trying to acquire lock:
ffff88816510f910 (&node->lock){++++}-{3:3}, at: mlx5_del_flow_rules+0x2a/0x210 [mlx5_core]
but task is already holding lock:
ffff88815834e3e8 (&ht->mutex){+.+.}-{3:3}, at: rhashtable_free_and_destroy+0x37/0x340
which lock already depends on the new lock.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
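A sketch of the fix, assuming the rhashtable exposes its mutex as
ht->mutex (which lockdep otherwise classes by initialization site):

    static struct lock_class_key tc_ht_lock_key;
    int err;

    err = rhashtable_init(tc_ht, &tc_ht_params);
    if (err)
            return err;
    /* give this ht's mutex its own class so lockdep can tell it
     * apart from fg->ftes_hash's mutex */
    lockdep_set_class(&tc_ht->mutex, &tc_ht_lock_key);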
|
|
Since it's profile dependent, let's init the vxlan info
as part of profile initialization.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
It's not part of priv initialization.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
We actually initialize priv and not netdev. The only call to
set netdev carrier will be moved in the following commit.
Signed-off-by: Roi Dayan <roid@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
|
|
Port nic netdevice will be used as uplink representor in downstream
patches. Add change profile method to allow changing a mlx5e netdevice
profile dynamically.
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
|
|
1) Initialize netdevice features and structures on netdevice allocation
and outside of the mlx5e profile.
2) As mlx5e netdevice private params will now be set up on profile init
only after netdevice features are already set, we add a call to
netdev_update_features() to resolve any conflict.
This is nice since we reuse the fix_features ndo code if a profile
wants different default features, instead of duplicating feature
conflict resolution code on profile initialization.
3) With this we achieve total separation between mlx5e profiles and
netdevices, and will allow replacing mlx5e profiles on the fly to reuse
the same netdevice for multiple profiles,
e.g. for the uplink representor profile as shown in the following patch.
4) Profile callbacks are not allowed to touch netdev->features directly
anymore, since in a downstream patch we will detach/attach the netdev
to a profile dynamically. Hence we move the code dealing with
netdev->features from profile->init() to the fix_features ndo, and we
will call netdev_update_features() in
mlx5e_attach_netdev(profile, netdev);
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
|
|
Not all devices that need to use the OPP core need to have clocks; a
missing clock is fine, in which case clk_get() shall return -ENOENT.
Anything else is an error and must be handled properly.
Reported-by: Dmitry Osipenko <digetx@gmail.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
The bandwidth must be scaled at a different point in the code flow based
on whether we are scaling the frequency up or down; otherwise this may
cause undesired effects, as the device will try to use more of the memory
bandwidth, which may be shared across several devices. This is much like
how regulators and required-opps are programmed.
Reported-by: Dmitry Osipenko <digetx@gmail.com>
Reported-by: Akhil P Oommen <akhilpo@codeaurora.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
|
|
The OPP core currently requires the required opp tables to be available
before the dependent OPP table is added, as it needs to create links
from the dependent OPP table to the required ones. This may not be
convenient for all the platforms though, as this requires strict
ordering for probing the drivers.
This patch allows lazy-linking of the required-opps. The OPP tables for
which the required-opp-tables aren't available at the time of their
initialization, are added to a special list of OPP tables:
lazy_opp_tables. Later on, whenever a new OPP table is registered with
the OPP core, we check if it is required by an OPP table in the pending
list; if yes, then we complete the linking then and there.
An OPP table is marked unusable until the time all its required-opp
tables are available. And if lazy-linking fails for an OPP table, the
OPP core disables all of its OPPs to make sure no one can use them.
Tested-by: Hsin-Yi Wang <hsinyi@chromium.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
All the users have migrated to dev_pm_opp_set_opp() now, get rid of the
duplicate API, dev_pm_opp_set_bw(), which only performs a part of the new API.
While at it, remove the unnecessary parameter to _set_opp_bw().
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
|
|
dev_pm_opp_set_bw() is getting removed and dev_pm_opp_set_opp() should
be used instead. Migrate to the new API.
We don't want the OPP core to manage the clk for this driver, migrate to
dev_pm_opp_of_add_table_noclk() to make sure dev_pm_opp_set_opp()
doesn't have any side effects.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Chanwoo Choi <cw00.choi@samsung.com>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
|
|
dev_pm_opp_set_bw() is getting removed and dev_pm_opp_set_opp() should
be used instead. Migrate to the new API.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
|
|
dev_pm_opp_set_bw() is getting removed and dev_pm_opp_set_opp() should
be used instead. Migrate to the new API.
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
|
|
The new helper dev_pm_opp_set_opp() can be used for configuring the
devices for a particular OPP and can be used by different type of
devices, even the ones which don't change frequency (like power
domains).
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Dmitry Osipenko <digetx@gmail.com>
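A hypothetical caller, picking the OPP nearest a target rate and
programming the device with the new helper (dev and target_hz stand in
for the caller's device and choice of rate):

    struct dev_pm_opp *opp;
    unsigned long freq = target_hz;
    int ret;

    opp = dev_pm_opp_find_freq_ceil(dev, &freq);
    if (IS_ERR(opp))
            return PTR_ERR(opp);

    /* configures clk, regulators, bandwidth, required-opps as applicable */
    ret = dev_pm_opp_set_opp(dev, opp);
    dev_pm_opp_put(opp);
    return ret;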
|