Add a ConfigChangeError counter (qbv_config_change_errors) that is
incremented when the user tries to set the AdminBaseTime to a past value
while the current GCL is still running.
The ConfigChangeError counter should not be increased when a gate control
list is scheduled into the future.
Users can run "ethtool -S <interface> | grep qbv_config_change_errors"
to check the counter value.
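A minimal sketch of the intended counter logic, as a standalone C
illustration; the structure and function names are stand-ins, not the
actual igc driver code:

  #include <stdbool.h>
  #include <stdint.h>

  struct qbv_stats {
      uint64_t qbv_config_change_errors;  /* mirrors the ethtool counter */
  };

  /* Bump the counter only when the new AdminBaseTime lies in the past
   * while a gate control list is already running; a schedule set in
   * the future is not an error. */
  void check_qbv_base_time(struct qbv_stats *stats, int64_t base_time_ns,
                           int64_t now_ns, bool gcl_running)
  {
      if (gcl_running && base_time_ns < now_ns)
          stats->qbv_config_change_errors++;
  }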
Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com>
Tested-by: Naama Meir <naamax.meir@linux.intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
I was intending to make all the Netlink Spec code BSD-3-Clause
to ease adoption, but it appears that:
- I fumbled the uAPI and used "GPL WITH uAPI note" there
- it gives people pause as they expect GPL in the kernel
As suggested by Chuck, re-license under a dual license. This gives us
the benefit of full BSD freedom while fulfilling the broad "kernel is
under GPL" expectations.
Link: https://lore.kernel.org/all/20230304120108.05dd44c5@kernel.org/
Link: https://lore.kernel.org/r/20230306200457.3903854-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Map all my old email addresses to my current address.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Link: https://lore.kernel.org/r/20230306194405.108236-1-stephen@networkplumber.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
Map Maxim's old corporate addresses to his personal one.
Link: https://lore.kernel.org/r/20230306192018.3894988-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
cb_context should be freed on the error path in nfc_se_io as stated by
commit 25ff6f8a5a3b ("nfc: fix memory leak of se_io context in
nfc_genl_se_io").
Make the error path in nfc_se_io unwind everything in reverse order, i.e.
free the cb_context after unlocking the device.
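A userspace sketch of the reverse-order unwind, under the assumption
that the resources are acquired as "allocate context, then lock the
device"; the names are illustrative, not the actual nfc_se_io() code:

  #include <pthread.h>
  #include <stdlib.h>

  static pthread_mutex_t dev_lock = PTHREAD_MUTEX_INITIALIZER;

  int se_io_sketch(int fail)
  {
      void *cb_context = calloc(1, 64);

      if (!cb_context)
          return -1;

      pthread_mutex_lock(&dev_lock);

      if (fail)
          goto error;

      /* success: hand cb_context over to the device/callback */
      pthread_mutex_unlock(&dev_lock);
      return 0;

  error:
      pthread_mutex_unlock(&dev_lock);  /* unlock first ...             */
      free(cb_context);                 /* ... then free, reverse order */
      return -1;
  }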
Suggested-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Fedor Pchelkin <pchelkin@ispras.ru>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20230306212650.230322-1-pchelkin@ispras.ru
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
|
|
In monitor mode, we try to report the EOF bit on the
first MPDU of an A-MPDU (hardware duplicates this bit
over all MPDUs, so it's only trustable on the first).
However, due to reshuffling in an earlier commit, the
toggle_bit != mvm->ampdu_toggle logic can no longer
work since mvm->ampdu_toggle is now set before this
code runs.
Fix this by tracking the first_subframe status in the
phy data struct and using that instead of checking.
Fixes: f1490546bec9 ("wifi: iwlwifi: mvm: rxmq: refactor mac80211 rx_status setting")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.e273aa0d3fdc.I77db4cc247898eae8a98b80659386d6737052b95@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
With older compilers like gcc-9, the calculation of the vlan
priority field causes a false-positive warning from the byteswap:
In file included from drivers/net/ethernet/intel/ice/ice_tc_lib.c:4:
drivers/net/ethernet/intel/ice/ice_tc_lib.c: In function 'ice_parse_cls_flower':
include/uapi/linux/swab.h:15:15: error: integer overflow in expression '(int)(short unsigned int)((int)match.key-><U67c8>.<U6698>.vlan_priority << 13) & 57344 & 255' of type 'int' results in '0' [-Werror=overflow]
15 | (((__u16)(x) & (__u16)0x00ffU) << 8) | \
| ~~~~~~~~~~~~^~~~~~~~~~~~~~~~~
include/uapi/linux/swab.h:106:2: note: in expansion of macro '___constant_swab16'
106 | ___constant_swab16(x) : \
| ^~~~~~~~~~~~~~~~~~
include/uapi/linux/byteorder/little_endian.h:42:43: note: in expansion of macro '__swab16'
42 | #define __cpu_to_be16(x) ((__force __be16)__swab16((x)))
| ^~~~~~~~
include/linux/byteorder/generic.h:96:21: note: in expansion of macro '__cpu_to_be16'
96 | #define cpu_to_be16 __cpu_to_be16
| ^~~~~~~~~~~~~
drivers/net/ethernet/intel/ice/ice_tc_lib.c:1458:5: note: in expansion of macro 'cpu_to_be16'
1458 | cpu_to_be16((match.key->vlan_priority <<
| ^~~~~~~~~~~
After converting this to be16_encode_bits(), the code becomes more
readable to both people and compilers, which avoids the warning.
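For illustration, a standalone userspace sketch of the same encoding;
VLAN_PRIO_MASK/VLAN_PRIO_SHIFT carry the usual 802.1Q values, and in the
driver be16_encode_bits(prio, VLAN_PRIO_MASK) expresses the same thing in
a single step without the open-coded shift that trips gcc-9:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <stdio.h>

  #define VLAN_PRIO_MASK  0xe000
  #define VLAN_PRIO_SHIFT 13

  /* Place the 3-bit priority in bits 15..13 and store the result in
   * network (big-endian) order. */
  static uint16_t encode_vlan_prio(uint16_t prio)
  {
      return htons((uint16_t)((prio << VLAN_PRIO_SHIFT) & VLAN_PRIO_MASK));
  }

  int main(void)
  {
      printf("prio 5 -> 0x%04x (network order)\n", encode_vlan_prio(5));
      return 0;
  }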
Fixes: 34800178b302 ("ice: Add support for VLAN priority filters in switchdev")
Suggested-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com>
Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
There were a few smatch warnings reported by Dan:
- ice_vsi_cfg_xdp_txqs can return 0 instead of ret, which is cleaner
- return values in ice_vsi_cfg_def were ignored
- in ice_vsi_rebuild the return value was ignored when the rebuild
  failed; that code was never reached, but rewrite it for clarity
- ice_vsi_cfg_tc can return 0 instead of ret
Fixes: 6624e780a577 ("ice: split ice_vsi_setup into smaller functions")
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
When creating the TLV to send to the FW for configuring DSCP mode PFC, the
PFCENABLE field was being masked with a 4 bit mask (0xF), but this is an 8
bit bitmask of enabled classes for PFC. This means that traffic classes
4-7 could not be enabled for PFC.
Remove the mask completely, as it is not necessary: we are assigning 8
bits to an 8 bit field.
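A small standalone C illustration of why the 4-bit mask loses the upper
classes (the values are examples only):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint8_t pfcena  = 0xF0;           /* traffic classes 4-7 enabled */
      uint8_t old_tlv = pfcena & 0xF;   /* old code: 4-bit mask */
      uint8_t new_tlv = pfcena;         /* fixed code: keep all 8 bits */

      printf("old: 0x%02x (TCs 4-7 dropped), new: 0x%02x\n", old_tlv, new_tlv);
      return 0;
  }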
Fixes: 2a87bd73e50d ("ice: Add DSCP support")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Karen Ostrowska <karen.ostrowska@intel.com>
Tested-by: Gurucharan G <gurucharanx.g@intel.com> (A Contingent worker at Intel)
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
|
|
[Why]
Currently, the clk manager matches SocVoltage with voltage from
fused settings (dfPstate clock table). And then corresponding clocks
are selected.
However, in certain situations this leads to the clk manager not
including at least one entry with the highest supported clock setting.
[How]
Update the clk manager to include at least one entry with the highest
supported clock setting.
Reviewed-by: Pavle Kotarac <pavle.kotarac@amd.com>
Acked-by: Qingqing Zhuo <qingqing.zhuo@amd.com>
Signed-off-by: Swapnil Patel <Swapnil.Patel@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
Support the EccInfoTable, which includes the umc ras error count and
error address.
Signed-off-by: Candice Li <candice.li@amd.com>
Reviewed-by: Evan Quan <evan.quan@amd.com>
Reviewed-by: Stanley.Yang <Stanley.Yang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
There is no need to query the error count and error address on harvested
umc nodes.
v2: Fix code bug, use active_mask instead of harvest_config
and remove unnecessary argument in LOOP macro.
v3: Leave adev->gmc.num_umc unchanged.
Signed-off-by: Candice Li <candice.li@amd.com>
Reviewed-by: Tao Zhou <tao.zhou1@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
|
|
This is useful for understanding a driver's bpc defaults and support.
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Cc: Pekka Paalanen <ppaalanen@gmail.com>
Cc: Sebastian Wick <sebastian.wick@redhat.com>
Cc: Vitaly.Prosyak@amd.com
Cc: Uma Shankar <uma.shankar@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Joshua Ashton <joshua@froggi.es>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Reviewed-By: Joshua Ashton <joshua@froggi.es>
Link: https://patchwork.freedesktop.org/patch/msgid/20230113162428.33874-3-harry.wentland@amd.com
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
The EDID of an HDR display defines EOTFs that are supported
by the display and can be set in the HDR metadata infoframe.
Userspace is expected to read the EDID and set an appropriate
HDR_OUTPUT_METADATA.
In drm_parse_hdr_metadata_block the kernel reads the supported
EOTFs from the EDID and stores them in the
drm_connector->hdr_sink_metadata. While doing so it also
filters the EOTFs to the EOTFs the kernel knows about.
When an HDR_OUTPUT_METADATA is set it then checks to
make sure the EOTF is a supported EOTF. In cases where
the kernel doesn't know about a new EOTF this check will
fail, even if the EDID advertises support.
Since it is expected that userspace reads the EDID to understand
what the display supports it doesn't make sense for DRM to block
an HDR_OUTPUT_METADATA if it contains an EOTF the kernel doesn't
understand.
This comes with the added benefit of future-proofing metadata
support. If the spec defines a new EOTF there is no need to
update DRM and a compositor can immediately make use of it.
Bug: https://gitlab.freedesktop.org/wayland/weston/-/issues/609
v2: Distinguish EOTFs defined in the kernel and ones defined
in the EDID in the commit description (Pekka)
v3: Rebase; drm_hdmi_infoframe_set_hdr_metadata moved
to drm_hdmi_helper.c
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Cc: Pekka Paalanen <ppaalanen@gmail.com>
Cc: Sebastian Wick <sebastian.wick@redhat.com>
Cc: Vitaly.Prosyak@amd.com
Cc: Uma Shankar <uma.shankar@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Joshua Ashton <joshua@froggi.es>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: dri-devel@lists.freedesktop.org
Cc: amd-gfx@lists.freedesktop.org
Acked-by: Pekka Paalanen <pekka.paalanen@collabora.com>
Reviewed-By: Joshua Ashton <joshua@froggi.es>
Link: https://patchwork.freedesktop.org/patch/msgid/20230113162428.33874-2-harry.wentland@amd.com
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
|
|
Chris pointed out that some bonehead, *cough* me *cough*, added two
mutex_locks() to the SiFive errata patching. The second was meant to
have been a mutex_unlock().
This results in errors such as
Unable to handle kernel NULL pointer dereference at virtual address 0000000000000030
Oops [#1]
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted
6.2.0-rc1-starlight-00079-g9493e6f3ce02 #229
Hardware name: BeagleV Starlight Beta (DT)
epc : __schedule+0x42/0x500
ra : schedule+0x46/0xce
epc : ffffffff8065957c ra : ffffffff80659a80 sp : ffffffff81203c80
gp : ffffffff812d50a0 tp : ffffffff8120db40 t0 : ffffffff81203d68
t1 : 0000000000000001 t2 : 4c45203a76637369 s0 : ffffffff81203cf0
s1 : ffffffff8120db40 a0 : 0000000000000000 a1 : ffffffff81213958
a2 : ffffffff81213958 a3 : 0000000000000000 a4 : 0000000000000000
a5 : ffffffff80a1bd00 a6 : 0000000000000000 a7 : 0000000052464e43
s2 : ffffffff8120db41 s3 : ffffffff80a1ad00 s4 : 0000000000000000
s5 : 0000000000000002 s6 : ffffffff81213938 s7 : 0000000000000000
s8 : 0000000000000000 s9 : 0000000000000001 s10: ffffffff812d7204
s11: ffffffff80d3c920 t3 : 0000000000000001 t4 : ffffffff812e6dd7
t5 : ffffffff812e6dd8 t6 : ffffffff81203bb8
status: 0000000200000100 badaddr: 0000000000000030 cause: 000000000000000d
[<ffffffff80659a80>] schedule+0x46/0xce
[<ffffffff80659dce>] schedule_preempt_disabled+0x16/0x28
[<ffffffff8065ae0c>] __mutex_lock.constprop.0+0x3fe/0x652
[<ffffffff8065b138>] __mutex_lock_slowpath+0xe/0x16
[<ffffffff8065b182>] mutex_lock+0x42/0x4c
[<ffffffff8000ad94>] sifive_errata_patch_func+0xf6/0x18c
[<ffffffff80002b92>] _apply_alternatives+0x74/0x76
[<ffffffff80802ee8>] apply_boot_alternatives+0x3c/0xfa
[<ffffffff80803cb0>] setup_arch+0x60c/0x640
[<ffffffff80800926>] start_kernel+0x8e/0x99c
---[ end trace 0000000000000000 ]---
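A userspace illustration of the bug shape with pthreads (not the actual
RISC-V errata code): locking a non-recursive mutex a second time where
an unlock was intended never completes, while the fixed pairing does:

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t text_mutex = PTHREAD_MUTEX_INITIALIZER;

  void patch_func_buggy(void)
  {
      pthread_mutex_lock(&text_mutex);
      /* ... patch alternatives ... */
      pthread_mutex_lock(&text_mutex);   /* should have been an unlock */
  }

  void patch_func_fixed(void)
  {
      pthread_mutex_lock(&text_mutex);
      /* ... patch alternatives ... */
      pthread_mutex_unlock(&text_mutex);
  }

  int main(void)
  {
      patch_func_fixed();
      puts("fixed path completes");
      /* calling patch_func_buggy() would block on the second lock */
      return 0;
  }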
Reported-by: Chris Hofstaedtler <zeha@debian.org>
Fixes: 9493e6f3ce02 ("RISC-V: take text_mutex during alternative patching")
Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20230302174154.970746-1-conor@kernel.org
[Palmer: pick up Geert's bug report from the thread]
Signed-off-by: Palmer Dabbelt <palmer@rivosinc.com>
|
|
Commit 596ff4a09b89 ("cpumask: re-introduce constant-sized cpumask
optimizations") changed cpumask_setall() to use "bitmap_set()" instead
of "bitmap_fill()", because bitmap_fill() would explicitly set all the
bits of a constant sized small bitmap, and that's exactly what we don't
want: we want to only set bits up to 'nr_cpu_ids', which is what
"bitmap_set()" does.
However, Yury correctly points out that while "bitmap_set()" does indeed
only set bits up to the required bitmap size, it doesn't _clear_ bits
above that size, so the upper bits would still not have well-defined
values.
Now, none of this should really matter, since any bits set past
'nr_cpu_ids' should always be ignored in the first place. Yes, the bit
scanning functions might return them as a result, but since users should
always consider the ">= nr_cpu_ids" condition to mean "no more bits",
that shouldn't have any actual effect (see previous commit 8ca09d5fa354
"cpumask: fix incorrect cpumask scanning result checks").
But let's just do it right, the way the code was _intended_ to work. We
have had enough lazy code that works but bites us in the *rse later
(again, see previous commit) that there's no reason to not just do this
properly.
It turns out that "bitmap_fill()" gets this all right for the complex
case, and really only fails for the inlined optimized case that just
fills the whole word. And while we could just fix bitmap_fill() to use
the proper last word mask, there are two issues with that:
- the cpumask case wants to do the _optimization_ based on "NR_CPUS is
a small constant", but then wants to do the actual bit _fill_ based
on "nr_cpu_ids" that isn't necessarily that same constant
- we have lots of non-cpumask users of bitmap_fill(), and while they
hopefully don't care, and probably would want the proper semantics
anyway ("only set bits up to the limit"), I do not want the cpumask
changes to impact other parts
So this ends up just doing the single-word optimization by hand in the
cpumask code. If our cpumask is fundamentally limited to a single word,
just do the proper "fill in that word" exactly. And if it's the more
complex multi-word case, then the generic bitmap_fill() will DTRT.
This is all an example of how our bitmap function optimizations really
are somewhat broken. They conflate the "this is size of the bitmap"
optimizations with the actual bit(s) we want to set.
In many cases we really want to have the two be separate things:
sometimes we base our optimizations on the size of the whole bitmap ("I
know this whole bitmap fits in a single word, so I'll just use
single-word accesses"), and sometimes we base them on the bit we are
looking at ("this is just acting on bits that are in the first word, so
I'll use single-word accesses").
Notice how the end result of the two optimizations are the same, but the
way we get to them are quite different.
And all our cpumask optimization games are really about that fundamental
distinction, and we'd often really want to pass in both the "this is the
bit I'm working on" (which _can_ be a small constant but might be
variable), and "I know it's in this range even if it's variable" (based
on CONFIG_NR_CPUS).
So this cpumask_setall() implementation just makes that explicit. It
checks the "I statically know the size is small" using the known static
size of the cpumask (which is what that 'small_cpumask_bits' is all
about), but then sets the actual bits using the exact number of cpus we
have (ie 'nr_cpumask_bits').
Of course, in a perfect world, the compiler would have done all the
range analysis (possibly with help from us just telling it that
"this value is always in this range"), and would do all of this for us.
But that is not the world we live in.
While we dream of that perfect world, this does that manual logic to
make it all work out. And this was a very long explanation for a small
code change that shouldn't even matter.
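A single-word, userspace sketch of the resulting behaviour (an
illustration of the idea, not the kernel's cpumask_setall()): set
exactly the low nr_cpu_ids bits and leave everything above them clear:

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t setall_single_word(unsigned int nr_cpu_ids)
  {
      return nr_cpu_ids >= 64 ? ~0ULL : (1ULL << nr_cpu_ids) - 1;
  }

  int main(void)
  {
      printf("nr_cpu_ids=6  -> 0x%016llx\n",
             (unsigned long long)setall_single_word(6));
      printf("nr_cpu_ids=64 -> 0x%016llx\n",
             (unsigned long long)setall_single_word(64));
      return 0;
  }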
Reported-by: Yury Norov <yury.norov@gmail.com>
Link: https://lore.kernel.org/lkml/ZAV9nGG9e1%2FrV+L%2F@yury-laptop/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Do not include user information in the radiotap EHT data for EHT sounding
NDP as the frame doesn't include the user specific field. Instead,
encode the NSS and the beamforming information in the EHT data.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.ac6474ded9bd.I9655589e9afbacc16820f35f6f5d90c6a91b8b05@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
A new FW API added the info that was missing for updating the RU
allocation, so use the new API to update the radiotap information.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.b16acaa4bad1.I53afa03058dbd2cd8afbaf5e82596c8ed501a476@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Update the log category for the reset-fw changes.
Signed-off-by: Mukesh Sisodiya <mukesh.sisodiya@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.852a6b5f95fa.Ie67bd28da65c7e42424cacb37495930475de2dad@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
VHT, HE and EHT rates use the same bits for NSS, so there is no need
for per-PHY-version defines.
Also use spatch to replace bit manipulation with FIELD_GET:
@@
identifier rate;
@@
-((rate & RATE_MCS_NSS_MSK) >> RATE_MCS_NSS_POS)
+FIELD_GET(RATE_MCS_NSS_MSK, rate)
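For reference, a standalone sketch of what FIELD_GET does with a mask;
the mask value below is purely illustrative, not the driver's actual
RATE_MCS_NSS_MSK definition:

  #include <stdint.h>
  #include <stdio.h>

  #define SAMPLE_NSS_MSK 0x30    /* illustrative mask, bits 5..4 */

  /* Equivalent of FIELD_GET(mask, val): mask the field and shift it
   * down by the mask's lowest set bit, so no separate *_POS define is
   * needed. */
  static uint32_t field_get(uint32_t mask, uint32_t val)
  {
      return (val & mask) / (mask & -mask);
  }

  int main(void)
  {
      uint32_t rate = 0x20;    /* NSS field holds 2 under the sample mask */

      printf("nss = %u\n", field_get(SAMPLE_NSS_MSK, rate));
      return 0;
  }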
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.167ed9477aa8.Ibd8e71d31896e8d8f067ce4e3a6e9a0e86c78f3f@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Currently the for loop also runs over bandwidths that are unsupported
in the command; shorten the path when we don't support them.
Also use the right macro for setting BW20.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.0264ba9df63b.I6c7c9efc806e0ffb7cb3b6051b2d109646e8708c@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Support the new HW step of the BnJ-Fm4 device.
Signed-off-by: Golan Ben Ami <golan.ben.ami@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.bb0591c59898.If04d7a45707ba008981f8c8ea7f7f107880f146c@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The mask building here is only relevant for the old TX API,
so move it into the else branch.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.c0795543f254.I302124a8584dd049577b0c2c74ecd7c48ddf4f3e@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
For the old TX API we need the tfd_queue_msk, but for the
new TX API we don't need it here because we add it to the
station later. However, for the new API mvm->snif_queue is
set to IWL_MVM_INVALID_QUEUE == 0xffff, so the BIT() here
is undefined behaviour.
Since we don't need the tfd_queue_msk value for the new TX
API at all, simply fill it in only for the old API.
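The shape of the guard, as a standalone sketch (the names are stand-ins
for the iwlwifi ones): only build the queue bitmask for the old TX API,
where the queue number is a real, small bit index; shifting 1 by 0xffff
would be undefined behaviour:

  #include <stdint.h>

  #define INVALID_QUEUE 0xffff    /* stand-in for IWL_MVM_INVALID_QUEUE */

  uint32_t snif_tfd_queue_msk(int use_new_tx_api, unsigned int queue)
  {
      if (use_new_tx_api)
          return 0;          /* mask unused; queue may be INVALID_QUEUE */
      return 1U << queue;    /* old API: queue is a valid bit index */
  }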
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.b8da0b7eb194.I53744fd7cfb6e146a9393272a2a61852841238d9@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Calculate the position of the control channel in the wide channel
based on the chandef; this is used to obtain the value of N in
802.11be D1.5 Table 9-53a, in the PHY MU/MRU index column.
To avoid having to calculate the value for every frame, do it once
when the monitor vif is added.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.fe9a5b58e241.I291ee480252d098f62d9ec39040284d3e521d88e@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
In EHT sniffer mode, DW4 is used entirely for sniffer data (unlike in
HE mode), so move the full DW4 into a union, extract the new data5 used
for parsing the USIG info, and fill the radiotap TLVs with the
extracted data.
Also parse OFDM_RX_VECTOR_USIG_A1_OUT and OFDM_RX_VECTOR_USIG_A2_OUT for
the rx_no_data notification.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.557d3870753b.I4e9fa4d21900a187753529d46956ba2a7ee75fda@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
This is based on 802.11be D1.5 Table 9-53a.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.0b720d6d6a48.I0034dd108696223494799d3ffe4f09685800b831@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
The type RX_NO_DATA_INFO_TYPE_HE_TB_UNMATCHED is applied to all TB
frames, including in EHT mode, so rename it accordingly.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.e4f51f347e48.I2d6ecb6eadc95666d2ef9794662ee779488ceac1@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Add Microsoft to the list of OEMs which are allowed to use TAS.
Signed-off-by: Alon Giladi <alon.giladi@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.662967fec1cc.Icb30cddc049cb5402fd5ab2ce7f95033e478b1b9@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Update all radiotap EHT TLVs that we can extract from data0 in HW.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.730f219e02ee.Ife3dd85c65758694d7602e8bc8660887d77faacf@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
rate_n_flags is always present in the data, so at least provide all of
the information we can extract from rate_n_flags.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.b1c7d49ad35e.Ie2412ac6f88700aa3767ff95ffb52a806b13b7ce@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Add a helper function that sets the type and length, zeroes out the
TLV data, and adds padding if necessary.
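A sketch of such a helper as standalone C; the header layout and the
4-byte alignment are assumptions for illustration, not the exact
radiotap/iwlwifi definitions:

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  struct tlv_hdr {
      uint16_t type;
      uint16_t len;
      uint8_t data[];
  };

  /* Fill in type and length, zero the payload, and return the size
   * rounded up so the next TLV starts on a 4-byte boundary. */
  size_t tlv_init(void *buf, uint16_t type, uint16_t len)
  {
      struct tlv_hdr *tlv = buf;

      tlv->type = type;
      tlv->len = len;
      memset(tlv->data, 0, len);

      return (sizeof(*tlv) + len + 3) & ~(size_t)3;
  }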
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.8ac5195bb3e6.I19ad99c1ad3108453aede64bddf6ef1a7c4a0b74@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
To be able to use a general function later for any kind of
TLV, separate the vendor TLV header/content in the structs.
Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Gregory Greenman <gregory.greenman@intel.com>
Link: https://lore.kernel.org/r/20230305124407.8ac5195bb3e6.I19ad99c1ad3108453aede64bddf6ef1a7c4a0b74@changeid
[separate from the original combined patch]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
|
|
Yafang Shao says:
====================
Currently we can't get bpf memory usage reliably either from memcg or
from bpftool.
In memcg, there's no 'bpf' item in memory.stat, but only 'kernel',
'sock', 'vmalloc' and 'percpu', which may be related to bpf memory. With
these items we still can't get the bpf memory usage, because the bpf
memory usage may be far less than the kmem in a memcg; for example, the
dentry may consume lots of kmem.
bpftool now shows the bpf memory footprint, which is different from the
bpf memory usage. The difference can be quite large in some cases, for
example,
- non-preallocated bpf map
The memory usage of a non-preallocated bpf map changes dynamically. The
allocated element count can range from 0 to max_entries. But the
memory footprint in bpftool only shows a fixed number.
- bpf metadata consumes more memory than bpf elements
In some corner cases, the bpf metadata can consume a lot more memory
than the bpf elements themselves. For example, this can happen when the
element size is quite small.
- some maps don't have key, value or max_entries
For example, the key_size and value_size of ringbuf are 0, so its
memlock is always 0.
We need a way to show the bpf memory usage, especially as there will be
more and more bpf programs running in production environments, so the
bpf memory usage is not trivial.
This patchset introduces a new map ops ->map_mem_usage to calculate the
memory usage. Note that we don't intend to make the memory usage 100%
accurate; rather, the goal is to make sure there is only a small difference
between what bpftool reports and the real memory. That small difference
can be ignored compared to the total usage. That is enough to monitor
the bpf memory usage. For example, the user can rely on this value to
monitor the trend of bpf memory usage, compare the difference in bpf
memory usage between different bpf program versions, figure out which
maps consume large amounts of memory, etc.
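For illustration, a minimal standalone sketch of the callback concept;
the struct layouts, names and the arithmetic are stand-ins, not the
kernel's bpf structures or its exact formulas:

  #include <stdint.h>
  #include <stdio.h>

  struct map_like;

  struct map_ops_like {
      uint64_t (*map_mem_usage)(const struct map_like *map);
  };

  struct map_like {
      const struct map_ops_like *ops;
      uint32_t max_entries;
      uint32_t value_size;
  };

  /* Example per-type callback: fixed overhead plus one slot per entry,
   * value size rounded up to 8 bytes. */
  static uint64_t array_mem_usage(const struct map_like *map)
  {
      return sizeof(*map) +
             (uint64_t)map->max_entries * ((map->value_size + 7) & ~7u);
  }

  static const struct map_ops_like array_ops = {
      .map_mem_usage = array_mem_usage,
  };

  int main(void)
  {
      struct map_like m = {
          .ops = &array_ops, .max_entries = 65536, .value_size = 4,
      };

      printf("memlock ~ %llu bytes\n",
             (unsigned long long)m.ops->map_mem_usage(&m));
      return 0;
  }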
This patchset implements the bpf memory usage for all maps, and yet there's
still work to do. We don't want to introduce runtime overhead in the
element update and delete path, but we have to do it for some
non-preallocated maps,
- devmap, xskmap
When we update or delete an element, it will allocate or free memory.
In order to track this dynamic memory, we have to track the count in
element update and delete path.
- cpumap
The size of each cpumap element is not determined. If we
want to track the usage, we have to count the size of all elements in
the element update and delete path. So I just put it aside currently.
- local_storage, bpf_local_storage
When we attach or detach a cgroup, it will allocate or free memory. If
we want to track the dynamic memory, we also need to do something in
the update and delete path. So I just put it aside currently.
- offload map
Element update and delete for the offload map go via the netdev dev_ops,
which may dynamically allocate or free memory, but this dynamic
memory isn't currently counted in the offload map memory usage.
The result of each map can be found in the individual patch.
We may also need to track per-container bpf memory usage; that will be
addressed by a different patchset.
Changes:
v3->v4: code improvement on ringbuf (Andrii)
use READ_ONCE() to read lpm_trie (Tao)
explain why we can't get bpf memory usage from memcg.
v2->v3: check callback at map creation time and avoid warning (Alexei)
fix build error under CONFIG_BPF=n (lkp@intel.com)
v1->v2: calculate the memory usage within bpf (Alexei)
- [v1] bpf, mm: bpf memory usage
https://lwn.net/Articles/921991/
- [RFC PATCH v2] mm, bpf: Add BPF into /proc/meminfo
https://lwn.net/Articles/919848/
- [RFC PATCH v1] mm, bpf: Add BPF into /proc/meminfo
https://lwn.net/Articles/917647/
- [RFC PATCH] bpf, mm: Add a new item bpf into memory.stat
https://lore.kernel.org/bpf/20220921170002.29557-1-laoar.shao@gmail.com/
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
We have implemented the memory usage callback for all maps, and we
enforce that any newly added map has a callback as well. We check for
this callback at map creation time. If it doesn't exist, we return
EINVAL.
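A minimal sketch of such a creation-time check (the struct is
illustrative, not the kernel's bpf_map_ops):

  #include <errno.h>

  struct map_ops_like {
      unsigned long (*map_mem_usage)(const void *map);
  };

  /* Reject a map type whose ops do not provide a memory-usage callback. */
  int check_map_ops(const struct map_ops_like *ops)
  {
      if (!ops || !ops->map_mem_usage)
          return -EINVAL;
      return 0;
  }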
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-19-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate offload map memory usage. But
currently the memory dynamically allocated in netdev dev_ops, like
nsim_map_update_elem, is not counted. Let's just put it aside now.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-18-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate xskmap memory usage.
The xskmap memory usage can change dynamically when we add or remove
an xsk_map_node. Hence we need to track the count of xsk_map_node to get
its memory usage.
The result as follows,
- before
10: xskmap name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
- after
10: xskmap name count_map flags 0x0 <<< no elements case
key 4B value 4B max_entries 65536 memlock 524608B
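A standalone sketch of the bookkeeping behind this: the update/delete
paths maintain a node counter, and the report adds count * node size on
top of the fixed part. The names and sizes are stand-ins, not the
kernel's xskmap structures:

  #include <stdint.h>
  #include <stdio.h>

  struct xsk_map_like {
      uint32_t max_entries;
      uint64_t node_count;    /* bumped on add, dropped on remove */
  };

  static uint64_t xsk_map_usage(const struct xsk_map_like *m)
  {
      const uint64_t node_size = 40;  /* stand-in for sizeof(struct xsk_map_node) */

      return sizeof(*m) + (uint64_t)m->max_entries * sizeof(void *) +
             m->node_count * node_size;
  }

  int main(void)
  {
      struct xsk_map_like m = { .max_entries = 65536, .node_count = 0 };

      printf("empty map ~ %llu bytes\n",
             (unsigned long long)xsk_map_usage(&m));
      return 0;
  }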
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-17-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
sockmap and sockhash have little in common in how they allocate memory,
so let's introduce separate helpers to calculate their memory usage.
The result as follows,
- before
28: sockmap name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
29: sockhash name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
- after
28: sockmap name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524608B
29: sockhash name count_map flags 0x0 <<<< no updated elements
key 4B value 4B max_entries 65536 memlock 1048896B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-16-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced into the bpf_local_storage map to calculate
the memory usage. This helper is also used by other maps like
bpf_cgrp_storage, bpf_inode_storage, bpf_task_storage, etc.
Note that currently the dynamically allocated storage elements are not
counted in the usage, since it would take extra runtime overhead in the
element update or delete path. So let's put it aside now, and implement
it in the future when someone really needs it.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-15-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate local_storage map memory usage.
Currently the dynamically allocated elements are not counted, since it
would take runtime overhead in the element update or delete path. So
let's put it aside for now, and implement it in the future if the user
really needs it.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-14-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate bpf_struct_ops memory usage.
The result as follows,
- before
1: struct_ops name count_map flags 0x0
key 4B value 256B max_entries 1 memlock 4096B
btf_id 73
- after
1: struct_ops name count_map flags 0x0
key 4B value 256B max_entries 1 memlock 5016B
btf_id 73
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-13-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate queue_stack_maps memory usage.
The result as follows,
- before
20: queue name count_map flags 0x0
key 0B value 4B max_entries 65536 memlock 266240B
21: stack name count_map flags 0x0
key 0B value 4B max_entries 65536 memlock 266240B
- after
20: queue name count_map flags 0x0
key 0B value 4B max_entries 65536 memlock 524288B
21: stack name count_map flags 0x0
key 0B value 4B max_entries 65536 memlock 524288B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-12-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate the memory usage of devmap and
devmap_hash. The number of dynamically allocated elements is already
recorded for devmap_hash, but not for devmap. To track the memory size
of dynamically allocated elements, this patch also counts them for
devmap.
The result as follows,
- before
40: devmap name count_map flags 0x80
key 4B value 4B max_entries 65536 memlock 524288B
41: devmap_hash name count_map flags 0x80
key 4B value 4B max_entries 65536 memlock 524288B
- after
40: devmap name count_map flags 0x80 <<<< no elements
key 4B value 4B max_entries 65536 memlock 524608B
41: devmap_hash name count_map flags 0x80 <<<< no elements
key 4B value 4B max_entries 65536 memlock 524608B
Note that the number of buckets is the same as max_entries for
devmap_hash in this case.
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-11-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate cpumap memory usage. The size of
cpu_entries can change dynamically when we update or delete a cpumap
element, but this patch doesn't include the memory size of cpu_entry
yet. We could dynamically calculate the memory usage when we alloc or
free a cpu_entry, but it would take extra runtime overhead, so let's
just put it aside currently. Note that the size of different cpu_entries
may differ as well.
The result as follows,
- before
48: cpumap name count_map flags 0x4
key 4B value 4B max_entries 64 memlock 4096B
- after
48: cpumap name count_map flags 0x4
key 4B value 4B max_entries 64 memlock 832B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-10-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Introduce a new helper to calculate the bloom_filter memory usage.
The result as follows,
- before
16: bloom_filter flags 0x0
key 0B value 8B max_entries 65536 memlock 524288B
- after
16: bloom_filter flags 0x0
key 0B value 8B max_entries 65536 memlock 65856B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-9-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper ringbuf_map_mem_usage() is introduced to calculate ringbuf
memory usage.
The result as follows,
- before
15: ringbuf name count_map flags 0x0
key 0B value 0B max_entries 65536 memlock 0B
- after
15: ringbuf name count_map flags 0x0
key 0B value 0B max_entries 65536 memlock 78424B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20230305124615.12358-8-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to calculate reuseport_array memory usage.
The result as follows,
- before
14: reuseport_sockarray name count_map flags 0x0
key 4B value 8B max_entries 65536 memlock 1048576B
- after
14: reuseport_sockarray name count_map flags 0x0
key 4B value 8B max_entries 65536 memlock 524544B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-7-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
A new helper is introduced to get stackmap memory usage. Some small
memory allocations are ignored as their memory size is quite small
compared to the total usage.
The result as follows,
- before
16: stack_trace name count_map flags 0x0
key 4B value 8B max_entries 65536 memlock 1048576B
- after
16: stack_trace name count_map flags 0x0
key 4B value 8B max_entries 65536 memlock 2097472B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-6-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Introduce array_map_mem_usage() to calculate arraymap memory usage. In
this helper, some small memory allocations are ignored, like the
allocation of struct bpf_array_aux in prog_array. The inner_map_meta in
array_of_map is also ignored.
The result as follows,
- before
11: array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
12: percpu_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 8912896B
13: perf_event_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
14: prog_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
15: cgroup_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524288B
- after
11: array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524608B
12: percpu_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 17301824B
13: perf_event_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524608B
14: prog_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524608B
15: cgroup_array name count_map flags 0x0
key 4B value 4B max_entries 65536 memlock 524608B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Link: https://lore.kernel.org/r/20230305124615.12358-5-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
htab_map_mem_usage() is introduced to calculate hashmap memory usage. In
this helper, some small memory allocations are ignored, as their size is
quite small compared with the total size. The inner_map_meta in
hash_of_map is also ignored.
The result for hashtab as follows,
- before this change
1: hash name count_map flags 0x1 <<<< no prealloc, fully set
key 16B value 24B max_entries 1048576 memlock 41943040B
2: hash name count_map flags 0x1 <<<< no prealloc, none set
key 16B value 24B max_entries 1048576 memlock 41943040B
3: hash name count_map flags 0x0 <<<< prealloc
key 16B value 24B max_entries 1048576 memlock 41943040B
The memlock is always a fixed size whether the map is preallocated or
not, and whatever the count of allocated elements is.
- after this change
1: hash name count_map flags 0x1 <<<< non prealloc, fully set
key 16B value 24B max_entries 1048576 memlock 117441536B
2: hash name count_map flags 0x1 <<<< non prealloc, non set
key 16B value 24B max_entries 1048576 memlock 16778240B
3: hash name count_map flags 0x0 <<<< prealloc
key 16B value 24B max_entries 1048576 memlock 109056000B
The memlock now reflects what the hashtab actually allocated.
The result for percpu hash map as follows,
- before this change
4: percpu_hash name count_map flags 0x0 <<<< prealloc
key 16B value 24B max_entries 1048576 memlock 822083584B
5: percpu_hash name count_map flags 0x1 <<<< no prealloc
key 16B value 24B max_entries 1048576 memlock 822083584B
- after this change
4: percpu_hash name count_map flags 0x0
key 16B value 24B max_entries 1048576 memlock 897582080B
5: percpu_hash name count_map flags 0x1
key 16B value 24B max_entries 1048576 memlock 922748736B
At worst, the difference can be 10x, for example,
- before this change
6: hash name count_map flags 0x0
key 4B value 4B max_entries 1048576 memlock 8388608B
- after this change
6: hash name count_map flags 0x0
key 4B value 4B max_entries 1048576 memlock 83889408B
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/r/20230305124615.12358-4-laoar.shao@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|