2024-03-04Merge branch 'Allow struct_ops maps with a large number of programs'Martin KaFai Lau
Kui-Feng Lee says: ==================== The BPF struct_ops previously allowed only one page to be used for the trampolines of all links in a map. However, we have recently run out of space due to the large number of BPF program links. By allocating additional pages when we exhaust an existing page, we can accommodate more links in a single map. The variable st_map->image has been changed to st_map->image_pages, and its type has been changed to an array of pointers to buffers of PAGE_SIZE. Additional pages are allocated when all existing pages are exhausted. The test case loads a struct_ops map having 40 programs. Their trampolines take about 6.6k+ bytes, over 1.5 pages, on x86. ---
Major differences from v3:
- Refactor buffer allocations to bpf_struct_ops_tramp_buf_alloc() and bpf_struct_ops_tramp_buf_free().
Major differences from v2:
- Move image buffer allocation to bpf_struct_ops_prepare_trampoline().
Major differences from v1:
- Always free pages if failing to update.
- Allocate 8 pages at most.
v3: https://lore.kernel.org/all/20240224030302.1500343-1-thinker.li@gmail.com/
v2: https://lore.kernel.org/all/20240221225911.757861-1-thinker.li@gmail.com/
v1: https://lore.kernel.org/all/20240216182828.201727-1-thinker.li@gmail.com/
==================== Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-04selftests/bpf: Test struct_ops maps with a large number of struct_ops programs.Kui-Feng Lee
Create and load a struct_ops map with a large number of struct_ops programs to generate trampolines whose total size spans multiple pages. The map includes 40 programs. Their trampolines take 6.6k+ bytes, more than 1.5 pages, on x86. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-4-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-04bpf: struct_ops supports more than one page for trampolines.Kui-Feng Lee
The BPF struct_ops previously allowed only one page of trampolines. Each function pointer of a struct_ops is implemented by a struct_ops bpf program. Each struct_ops bpf program requires a trampoline. The following selftest patch shows each page can hold a little more than 20 trampolines. While one page is more than enough for the tcp-cc usecase, the sched_ext use case shows that one page is not always enough and hits the one-page limit. This patch overcomes the one-page limit by allocating additional pages when needed, limited to a total of MAX_IMAGE_PAGES (8) pages, which is more than enough for reasonable usage. The variable st_map->image has been changed to st_map->image_pages, and its type has been changed to an array of pointers to pages. Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-3-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
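A minimal sketch of the allocate-on-demand idea (struct, field, and allocator names here are illustrative, not the exact code in kernel/bpf/bpf_struct_ops.c):

  #define MAX_IMAGE_PAGES 8

  struct st_map_sketch {
          void *image_pages[MAX_IMAGE_PAGES];     /* PAGE_SIZE trampoline buffers */
          int image_pages_cnt;                    /* pages allocated so far */
  };

  /* Grab a fresh trampoline page once the current one is exhausted. */
  static void *tramp_page_get(struct st_map_sketch *st_map)
  {
          void *page;

          if (st_map->image_pages_cnt >= MAX_IMAGE_PAGES)
                  return NULL;                    /* hard cap: 8 pages per map */
          page = arch_alloc_bpf_trampoline(PAGE_SIZE);
          if (page)
                  st_map->image_pages[st_map->image_pages_cnt++] = page;
          return page;
  }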
2024-03-04arm64: defconfig: Enable support for cbmem entries in the coreboot tableNícolas F. R. A. Prado
Enable the cbmem driver and dependencies in order to support reading cbmem entries from the coreboot table, which are used to store logs from coreboot on arm64 Chromebooks, and provide useful information for debugging the boot process on those devices. Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Brian Norris <briannorris@chromium.org> Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com> Link: https://lore.kernel.org/r/20240304-coreboot-defconfig-v1-1-02dc1940408f@collabora.com Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-03-04kselftest: Add basic test for probing the rust sample modulesLaura Nao
Add new basic kselftest that checks if the available rust sample modules can be added and removed correctly. Signed-off-by: Laura Nao <laura.nao@collabora.com> Reviewed-by: Sergio Gonzalez Collado <sergio.collado@gmail.com> Reviewed-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
2024-03-04firmware: microchip: Fix over-requested allocation sizeDawei Li
cocci warnings: (new ones prefixed by >>)
>> drivers/firmware/microchip/mpfs-auto-update.c:387:72-78: ERROR: application of sizeof to pointer
drivers/firmware/microchip/mpfs-auto-update.c:170:72-78: ERROR: application of sizeof to pointer
response_msg is a pointer to u32, so the size of the element it points to is supposed to be a multiple of sizeof(u32), rather than sizeof(u32 *). Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202403040516.CYxoWTXw-lkp@intel.com/ Signed-off-by: Dawei Li <dawei.li@shingroup.cn> Fixes: ec5b0f1193ad ("firmware: microchip: add PolarFire SoC Auto Update support") Signed-off-by: Conor Dooley <conor.dooley@microchip.com>
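The bug pattern, for illustration (the surrounding allocation call and "count" are a sketch, not the exact driver code):

  u32 *response_msg;

  /* Buggy: sizeof(response_msg) is the size of the pointer itself
   * (8 bytes on 64-bit), so the allocation is larger than needed. */
  response_msg = devm_kzalloc(dev, count * sizeof(response_msg), GFP_KERNEL);

  /* Fixed: sizeof(*response_msg) == sizeof(u32). */
  response_msg = devm_kzalloc(dev, count * sizeof(*response_msg), GFP_KERNEL);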
2024-03-04ice: avoid unnecessary devm_ usageMaciej Fijalkowski
1. pcaps are freed right after AQ routines are done, so there's no need for devm_ allocations. 2. the test frame for the loopback test in ethtool -t is destroyed at the end of the test, so we don't need devm_ here either. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: do not disable Tx queues twice in ice_down()Maciej Fijalkowski
ice_down() clears QINT_TQCTL_CAUSE_ENA_M bit twice, which is not necessary. First clearing happens in ice_vsi_dis_irq() and second in ice_vsi_stop_tx_ring() - remove the first one. While at it, make ice_vsi_dis_irq() static as ice_down() is the only current caller of it. Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: cleanup line splitting for context set functionsJacob Keller
The indentation for ice_set_ctx and ice_write_rxq_ctx breaks the function name after the return type. This style of breaking is used a lot throughout the ice driver, even in cases where it's not actually helpful for readability. We no longer prefer this style of line splitting in the driver, and new code is avoiding it. Normally, I would leave this alone unless the actual function contents or description needed updating. However, a future change is going to add inverse functions for converting packed context to unpacked context structures. To keep this code uniform with the existing set functions, fix up the style to the modern format of keeping the type on the same line. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
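For illustration, the style change looks like this (parameter list abbreviated from the driver):

  /* Old style: function name broken onto the line after the return type */
  static int
  ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
                    u32 rxq_index);

  /* Preferred: return type and name on the same line */
  static int ice_write_rxq_ctx(struct ice_hw *hw, struct ice_rlan_ctx *rlan_ctx,
                               u32 rxq_index);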
2024-03-04ice: use GENMASK instead of BIT(n) - 1 in pack functionsJacob Keller
The functions used to pack the Tx and Rx context into the hardware format rely on using BIT() and then subtracting 1 to get a bitmask. These functions even have a comment about how x86 machines can't use this method for certain widths because the SHL instructions will not work properly. The Linux kernel already provides the GENMASK macro for generating a suitable bitmask. Further, GENMASK is capable of generating the mask including the shift_width. Since width is the total field width, take care to subtract one to get the final bit position. Since we now include the shifted bits as part of the mask, shift the source value first before applying the mask. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel) Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
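A sketch of the transformation in the byte-sized pack helper (variable names follow the existing helpers but are illustrative):

  /* Before: mask from BIT(), applied before shifting */
  mask = (u8)(BIT(ce_info->width) - 1);
  src_byte &= mask;
  src_byte <<= shift_width;

  /* After: GENMASK already covers the shifted field, so shift first */
  mask = (u8)GENMASK(ce_info->width - 1 + shift_width, shift_width);
  src_byte <<= shift_width;
  src_byte &= mask;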
2024-03-04ice: rename ice_write_* functions to ice_pack_ctx_*Jacob Keller
In ice_common.c there are 4 functions used for converting the unpacked software Tx and Rx context structure data into the packed format used by hardware. These functions have extremely generic names: * ice_write_byte * ice_write_word * ice_write_dword * ice_write_qword When I saw these function names my first thought was "write what? to where?". Understanding what these functions do requires looking at the implementation details. The functions take bits from an unpacked structure and copy them into the packed layout used by hardware. As part of live migration, we will want functions which perform the inverse operation of reading bits from the packed layout and copying them into the unpacked format. Naming these as "ice_read_byte", etc would be very confusing since they appear to write data. In preparation for adding this new inverse operation, rename the existing functions to use the prefix "ice_pack_ctx_". This makes it clear that they perform the bit packing while copying from the unpacked software context structure to the packed hardware context. The inverse operations can then neatly be named ice_unpack_ctx_*, clearly indicating they perform the bit unpacking while copying from the packed hardware context to the unpacked software context structure. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: remove vf->lan_vsi_num fieldJacob Keller
The lan_vsi_num field of the VF structure is no longer used for any purpose. Remove it. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: use relative VSI index for VFs instead of PF VSI numberJacob Keller
When initializing over virtchnl, the PF is required to pass a VSI ID to the VF as part of its capabilities exchange. The VF driver reports this value back to the PF in a variety of commands. The PF driver validates that this value matches the value it sent to the VF. Some hardware families such as the E700 series could use this value when reading RSS registers or communicating directly with firmware over the Admin Queue. However, E800 series hardware does not support any of these interfaces and the VF's only use for this value is to report it back to the PF. Thus, there is no requirement that this value be an actual VSI ID value of any kind. The PF driver already does not trust that the VF sends it a real VSI ID. The VSI structure is always looked up from the VF structure. The PF does validate that the VSI ID provided matches a VSI associated with the VF, but otherwise does not use the VSI ID for any purpose. Instead of reporting the VSI number relative to the PF space, report a fixed value of 1. When communicating with the VF over virtchnl, validate that the VSI number is returned appropriately. This avoids leaking information about the firmware or the PF state. Currently the ice driver only supplies a VF with a single VSI. However, it appears that virtchnl has some support for allowing multiple VSIs. I did not attempt to implement this. However, space is left open to allow further relative indexes if additional VSIs are provided in future feature development. For this reason, keep the ice_vc_isvalid_vsi_id function in place to allow extending it for multiple VSIs in the future. This change will also simplify handling of live migration in a future series. Since we will no longer provide a real VSI number to the VF, there will be no need to keep track of this number when migrating to a new host. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: remove unnecessary duplicate checks for VF VSI IDJacob Keller
The ice_vc_fdir_param_check() function validates that the VSI ID of the virtchnl flow director command matches the VSI number of the VF. This is already checked by the call to ice_vc_isvalid_vsi_id() immediately following this. This check is unnecessary since ice_vc_isvalid_vsi_id() already confirms this by checking that the VSI ID can locate the VSI associated with the VF structure. Furthermore, a following change is going to refactor the ice driver to report VSI IDs using a relative index for each VF instead of reporting the PF VSI number. This additional check would break that logic since it enforces that the VSI ID matches the VSI number. Since this check duplicates the logic in ice_vc_isvalid_vsi_id() and gets in the way of refactoring that logic, remove it. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ice: pass VSI pointer into ice_vc_isvalid_q_idJacob Keller
The ice_vc_isvalid_q_id() function takes a VSI index and a queue ID. It looks up the VSI from its index, and then validates that the queue number is valid for that VSI. The VSI ID passed is typically a VSI index from the VF. This VSI number is validated by the PF to ensure that it matches the VSI associated with the VF already. In every flow where ice_vc_isvalid_q_id() is called, the PF driver already has a pointer to the VSI associated with the VF. This pointer is obtained using ice_get_vf_vsi(), rather than looking up the VSI using the index sent by the VF. Since we already know which VSI to operate on, we can modify ice_vc_isvalid_q_id() to take a VSI pointer instead of a VSI index. Pass the VSI we found from ice_get_vf_vsi() instead of re-doing the lookup. This removes some unnecessary computation and scanning of the VSI list. It also removes the last place where the driver directly used the VSI number from the VF. This will pave the way for refactoring to communicate relative VSI numbers to the VF instead of absolute numbers from the PF space. Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Rafal Romanowski <rafal.romanowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
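The signature change, roughly (types sketched from the description above):

  /* Before: takes a VSI index and looks the VSI up on every call */
  static bool ice_vc_isvalid_q_id(struct ice_vf *vf, u16 vsi_id, u8 qid);

  /* After: the caller passes the VSI pointer it already obtained
   * via ice_get_vf_vsi(vf) */
  static bool ice_vc_isvalid_q_id(struct ice_vsi *vsi, u8 qid);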
2024-03-04bpf, net: validate struct_ops when updating value.Kui-Feng Lee
Perform all validations when updating values of struct_ops maps. Doing validation in st_ops->reg() and st_ops->update() is no longer necessary. However, tcp_register_congestion_control() is called from various other places and still needs to do its own validation. Cc: netdev@vger.kernel.org Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com> Link: https://lore.kernel.org/r/20240224223418.526631-2-thinker.li@gmail.com Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2024-03-04idpf: remove dealloc vector msg err in idpf_intr_relAlan Brady
This error message is at best not really helpful and at worst misleading. If we're here in idpf_intr_rel we're likely trying to do remove or reset. If we're in reset, this message will fail because we lose the virtchnl on reset, and HW is going to clean up those resources regardless in that case. If we're in remove and we get an error here, we're going to reset the device at the end of remove anyway, so it's not a big deal. Just remove this message; it's not useful. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: fix minor controlq issuesAlan Brady
While we're here improving virtchnl, we can include two minor fixes for the lower-level ctrlq flow. This adds a memory barrier to idpf_post_rx_buffs before we update tail on the controlq. We should make sure our writes have had a chance to finish before we tell HW it can touch them. This also removes some defensive programming in idpf_ctrlq_recv. The caller should not be using a num_q_msg value of zero or more than the ring size, and it's their responsibility to call functions sanely. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
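The barrier follows the usual descriptor-ring pattern (the register accessor and field names below are illustrative of the controlq code, not the exact lines):

  /* Make the posted buffer descriptors visible to the device
   * before bumping the tail register that hands them to HW. */
  dma_wmb();
  wr32(hw, cq->reg.tail, cq->next_to_post);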
2024-03-04idpf: prevent deinit uninitialized virtchnl coreAlan Brady
In idpf_remove we need to tear down the virtchnl core with idpf_vc_core_deinit so we can free up resources and leave things in a good state. However, in the case where we failed to establish VC communications we may not have ever actually successfully initialized the virtchnl core. This fixes it by setting a bit once we successfully init the virtchnl core. Then, in deinit, we'll check for it before going any further; otherwise we just return. Also clear the bit at the end of deinit so we know it's gone. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
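A sketch of the guard (the flag name is illustrative, taken from the description):

  /* at the end of a successful idpf_vc_core_init() */
  set_bit(IDPF_VC_CORE_INIT, adapter->flags);

  /* at the top of idpf_vc_core_deinit() */
  if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
          return;
  /* ... tear down virtchnl resources ... */
  clear_bit(IDPF_VC_CORE_INIT, adapter->flags);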
2024-03-04idpf: cleanup virtchnl cruftAlan Brady
We can now remove a bunch of gross code we don't need anymore, like the vc state bits and vc_buf_lock, since everything is using the transaction API now. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: refactor idpf_recv_mb_msgAlan Brady
Now that all the messages are using the transaction API, we can rework idpf_recv_mb_msg quite a lot to simplify it. Due to this, we remove idpf_find_vport, as it is no longer used, and alter idpf_recv_event_msg slightly. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: add async_handler for MAC filter messagesAlan Brady
There are situations where the driver needs to add a MAC filter but we're explicitly not allowed to sleep, so we can't wait for a virtchnl message to complete. This adds an async_handler for asynchronously sent messages for MAC filters so that we can better handle if there's an error of some kind. On success we don't need to do anything else, but if we failed to program the new filter we really should remove it from our list of MAC filters. If we don't remove bad filters, what I expect to happen is that after a reset of some kind we try to program the MAC filter again and it fails again. This is clearly wrong and I would expect it to be confusing for the user. The failure could also be for a delete MAC filter message, but those filters get deleted regardless. Not much we can do about a delete failure. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: refactor remaining virtchnl messagesAlan Brady
This takes care of RSS/SRIOV/MAC and other misc virtchnl messages. This again is mostly mechanical. In the absence of an async_handler for MAC filters, this will simply generically report any errors from idpf_vc_xn_forward_async. This maintains the existing behavior. A follow-up patch will add an async handler for MAC filters to remove bad filters from our list. While we're here we can also make the code much nicer by converting some variables to auto-variables where appropriate. This makes it cleaner and less prone to memory leaks. There's still a bit more cleanup we can do here to remove stuff that's no longer used; follow-up patches will take care of loose ends. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: refactor queue related virtchnl messagesAlan Brady
This reworks queue-specific virtchnl messages to use the added transaction API. It is fairly mechanical and generally makes the functions using it simpler. Functions using the transaction API no longer need to take the vc_buf_lock since they're not using it anymore. After filling out an idpf_vc_xn_params struct, idpf_vc_xn_exec takes care of the send and recv handling. This also converts those functions where appropriate to use auto-variables instead of manually calling kfree. This greatly simplifies the memory alloc paths and makes them less prone to memory leaks. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
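The resulting per-message pattern looks roughly like this (field and constant names are a sketch of the transaction API, not verbatim driver code):

  struct idpf_vc_xn_params xn_params = {};
  ssize_t reply_sz;

  xn_params.vc_op = VIRTCHNL2_OP_ENABLE_QUEUES;
  xn_params.send_buf.iov_base = msg;
  xn_params.send_buf.iov_len = msg_size;
  xn_params.timeout_ms = IDPF_VC_XN_DEFAULT_TIMEOUT_MSEC;
  reply_sz = idpf_vc_xn_exec(vport->adapter, &xn_params);
  if (reply_sz < 0)
          return reply_sz;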
2024-03-04Merge tag 'vfs-6.9.rw_hint' of ↵Christian Brauner
gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs Pull write hint fix from Christian Brauner: UFS devices are widely used in mobile applications, e.g. in smartphones. UFS vendors need data lifetime information to achieve good performance. Providing data lifetime information to UFS devices can result in up to 40% lower write amplification. Hence this patch series that restores the bi_write_hint member in struct bio. After this patch series has been merged, patches that implement data lifetime support in the SCSI disk (sd) driver will be sent to the Linux kernel SCSI maintainer. The following changes are included in this patch series:
- Improvements for the F_GET_RW_HINT and F_SET_RW_HINT fcntls.
- Move enum rw_hint into a new header file.
- Support F_SET_RW_HINT for block devices to make it easy to test data lifetime support.
- Restore the bio.bi_write_hint member and restore support in the VFS layer and also in the block layer for data lifetime information.
The shell script that has been used to test the patch series combined with the SCSI patches is available at the end of this cover letter.

* tag 'vfs-6.9.rw_hint' of gitolite.kernel.org:pub/scm/linux/kernel/git/vfs/vfs:
  block, fs: Restore the per-bio/request data lifetime fields
  fs: Propagate write hints to the struct block_device inode
  fs: Move enum rw_hint into a new header file
  fs: Split fcntl_rw_hint()
  fs: Verify write lifetime constants at compile time
  fs: Fix rw_hint validation

Signed-off-by: Christian Brauner <brauner@kernel.org>
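For reference, the F_SET_RW_HINT fcntl takes a pointer to a 64-bit hint value; a minimal userspace example (assuming a kernel and glibc that expose F_SET_RW_HINT and the RWH_* constants):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
          uint64_t hint = RWH_WRITE_LIFE_SHORT;   /* expected data lifetime */
          int fd = open(argv[1], O_WRONLY);

          if (fd < 0 || fcntl(fd, F_SET_RW_HINT, &hint) == -1)
                  perror("F_SET_RW_HINT");
          return 0;
  }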
2024-03-04idpf: refactor vport virtchnl messagesAlan Brady
This reworks the way vport related virtchnl messages work to take advantage of the added transaction API. It is fairly mechanical as, to use the transaction API, the function just needs to fill out an appropriate idpf_vc_xn_params struct to pass to idpf_vc_xn_exec which will take care of the actual send and recv. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com> Co-developed-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04idpf: implement virtchnl transaction managerAlan Brady
This starts refactoring how virtchnl messages are handled by adding a transaction manager (idpf_vc_xn_manager). There are two primary motivations here: to enable handling of multiple messages at once and to make the handling more robust in general. As it is right now, the driver may only have one pending message at a time and there's no guarantee that the response we receive was actually intended for the message we sent prior. This works by utilizing a "cookie" field of the message descriptor. It is arbitrary what data we put in the cookie; the response is required to carry the same cookie the original message was sent with. A "transaction" abstraction then uses the completion API to pair each response with the message it belongs to. The cookie works such that the first half is the index of the transaction in our array, and the second half is a "salt" that gets incremented with every message. This enables quick lookups into the array and also ensures we have the correct message. The salt is necessary because, after a message times out and we deem the response was lost for some reason, we could theoretically reuse the same index; using a different salt ensures that when we do actually get a response it's not the old message that timed out previously finally coming in. Since the number of transactions allocated is U8_MAX and the salt is 8 bits, we can never have a conflict because we can't roll over the salt without using more transactions than we have available. This starts by only converting the VIRTCHNL2_OP_VERSION message to use this new transaction API. Follow-up patches will convert all virtchnl messages to use the API. Tested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com> Co-developed-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Joshua Hay <joshua.a.hay@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
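A sketch of the cookie scheme described above (struct and field names are illustrative; the widths follow the description: 8-bit index plus 8-bit salt):

  /* send side: stamp the descriptor with index and salt */
  u16 cookie = ((u16)xn->idx << 8) | xn->salt;

  /* receive side: recover the transaction, reject stale replies */
  u8 idx = cookie >> 8;
  u8 salt = cookie & 0xff;
  struct idpf_vc_xn *xn = &vcxn_mngr->ring[idx];

  if (xn->salt != salt)
          return -EINVAL;           /* late reply to a message that already timed out */
  complete(&xn->completed);         /* wake the sender waiting in idpf_vc_xn_exec() */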
2024-03-04idpf: add idpf_virtchnl.hAlan Brady
idpf.h is quite heavy. We can reduce the burden a fair bit by introducing an idpf_virtchnl.h file. This mostly just moves function declarations, but there are many of them. This also makes an attempt to group those declarations in a way that makes some sense instead of leaving them a mishmash. Suggested-by: Alexander Lobakin <aleksander.lobakin@intel.com> Signed-off-by: Alan Brady <alan.brady@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2024-03-04ASoC: amd: yc: Add HP Pavilion Aero Laptop 13-be2xxx(8BD6) into DMI quirk tableAl Raj Hassain
The HP Pavilion Aero Laptop 13-be2xxx(8BD6) requires a quirk entry for its internal microphone to function. Signed-off-by: Al Raj Hassain <alrajhassain@gmail.com> Reviewed-by: Mario Limonciello <mario.limonciello@amd.com> Link: https://msgid.link/r/20240304103924.13673-1-alrajhassain@gmail.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-04ASoC: rcar: adg: correct TIMSEL setting for SSI9Andreas Pape
Timing select registers for SRC and CMD are by default referring to the corresponding SSI word select. The calculation rule from the HW spec skips SSI8, which has no clock connection. From section 43.2.18 CMD Output Timing Select Register (CMDOUT_TIMSEL), of R-Car Series, 3rd Generation Hardware User’s Manual Rev.2.20:

  CMD0_OUT_DIVCLK_SEL [4:0]   Output Timing Signal Select
    B'0 0110: ssi_ws0
    B'0 0111: ssi_ws1
    B'0 1000: ssi_ws2
    B'0 1001: ssi_ws3
    B'0 1010: ssi_ws4
    B'0 1011: ssi_ws5
    B'0 1100: ssi_ws6
    B'0 1101: ssi_ws7
    <GAP>
    B'0 1110: ssi_ws9
    B'0 1111: Setting prohibited

Fix the erroneous prohibited setting of timsel value 1111 (0xf) for SSI9 by using timsel value 1110 (0xe) instead. This is possible because SSI8 is not connected, as shown by <GAP> in the table above. [21.695055] rcar_sound ec500000.sound: b adg[0]-CMDOUT_TIMSEL (32):00000f00/00000f1f Correct the timsel assignment. Fixes: 629509c5bc478c ("ASoC: rsnd: add Gen2 SRC and DMAEngine support") Suggested-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Andreas Pape <Andreas.Pape4@bosch.com> Signed-off-by: Yeswanth Rayapati <yeswanth.rayapati@in.bosch.com> Tested-by: Yeswanth Rayapati <yeswanth.rayapati@in.bosch.com> [erosca: massage commit description] Signed-off-by: Eugeniu Rosca <eugeniu.rosca@bosch.com> Acked-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Link: https://msgid.link/r/20240301085003.3057-1-erosca@de.adit-jv.com Signed-off-by: Mark Brown <broonie@kernel.org>
2024-03-04x86/sev: Move early startup code into .head.text sectionArd Biesheuvel
In preparation for implementing rigorous build time checks to enforce that only code that can support it will be called from the early 1:1 mapping of memory, move SEV init code that is called in this manner to the .head.text section. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-19-ardb+git@google.com
2024-03-04x86/sme: Move early SME kernel encryption handling into .head.textArd Biesheuvel
The .head.text section is the initial primary entrypoint of the core kernel, and is entered with the CPU executing from a 1:1 mapping of memory. Such code must never access global variables using absolute references, as these are based on the kernel virtual mapping which is not active yet at this point. Given that the SME startup code is also called from this early execution context, move it into .head.text as well. This will allow more thorough build time checks in the future to ensure that early startup code only uses RIP-relative references to global variables. Also replace some occurrences of __pa_symbol() [which relies on the compiler generating an absolute reference, which is not guaranteed] and an open coded RIP-relative access with RIP_REL_REF(). Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-18-ardb+git@google.com
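The difference, as a sketch:

  unsigned long mask;

  /* May compile to an absolute (kernel-VA) reference, which faults
   * while executing from the early 1:1 mapping. */
  mask = sme_me_mask;

  /* RIP_REL_REF() forces a RIP-relative access, valid in both mappings. */
  mask = RIP_REL_REF(sme_me_mask);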
2024-03-04x86/boot: Move mem_encrypt= parsing to the decompressorArd Biesheuvel
The early SME/SEV code parses the command line very early, in order to decide whether or not memory encryption should be enabled, which needs to occur even before the initial page tables are created. This is problematic for a number of reasons:
- this early code runs from the 1:1 mapping provided by the decompressor or firmware, which uses a different translation than the one assumed by the linker, and so the code needs to be built in a special way;
- parsing external input while the entire kernel image is still mapped writable is a bad idea in general, and really does not belong in security-minded code;
- the current code ignores the built-in command line entirely (although this appears to be the case for the entire decompressor).
Given that the decompressor/EFI stub is an intrinsic part of the x86 bootable kernel image, move the command line parsing there and out of the core kernel. This removes the need to build lib/cmdline.o in a special way, or to use RIP-relative LEA instructions in inline asm blocks. This involves a new xloadflag in the setup header to indicate that mem_encrypt=on appeared on the kernel command line. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-17-ardb+git@google.com
2024-03-04efi/libstub: Add generic support for parsing mem_encrypt=Ard Biesheuvel
Parse the mem_encrypt= command line parameter from the EFI stub if CONFIG_ARCH_HAS_MEM_ENCRYPT=y, so that it can be passed to the early boot code by the arch code in the stub. This avoids the need for the core kernel to do any string parsing very early in the boot. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-16-ardb+git@google.com
2024-03-04x86/startup_64: Simplify virtual switch on primary bootArd Biesheuvel
The secondary startup code is used on the primary boot path as well, but in this case, the initial part runs from a 1:1 mapping, until an explicit cross-jump is made to the kernel virtual mapping of the same code. On the secondary boot path, this jump is pointless as the code already executes from the mapping targeted by the jump. So combine this cross-jump with the jump from startup_64() into the common boot path. This simplifies the execution flow, and clearly separates code that runs from a 1:1 mapping from code that runs from the kernel virtual mapping. Note that this requires a page table switch, so hoist the CR3 assignment into startup_64() as well. And since absolute symbol references will no longer be permitted in .head.text once we enable the associated build time checks, a RIP-relative memory operand is used in the JMP instruction, referring to an absolute constant in the .init.rodata section. Given that the secondary startup code does not require a special placement inside the executable, move it to the .text section. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-15-ardb+git@google.com
2024-03-04x86/startup_64: Simplify calculation of initial page table addressArd Biesheuvel
Determining the address of the initial page table to program into CR3 involves:
- taking the physical address
- adding the SME encryption mask
On the primary entry path, the code is mapped using a 1:1 virtual-to-physical translation, so the physical address can be taken directly using a RIP-relative LEA instruction. On the secondary entry path, the address can be obtained by taking the offset from the virtual kernel base (__START_KERNEL_map) and adding the physical kernel base. This is implemented in a slightly confusing way, so clean this up. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-14-ardb+git@google.com
2024-03-04x86/startup_64: Defer assignment of 5-level paging global variablesArd Biesheuvel
Assigning the 5-level paging related global variables from the earliest C code using explicit references that use the 1:1 translation of memory is unnecessary, as the startup code itself does not rely on them to create the initial page tables, and this is all it should be doing. So defer these assignments to the primary C entry code that executes via the ordinary kernel virtual mapping. Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-13-ardb+git@google.com
2024-03-04x86/startup_64: Simplify CR4 handling in startup codeArd Biesheuvel
When paging is enabled, the CR4.PAE and CR4.LA57 control bits cannot be changed, and so they can simply be preserved rather than reasoning about whether or not they need to be set. CR4.MCE should be preserved unless the kernel was built without CONFIG_X86_MCE, in which case it must be cleared. CR4.PSE should be set explicitly, regardless of whether or not it was set before. CR4.PGE is set explicitly, and then cleared and set again after programming CR3 in order to flush TLB entries based on global translations. This makes the first assignment redundant, and it can therefore be omitted. So clear PGE by omitting it from the preserve mask, and set it again explicitly after switching to the new page tables. [ bp: Document the exact operation of CR4.PGE ] Signed-off-by: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Tested-by: Tom Lendacky <thomas.lendacky@amd.com> Link: https://lore.kernel.org/r/20240227151907.387873-12-ardb+git@google.com
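A C rendering of the resulting logic (illustrative only; the real code is assembly in the startup path):

  unsigned long cr4 = native_read_cr4();

  /* Keep PAE/LA57 (fixed while paging is on) and MCE when configured;
   * PGE is deliberately dropped from the preserve mask. */
  cr4 &= X86_CR4_PAE | X86_CR4_LA57 |
         (IS_ENABLED(CONFIG_X86_MCE) ? X86_CR4_MCE : 0);
  cr4 |= X86_CR4_PSE;                   /* always set explicitly */
  native_write_cr4(cr4);

  native_write_cr3(cr3);                /* switch to the new page tables */
  native_write_cr4(cr4 | X86_CR4_PGE);  /* 0->1 on PGE flushes global TLB entries */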
2024-03-04drm/panel: boe-tv101wum-nl6: Fine tune Himax83102-j02 panel HFP and HBP (again)Cong Yang
The current measured frame rate is 59.95Hz, which does not meet the requirements of the touch stylus; the stylus cannot work normally. After adjustment, the actual measurement is 60.001Hz. Now this panel looks like it's only used by me on the MTK platform, so let's change this set of parameters. [ dianders: Added "(again)" to subject and fixed the "Fixes" line ] Fixes: cea7008190ad ("drm/panel: boe-tv101wum-nl6: Fine tune Himax83102-j02 panel HFP and HBP") Signed-off-by: Cong Yang <yangcong5@huaqin.corp-partner.google.com> Signed-off-by: Douglas Anderson <dianders@chromium.org> Link: https://patchwork.freedesktop.org/patch/msgid/20240301061128.3145982-1-yangcong5@huaqin.corp-partner.google.com
2024-03-04x86/idle: Select idle routine only onceThomas Gleixner
The idle routine selection is done on every CPU bringup operation and has a guard in place which is effective after the first invocation, which is a pointless exercise. Invoke it once on the boot CPU and mark the related functions __init. The guard check has to stay as xen_set_default_idle() runs early. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/87edcu6vaq.ffs@tglx
2024-03-04x86/idle: Let prefer_mwait_c1_over_halt() return boolThomas Gleixner
The return value is truly boolean. Make it so. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240229142248.518723854@linutronix.de
2024-03-04x86/idle: Cleanup idle_setup()Thomas Gleixner
Updating the static call for x86_idle() from idle_setup() is counter-intuitive. Let select_idle_routine() handle it like the other idle choices, which allows simplifying the idle selection later on. While at it, rewrite comments and return a proper error code instead of -1. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240229142248.455616019@linutronix.de
2024-03-04x86/idle: Clean up idle selectionThomas Gleixner
Clean up the code to make it readable. No functional change. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20240229142248.392017685@linutronix.de
2024-03-04x86/idle: Sanitize X86_BUG_AMD_E400 handlingThomas Gleixner
amd_e400_idle(), the idle routine for AMD CPUs which are affected by erratum 400, violates the RCU constraints by invoking tick_broadcast_enter() and tick_broadcast_exit() after the core code has marked RCU non-idle. The functions can end up in lockdep or tracing, which rightfully triggers an RCU warning. The core code now provides a static branch for conditional invocation of the broadcast functions. Remove amd_e400_idle(), enforce default_idle() and enable the static branch on affected CPUs to cure this. [ bp: Fold in a fix for an IS_ENABLED() check fail missing a "CONFIG_" prefix which tglx spotted. ] Reported-by: Borislav Petkov <bp@alien8.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/877cim6sis.ffs@tglx
2024-03-04nvme-fabrics: typo in nvmf_parse_key()Hannes Reinecke
Of course we should use the key if there is no error ... Signed-off-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Keith Busch <kbusch@kernel.org>
2024-03-04tee: make tee_bus_type constRicardo B. Marliere
Since commit d492cc2573a0 ("driver core: device.h: make struct bus_type a const *"), the driver core can properly handle constant struct bus_type, move the tee_bus_type variable to be a constant structure as well, placing it into read-only memory which can not be modified at runtime. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Suggested-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Ricardo B. Marliere <ricardo@marliere.net> Reviewed-by: Sumit Garg <sumit.garg@linaro.org> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2024-03-04nvme-multipath: use atomic queue limits API for stacking limitsChristoph Hellwig
Switch to the queue_limits_* helpers to stack the bdev limits, which also includes updating the readahead settings. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
2024-03-04nvme-multipath: pass queue_limits to blk_alloc_diskChristoph Hellwig
The multipath disk starts out with the stacking default limits. The one interesting part here is that blk_set_stacking_limits sets the max_zone_append_sectors limit to UINT_MAX, which fails the validation for non-zoned devices. With the old one-call-per-limit scheme this was fine because no one verified this weird mismatch, and it was fixed by blk_stack_limits a little later, before I/O could be issued. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
2024-03-04nvme: use the atomic queue limits update APIChristoph Hellwig
Changes the callchains that update queue_limits to build an on-stack queue_limits and update it atomically. Note that for now only the admin queue actually passes it to the queue allocation function. Doing the same for the gendisks used for the namespaces will require a little more work. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
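The pattern, roughly, using the new block layer helpers (the field adjusted here is just an example):

  struct queue_limits lim;
  int ret;

  lim = queue_limits_start_update(q);   /* snapshot the limits onto the stack */
  lim.max_hw_sectors = new_max;         /* adjust fields locally */
  ret = queue_limits_commit_update(q, &lim); /* validate and publish atomically */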
2024-03-04nvme: cleanup nvme_configure_metadataChristoph Hellwig
Fold nvme_init_ms into nvme_configure_metadata after splitting up a little helper to deal with the extended LBA formats. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>