path: root/include
2022-03-03  vfio/pci: Expose vfio_pci_core_aer_err_detected()  (Yishai Hadas)

Expose vfio_pci_core_aer_err_detected() to be used by drivers as part of their pci_error_handlers structure. The next patch, for the mlx5 driver, will use it.

Link: https://lore.kernel.org/all/20220224142024.147653-15-yishaih@nvidia.com
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
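[Editor's sketch, not part of the patch: roughly how a variant driver might wire the newly exported helper into its error handlers. All my_* names are hypothetical.]

    static const struct pci_error_handlers my_vfio_pci_err_handlers = {
            /* reuse the core AER handling exported by vfio-pci-core */
            .error_detected = vfio_pci_core_aer_err_detected,
    };

    static struct pci_driver my_vfio_pci_driver = {
            .name        = "my-vfio-pci",          /* hypothetical */
            .id_table    = my_pci_ids,             /* hypothetical */
            .probe       = my_probe,
            .remove      = my_remove,
            .err_handler = &my_vfio_pci_err_handlers,
    };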
2022-03-03  vfio: Remove migration protocol v1 documentation  (Jason Gunthorpe)

v1 was never implemented and is replaced by v2. The old uAPI documentation is removed from the header file. The old uAPI definitions are still kept in the header file to ease transition for userspace copying these headers. They will be fully removed down the road.

Link: https://lore.kernel.org/all/20220224142024.147653-12-yishaih@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
2022-03-03  vfio: Extend the device migration protocol with RUNNING_P2P  (Jason Gunthorpe)

The RUNNING_P2P state is designed to support multiple devices in the same VM that are doing P2P transactions between themselves. When in RUNNING_P2P the device must be able to accept incoming P2P transactions but should not generate outgoing P2P transactions.

As an optional extension to the mandatory states, it sits between STOP and RUNNING:

    STOP -> RUNNING_P2P -> RUNNING -> RUNNING_P2P -> STOP

For drivers that are unable to support RUNNING_P2P the core code silently merges RUNNING_P2P and RUNNING together. Unless driver support is present, the new state cannot be used in SET_STATE.

Drivers that support this will be required to implement 4 FSM arcs beyond the basic FSM. 2 of the basic FSM arcs become combination transitions.

Compared to the v1 clarification, NDMA is redefined into FSM states and is described in terms of the desired P2P quiescent behavior, noting that halting all DMA is an acceptable implementation.

Link: https://lore.kernel.org/all/20220224142024.147653-11-yishaih@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
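[Editor's sketch: how userspace could probe for RUNNING_P2P support before using it in SET_STATE, written against the uAPI introduced by this series; device_fd and the error handling are illustrative.]

    struct vfio_device_feature *feat;
    struct vfio_device_feature_migration *mig;
    char buf[sizeof(*feat) + sizeof(*mig)];

    feat = (void *)buf;
    feat->argsz = sizeof(buf);
    feat->flags = VFIO_DEVICE_FEATURE_GET | VFIO_DEVICE_FEATURE_MIGRATION;
    if (ioctl(device_fd, VFIO_DEVICE_FEATURE, feat) == 0) {
            mig = (void *)feat->data;
            if (mig->flags & VFIO_MIGRATION_P2P)
                    /* RUNNING_P2P may be requested via SET_STATE */;
    }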
2022-03-03  vfio: Define device migration protocol v2  (Jason Gunthorpe)

Replace the existing region-based migration protocol with an ioctl-based protocol. The two protocols have the same general semantic behaviors, but the way the data is transported is changed. This is the STOP_COPY portion of the new protocol: it defines the 5 states for basic stop-and-copy migration and the protocol to move the migration data in/out of the kernel.

Compared to the clarification of the v1 protocol Alex proposed:

    https://lore.kernel.org/r/163909282574.728533.7460416142511440919.stgit@omen

this has a few deliberate functional differences:

 - ERROR arcs allow the device function to remain unchanged.

 - The protocol is not required to return to the original state on transition failure. Instead userspace can execute an unwind back to the original state, reset, or do something else without needing kernel support. This simplifies the kernel design and, should userspace choose a policy like always reset, avoids doing useless work in the kernel on error handling paths.

 - PRE_COPY is made optional; userspace must discover it before using it. This reflects the fact that the majority of drivers we are aware of right now will not implement PRE_COPY.

 - Segmentation is not part of the data stream protocol; the receiver does not have to reproduce the framing boundaries.

The hybrid FSM for the device_state is described as a Mealy machine by documenting each of the arcs the driver is required to implement. Defining the remaining set of old/new device_state transitions as 'combination transitions', which are naturally defined as taking multiple FSM arcs along the shortest path within the FSM's digraph, allows a complete matrix of transitions.

A new VFIO_DEVICE_FEATURE of VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE is defined to replace writing to the device_state field in the region. This allows returning a brand new FD whenever the requested transition opens a data transfer session. The VFIO core code implements the new feature and provides a helper function to the driver. Using the helper, the driver only has to implement 6 of the FSM arcs; the other combination transitions are elaborated consistently from those arcs.

A new VFIO_DEVICE_FEATURE of VFIO_DEVICE_FEATURE_MIGRATION is defined to report the capability for migration and to indicate which set of states and arcs are supported by the device. The FSM provides a lot of flexibility to make backwards-compatible extensions, but the VFIO_DEVICE_FEATURE also allows for future breaking extensions for scenarios that cannot support even the basic STOP_COPY requirements.

The VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE with the GET option (i.e. VFIO_DEVICE_FEATURE_GET) can be used to read the current migration state of the VFIO device.

Data transfer sessions are now carried over a file descriptor, instead of the region. The FD remains valid for the lifetime of the data transfer session. read() and write() transfer the data with normal Linux stream FD semantics. This design allows future expansion to support poll(), io_uring, and other performance optimizations.

The complicated mmap mode for data transfer is discarded as current qemu doesn't take meaningful advantage of it, and the new qemu implementation avoids substantially all the performance penalty of using a read() on the region.

Link: https://lore.kernel.org/all/20220224142024.147653-10-yishaih@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
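[Editor's sketch of the new flow from the userspace side, under the uAPI described above; device_fd and the save() sink are illustrative, and error handling is omitted.]

    struct vfio_device_feature *feat;
    struct vfio_device_feature_mig_state *st;
    char buf[sizeof(*feat) + sizeof(*st)], chunk[4096];
    ssize_t n;

    feat = (void *)buf;
    st = (void *)feat->data;
    feat->argsz = sizeof(buf);
    feat->flags = VFIO_DEVICE_FEATURE_SET |
                  VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE;
    st->device_state = VFIO_DEVICE_STATE_STOP_COPY;
    ioctl(device_fd, VFIO_DEVICE_FEATURE, feat);

    /* the arc into STOP_COPY opened a data transfer session */
    while ((n = read(st->data_fd, chunk, sizeof(chunk))) > 0)
            save(chunk, n);             /* illustrative sink */
    close(st->data_fd);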
2022-03-03  vfio: Have the core code decode the VFIO_DEVICE_FEATURE ioctl  (Jason Gunthorpe)

Invoke a new device op 'device_feature' to handle just the data array portion of the command. This lifts the ioctl validation to the core code and makes it simpler for either the core code, or layered drivers, to implement their own feature values.

Provide vfio_check_feature() to consolidate checking the flags/etc against what the driver supports.

Link: https://lore.kernel.org/all/20220224142024.147653-9-yishaih@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Yishai Hadas <yishaih@nvidia.com>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
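[Editor's sketch of a driver-side op, assuming the vfio_check_feature() convention of returning 1 when the feature body should be processed and 0 for a successful probe; the one-u64 payload is hypothetical.]

    static int my_device_feature(struct vfio_device *device, u32 flags,
                                 void __user *arg, size_t argsz)
    {
            u64 val = 0;        /* hypothetical feature payload */
            int ret;

            ret = vfio_check_feature(flags, argsz, VFIO_DEVICE_FEATURE_GET,
                                     sizeof(val));
            if (ret != 1)
                    return ret;         /* probe handled, or -errno */

            if (copy_to_user(arg, &val, sizeof(val)))
                    return -EFAULT;
            return 0;
    }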
2022-03-03  net: rtnetlink: Add UAPI toggle for IFLA_OFFLOAD_XSTATS_L3_STATS  (Petr Machata)

The offloaded HW stats are designed to allow per-netdevice enablement and disablement. Add an attribute, IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS, which should be carried by the RTM_SETSTATS message, and expresses a desire to toggle L3 offload xstats on or off.

As part of the above, add an exported function rtnl_offload_xstats_notify() that drivers can use when they have installed or deinstalled the counters backing the HW stats.

At this point, it is possible to enable, disable and query L3 offload xstats on netdevices. (However there is no driver actually implementing these.)

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-03  net: rtnetlink: Add RTM_SETSTATS  (Petr Machata)

The offloaded HW stats are designed to allow per-netdevice enablement and disablement. These stats are only accessible through RTM_GETSTATS, and should therefore be toggled by an RTM_SETSTATS message. Add it, and the necessary skeleton handler.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-03  net: rtnetlink: Add UAPI for obtaining L3 offload xstats  (Petr Machata)

Add a new IFLA_STATS_LINK_OFFLOAD_XSTATS child attribute, IFLA_OFFLOAD_XSTATS_L3_STATS, to carry statistics for traffic that takes place in a HW router.

The offloaded HW stats are designed to allow per-netdevice enablement and disablement. Additionally, as a netdevice is configured, it may become or cease being suitable for binding of a HW counter. Both of these aspects need to be communicated to userspace. To that end, add another child attribute, IFLA_OFFLOAD_XSTATS_HW_S_INFO:

    - attr nest IFLA_OFFLOAD_XSTATS_HW_S_INFO
        - attr nest IFLA_OFFLOAD_XSTATS_L3_STATS
            - attr IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST
                - {0,1} as u8
            - attr IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED
                - {0,1} as u8

Thus this one attribute is a nest that can be used to carry information about various types of HW statistics, and indexing is done simply by wrapping the information for a given statistics suite into the attribute that carries the suite in the RTM_GETSTATS query.

At the same time, because _HW_S_INFO is nested directly below IFLA_STATS_LINK_OFFLOAD_XSTATS, it is possible through filtering to request only the metadata about individual statistics suites, without having to hit the HW to get the actual counters.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-03  net: dev: Add hardware stats support  (Petr Machata)

Offloading switch device drivers may be able to collect statistics of the traffic taking place in the HW datapath that pertains to a certain soft netdevice, such as a VLAN. Add the necessary infrastructure to allow exposing these statistics to the offloaded netdevice in question. The API was shaped by the following considerations:

 - Collection of HW statistics is not free: there may be a finite number of counters, and the act of counting may have a performance impact. It is therefore necessary to allow toggling whether HW counting should be done for any particular SW netdevice.

 - As the drivers are loaded and removed, a particular device may get offloaded and unoffloaded again. At the same time, the statistics values need to stay monotonic (modulo the eventual 64-bit wraparound), increasing only to reflect traffic measured in the device. To that end, the netdevice keeps around a lazily-allocated copy of struct rtnl_link_stats64. Device drivers then contribute to the values kept therein at various points. Even as the driver goes away, the struct stays around to maintain the statistics values.

 - Different HW devices may be able to count different things. The motivation behind this patch in particular is exposure of HW counters on Nvidia Spectrum switches, where the only practical approach to counting traffic on offloaded soft netdevices currently is to use router interface counters, and count L3 traffic. Correspondingly, that is the statistics suite added in this patch. Other devices may be able to measure different kinds of traffic, and for that reason, the APIs are built to allow uniform access to different statistics suites.

 - Because soft netdevices and offloading drivers are only loosely bound, a netdevice uses a notifier chain to communicate with the drivers. Several new notifiers, NETDEV_OFFLOAD_XSTATS_*, have been added to carry messages to the offloading drivers.

 - Devices can have various conditions for when a particular counter is available. As the device is configured and reconfigured, the device offload may become or cease being suitable for counter binding. A netdevice can use a notifier type NETDEV_OFFLOAD_XSTATS_REPORT_USED to ping offloading drivers and determine whether anyone currently implements a given statistics suite. This information can then be propagated to user space.

When the driver decides to unoffload a netdevice, it can use a newly-added function, netdev_offload_xstats_report_delta(), to record outstanding collected statistics before destroying the HW counter; see the sketch after this message.

This patch adds a helper, call_netdevice_notifiers_info_robust(), for dispatching a notifier with the possibility of unwind when one of the consumers bails. Given the wish to eventually get rid of the global notifier block altogether, this helper only invokes the per-netns notifier block.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
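[Editor's sketch of the unoffload path described above; my_read_hw_counters(), my_destroy_hw_counter() and the surrounding driver context are hypothetical.]

    /* In the driver's unoffload path, with the report_delta cookie
     * taken from the NETDEV_OFFLOAD_XSTATS_REPORT_DELTA notifier info:
     * fold the final HW readout into the netdev's persistent stats so
     * the totals stay monotonic.
     */
    struct rtnl_hw_stats64 stats = {};

    my_read_hw_counters(priv, &stats);                    /* hypothetical */
    netdev_offload_xstats_report_delta(report_delta, &stats);
    my_destroy_hw_counter(priv);                          /* hypothetical */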
2022-03-03  net: rtnetlink: RTM_GETSTATS: Allow filtering inside nests  (Petr Machata)

The filter_mask field of the RTM_GETSTATS header determines which top-level attributes should be included in the netlink response. This saves processing time by only including the bits that the user cares about instead of always dumping everything. This is doubly important for HW-backed statistics that would typically require a trip to the device to fetch the stats.

So far there was only one HW-backed stat suite per attribute. However, IFLA_STATS_LINK_OFFLOAD_XSTATS is a nest, and will gain a new stat suite in the following patches. It would therefore be advantageous to be able to filter within that nest, and select just one or the other HW-backed statistics suite.

Extend rtnetlink so that RTM_GETSTATS permits attributes in the payload. The scheme is as follows:

    - RTM_GETSTATS
        - struct if_stats_msg
        - attr nest IFLA_STATS_GET_FILTERS
            - attr IFLA_STATS_LINK_OFFLOAD_XSTATS
                - u32 filter_mask

This scheme reuses the existing enumerators by nesting them in a dedicated context attribute. This is covered by policies as usual, therefore a gradual opt-in is possible. Currently only the IFLA_STATS_LINK_OFFLOAD_XSTATS nest has filtering enabled, because for the SW counters the issue does not seem to be that important.

rtnl_offload_xstats_get_size() and _fill() are extended to observe the requested filters.

Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-03  page_pool: Add function to batch and return stats  (Joe Damato)

Add a function, page_pool_get_stats(), which can be used by drivers to obtain stats for a specified page_pool.

Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
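[Editor's sketch of how a driver's ethtool stats path might consume this, assuming CONFIG_PAGE_POOL_STATS and the counter names introduced in the two patches below; the rq/data bookkeeping is illustrative.]

    #ifdef CONFIG_PAGE_POOL_STATS
            struct page_pool_stats stats = {};

            if (page_pool_get_stats(rq->page_pool, &stats)) {
                    /* e.g. feed selected counters into ethtool -S output */
                    data[i++] = stats.alloc_stats.fast;
                    data[i++] = stats.alloc_stats.slow;
                    data[i++] = stats.recycle_stats.cached;
                    data[i++] = stats.recycle_stats.ring_full;
            }
    #endif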
2022-03-03  page_pool: Add recycle stats  (Joe Damato)

Add per-cpu stats tracking page pool recycling events:

 - cached: recycling placed page in the page pool cache
 - cache_full: page pool cache was full
 - ring: page placed into the ptr ring
 - ring_full: page released from page pool because the ptr ring was full
 - released_refcnt: page released (and not recycled) because refcnt > 1

Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-03  page_pool: Add allocation stats  (Joe Damato)

Add per-pool statistics counters for the allocation path of a page pool. These stats are incremented in softirq context, so no locking or per-cpu variables are needed. This code is disabled by default and a kernel config option is provided for users who wish to enable them.

The statistics added are:

 - fast: successful fast path allocations
 - slow: slow path order-0 allocations
 - slow_high_order: slow path high order allocations
 - empty: ptr ring is empty, so a slow path allocation was forced
 - refill: an allocation which triggered a refill of the cache
 - waive: pages obtained from the ptr ring that cannot be added to the cache due to a NUMA mismatch

Signed-off-by: Joe Damato <jdamato@fastly.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2022-03-02  tcp: Remove the unused API  (Tao Chen)

The last tcp_write_queue_head() use was removed in commit 114f39feab36 ("tcp: restore autocorking"), so remove it.

Signed-off-by: Tao Chen <chentao3@hotmail.com>
Link: https://lore.kernel.org/r/SYZP282MB33317DEE1253B37C0F57231E86029@SYZP282MB3331.AUSP282.PROD.OUTLOOK.COM
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-02  flow_dissector: Add support for HSR  (Kurt Kanzenbach)

Network drivers such as igb or igc call eth_get_headlen() to determine the header length for the skbs they construct in the receive path. When running HSR on top of these drivers, this triggers the BUG_ON() in skb_pull(): the skb headlen is not sufficient for HSR to work correctly, and skb_pull() notices that. For instance, eth_get_headlen() returns 14 bytes for TCP traffic over HSR, which is not correct.

The problem is that the flow dissection code does not take HSR into account. Therefore, add support for it.

Reported-by: Anthony Harivel <anthony.harivel@linutronix.de>
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Link: https://lore.kernel.org/r/20220228195856.88187-1-kurt@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-03  PM: EM: add macro to set .active_power() callback conditionally  (Lukasz Luba)

The Energy Model is able to use new power values coming from DT. Add a new macro which is helpful for setting the .active_power() callback conditionally at setup time. The dual-macro implementation handles both kernel configurations: with and without the EM built in.

Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
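[Editor's sketch of the intended usage pattern, assuming the macro added here is EM_SET_ACTIVE_POWER_CB() and that it compiles away when CONFIG_ENERGY_MODEL is not set; my_active_power() is a hypothetical driver callback.]

    static struct em_data_callback em_cb = EM_DATA_CB(my_active_power);

    /* ... or, set the callback conditionally at setup time without
     * sprinkling #ifdef CONFIG_ENERGY_MODEL around the driver:
     */
    EM_SET_ACTIVE_POWER_CB(em_cb, my_active_power);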
2022-03-03  OPP: Add "opp-microwatt" supporting code  (Lukasz Luba)

Add a new property to the OPP: a power value. The OPP entry in the DT can now carry "opp-microwatt". Add the code needed to handle this new property in the existing infrastructure.

Signed-off-by: Lukasz Luba <lukasz.luba@arm.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
2022-03-02  drm/amdkfd: Add SMI add event helper  (Philip Yang)

Add a helper to remove duplicate code, unify the event message format, and simplify adding new events in the following patches. Use KFD_SMI_EVENT_MSG_SIZE to define the msg size; the same size will be used in user space to allocate the msg receive buffer.

Signed-off-by: Philip Yang <Philip.Yang@amd.com>
Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2022-03-03  firmware: xilinx: Add ZynqMP SHA API for SHA3 functionality  (Harsha)

Add the zynqmp_pm_sha_hash() API to the ZynqMP firmware driver to compute the SHA3 hash of given data.

Signed-off-by: Harsha <harsha.harsha@xilinx.com>
Signed-off-by: Kalyani Akula <kalyani.akula@xilinx.com>
Acked-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
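[Editor's note: the new call has roughly this shape, per the patch; the flags are expected to select the init/update/final phases of the hash operation.]

    /* address: DMA address of the data/digest buffer shared with the
     * platform firmware; size: length of the data in bytes
     */
    int zynqmp_pm_sha_hash(const u64 address, const u32 size,
                           const u32 flags);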
2022-03-03  crypto: crypto_xor - use helpers for unaligned accesses  (Ard Biesheuvel)

Dereferencing a misaligned pointer is undefined behavior in C, and may result in codegen on architectures such as ARM that triggers alignment traps and expensive fixups in software.

Instead, use the get_unaligned()/put_unaligned() accessors, which are cheap or even completely free when CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y.

In the converse case, the prior alignment checks ensure that the casts are safe, and so no unaligned accessors are necessary.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
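[Editor's sketch of the resulting pattern, in the style of crypto_xor_cpy(); an illustrative fragment, not the verbatim kernel code.]

    #include <asm/unaligned.h>

    /* XOR word-by-word without assuming the pointers are aligned */
    static void xor_cpy(u8 *dst, const u8 *src1, const u8 *src2,
                        unsigned int len)
    {
            while (len >= sizeof(unsigned long)) {
                    put_unaligned(get_unaligned((const unsigned long *)src1) ^
                                  get_unaligned((const unsigned long *)src2),
                                  (unsigned long *)dst);
                    dst += sizeof(unsigned long);
                    src1 += sizeof(unsigned long);
                    src2 += sizeof(unsigned long);
                    len -= sizeof(unsigned long);
            }
            while (len--)
                    *dst++ = *src1++ ^ *src2++;
    }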
2022-03-03  crypto: api - allow algs only in specific constructions in FIPS mode  (Nicolai Stange)

Currently we do not distinguish between algorithms that fail the self-test and those which are disabled in FIPS mode (not allowed). Both are marked as having failed the self-test.

Recently the need arose to allow the usage of certain algorithms only as arguments to specific template instantiations in FIPS mode. For example, standalone "dh" must be blocked, but e.g. "ffdhe2048(dh)" is allowed. Other potential use cases include "cbcmac(aes)", which must only be used with ccm(), or "ghash", which must be used only for gcm().

This patch allows this scenario by adding a new flag, FIPS_INTERNAL, to indicate those algorithms that are not FIPS-allowed. They can then be used as template arguments only, i.e. when looked up via crypto_grab_spawn() to be more specific. The FIPS_INTERNAL bit gets propagated upwards recursively into the surrounding template instances, until the construction eventually matches an explicit testmgr entry with ->fips_allowed being set, if any.

The behaviour of skipping !->fips_allowed self-test executions in FIPS mode will be retained. Note that this effectively means that FIPS_INTERNAL algorithms are handled very similarly to the INTERNAL ones in this regard. It is expected that the FIPS_INTERNAL algorithms will receive sufficient testing when the larger constructions they're a part of, if any, get exercised by testmgr.

Note that as a side-effect of this patch, algorithms which are not FIPS-allowed will now return ENOENT instead of ELIBBAD. Hopefully this is not an issue as some people were relying on this already.

Link: https://lore.kernel.org/r/YeEVSaMEVJb3cQkq@gondor.apana.org.au
Originally-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03  crypto: dh - split out deserialization code from crypto_dh_decode_key()  (Nicolai Stange)

A subsequent commit will introduce "dh" wrapping templates of the form "ffdhe2048(dh)", "ffdhe3072(dh)" and so on in order to provide built-in support for the well-known safe-prime ffdhe group parameters specified in RFC 7919. Those templates' ->set_secret() will wrap the inner "dh" implementation's ->set_secret() and set the ->p and ->g group parameters as appropriate on the way inwards. More specifically:

 - A ffdheXYZ(dh) user would call crypto_dh_encode_key() on a struct dh instance having ->p == ->g == NULL as well as ->p_size == ->g_size == 0 and pass the resulting buffer to the outer ->set_secret().

 - This outer ->set_secret() would then decode the struct dh via crypto_dh_decode_key(), set ->p, ->g, ->p_size as well as ->g_size as appropriate for the group in question, and encode the struct dh again before passing it further down to the inner "dh"'s ->set_secret().

The problem is that crypto_dh_decode_key() implements some basic checks which would reject parameter sets with ->p_size == 0, and thus the ffdheXYZ templates' ->set_secret() cannot use it as-is for decoding the passed buffer. As the inner "dh"'s ->set_secret() will eventually conduct said checks on the final parameter set anyway, the outer ->set_secret() really only needs the decoding functionality.

Split out the pure struct dh decoding part from crypto_dh_decode_key() into the new __crypto_dh_decode_key().

__crypto_dh_decode_key() gets defined in crypto/dh_helper.c, but will have to get called from crypto/dh.c, and thus its declaration must somehow be made available to the latter. Strictly speaking, __crypto_dh_decode_key() is internal to the dh_generic module, yet it would be a bit over the top to introduce a new header like e.g. include/crypto/internal/dh.h containing just a single prototype. Add the __crypto_dh_decode_key() declaration to include/crypto/dh.h instead. Provide a proper kernel-doc annotation, even though __crypto_dh_decode_key() is purposely not on the function list specified in Documentation/crypto/api-kpp.rst.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
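[Editor's note: the resulting declaration in include/crypto/dh.h is expected to mirror crypto_dh_decode_key(), roughly:]

    /* decode a packed struct dh, skipping the sanity checks that
     * crypto_dh_decode_key() applies to the parameters
     */
    int __crypto_dh_decode_key(const char *buf, unsigned int len,
                               struct dh *params);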
2022-03-03  crypto: dh - constify struct dh's pointer members  (Nicolai Stange)

struct dh contains several pointer members corresponding to DH parameters: ->key, ->p and ->g. A subsequent commit will introduce "dh" wrapping templates of the form "ffdhe2048(dh)", "ffdhe3072(dh)" and so on in order to provide built-in support for the well-known safe-prime ffdhe group parameters specified in RFC 7919. These templates will need to set the group parameter related members of the (serialized) struct dh instance passed to the inner "dh" kpp_alg instance, i.e. ->p and ->g, to some constant, static storage arrays.

Turn the struct dh pointer members' types into "pointer to const" in preparation for this.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03  crypto: dh - remove struct dh's ->q member  (Nicolai Stange)

The only current user of the DH KPP algorithm, the keyctl(KEYCTL_DH_COMPUTE) syscall, doesn't set the domain parameter ->q in struct dh. Remove it and any associated (de)serialization code in crypto_dh_encode_key() and crypto_dh_decode_key(). Adjust the encoded ->secret values in testmgr's DH test vectors accordingly.

Note that the dh-generic implementation would have initialized its struct dh_ctx's ->q from the decoded struct dh's ->q, if present. If this struct dh_ctx's ->q would ever have been non-NULL, it would have enabled a full key validation as specified in NIST SP800-56A in dh_is_pubkey_valid(). However, as outlined above, ->q is always NULL in practice and the full key validation code is effectively dead. A later patch will make dh_is_pubkey_valid() calculate Q from P on the fly, if possible, so don't remove struct dh_ctx's ->q now, but leave it there until that has happened.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-03  crypto: kpp - provide support for KPP spawns  (Nicolai Stange)

The upcoming support for the RFC 7919 ffdhe group parameters will be made available in the form of templates like "ffdhe2048(dh)", "ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the inner "dh" kpp_alg and also provide kpp_alg services to the outside again. The primitives needed for providing kpp_alg services from template instances have been introduced with the previous patch. Continue this work now and implement everything needed for enabling template instances to make use of inner KPP algorithms like "dh".

More specifically, define a struct crypto_kpp_spawn in close analogy to crypto_skcipher_spawn, crypto_shash_spawn and alike. Implement a crypto_grab_kpp() and crypto_drop_kpp() pair for binding such a spawn to some inner kpp_alg and for releasing it respectively. Template implementations can instantiate transforms from the underlying kpp_alg by means of the new crypto_spawn_kpp(). Finally, provide the crypto_spawn_kpp_alg() helper for accessing a spawn's underlying kpp_alg during template instantiation.

Annotate everything with proper kernel-doc comments, even though include/crypto/internal/kpp.h is not considered for the generated docs.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
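[Editor's sketch: put together with the instance support below, a template's ->create() might use these primitives roughly as follows. A loose sketch of the pattern the ffdhe templates would follow, not code from the patch; error unwinding is simplified.]

    static int my_kpp_create(struct crypto_template *tmpl, struct rtattr **tb)
    {
            struct kpp_instance *inst;
            struct crypto_kpp_spawn *spawn;
            int err;

            inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
            if (!inst)
                    return -ENOMEM;
            spawn = kpp_instance_ctx(inst);

            /* bind the spawn to the inner algorithm named in tb[] */
            err = crypto_grab_kpp(spawn, kpp_crypto_instance(inst),
                                  crypto_attr_alg_name(tb[1]), 0, 0);
            if (err)
                    goto out_free;

            /* set up inst->alg around crypto_spawn_kpp_alg(spawn), then: */
            err = kpp_register_instance(tmpl, inst);
            if (!err)
                    return 0;

    out_free:
            kfree(inst);    /* simplified; real code drops the spawn too */
            return err;
    }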
2022-03-03  crypto: kpp - provide support for KPP template instances  (Nicolai Stange)

The upcoming support for the RFC 7919 ffdhe group parameters will be made available in the form of templates like "ffdhe2048(dh)", "ffdhe3072(dh)" and so on. Template instantiations thereof would wrap the inner "dh" kpp_alg and also provide kpp_alg services to the outside again. Furthermore, it might perhaps be desirable to provide KDF templates in the future, which would similarly wrap an inner kpp_alg and present themselves to the outside as another kpp_alg, transforming the shared secret on its way out.

Introduce the bits needed for supporting KPP template instances. Everything related to inner kpp_alg spawns potentially being held by such template instances will be deferred to a subsequent patch in order to facilitate review.

Define struct kpp_instance in close analogy to the already existing skcipher_instance, shash_instance and alike, but wrapping a struct kpp_alg. Implement the new kpp_register_instance() template instance registration primitive. Provide some helper functions for:

 - going back and forth between a generic struct crypto_instance and the new struct kpp_instance,
 - obtaining the instantiating kpp_instance from a crypto_kpp transform, and
 - accessing a given kpp_instance's implementation-specific context data.

Annotate everything with proper kernel-doc comments, even though include/crypto/internal/kpp.h is not considered for the generated docs.

Signed-off-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2022-03-02  ACPI: bus: Introduce acpi_bus_for_each_dev()  (Rafael J. Wysocki)

In order to avoid exposing acpi_bus_type to modules, introduce an acpi_bus_for_each_dev() helper for iterating over all ACPI device objects and make typec_link_ports() use it instead of the raw bus_for_each_dev() along with acpi_bus_type. Having done that, drop the acpi_bus_type export.

No intentional functional impact.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
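[Editor's sketch: usage follows the familiar bus_for_each_dev() shape; the callback body here is illustrative.]

    static int my_dev_cb(struct device *dev, void *data)
    {
            struct acpi_device *adev = to_acpi_device(dev);

            /* inspect adev; return non-zero to stop the iteration */
            return 0;
    }

    /* ... */
    acpi_bus_for_each_dev(my_dev_cb, NULL);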
2022-03-02  NFS: Optimise away the previous cookie field  (Trond Myklebust)

Replace the 'previous cookie' field in struct nfs_entry with the array->last_cookie.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  NFS: Fix up forced readdirplus  (Trond Myklebust)

Avoid clearing the entire readdir page cache if we're just doing forced readdirplus for the 'ls -l' heuristic.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  NFS: Convert readdir page cache to use a cookie based index  (Trond Myklebust)

Instead of using a linear index to address the pages, use the cookie of the first entry, since that is what we use to match the page anyway.

This allows us to avoid re-reading the entire cache on a seekdir() type of operation. The latter is very common when re-exporting NFS, and is a major performance drain.

The change does affect our duplicate cookie detection, since we can no longer rely on the page index as a linear offset for detecting whether we looped backwards. However, since we no longer do a linear search through all the pages on each call to nfs_readdir(), this is less of a concern than it was previously.

The other downside is that invalidate_mapping_pages() can no longer use the page index to avoid clearing pages that have been read. A subsequent patch will restore the functionality this provides to the 'ls -l' heuristic.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  NFS: Improve heuristic for readdirplus  (Trond Myklebust)

The heuristic for readdirplus is designed to try to detect 'ls -l' and similar patterns. It does so by looking for cache hit/miss patterns in both the attribute cache and in the dcache of the files in a given directory, and then sets a flag for the readdirplus code to interpret.

The problem with this approach is that a single attribute or dcache miss can cause the NFS code to force a refresh of the attributes for the entire set of files contained in the directory.

To be able to make a more nuanced decision, let's sample the number of hits and misses in the set of open directory descriptors. That allows us to set thresholds at which we start preferring READDIRPLUS over regular READDIR, or at which we start to force a re-read of the remaining readdir cache using READDIRPLUS.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  NFS: Adjust the amount of readahead performed by NFS readdir  (Trond Myklebust)

The current NFS readdir code will always try to maximise the amount of readahead it performs, on the assumption that we can cache anything that isn't immediately read by the process.

There are several cases where this assumption breaks down, including when the 'ls -l' heuristic kicks in to try to force use of readdirplus as a batch replacement for lookup/getattr.

This patch therefore tries to tone down the amount of readahead we perform, and adjusts it to try to match the amount of data being requested by user space.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  NFS: Don't re-read the entire page cache to find the next cookie  (Trond Myklebust)

If the page cache entry that was last read gets invalidated for some reason, then make sure we can re-create it on the next call to readdir. This, combined with the cache page validation, allows us to reuse the cached value of the page index on successive calls to nfs_readdir.

Credit is due to Benjamin Coddington for showing that the concept works, and that it allows for improved cache sharing between processes even in the case where pages are lost due to LRU or active invalidation.

Suggested-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
2022-03-02  ASoC: soc-acpi: remove sof_fw_filename  (Pierre-Louis Bossart)

We've been using a default firmware name for each PCI/ACPI/OF platform for a while. The machine-specific sof_fw_filename is in practice not different from the default, and newer devices don't set this field, so let's remove the redundant definitions.

When OEMs modify the base firmware, they can keep the same firmware name but store the file in a separate directory.

Reviewed-by: Bard Liao <yung-chuan.liao@linux.intel.com>
Reviewed-by: Rander Wang <rander.wang@intel.com>
Reviewed-by: Péter Ujfalusi <peter.ujfalusi@linux.intel.com>
Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Link: https://lore.kernel.org/r/20220301194903.60859-2-pierre-louis.bossart@linux.intel.com
Signed-off-by: Mark Brown <broonie@kernel.org>
2022-03-01  scsi: core: sd: Add silence_suspend flag to suppress some PM messages  (Adrian Hunter)

Kernel messages produced during runtime PM can cause a never-ending cycle because user space utilities (e.g. journald or rsyslog) write the messages back to storage, causing runtime resume, more messages, and so on. Messages that tell of things that are expected to happen are arguably unnecessary, so add a flag to suppress them. This flag is used by the UFS driver.

Link: https://lore.kernel.org/r/20220228113652.970857-2-adrian.hunter@intel.com
Cc: stable@vger.kernel.org
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
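[Editor's sketch, assuming the flag is a bit on struct scsi_device: an LLD that knows its suspend/resume messages are routine could set it along these lines; the callback shown is illustrative.]

    static int my_slave_configure(struct scsi_device *sdev)
    {
            /* suppress routine suspend/resume chatter during runtime PM */
            sdev->silence_suspend = 1;
            return 0;
    }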
2022-03-01  scsi: iscsi: Drop temp workq_name  (Mike Christie)

When the workqueue code was created, it didn't allow variable args, so we have been using a temp buffer. Drop that.

Link: https://lore.kernel.org/r/20220226230435.38733-7-michael.christie@oracle.com
Reviewed-by: Chris Leech <cleech@redhat.com>
Reviewed-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: iscsi: ql4xxx: Use per-session workqueue for unbinding  (Mike Christie)

We currently allocate a workqueue per host and only use it for removing the target. For the session-per-host case we could be using this workqueue to do recoveries (block, unblock, timeout handling) in parallel. To also allow offload drivers to do their session recoveries in parallel, this drops the per-host workqueue and replaces it with a per-session one.

Link: https://lore.kernel.org/r/20220226230435.38733-5-michael.christie@oracle.com
Reviewed-by: Lee Duncan <lduncan@suse.com>
Reviewed-by: Chris Leech <cleech@redhat.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: iscsi: Remove iscsi_scan_finished()  (Mike Christie)

qla4xxx does not use iscsi_scan_finished() anymore, so remove it.

Link: https://lore.kernel.org/r/20220226230435.38733-4-michael.christie@oracle.com
Reviewed-by: Lee Duncan <lduncan@suse.com>
Reviewed-by: Chris Leech <cleech@redhat.com>
Signed-off-by: Mike Christie <michael.christie@oracle.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Remove <scsi/scsi_request.h>  (Christoph Hellwig)

This header is now empty except for an include of <linux/blk-mq.h>, so remove it.

Link: https://lore.kernel.org/r/20220224175552.988286-9-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Remove struct scsi_request  (Christoph Hellwig)

Let submitters initialize the scmd->allowed field directly instead of indirecting through struct scsi_request, and remove the now superfluous structure.

Link: https://lore.kernel.org/r/20220224175552.988286-8-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Move the result field from struct scsi_request to struct scsi_cmnd  (Christoph Hellwig)

Prepare for removing the scsi_request structure by moving the result field to struct scsi_cmnd.

Link: https://lore.kernel.org/r/20220224175552.988286-7-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Move the resid_len field from struct scsi_request to struct scsi_cmnd  (Christoph Hellwig)

Prepare for removing the scsi_request structure by moving the resid_len field to struct scsi_cmnd.

Link: https://lore.kernel.org/r/20220224175552.988286-6-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Remove the sense and sense_len fields from struct scsi_request  (Christoph Hellwig)

Just use the sense_buffer field in struct scsi_cmnd for the sense data and move the sense_len field over to struct scsi_cmnd.

Link: https://lore.kernel.org/r/20220224175552.988286-5-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2022-03-01  scsi: core: Remove the cmd field from struct scsi_request  (Christoph Hellwig)

Now that each scsi_request is backed by a scsi_cmnd, there is no need to indirect the CDB storage. Change all submitters of SCSI passthrough requests to store the CDB information directly in the scsi_cmnd, and while doing so allocate the full 32 bytes that cover all Linux-supported SCSI hosts instead of requiring dynamic allocation for CDBs larger than 16 bytes.

On 64-bit systems this does not change the size of the scsi_cmnd at all, while on 32-bit systems it slightly increases it for now; that increase will be made up by the removal of the remaining scsi_request fields.

Link: https://lore.kernel.org/r/20220224175552.988286-4-hch@lst.de
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
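[Editor's sketch: after this series a passthrough submitter works on the scsi_cmnd itself; a schematic fragment of the pattern, not a verbatim hunk from the conversion. rq, cdb, cdb_len and retries are assumed context.]

    struct scsi_cmnd *scmd = blk_mq_rq_to_pdu(rq);

    scmd->cmd_len = cdb_len;            /* up to 32 bytes, no extra alloc */
    memcpy(scmd->cmnd, cdb, cdb_len);
    scmd->allowed = retries;

    /* ... and after completion, the results live on the scsi_cmnd too: */
    if (scmd->result == 0)
            bytes = blk_rq_bytes(rq) - scmd->resid_len;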
2022-03-01  if_ether.h: add EtherCAT Ethertype  (Daniel Braunwarth)

Add the Ethertype for the EtherCAT protocol.

Signed-off-by: Daniel Braunwarth <daniel@braunwarth.dev>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-01  if_ether.h: add PROFINET Ethertype  (Daniel Braunwarth)

Add the Ethertype for the PROFINET protocol.

Signed-off-by: Daniel Braunwarth <daniel@braunwarth.dev>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
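[Editor's note: this patch and the EtherCAT one above boil down to two new defines in include/uapi/linux/if_ether.h, with the values taken from the respective specifications:]

    #define ETH_P_PROFINET  0x8892          /* PROFINET */
    #define ETH_P_ETHERCAT  0x88A4          /* EtherCAT */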
2022-03-01  ELF: Properly redefine PT_GNU_* in terms of PT_LOOS  (Kees Cook)

The PT_GNU_* program header types are actually offsets from PT_LOOS, so redefine them as such, reorder them, and add the missing PT_GNU_RELRO.

Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
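[Editor's note: the reworked definitions take roughly this shape, with each value expressed as an offset from PT_LOOS (0x60000000):]

    #define PT_GNU_EH_FRAME  (PT_LOOS + 0x474e550)
    #define PT_GNU_STACK     (PT_LOOS + 0x474e551)
    #define PT_GNU_RELRO     (PT_LOOS + 0x474e552)
    #define PT_GNU_PROPERTY  (PT_LOOS + 0x474e553)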
2022-03-01  binfmt: move more stuff under CONFIG_COREDUMP  (Alexey Dobriyan)

struct linux_binfmt::core_dump and struct linux_binfmt::min_coredump are used under CONFIG_COREDUMP only. Shrink those embedded configs a bit.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Link: https://lore.kernel.org/r/YglbIFyN+OtwVyjW@localhost.localdomain
2022-03-01  Merge git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf  (Jakub Kicinski)

Pablo Neira Ayuso says:

====================
Netfilter fixes for net

1) Use the kfree_rcu(ptr, rcu) variant; using kfree_rcu(ptr) was not intentional. From Eric Dumazet.

2) Use-after-free in netfilter hook core, from Eric Dumazet.

3) Missing rcu read lock side for netfilter egress hook, from Florian Westphal.

4) nf_queue assumes state->sk is a full socket while it might not be. Invoke sock_gen_put(), from Florian Westphal.

5) Add selftest to exercise the reported KASAN splat in 4).

6) Fix possible use-after-free in nf_queue in case sk_refcnt is 0. Also from Florian.

7) Use the input interface index only for hardware offload, not for the software plane. This breaks the tc ct action. Patch from Paul Blakey.

* git://git.kernel.org/pub/scm/linux/kernel/git/netfilter/nf:
  net/sched: act_ct: Fix flow table lookup failure with no originating ifindex
  netfilter: nf_queue: handle socket prefetch
  netfilter: nf_queue: fix possible use-after-free
  selftests: netfilter: add nfqueue TCP_NEW_SYN_RECV socket race test
  netfilter: nf_queue: don't assume sk is full socket
  netfilter: egress: silence egress hook lockdep splats
  netfilter: fix use-after-free in __nf_register_net_hook()
  netfilter: nf_tables: prefer kfree_rcu(ptr, rcu) variant
====================

Link: https://lore.kernel.org/r/20220301215337.378405-1-pablo@netfilter.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-03-01  net/sched: act_ct: Fix flow table lookup failure with no originating ifindex  (Paul Blakey)

After the cited commit optimized HW insertion, flow table entries are populated with ifindex information which was intended to be used only for HW offload. This tuple ifindex is hashed in the flow table key, so it must be filled for lookup to be successful. But the tuple ifindex is only relevant for the netfilter flowtables (nft), so it's not filled in act_ct flow table lookup, resulting in lookup failure, and hence no SW offload and no offload teardown for TCP connection FIN/RST packets.

To fix this, add a new tc ifindex field to the tuple, which will only be used for offloading, not for lookup, as it will not be part of the tuple hash.

Fixes: 9795ded7f924 ("net/sched: act_ct: Fill offloading tuple iifidx")
Signed-off-by: Paul Blakey <paulb@nvidia.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>