path: root/include
Age | Commit message | Author
2019-04-08 | cfg80211/nl80211: Offload OWE processing to user space in AP mode | Sunil Dutt
This interface allows the host driver to offload OWE processing to user space. It is intended to support the OWE (Opportunistic Wireless Encryption) AKM in drivers that implement SME but rely on user space for the cryptographic/OWE processing in AP mode; such drivers are not capable of processing/deriving the DH IE.

A new NL80211 command, NL80211_CMD_UPDATE_OWE_INFO, is introduced to send the request/event between the host driver and user space. Through the event, the driver provides the OWE info (MAC address and DH IE) of the peer to user space for cryptographic processing of the DH IE. Accordingly, user space updates the OWE info/DH IE back to the driver.

The sequence in AP mode for OWE authentication is as follows. The driver passes the OWE info obtained from the peer in the Association Request to user space through the event cfg80211_update_owe_info_event. User space processes the OWE info received and generates new OWE info. This OWE info is passed to the driver through the NL80211_CMD_UPDATE_OWE_INFO request, and the driver eventually uses it to send the Association Response to the peer. The OWE info in the command interface carries the IEs that include the PMKID of the peer if the PMKSA is still valid, or an updated DH IE for generating a new PMKSA with the peer.

Signed-off-by: Liangwei Dong <liangwei@codeaurora.org>
Signed-off-by: Sunil Dutt <usdutt@codeaurora.org>
Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
[remove policy initialization - no longer exists]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
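For illustration, a minimal driver-side sketch of reporting the peer's OWE info to user space via the new event; treat the exact struct layout and helper usage as an approximation of the interface described above, not as the authoritative code.

    #include <net/cfg80211.h>

    /* Sketch: forward the DH IE from the peer's Association Request to
     * user space (NL80211_CMD_UPDATE_OWE_INFO event), so it can derive
     * the PMK and hand the response IEs back to the driver. */
    static void drv_report_owe_assoc_req(struct net_device *netdev,
                                         const u8 *peer_addr,
                                         const u8 *dh_ie, size_t dh_ie_len)
    {
            struct cfg80211_update_owe_info owe_info = {};

            memcpy(owe_info.peer, peer_addr, ETH_ALEN);
            owe_info.ie = dh_ie;
            owe_info.ie_len = dh_ie_len;

            cfg80211_update_owe_info_event(netdev, &owe_info, GFP_KERNEL);
    }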
2019-04-08 | mac80211: Add support for NL80211_STA_INFO_AIRTIME_LINK_METRIC | Narayanraddi Masti
Add support for mesh airtime link metric attribute NL80211_STA_INFO_AIRTIME_LINK_METRIC. Signed-off-by: Narayanraddi Masti <team.nmasti@gmail.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-04-08 | nl80211/cfg80211: Specify band specific min RSSI thresholds with sched scan | vamsi krishna
This commit adds support for specifying per-band RSSI thresholds for each match set. This enhances the current behavior, which specifies a single rssi_threshold across all bands, by introducing rssi_threshold_per_band. The per-band RSSI thresholds are referenced through NL80211_BAND_* (enum nl80211_band) values used as attribute types, and are nested under NL80211_ATTR_SCHED_SCAN_MIN_RSSI. These band-specific thresholds take precedence over the current rssi_thold of a match set. Drivers indicate this support through %NL80211_EXT_FEATURE_SCHED_SCAN_BAND_SPECIFIC_RSSI_THOLD. To stay backward compatible, the per-band RSSI attributes/values do not specify a "default RSSI filter" the way NL80211_SCHED_SCAN_MATCH_ATTR_RSSI does; they have to be specified for the corresponding match set.

Signed-off-by: vamsi krishna <vamsin@codeaurora.org>
Signed-off-by: Srinivas Dasari <dasaris@codeaurora.org>
[rebase on refactoring, add policy]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
2019-04-08 | mac80211: make ieee80211_schedule_txq schedule empty TXQs | Felix Fietkau
Currently there is no way for the driver to signal to mac80211 that it should schedule a TXQ even if there are no packets on the mac80211 part of that queue. This is problematic if the driver has an internal retry queue to deal with software A-MPDU retry. This patch changes the behavior of ieee80211_schedule_txq to always schedule the queue, as its only user (ath9k) seems to expect such behavior already: it calls this function on tx status and on powersave wakeup whenever its internal retry queue is not empty. Also add an extra argument to ieee80211_return_txq to get the same behavior. This fixes an issue on ath9k where tx queues with packets to retry (and no new packets in mac80211) would not get serviced. Fixes: 89cea7493a346 ("ath9k: Switch to mac80211 TXQ scheduling and airtime APIs") Signed-off-by: Felix Fietkau <nbd@nbd.name> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
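As a rough sketch (not the actual ath9k code), a driver with an internal software-retry queue can now keep a TXQ on the scheduler's list even when mac80211 has nothing queued, by passing the new force argument; drv_push_to_hw() is a hypothetical driver helper.

    /* Simplified: dequeue what mac80211 has, then return the TXQ with
     * force=true while the driver-internal retry queue is non-empty so
     * it keeps getting scheduled. */
    static void drv_service_txq(struct ieee80211_hw *hw,
                                struct ieee80211_txq *txq,
                                bool retry_queue_nonempty)
    {
            struct sk_buff *skb;

            ieee80211_txq_schedule_start(hw, txq->ac);

            while ((skb = ieee80211_tx_dequeue(hw, txq)))
                    drv_push_to_hw(skb);    /* hypothetical helper */

            ieee80211_return_txq(hw, txq, retry_queue_nonempty);
            ieee80211_txq_schedule_end(hw, txq->ac);
    }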
2019-04-08 | cfg80211: add ratelimited variants of err and warn | Stanislaw Gruszka
wiphy_{err,warn}_ratelimited will be used by rt2x00 Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com> Signed-off-by: Johannes Berg <johannes.berg@intel.com>
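The new macros are presumably thin wrappers around the existing dev_*_ratelimited() helpers, roughly:

    #define wiphy_err_ratelimited(wiphy, format, args...)           \
            dev_err_ratelimited(&(wiphy)->dev, format, ##args)
    #define wiphy_warn_ratelimited(wiphy, format, args...)          \
            dev_warn_ratelimited(&(wiphy)->dev, format, ##args)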
2019-04-08 | arch: Remove dummy mmiowb() definitions from arch code | Will Deacon
Now that no driver code is using mmiowb() directly, remove the dummy definitions remaining in architectures that don't make use of asm-generic/io.h, as well as the definition in asm-generic/io.h itself. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-08 | Merge tag 'v5.1-rc3' into devel | Linus Walleij
Linux 5.1-rc3
2019-04-08 | drivers: Remove explicit invocations of mmiowb() | Will Deacon
mmiowb() is now implied by spin_unlock() on architectures that require it, so there is no reason to call it from driver code. This patch was generated using coccinelle:

    @mmiowb@
    @@
    - mmiowb();

and invoked as:

    $ for d in drivers include/linux/qed sound; do \
        spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done

NOTE: mmiowb() has only ever guaranteed ordering in conjunction with spin_unlock(). However, pairing each mmiowb() removal in this patch with the corresponding call to spin_unlock() is not at all trivial, so there is a small chance that this change may regress any drivers incorrectly relying on mmiowb() to order MMIO writes between CPUs using lock-free synchronisation. If you've ended up bisecting to this commit, you can reintroduce the mmiowb() calls using wmb() instead, which should restore the old behaviour on all architectures other than some esoteric ia64 systems.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
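A representative (hypothetical) driver hunk removed by the script looks like this; the register and lock names are made up for illustration.

    /* Before: */
    writel(val, ring->doorbell_reg);
    mmiowb();
    spin_unlock_irqrestore(&ring->lock, flags);

    /* After: spin_unlock_irqrestore() now provides the mmiowb() ordering
     * on the architectures that need it. */
    writel(val, ring->doorbell_reg);
    spin_unlock_irqrestore(&ring->lock, flags);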
2019-04-08 | mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors | Will Deacon
Removing explicit calls to mmiowb() from driver code means that we must now call into the generic mmiowb_spin_{lock,unlock}() functions from the core spinlock code. In order to elide barriers following critical sections without any I/O writes, we also hook into the asm-generic I/O routines. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2019-04-08 | asm-generic/mmiowb: Add generic implementation of mmiowb() tracking | Will Deacon
In preparation for removing all explicit mmiowb() calls from driver code, implement a tracking system in asm-generic based loosely on the PowerPC implementation. This allows architectures with a non-empty mmiowb() definition to have the barrier automatically inserted in spin_unlock() following a critical section containing an I/O write. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
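A simplified sketch of the tracking idea (not the exact asm-generic code): each CPU records whether an MMIO write happened while a spinlock was held, and the unlock path issues the barrier only in that case.

    struct mmiowb_state {
            u16     nesting_count;  /* depth of spinlocks currently held */
            u16     mmiowb_pending; /* MMIO write seen inside the section */
    };

    /* called from the generic I/O accessors, e.g. writel() */
    static inline void mmiowb_set_pending(void)
    {
            struct mmiowb_state *ms = __mmiowb_state(); /* per-CPU, arch-provided */

            ms->mmiowb_pending = ms->nesting_count;
    }

    /* hooked into the spin_lock()/spin_unlock() paths */
    static inline void mmiowb_spin_lock(void)
    {
            __mmiowb_state()->nesting_count++;
    }

    static inline void mmiowb_spin_unlock(void)
    {
            struct mmiowb_state *ms = __mmiowb_state();

            if (unlikely(ms->mmiowb_pending)) {
                    ms->mmiowb_pending = 0;
                    mmiowb();       /* arch-specific barrier */
            }
            ms->nesting_count--;
    }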
2019-04-08 | cpufreq: intel_pstate: Update max frequency on global turbo changes | Rafael J. Wysocki
While the cpuinfo.max_freq value doesn't really matter for intel_pstate in the active mode, in the passive mode it is used by governors as the maximum physical frequency of the CPU and the results of governor computations generally depend on it. Also it is made available to user space via sysfs and it should match the current HW configuration. For this reason, make intel_pstate update cpuinfo.max_freq for all CPUs if it detects a global change of turbo frequency settings from "disable" to "enable" or the other way associated with a _PPC change notification from the platform firmware. Note that policy_is_inactive(), cpufreq_cpu_acquire(), cpufreq_cpu_release(), and cpufreq_set_policy() need to be made available to it for this purpose. Link: https://bugzilla.kernel.org/show_bug.cgi?id=200759 Reported-by: Gabriele Mazzotta <gabriele.mzt@gmail.com> Tested-by: Gabriele Mazzotta <gabriele.mzt@gmail.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
2019-04-08 | gpio: mmio: Drop bgpio_dir_inverted | Linus Walleij
The direction inversion semantics are now handled by simply using the in/out registers that are available, so there is no need to keep track of inversion semantics explicitly anymore.

Reviewed-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Reviewed-by: Jan Kotas <jank@cadence.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
2019-04-08 | Merge drm/drm-fixes into drm-misc-fixes | Maxime Ripard
We haven't backmerged for a while and this creates some coherency issues across DRM drivers. Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
2019-04-08 | mtd: nand: omap: Fix comment in platform data using wrong Kconfig symbol | Miquel Raynal
The symbol that is being used in the #if/#endif block is not the one which is mentioned at the bottom. Fixes: 93af53b8633c ("nand: omap2: Remove horrible ifdefs to fix module probe") Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2019-04-08 | mtd: rawnand: Get rid of chip->ecc_{strength,step}_ds | Boris Brezillon
nand_device embeds a nand_ecc_req object which contains the minimum strength and step-size required by the NAND device. Drop the chip->ecc_{strength,step}_ds fields and use chip->base.eccreq.{strength,step_size} instead. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
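In driver code the change is essentially a field rename, taken straight from the description above:

    /* Before: minimum ECC requirements cached in nand_chip itself */
    ecc->strength = chip->ecc_strength_ds;
    ecc->size     = chip->ecc_step_ds;

    /* After: taken from the generic nand_device embedded in nand_chip */
    ecc->strength = chip->base.eccreq.strength;
    ecc->size     = chip->base.eccreq.step_size;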
2019-04-08 | mtd: rawnand: Get rid of chip->numchips | Boris Brezillon
The same information is provided by nanddev_ntargets(). Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: rawnand: Get rid of chip->chipsize | Boris Brezillon
The target size can now be returned by nanddev_get_targetsize(). Get rid of the chip->chipsize field and use this helper instead. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
2019-04-08 | mtd: rawnand: Get rid of chip->bits_per_cell | Boris Brezillon
Now that we inherit from nand_device, we can use nand_device->memorg.bits_per_cell instead of having our own field at the nand_chip level. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: rawnand: Use nanddev_mtd_max_bad_blocks() | Boris Brezillon
nanddev_mtd_max_bad_blocks() is implemented by the generic NAND layer and is already doing what we need. Reuse this function instead of having our own implementation. While at it, get rid of the ->max_bb_per_die and ->blocks_per_die fields which are now unused. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: rawnand: Move all page cache related fields to a sub-struct | Boris Brezillon
Looking at the field names it's hard to tell what ->data_buf, ->pagebuf and ->pagebuf_bitflips are for. Clarify that by moving those fields in a sub-struct named pagecache. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: rawnand: Provide a helper to get chip->data_buf | Boris Brezillon
We plan to move cache related fields to a pagecache struct in nand_chip, but some drivers access ->pagebuf directly to invalidate the cache before they start using ->data_buf. Let's provide a helper that returns a pointer to ->data_buf after invalidating the cache.

Signed-off-by: Boris Brezillon <bbrezillon@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
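The helper is presumably a small inline along these lines (the exact name and the pagecache field name are assumptions based on the commits above):

    /* Invalidate the page cache, then hand out the chip's page buffer. */
    static inline u8 *nand_get_data_buf(struct nand_chip *chip)
    {
            chip->pagecache.page = -1;

            return chip->data_buf;
    }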
2019-04-08 | mtd: rawnand: Prepare things to reuse the generic NAND layer | Boris Brezillon
The generic NAND layer provides abstraction of NAND devices no matter the bus that is used to communicate with the chip. Basing the raw NAND core on this generic layer should avoid duplication of common operations, like iterating over all pages/blocks for MTD IO/erase operations. In order to re-use this layer, we must first inherit from nand_device and then initialize the nand_device struct appropriately. This patch is taking care of the former. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: rawnand: Use nand_to_mtd() in nand_{set,get}_flash_node() | Boris Brezillon
Use the nand_to_mtd() helper to access chip->mtd as done everywhere else. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: nand: Add a helper to retrieve the number of pages per target | Boris Brezillon
Will be used by the raw NAND framework. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | mtd: nand: Add a helper returning the number of eraseblocks per target | Boris Brezillon
Some drivers in the raw NAND framework seem to need this helper, so let's just add it instead of open-coding the logic.

Signed-off-by: Boris Brezillon <bbrezillon@kernel.org>
Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
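Both helpers can be derived from the memory organization already stored in nand_device; a sketch, with helper names assumed since the log above does not spell them out:

    static inline unsigned int
    nanddev_eraseblocks_per_target(const struct nand_device *nand)
    {
            return nand->memorg.eraseblocks_per_lun *
                   nand->memorg.luns_per_target;
    }

    static inline unsigned int
    nanddev_pages_per_target(const struct nand_device *nand)
    {
            return nand->memorg.pages_per_eraseblock *
                   nanddev_eraseblocks_per_target(nand);
    }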
2019-04-08 | mtd: nand: Add max_bad_eraseblocks_per_lun info to memorg | Boris Brezillon
NAND datasheets usually give the maximum number of bad blocks per LUN and this number can be used to help upper layers decide how much blocks they should reserve for bad block handling. Add a max_bad_eraseblocks_per_lun to the nand_memory_organization struct and update the NAND_MEMORG() macro (and its users) accordingly. We also provide a default mtd->_max_bad_blocks() implementation. Signed-off-by: Boris Brezillon <bbrezillon@kernel.org> Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com> Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de>
2019-04-08 | dt-bindings: clk: axg-audio: add g12a support | Jerome Brunet
Add a new compatible string and additional clock ids for audio clock controller of the g12a SoC family. Signed-off-by: Jerome Brunet <jbrunet@baylibre.com> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Neil Armstrong <narmstrong@baylibre.com> Link: https://lkml.kernel.org/r/20190329160649.31603-2-jbrunet@baylibre.com
2019-04-08 | ASoC: core: conditionally increase module refcount on component open | Ranjani Sridharan
Recently, for Intel platforms, the "ignore_module_refcount" field was introduced for the component driver. In order to avoid a deadlock preventing the PCI modules from being removed even when the card was idle, the refcounts were not incremented for the device driver module during component probe. However, this change introduced a nasty side effect: the device driver module can be unloaded while a PCM stream is open.

This patch proposes to rename the field to "module_get_upon_open". When this field is set, the module refcount should be incremented on PCM open and decremented upon PCM close. This will enable modules to be removed when no PCM playback/capture happens and prevent removal when the component is actually in use. Also, align the skylake component driver with the new name.

Fixes: b450b878 ("ASoC: core: don't increase component module refcount unconditionally")
Signed-off-by: Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
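The PCM open/close paths presumably gain something like the following (simplified sketch; the exact member paths may differ):

    /* On PCM open: pin the component's module only if it asked for it. */
    if (component->driver->module_get_upon_open &&
        !try_module_get(component->dev->driver->owner))
            return -ENODEV;

    /* ... stream is in use ... */

    /* On PCM close: drop the reference again. */
    if (component->driver->module_get_upon_open)
            module_put(component->dev->driver->owner);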
2019-04-08 | xfrm: store xfrm_mode directly, not its address | Florian Westphal
This structure is now only 4 bytes, so it's more efficient to cache a copy rather than its address.

No significant size difference in an allmodconfig vmlinux. With a non-modular kernel that has all XFRM options enabled, this series reduces the vmlinux image size by ~11kb.

All xfrm_mode indirections are gone and all modes are built-in.

before (ipsec-next master):
   text      data     bss       dec       filename
   21071494  7233140  11104324  39408958  vmlinux.master

after this series:
   21066448  7226772  11104324  39397544  vmlinux.patched

With an allmodconfig kernel, the size increase is only 362 bytes, even though all the xfrm config options removed in this series are modular.

before:
   text      data     bss      dec       filename
   15731286  6936912  4046908  26715106  vmlinux.master

after this series:
   15731492  6937068  4046908  26715468  vmlinux

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
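After this series the mode descriptor is reduced to a handful of bytes, roughly as follows (the exact layout may differ; this is only an illustration of why copying by value is now cheap):

    /* What is left of xfrm_mode once all indirections are gone: small
     * enough to copy by value instead of caching a pointer to it. */
    struct xfrm_mode {
            u8      encap;  /* XFRM_MODE_TRANSPORT / TUNNEL / BEET / ... */
            u8      family; /* AF_INET or AF_INET6 */
            u8      flags;  /* e.g. XFRM_MODE_FLAG_TUNNEL */
    };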
2019-04-08 | xfrm: make xfrm modes builtin | Florian Westphal
after previous changes, xfrm_mode contains no function pointers anymore, and all modules defining such a struct contain no code except init/exit functions that register the xfrm_mode struct with the xfrm core.

Just place the xfrm modes in the core and remove the modules; the run-time xfrm_mode register/unregister functionality is removed.

Before:
   text      data     bss      dec       filename
   7523      200      2364     10087     net/xfrm/xfrm_input.o
   40003     628      440      41071     net/xfrm/xfrm_state.o
   15730338  6937080  4046908  26714326  vmlinux

After:
   7389      200      2364     9953      net/xfrm/xfrm_input.o
   40574     656      440      41670     net/xfrm/xfrm_state.o
   15730084  6937068  4046908  26714060  vmlinux

The xfrm*_mode_{transport,tunnel,beet} modules are gone.

v2: replace CONFIG_INET6_XFRM_MODE_* IS_ENABLED guards with CONFIG_IPV6 ones rather than removing them.

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove afinfo pointer from xfrm_mode | Florian Westphal
Adds an EXPORT_SYMBOL for afinfo_get_rcu, as it will now be called from ipv6 in case of CONFIG_IPV6=m.

This change has virtually no effect on vmlinux size, but it reduces afinfo size and allows a followup patch to make xfrm modes const.

v2: mark if (afinfo) tests as likely (Sabrina)
    re-fetch afinfo according to inner_mode in xfrm_prepare_input().

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove output2 indirection from xfrm_mode | Florian Westphal
similar to previous patch: no external module dependencies, so we can avoid the indirection by placing this in the core.

This change removes the last indirection from xfrm_mode, and the xfrm4|6_mode_{beet,tunnel}.c modules contain (almost) no code anymore.

Before:
   text  data  bss  dec   hex  filename
   3957  136   0    4093  ffd  net/xfrm/xfrm_output.o
   587   44    0    631   277  net/ipv4/xfrm4_mode_beet.o
   649   32    0    681   2a9  net/ipv4/xfrm4_mode_tunnel.o
   625   44    0    669   29d  net/ipv6/xfrm6_mode_beet.o
   599   32    0    631   277  net/ipv6/xfrm6_mode_tunnel.o

After:
   text  data  bss  dec   hex   filename
   5359  184   0    5543  15a7  net/xfrm/xfrm_output.o
   171   24    0    195   c3    net/ipv4/xfrm4_mode_beet.o
   171   24    0    195   c3    net/ipv4/xfrm4_mode_tunnel.o
   172   24    0    196   c4    net/ipv6/xfrm6_mode_beet.o
   172   24    0    196   c4    net/ipv6/xfrm6_mode_tunnel.o

v2: fold the *encap_add functions into xfrm*_prepare_output
    preserve (move) output2 comment (Sabrina)
    use x->outer_mode->encap, not inner
    fix a build breakage on ppc (kbuild robot)

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove input2 indirection from xfrm_mode | Florian Westphal
No external dependencies on any module, place this in the core. The increase is about 1800 bytes for xfrm_input.o.

The beet helpers get added to an internal header, as they can be reused from xfrm_output.c in the next patch (the kernel contains several copies of them in the xfrm{4,6}_mode_beet.c files).

Before:
   text  data  bss   dec   filename
   5578  176   2364  8118  net/xfrm/xfrm_input.o
   1180  64    0     1244  net/ipv4/xfrm4_mode_beet.o
   171   40    0     211   net/ipv4/xfrm4_mode_transport.o
   1163  40    0     1203  net/ipv4/xfrm4_mode_tunnel.o
   1083  52    0     1135  net/ipv6/xfrm6_mode_beet.o
   172   40    0     212   net/ipv6/xfrm6_mode_ro.o
   172   40    0     212   net/ipv6/xfrm6_mode_transport.o
   1056  40    0     1096  net/ipv6/xfrm6_mode_tunnel.o

After:
   text  data  bss   dec   filename
   7373  200   2364  9937  net/xfrm/xfrm_input.o
   587   44    0     631   net/ipv4/xfrm4_mode_beet.o
   171   32    0     203   net/ipv4/xfrm4_mode_transport.o
   649   32    0     681   net/ipv4/xfrm4_mode_tunnel.o
   625   44    0     669   net/ipv6/xfrm6_mode_beet.o
   172   32    0     204   net/ipv6/xfrm6_mode_ro.o
   172   32    0     204   net/ipv6/xfrm6_mode_transport.o
   599   32    0     631   net/ipv6/xfrm6_mode_tunnel.o

v2: pass inner_mode to xfrm_inner_mode_encap_remove to fix AF_UNSPEC selector breakage (bisected by Benedict Wong)

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove gso_segment indirection from xfrm_mode | Florian Westphal
These functions are small and we only have versions for tunnel and transport mode, for ipv4 and ipv6 respectively. Just place the 'transport or tunnel' conditional in the protocol specific function instead of using an indirection.

Before:
   3226      12       0        3238      net/ipv4/esp4_offload.o
   7004      492      0        7496      net/ipv4/ip_vti.o
   3339      12       0        3351      net/ipv6/esp6_offload.o
   11294     460      0        11754     net/ipv6/ip6_vti.o
   1180      72       0        1252      net/ipv4/xfrm4_mode_beet.o
   428       48       0        476       net/ipv4/xfrm4_mode_transport.o
   1271      48       0        1319      net/ipv4/xfrm4_mode_tunnel.o
   1083      60       0        1143      net/ipv6/xfrm6_mode_beet.o
   172       48       0        220       net/ipv6/xfrm6_mode_ro.o
   429       48       0        477       net/ipv6/xfrm6_mode_transport.o
   1164      48       0        1212      net/ipv6/xfrm6_mode_tunnel.o
   15730428  6937008  4046908  26714344  vmlinux

After:
   3461      12       0        3473      net/ipv4/esp4_offload.o
   7000      492      0        7492      net/ipv4/ip_vti.o
   3574      12       0        3586      net/ipv6/esp6_offload.o
   11295     460      0        11755     net/ipv6/ip6_vti.o
   1180      64       0        1244      net/ipv4/xfrm4_mode_beet.o
   171       40       0        211       net/ipv4/xfrm4_mode_transport.o
   1163      40       0        1203      net/ipv4/xfrm4_mode_tunnel.o
   1083      52       0        1135      net/ipv6/xfrm6_mode_beet.o
   172       40       0        212       net/ipv6/xfrm6_mode_ro.o
   172       40       0        212       net/ipv6/xfrm6_mode_transport.o
   1056      40       0        1096      net/ipv6/xfrm6_mode_tunnel.o
   15730424  6937008  4046908  26714340  vmlinux

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove xmit indirection from xfrm_mode | Florian Westphal
There are only two versions (tunnel and transport). The ip/ipv6 versions differ only in sizeof(iphdr) vs ipv6hdr. Place this in the core and use the x->outer_mode->encap type to call the correct adjustment helper.

Before:
   text      data     bss      dec       filename
   15730311  6937008  4046908  26714227  vmlinux

After:
   15730428  6937008  4046908  26714344  vmlinux

(about a 117 byte increase)

v2: use family from x->outer_mode, not inner

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
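Conceptually, the removed ->xmit() indirection becomes a small switch on the encap type in the core, along these lines (sketch only; the adjustment helpers are hypothetical names, not the mainline functions):

    static int xfrm_outer_mode_xmit_sketch(struct xfrm_state *x,
                                           struct sk_buff *skb)
    {
            switch (x->outer_mode->encap) {
            case XFRM_MODE_TRANSPORT:
                    return xfrm_transport_adjust(x, skb);
            case XFRM_MODE_TUNNEL:
                    /* only the header size differs between v4 and v6 */
                    if (x->outer_mode->family == AF_INET)
                            return xfrm_tunnel_adjust(x, skb,
                                                      sizeof(struct iphdr));
                    return xfrm_tunnel_adjust(x, skb,
                                              sizeof(struct ipv6hdr));
            default:
                    WARN_ON_ONCE(1);
                    return -EOPNOTSUPP;
            }
    }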
2019-04-08 | xfrm: remove output indirection from xfrm_mode | Florian Westphal
Same as the input indirection. Only exception: we need to export xfrm_outer_mode_output for pktgen.

Increases the size of vmlinux by about 163 bytes:

Before:
   text      data     bss      dec       filename
   15730208  6936948  4046908  26714064  vmlinux

After:
   15730311  6937008  4046908  26714227  vmlinux

xfrm_inner_extract_output has no more external callers, make it static.

v2: add IS_ENABLED(IPV6) guard in xfrm6_prepare_output
    add two missing breaks in xfrm_outer_mode_output (Sabrina Dubroca)
    add WARN_ON_ONCE for 'call AF_INET6 related output function, but CONFIG_IPV6=n' case.
    make xfrm_inner_extract_output static

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | xfrm: remove input indirection from xfrm_mode | Florian Westphal
No need for any indirection or abstraction here, both functions are pretty much the same and quite small, and they have no external dependencies. xfrm_prepare_input can then be made static.

With an allmodconfig build, the size increase of vmlinux is 25 bytes:

Before:
   text      data     bss      dec       filename
   15730207  6936924  4046908  26714039  vmlinux

After:
   15730208  6936948  4046908  26714064  vmlinux

v2: Fix INET_XFRM_MODE_TRANSPORT name in is-enabled test (Sabrina Dubroca)
    change copied comment to refer to transport and network header, not skb->{h,nh}, which don't exist anymore. (Sabrina)
    make xfrm_prepare_input static (Eyal Birger)

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | spi: add a method for configuring CS timing | Sowjanya Komatineni
This patch creates an optional set_cs_timing method for SPI masters to implement CS timing configuration, where applicable. It also creates a spi_cs_timing accessor for SPI clients to use when requesting that the SPI master controller configure the device's requested CS setup time, hold time and inactive delay.

Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
2019-04-08 | xfrm: place af number into xfrm_mode struct | Florian Westphal
This will be useful to know if we're supposed to decode ipv4 or ipv6.

While at it, make the unregister function return void; all module_exit functions just did BUG(), and there is never a point in doing error checks if there is no way to handle such an error.

Signed-off-by: Florian Westphal <fw@strlen.de>
Reviewed-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
2019-04-08 | spi: bitbang: Introduce spi_bitbang_init() | Andrey Smirnov
Move all of the code doing struct spi_bitbang initialization, so that it can be paired with devm_spi_register_master() in order to avoid having to call spi_bitbang_stop() explicitly. Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com> Cc: Mark Brown <broonie@kernel.org> Cc: Chris Healy <cphealy@gmail.com> Cc: linux-spi@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Mark Brown <broonie@kernel.org>
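This lets a controller driver pair the bitbang setup with device-managed registration in probe(), roughly as sketched below (ret, dev and bitbang are assumed local context):

    /* Sketch of the intended usage in a controller driver's probe(): */
    ret = spi_bitbang_init(bitbang);
    if (ret)
            return ret;

    /* devm-managed: no explicit spi_bitbang_stop() needed in remove() */
    return devm_spi_register_master(dev, bitbang->master);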
2019-04-08 | crypto: streebog - fix unaligned memory accesses | Eric Biggers
Don't cast the data buffer directly to streebog_uint512, as this violates alignment rules. Fixes: fe18957e8e87 ("crypto: streebog - add Streebog hash function") Cc: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Vitaly Chikunov <vt@altlinux.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-08 | crypto: aes - Use ____cacheline_aligned for aes data | Andi Kleen
__cacheline_aligned is a special section. It cannot be const at the same time because it's not read-only, and it doesn't give any MMU protection.

Mark it ____cacheline_aligned instead, to not place it in a special section but just align it in .rodata.

Cc: herbert@gondor.apana.org.au
Suggested-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
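Illustratively, for one of the AES tables (the exact declarations in crypto/aes_generic.c may differ slightly; the initializer is elided):

    /* Before: const together with __cacheline_aligned, which also tries to
     * move the object into the writable .data..cacheline_aligned section: */
    __visible const u32 crypto_ft_tab[4][256] __cacheline_aligned = { /* ... */ };

    /* After: only force the alignment; the const table stays in .rodata: */
    __visible const u32 crypto_ft_tab[4][256] ____cacheline_aligned = { /* ... */ };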
2019-04-08 | crypto: ccp - introduce SEV_GET_ID2 command | Singh, Brijesh
The current definition and implementation of the SEV_GET_ID command does not provide the length of the unique ID returned by the firmware. As per the firmware specification, the firmware may return an ID length that is not restricted to 64 bytes as assumed by the SEV_GET_ID command.

Introduce the SEV_GET_ID2 command to overcome the SEV_GET_ID limitations. Deprecate SEV_GET_ID in favor of SEV_GET_ID2. At the same time, update the SEV API web link.

Cc: Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Gary Hook <gary.hook@amd.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Nathaniel McCallum <npmccallum@redhat.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
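The new ioctl payload presumably carries a user buffer plus an in/out length, something like the following (field names are assumed, not taken from the log above):

    struct sev_user_data_get_id2 {
            __u64 address;  /* userspace buffer to receive the unique ID */
            __u32 length;   /* in: buffer size, out: actual ID length */
    } __attribute__((packed));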
2019-04-08 | drm: add drm_fb_xrgb8888_to_rgb888_dstclip() | Gerd Hoffmann
Similar to drm_fb_xrgb8888_to_rgb565_dstclip(), but converts to rgb888 instead of rgb565.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Noralf Trønnes <noralf@tronnes.org>
Link: http://patchwork.freedesktop.org/patch/msgid/20190405095219.9231-5-kraxel@redhat.com
2019-04-08 | drm: add drm_fb_xrgb8888_to_rgb565_dstclip() | Gerd Hoffmann
It is a drm_fb_xrgb8888_to_rgb565() variant which checks the clip rectangle for the destination too. Common code between drm_fb_xrgb8888_to_rgb565() and drm_fb_xrgb8888_to_rgb565_dstclip() was factored out into the drm_fb_xrgb8888_to_rgb565_lines() helper function. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Reviewed-by: Noralf Trønnes <noralf@tronnes.org> Link: http://patchwork.freedesktop.org/patch/msgid/20190405095219.9231-4-kraxel@redhat.com
2019-04-08 | drm: add drm_fb_memcpy_dstclip() helper | Gerd Hoffmann
It is a drm_fb_memcpy() variant which checks the clip rectangle for the destination too. Common code between drm_fb_memcpy() and drm_fb_memcpy_dstclip() was factored out into the drm_fb_memcpy_lines() helper function. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Reviewed-by: Noralf Trønnes <noralf@tronnes.org> Link: http://patchwork.freedesktop.org/patch/msgid/20190405095219.9231-3-kraxel@redhat.com
2019-04-08 | drm: move tinydrm format conversion helpers to new drm_format_helper.c | Gerd Hoffmann
Also rename them from tinydrm_* to drm_fb_* Pure code motion, no functional change. Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Reviewed-by: Sam Ravnborg <sam@ravnborg.org> Reviewed-by: Noralf Trønnes <noralf@tronnes.org> Link: http://patchwork.freedesktop.org/patch/msgid/20190405095219.9231-2-kraxel@redhat.com
2019-04-07 | rhashtable: add lockdep tracking to bucket bit-spin-locks. | NeilBrown
Native bit_spin_locks are not tracked by lockdep. The bit_spin_locks used for rhashtable buckets are local to the rhashtable implementation, so there is little opportunity for the sort of misuse that lockdep might detect. However locks are held while a hash function or compare function is called, and if one of these took a lock, a misbehaviour is possible. As it is quite easy to add lockdep support this unlikely possibility seems to be enough justification. So create a lockdep class for bucket bit_spin_lock and attach through a lockdep_map in each bucket_table. Without the 'nested' annotation in rhashtable_rehash_one(), lockdep correctly reports a possible problem as this lock is taken while another bucket lock (in another table) is held. This confirms that the added support works. With the correct nested annotation in place, lockdep reports no problems. Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2019-04-07 | rhashtable: use bit_spin_locks to protect hash bucket. | NeilBrown
This patch changes rhashtables to use a bit_spin_lock on BIT(1) of the bucket pointer to lock the hash chain for that bucket.

The benefits of a bit spin_lock are:
 - no need to allocate a separate array of locks.
 - no need to have a configuration option to guide the choice of the size of this array.
 - locking cost is often a single test-and-set in a cache line that will have to be loaded anyway. When inserting at, or removing from, the head of the chain, the unlock is free - writing the new address in the bucket head implicitly clears the lock bit. For __rhashtable_insert_fast() we ensure this always happens when adding a new key.
 - even when locking costs 2 updates (lock and unlock), they are in a cacheline that needs to be read anyway.

The cost of using a bit spin_lock is a little bit of code complexity, which I think is quite manageable.

Bit spin_locks are sometimes inappropriate because they are not fair - if multiple CPUs repeatedly contend on the same lock, one CPU can easily be starved. This is not a credible situation with rhashtable. Multiple CPUs may want to repeatedly add or remove objects, but they will typically do so at different buckets, so they will attempt to acquire different locks. As we have more bit-locks than we previously had spinlocks (by at least a factor of two), we can expect slightly less contention to go with the slightly better cache behavior and reduced memory consumption.

To enhance type checking, a new struct is introduced to represent the pointer plus lock-bit that is stored in the bucket-table. This is "struct rhash_lock_head" and is empty. A pointer to this needs to be cast to either an unsigned lock, or a "struct rhash_head *" to be useful. Variables of this type are most often called "bkt".

Previously "pprev" would sometimes point to a bucket, and sometimes to a ->next pointer in an rhash_head. As these are now different types, pprev is NULL when it would have pointed to the bucket. In that case, 'bkt' is used, together with the correct locking protocol.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
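A minimal sketch of the locking scheme under the assumptions above (helper names invented; the real rhashtable accessors additionally deal with RCU and the nulls marker):

    /* The bucket head doubles as the lock word: one pointer bit is used
     * as the lock bit, so no separate lock array is needed. */
    static inline void bucket_lock(struct rhash_lock_head **bkt)
    {
            local_bh_disable();
            bit_spin_lock(1, (unsigned long *)bkt);
    }

    static inline void bucket_unlock(struct rhash_lock_head **bkt)
    {
            bit_spin_unlock(1, (unsigned long *)bkt);
            local_bh_enable();
    }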
2019-04-07 | rhashtable: allow rht_bucket_var to return NULL. | NeilBrown
Rather than returning a pointer to a static nulls, rht_bucket_var() now returns NULL if the bucket doesn't exist. This will make the next patch, which stores a bitlock in the bucket pointer, somewhat cleaner.

This change involves introducing __rht_bucket_nested(), which is like rht_bucket_nested() but doesn't provide the static nulls, and changing rht_bucket_nested() to call this and possibly provide a static nulls - as is still needed for the non-var case.

Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>