path: root/include/linux
Age  Commit message  Author
2018-12-12  phy: Add MIPI D-PHY configuration options  (Maxime Ripard)
Now that we have some infrastructure for it, allow MIPI D-PHY PHYs to be configured through the generic functions, using a custom structure added to the generic union. The parameters added here are the ones defined in the MIPI D-PHY spec, plus the number of lanes in use. The current set of parameters should cover all potential users. Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2018-12-12  phy: Add configuration interface  (Maxime Ripard)
The phy framework only allows configuring the power state of the PHY using the init and power_on hooks, and their power_off and exit counterparts. While this works for most of the simple PHYs supported so far, some more advanced PHYs need configuration depending on runtime parameters. These PHYs have been supported by a number of means already, often by using ad-hoc drivers in their consumer drivers. That doesn't work too well, however, when a consumer device needs to deal with multiple PHYs, or when multiple consumers need to deal with the same PHY (a DSI driver and a CSI driver for example). So we'll add a new interface, through two functions, phy_validate and phy_configure. The first allows checking that a given configuration, for a given mode, is applicable. It also allows the PHY driver to tune the settings given as parameters as it sees fit. phy_configure will actually apply that configuration in the phy itself. Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
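A minimal consumer-side sketch of the two new calls, combined with the D-PHY options from the entry above. The function name, the choice of fields and the values are illustrative assumptions; only phy_validate(), phy_configure() and the generic option union come from these entries.

    #include <linux/phy/phy.h>
    #include <linux/phy/phy-mipi-dphy.h>

    /* Hypothetical consumer: fill in D-PHY parameters, validate, then apply. */
    static int consumer_setup_dphy(struct phy *dphy, unsigned long hs_clk_rate)
    {
            union phy_configure_opts opts = { };
            int ret;

            opts.mipi_dphy.hs_clk_rate = hs_clk_rate;
            opts.mipi_dphy.lanes = 4;

            /* Let the PHY driver check, and possibly adjust, the settings. */
            ret = phy_validate(dphy, PHY_MODE_MIPI_DPHY, 0, &opts);
            if (ret)
                    return ret;

            /* Apply the (possibly tuned) configuration to the PHY itself. */
            return phy_configure(dphy, &opts);
    }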
2018-12-12  phy: Add MIPI D-PHY mode  (Maxime Ripard)
MIPI D-PHY is a MIPI standard meant mostly for display and cameras in embedded systems. Add a mode for it. Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Reviewed-by: Sakari Ailus <sakari.ailus@linux.intel.com> Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2018-12-12  phy: core: clean up unused ethernet specific phy modes  (Grygorii Strashko)
After recent changes PHY_MODE_SGMII, PHY_MODE_2500SGMII, PHY_MODE_QSGMII, PHY_MODE_10GKR are not used any more and can be removed. Hence - remove them. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2018-12-12  phy: core: add PHY_MODE_ETHERNET  (Grygorii Strashko)
Add a new PHY mode to be used by Ethernet PHY interface drivers or multipurpose PHYs like serdes. It will be reused in further changes. Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
2018-12-12  phy: core: rework phy_set_mode to accept phy mode and submode  (Grygorii Strashko)
Currently the attempt to add support for Ethernet interface mode PHYs (MII/GMII/RGMII) would lead to the necessity of extending enum phy_mode and duplicating there values from the phy_interface_t enum (or introducing more PHY callbacks) [1]. Both approaches are ineffective and would lead to fast bloating of enum phy_mode or struct phy_ops in the process of adding more PHYs for different subsystems, which would make them unmaintainable. As discussed in [1], the solution is to introduce dual-level PHY mode configuration - PHY mode and PHY submode. The PHY mode defines the generic PHY type (subsystem - PCIE/ETHERNET/USB), while the PHY submode is the subsystem-specific interface mode. The latter is usually already defined in the corresponding subsystem headers (phy_interface_t for Ethernet, enum usb_device_speed for USB). This patch is a cumulative change that refactors the PHY framework code to support dual-level PHY mode configuration - PHY mode and PHY submode. It extends the .set_mode() callback to take an additional parameter "int submode" and converts all corresponding PHY drivers to the new .set_mode() callback declaration. The new extended PHY API int phy_set_mode_ext(struct phy *phy, enum phy_mode mode, int submode) is introduced to support dual-level PHY mode configuration, and the existing phy_set_mode() API is converted to a macro, so PHY framework consumers do not need to be changed (~21 matches). [1] http://lkml.kernel.org/r/d63588f6-9ab0-848a-5ad4-8073143bd95d@ti.com Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
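A hedged sketch of consumer usage under the new two-level scheme; the function and PHY pointer names are placeholders, while phy_set_mode_ext(), PHY_MODE_ETHERNET and PHY_INTERFACE_MODE_RGMII come from the headers these commits touch.

    #include <linux/phy/phy.h>  /* enum phy_mode, phy_set_mode_ext() */
    #include <linux/phy.h>      /* phy_interface_t values (PHY_INTERFACE_MODE_*) */

    static int example_configure_serdes(struct phy *serdes_phy)
    {
            /* Mode selects the subsystem; submode the subsystem-specific interface. */
            return phy_set_mode_ext(serdes_phy, PHY_MODE_ETHERNET,
                                    PHY_INTERFACE_MODE_RGMII);
    }

Existing callers keep working because, per the log, phy_set_mode(phy, mode) is now a macro, presumably expanding to phy_set_mode_ext(phy, mode, 0).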
2018-12-11  bpf: fix bpf_jit_limit knob for PAGE_SIZE >= 64K  (Daniel Borkmann)
Michael and Sandipan report: Commit ede95a63b5 introduced a bpf_jit_limit tuneable to limit BPF JIT allocations. At compile time it defaults to PAGE_SIZE * 40000, and is adjusted again at init time if MODULES_VADDR is defined. For ppc64 kernels, MODULES_VADDR isn't defined, so we're stuck with the compile-time default at boot-time, which is 0x9c400000 when using 64K page size. This overflows the signed 32-bit bpf_jit_limit value: root@ubuntu:/tmp# cat /proc/sys/net/core/bpf_jit_limit -1673527296 and can cause various unexpected failures throughout the network stack. In one case `strace dhclient eth0` reported: setsockopt(5, SOL_SOCKET, SO_ATTACH_FILTER, {len=11, filter=0x105dd27f8}, 16) = -1 ENOTSUPP (Unknown error 524) and similar failures can be seen with tools like tcpdump. This doesn't always reproduce however, and I'm not sure why. The more consistent failure I've seen is an Ubuntu 18.04 KVM guest booted on a POWER9 host would time out on systemd/netplan configuring a virtio-net NIC with no noticeable errors in the logs. Given this and also given that in near future some architectures like arm64 will have a custom area for BPF JIT image allocations we should get rid of the BPF_JIT_LIMIT_DEFAULT fallback / default entirely. For 4.21, we have an overridable bpf_jit_alloc_exec(), bpf_jit_free_exec() so therefore add another overridable bpf_jit_alloc_exec_limit() helper function which returns the possible size of the memory area for deriving the default heuristic in bpf_jit_charge_init(). Like bpf_jit_alloc_exec() and bpf_jit_free_exec(), the new bpf_jit_alloc_exec_limit() assumes that module_alloc() is the default JIT memory provider, and therefore in case archs implement their custom module_alloc() we use MODULES_{END,_VADDR} for limits and otherwise for vmalloc_exec() cases like on ppc64 we use VMALLOC_{END,_START}. Additionally, for archs supporting large page sizes, we should change the sysctl to be handled as long to not run into sysctl restrictions in future. Fixes: ede95a63b5e8 ("bpf: add bpf_jit_limit knob to restrict unpriv allocations") Reported-by: Sandipan Das <sandipan@linux.ibm.com> Reported-by: Michael Roth <mdroth@linux.vnet.ibm.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
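A rough sketch of the heuristic described above; only the helper names come from the commit message, and the exact fraction, rounding and clamp are assumptions.

    /* Default JIT memory provider is module_alloc(); size the limit off
     * whichever area the arch actually allocates from. */
    u64 __weak bpf_jit_alloc_exec_limit(void)
    {
    #if defined(MODULES_VADDR)
            return MODULES_END - MODULES_VADDR;
    #else
            return VMALLOC_END - VMALLOC_START;
    #endif
    }

    static int __init bpf_jit_charge_init(void)
    {
            /* Derive the default limit as a fraction of the available area,
             * clamped so the (now long-typed) sysctl value cannot overflow. */
            bpf_jit_limit = min_t(u64, round_up(bpf_jit_alloc_exec_limit() >> 2,
                                                PAGE_SIZE), LONG_MAX);
            return 0;
    }
    pure_initcall(bpf_jit_charge_init);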
2018-12-11  seccomp: add a return code to trap to userspace  (Tycho Andersen)
This patch introduces a means for syscalls matched in seccomp to notify some other task that a particular filter has been triggered. The motivation for this is primarily for use with containers. For example, if a container does an init_module(), we obviously don't want to load this untrusted code, which may be compiled for the wrong version of the kernel anyway. Instead, we could parse the module image, figure out which module the container is trying to load and load it on the host. As another example, containers cannot mount() in general since various filesystems assume a trusted image. However, if an orchestrator knows that e.g. a particular block device has not been exposed to a container for writing, it may want to allow the container to mount that block device (that is, handle the mount for it). This patch adds functionality that is already possible via at least two other means that I know about, both of which involve ptrace(): first, one could ptrace attach, and then iterate through syscalls via PTRACE_SYSCALL. Unfortunately this is slow, so a faster version would be to install a filter that does SECCOMP_RET_TRACE, which triggers a PTRACE_EVENT_SECCOMP. Since ptrace allows only one tracer, if the container runtime is that tracer, users inside the container (or outside) trying to debug it will not be able to use ptrace, which is annoying. It also means that older distributions based on Upstart cannot boot inside containers using ptrace, since upstart itself uses ptrace to monitor services while starting. The actual implementation of this is fairly small, although getting the synchronization right was/is slightly complex. Finally, it's worth noting that the classic seccomp TOCTOU of reading memory data from the task still applies here, but can be avoided with careful design of the userspace handler: if the userspace handler reads all of the task memory that is necessary before applying its security policy, the tracee's subsequent memory edits will not be read by the tracer. Signed-off-by: Tycho Andersen <tycho@tycho.ws> CC: Kees Cook <keescook@chromium.org> CC: Andy Lutomirski <luto@amacapital.net> CC: Oleg Nesterov <oleg@redhat.com> CC: Eric W. Biederman <ebiederm@xmission.com> CC: "Serge E. Hallyn" <serge@hallyn.com> Acked-by: Serge Hallyn <serge@hallyn.com> CC: Christian Brauner <christian@brauner.io> CC: Tyler Hicks <tyhicks@canonical.com> CC: Akihiro Suda <suda.akihiro@lab.ntt.co.jp> Signed-off-by: Kees Cook <keescook@chromium.org>
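A hedged sketch of the tracee side of this mechanism: install a filter that traps mount() to a userspace supervisor. The constant names (SECCOMP_RET_USER_NOTIF, SECCOMP_FILTER_FLAG_NEW_LISTENER) follow the UAPI that eventually merged and are assumptions relative to this exact patch.

    #include <stddef.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <sys/prctl.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    static int install_notify_filter(void)
    {
            struct sock_filter filter[] = {
                    /* Load the syscall number. */
                    BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                             offsetof(struct seccomp_data, nr)),
                    /* mount() -> notify the supervisor; everything else -> allow. */
                    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mount, 0, 1),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_USER_NOTIF),
                    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
            };
            struct sock_fprog prog = {
                    .len = sizeof(filter) / sizeof(filter[0]),
                    .filter = filter,
            };

            if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
                    return -1;

            /* Returns a listener fd the supervisor reads notifications from. */
            return syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
                           SECCOMP_FILTER_FLAG_NEW_LISTENER, &prog);
    }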
2018-12-11  seccomp: switch system call argument type to void *  (Tycho Andersen)
The const qualifier causes problems for any code that wants to write to the third argument of the seccomp syscall, as we will do in a future patch in this series. The third argument to the seccomp syscall is documented as void *, so rather than just dropping the const, let's switch everything to use void * as well. I believe this is safe because of 1. the documentation above, 2. there's no real type information exported about syscalls anywhere besides the man pages. Signed-off-by: Tycho Andersen <tycho@tycho.ws> CC: Kees Cook <keescook@chromium.org> CC: Andy Lutomirski <luto@amacapital.net> CC: Oleg Nesterov <oleg@redhat.com> CC: Eric W. Biederman <ebiederm@xmission.com> CC: "Serge E. Hallyn" <serge@hallyn.com> Acked-by: Serge Hallyn <serge@hallyn.com> CC: Christian Brauner <christian@brauner.io> CC: Tyler Hicks <tyhicks@canonical.com> CC: Akihiro Suda <suda.akihiro@lab.ntt.co.jp> Signed-off-by: Kees Cook <keescook@chromium.org>
2018-12-11  net/mlx5e: Avoid query PPCNT register if not supported by the device  (Eyal Davidovich)
PPCNT is not supported if the PCAM access register is supported and its ppcnt bit is clear. Signed-off-by: Eyal Davidovich <eyald@mellanox.com> Reviewed-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-11  net/mlx5e: Use CQE padding for Ethernet CQs  (Daniel Jurgens)
Writing 64B CQEs to 128B cache lines results in a RMW operation. Padding the CQEs to 128B if possible improves performance on 128B cache line systems like PPC. Testing on PPC showed up to a 24% improvement in small packet throughput vs the default behavior, depending on the workload and system topology. Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-11  Merge tag 'v4.20-rc6' into rdma.git for-next  (Jason Gunthorpe)
For dependencies in following patches.
2018-12-11  Merge branch 'for-linus' of https://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator into regulator-4.21  (Mark Brown)
2018-12-11  IB/mlx5: Report CapabilityMask2 in ib_query_port  (Michael Guralnik)
CapabilityMask2 exists when IB_PORT_CAP_MASK2_SUP is set in the original capability mask. In such cases, query its value and report it in query port. Signed-off-by: Michael Guralnik <michaelgur@mellanox.com> Reviewed-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
2018-12-11  lightnvm: disable interleaved metadata  (Igor Konopko)
Currently pblk only checks the size of I/O metadata and does not take into account whether this metadata is in a separate buffer or interleaved in a single metadata buffer. In reality only the first scenario is supported, while the second mode would break pblk functionality during any I/O operation. This patch prevents pblk from being instantiated in case the device only supports interleaved metadata. Reviewed-by: Javier González <javier@cnexlabs.com> Signed-off-by: Igor Konopko <igor.j.konopko@intel.com> Signed-off-by: Matias Bjørling <mb@lightnvm.io> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-11  lightnvm: dynamic DMA pool entry size  (Igor Konopko)
Currently lightnvm and pblk use a single DMA pool, for which the entry size is always equal to PAGE_SIZE. The contents of each entry allocated from the DMA pool consist of a PPA list (8 bytes * 64), leaving 56 bytes * 64 of space for metadata. Since the metadata field can be bigger, such as 128 bytes, the static size does not cover this use-case. This patch adds support for I/O metadata above 56 bytes by changing the DMA pool size based on the device meta size and allows pblk to use OOB metadata >= 16B. Reviewed-by: Javier González <javier@cnexlabs.com> Signed-off-by: Igor Konopko <igor.j.konopko@intel.com> Signed-off-by: Matias Bjørling <mb@lightnvm.io> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2018-12-11  clk: Tag clk core files with SPDX  (Stephen Boyd)
These are all GPL-2.0 files per the existing license text. Replace the boiler plate with the tag. Signed-off-by: Stephen Boyd <sboyd@kernel.org>
2018-12-11  sched/topology: Make Energy Aware Scheduling depend on schedutil  (Quentin Perret)
Energy Aware Scheduling (EAS) is designed with the assumption that frequencies of CPUs follow their utilization value. When using a CPUFreq governor other than schedutil, the chances of this assumption being true are small, if any. When schedutil is being used, EAS' predictions are at least consistent with the frequency requests. Although those requests have no guarantees to be honored by the hardware, they should at least guide DVFS in the right direction and provide some hope in regards to the EAS model being accurate. To make sure EAS is only used in a sane configuration, create a strong dependency on schedutil being used. Since having sugov compiled-in does not provide that guarantee, make CPUFreq call a scheduler function on governor changes hence letting it rebuild the scheduling domains, check the governors of the online CPUs, and enable/disable EAS accordingly. Signed-off-by: Quentin Perret <quentin.perret@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: adharmap@codeaurora.org Cc: chris.redpath@arm.com Cc: currojerez@riseup.net Cc: dietmar.eggemann@arm.com Cc: edubezval@gmail.com Cc: gregkh@linuxfoundation.org Cc: javi.merino@kernel.org Cc: joel@joelfernandes.org Cc: juri.lelli@redhat.com Cc: morten.rasmussen@arm.com Cc: patrick.bellasi@arm.com Cc: pkondeti@codeaurora.org Cc: skannan@codeaurora.org Cc: smuckle@google.com Cc: srinivas.pandruvada@linux.intel.com Cc: thara.gopinath@linaro.org Cc: tkjos@google.com Cc: valentin.schneider@arm.com Cc: vincent.guittot@linaro.org Cc: viresh.kumar@linaro.org Link: https://lkml.kernel.org/r/20181203095628.11858-9-quentin.perret@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-12-11  PM: Introduce an Energy Model management framework  (Quentin Perret)
Several subsystems in the kernel (task scheduler and/or thermal at the time of writing) can benefit from knowing about the energy consumed by CPUs. Yet, this information can come from different sources (DT or firmware for example), in different formats, hence making it hard to exploit without a standard API. As an attempt to address this, introduce a centralized Energy Model (EM) management framework which aggregates the power values provided by drivers into a table for each performance domain in the system. The power cost tables are made available to interested clients (e.g. task scheduler or thermal) via platform-agnostic APIs. The overall design (focused on Arm-related drivers as an example, but applicable to any architecture) is as follows: Thermal (IPA), Scheduler (EAS) and other clients sit on top of the Energy Model Framework and query it through em_cpu_get() and em_pd_energy(); below the framework, drivers such as cpufreq-dt, arm_scmi and others register their data via em_register_perf_domain(), sourcing it from Device Tree, firmware, or elsewhere. Drivers (typically, but not limited to, CPUFreq drivers) can register data in the EM framework using the em_register_perf_domain() API. The calling driver must provide a callback function with a standardized signature that will be used by the EM framework to build the power cost tables of the performance domain. This design should offer a lot of flexibility to calling drivers, which are free to read information from any location and to use any technique to compute power costs. Moreover, the capacity states registered by drivers in the EM framework are not required to match real performance states of the target. This is particularly important on targets where the performance states are not known by the OS. The power cost coefficients managed by the EM framework are specified in milli-watts. Although the two potential users of those coefficients (IPA and EAS) only need relative correctness, IPA specifically needs to compare the power of CPUs with the power of other components (GPUs, for example), which are still expressed in absolute terms in their respective subsystems. Hence, specifying the power of CPUs in milli-watts should help transitioning IPA to using the EM framework without introducing new problems by keeping units comparable across sub-systems. In the longer term, the EM of devices other than CPUs could also be managed by the EM framework, which would make it possible to remove the absolute unit. However, this is not absolutely required as a first step, so this extension of the EM framework is left for later. On the client side, the EM framework offers APIs to access the power cost tables of a CPU (em_cpu_get()), and to estimate the energy consumed by the CPUs of a performance domain (em_pd_energy()). Clients such as the task scheduler can then use these APIs to access the shared data structures holding the Energy Model of CPUs.
Signed-off-by: Quentin Perret <quentin.perret@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rafael J. Wysocki <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: adharmap@codeaurora.org Cc: chris.redpath@arm.com Cc: currojerez@riseup.net Cc: dietmar.eggemann@arm.com Cc: edubezval@gmail.com Cc: gregkh@linuxfoundation.org Cc: javi.merino@kernel.org Cc: joel@joelfernandes.org Cc: juri.lelli@redhat.com Cc: morten.rasmussen@arm.com Cc: patrick.bellasi@arm.com Cc: pkondeti@codeaurora.org Cc: skannan@codeaurora.org Cc: smuckle@google.com Cc: srinivas.pandruvada@linux.intel.com Cc: thara.gopinath@linaro.org Cc: tkjos@google.com Cc: valentin.schneider@arm.com Cc: vincent.guittot@linaro.org Cc: viresh.kumar@linaro.org Link: https://lkml.kernel.org/r/20181203095628.11858-4-quentin.perret@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
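A hedged driver-side sketch of the registration path described above; the callback body and the function names prefixed with demo_ are made up for illustration, while em_register_perf_domain(), EM_DATA_CB() and the active_power() callback shape follow the framework as described.

    #include <linux/energy_model.h>
    #include <linux/cpumask.h>

    /* Report the cost (in mW) of one capacity state; a real driver would
     * look this up in DT/OPPs or firmware. The linear model here is fake. */
    static int demo_active_power(unsigned long *power, unsigned long *freq, int cpu)
    {
            *power = *freq / 1000;  /* purely illustrative */
            return 0;
    }

    static struct em_data_callback demo_em_cb = EM_DATA_CB(demo_active_power);

    static int demo_register_em(cpumask_t *cpus, unsigned int nr_states)
    {
            /* One call per performance domain; the framework builds the
             * power cost table by invoking the callback for each state. */
            return em_register_perf_domain(cpus, nr_states, &demo_em_cb);
    }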
2018-12-11  sched/cpufreq: Prepare schedutil for Energy Aware Scheduling  (Quentin Perret)
Schedutil requests frequency by aggregating utilization signals from the scheduler (CFS, RT, DL, IRQ) and applying a 25% margin on top of them. Since Energy Aware Scheduling (EAS) needs to be able to predict the frequency requests, it needs to forecast the decisions made by the governor. In order to prepare the introduction of EAS, introduce schedutil_freq_util() to centralize the aforementioned signal aggregation and make it available to both schedutil and EAS. Since frequency selection and energy estimation still need to deal with RT and DL signals slightly differently, schedutil_freq_util() is called with a different 'type' parameter in those two contexts, and returns an aggregated utilization signal accordingly. While at it, introduce the map_util_freq() function which is designed to make schedutil's 25% margin usable easily for both sugov and EAS. As EAS will be able to predict schedutil's frequency requests more accurately than any other governor by design, it'd be sensible to make sure EAS cannot be used without schedutil. This will be done later, once EAS has actually been introduced. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Quentin Perret <quentin.perret@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: adharmap@codeaurora.org Cc: chris.redpath@arm.com Cc: currojerez@riseup.net Cc: dietmar.eggemann@arm.com Cc: edubezval@gmail.com Cc: gregkh@linuxfoundation.org Cc: javi.merino@kernel.org Cc: joel@joelfernandes.org Cc: juri.lelli@redhat.com Cc: morten.rasmussen@arm.com Cc: patrick.bellasi@arm.com Cc: pkondeti@codeaurora.org Cc: rjw@rjwysocki.net Cc: skannan@codeaurora.org Cc: smuckle@google.com Cc: srinivas.pandruvada@linux.intel.com Cc: thara.gopinath@linaro.org Cc: tkjos@google.com Cc: valentin.schneider@arm.com Cc: vincent.guittot@linaro.org Cc: viresh.kumar@linaro.org Link: https://lkml.kernel.org/r/20181203095628.11858-3-quentin.perret@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
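The 25% margin mentioned above amounts to a small helper shared by sugov and EAS; roughly as below, with the exact body an assumption inferred from the description.

    /* Map a utilization value in [0, cap] to a frequency request,
     * leaving ~25% headroom above the measured utilization. */
    static inline unsigned long map_util_freq(unsigned long util,
                                              unsigned long freq,
                                              unsigned long cap)
    {
            return (freq + (freq >> 2)) * util / cap;
    }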
2018-12-11  sched/topology: Relocate arch_scale_cpu_capacity() to the internal header  (Quentin Perret)
By default, arch_scale_cpu_capacity() is only visible from within the kernel/sched folder. Relocate it to include/linux/sched/topology.h to make it visible to other clients needing to know about the capacity of CPUs, such as the Energy Model framework. This also shrinks the <linux/sched/topology.h> public header. Signed-off-by: Quentin Perret <quentin.perret@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: adharmap@codeaurora.org Cc: chris.redpath@arm.com Cc: currojerez@riseup.net Cc: dietmar.eggemann@arm.com Cc: edubezval@gmail.com Cc: gregkh@linuxfoundation.org Cc: javi.merino@kernel.org Cc: joel@joelfernandes.org Cc: juri.lelli@redhat.com Cc: morten.rasmussen@arm.com Cc: patrick.bellasi@arm.com Cc: pkondeti@codeaurora.org Cc: rjw@rjwysocki.net Cc: skannan@codeaurora.org Cc: smuckle@google.com Cc: srinivas.pandruvada@linux.intel.com Cc: thara.gopinath@linaro.org Cc: tkjos@google.com Cc: valentin.schneider@arm.com Cc: vincent.guittot@linaro.org Cc: viresh.kumar@linaro.org Link: https://lkml.kernel.org/r/20181203095628.11858-2-quentin.perret@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-12-11  sched/topology: Remove the ::smt_gain field from 'struct sched_domain'  (Vincent Guittot)
::smt_gain is used to compute the capacity of CPUs of an SMT core with the constraint 1 < ::smt_gain < 2 in order to be able to compute the number of CPUs per core. The field has_free_capacity of struct numa_stat, which was the last user of this computation of the number of CPUs per core, has been removed by: 2d4056fafa19 ("sched/numa: Remove numa_has_capacity()") We can now remove this constraint on core capacity and use the default value SCHED_CAPACITY_SCALE for SMT CPUs. With this removal, SCHED_CAPACITY_SCALE becomes the maximum compute capacity of CPUs on every system. This should help to simplify some code and remove fields like rd->max_cpu_capacity. Furthermore, arch_scale_cpu_capacity() is used with a NULL sd in several other places in the code when it wants the capacity of a CPU to scale some metrics, like in pelt, deadline or schedutil. In the case of SMT, the value returned is not the capacity of SMT CPUs but the default SCHED_CAPACITY_SCALE. So remove the field. Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1535548752-4434-4-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-12-11  perf/core: Declare the __percpu attribute on non-deref types  (Mukesh Ojha)
Sparse reports the current declaration of two perf percpu variables with this warning: warning: incorrect type in initializer (different address spaces) expected void const [noderef] <asn:3>*__vpp_verify got struct perf_cpu_context *<noident> While it's normally perfectly fine to place GCC attributes anywhere in the definition, this particular attribute is there for the benefit of a checking compiler such as Sparse, which doesn't want __percpu on pointers. So reorder the attribute to come after the structure type, not after the pointer type. [ mingo: Rewrote the changelog. ] Signed-off-by: Mukesh Ojha <mojha@codeaurora.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Arnaldo Carvalho de Melo <acme@kernel.org> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/1543310012-7967-1-git-send-email-mojha@codeaurora.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
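Concretely, for a hypothetical per-CPU pointer member, the reordering described above looks like this (the field name cpu_ctx is made up for illustration):

    /* Before: __percpu follows the pointer type; Sparse warns. */
    struct perf_cpu_context * __percpu cpu_ctx;

    /* After: __percpu follows the structure type; Sparse is happy. */
    struct perf_cpu_context __percpu *cpu_ctx;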
2018-12-11  locking/lockdep: Remove ::version from lock_class structure  (Waiman Long)
It turns out the version field in the lock_class structure isn't used anywhere. Just remove it. Signed-off-by: Waiman Long <longman@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: iommu@lists.linux-foundation.org Cc: kasan-dev@googlegroups.com Link: https://lkml.kernel.org/r/1542653726-5655-2-git-send-email-longman@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2018-12-11  dma/debug: Remove dma_debug_resize_entries()  (Robin Murphy)
With the only caller now gone, we can clean up this part of dma-debug's exposed internals and make way to tweak the allocation behaviour. Signed-off-by: Robin Murphy <robin.murphy@arm.com> Tested-by: Qian Cai <cai@lca.pw> Signed-off-by: Christoph Hellwig <hch@lst.de>
2018-12-11  x86/ima: retry detecting secure boot mode  (Mimi Zohar)
The secure boot mode may not be detected on boot for some reason (eg. buggy firmware). This patch attempts one more time to detect the secure boot mode. Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2018-12-11  x86/ima: define arch_get_ima_policy() for x86  (Eric Richter)
On x86, there are two methods of verifying a kexec'ed kernel image signature being loaded via the kexec_file_load syscall - an architecture specific implementation or an IMA KEXEC_KERNEL_CHECK appraisal rule. Neither of these methods verifies the kexec'ed kernel image signature being loaded via the kexec_load syscall. Secure boot enabled systems require kexec images to be signed. Therefore, this patch loads an IMA KEXEC_KERNEL_CHECK policy rule on secure boot enabled systems not configured with CONFIG_KEXEC_VERIFY_SIG enabled. When IMA_APPRAISE_BOOTPARAM is configured, different IMA appraise modes (eg. fix, log) can be specified on the boot command line, allowing unsigned or invalidly signed kernel images to be kexec'ed. This patch permits enabling IMA_APPRAISE_BOOTPARAM or IMA_ARCH_POLICY, but not both. Signed-off-by: Eric Richter <erichte@linux.ibm.com> Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Cc: David Howells <dhowells@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Peter Jones <pjones@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
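A rough sketch of what such an arch hook can look like; the rule strings and the secure boot guard are assumptions consistent with the description above, not a verbatim copy of the x86 implementation.

    /* Arch-provided IMA policy: only active on secure boot systems, and only
     * adds the appraisal rule when KEXEC_VERIFY_SIG isn't already doing it. */
    static const char * const sb_arch_rules[] = {
    #if !IS_ENABLED(CONFIG_KEXEC_VERIFY_SIG)
            "appraise func=KEXEC_KERNEL_CHECK appraise_type=imasig",
    #endif
            "measure func=KEXEC_KERNEL_CHECK",
            NULL
    };

    const char * const *arch_get_ima_policy(void)
    {
            if (IS_ENABLED(CONFIG_IMA_ARCH_POLICY) && arch_ima_get_secureboot())
                    return sb_arch_rules;
            return NULL;
    }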
2018-12-11  ima: add support for arch specific policies  (Nayna Jain)
Builtin IMA policies can be enabled on the boot command line, and replaced with a custom policy, normally during early boot in the initramfs. Build time IMA policy rules were recently added. These rules are automatically enabled on boot and persist after loading a custom policy. There is a need for yet another type of policy, an architecture specific policy, which is derived at runtime during kernel boot, based on the runtime secure boot flags. Like the build time policy rules, these rules persist after loading a custom policy. This patch adds support for loading an architecture specific IMA policy. Signed-off-by: Nayna Jain <nayna@linux.ibm.com> Co-Developed-by: Mimi Zohar <zohar@linux.ibm.com> Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
2018-12-11  Merge back staging AVS changes for v4.21.  (Rafael J. Wysocki)
2018-12-11  iommu/vt-d: Remove deferred invalidation  (Lu Baolu)
Deferred invalidation is an ECS specific feature. It will not be supported when IOMMU works in scalable mode. As we deprecated the ECS support, remove deferred invalidation and cleanup the code. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Liu Yi L <yi.l.liu@intel.com> Cc: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Shared virtual address in scalable mode  (Lu Baolu)
This patch enables the current SVA (Shared Virtual Address) implementation to work in the scalable mode. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Add first level page table interface  (Lu Baolu)
This adds an interface to setup the PASID entries for first level page table translation. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Setup context and enable RID2PASID support  (Lu Baolu)
This patch enables the translation for requests without PASID in the scalable mode by setting up the root and context entries. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Add second level page table interface  (Lu Baolu)
This adds the interfaces to setup or tear down the structures for second level page table translations. This includes types of second level only translation and pass through. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Add 256-bit invalidation descriptor support  (Lu Baolu)
Intel vt-d spec rev3.0 requires software to use 256-bit descriptors in invalidation queue. As the spec reads in section 6.5.2: Remapping hardware supporting Scalable Mode Translations (ECAP_REG.SMTS=1) allow software to additionally program the width of the descriptors (128-bits or 256-bits) that will be written into the Queue. Software should setup the Invalidation Queue for 256-bit descriptors before programming remapping hardware for scalable-mode translation as 128-bit descriptors are treated as invalid descriptors (see Table 21 in Section 6.5.2.10) in scalable-mode. This patch adds 256-bit invalidation descriptor support if the hardware presents scalable mode capability. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
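The practical effect on the driver side is that the queued-invalidation descriptor grows from two to four 64-bit words; roughly as below (an assumption about the in-kernel representation, not quoted from the patch).

    /* 256-bit QI descriptor: the upper two words stay zero for legacy
     * descriptor types and are only used by scalable-mode descriptors. */
    struct qi_desc {
            u64 qw0;
            u64 qw1;
            u64 qw2;
            u64 qw3;
    };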
2018-12-11  iommu/vt-d: Move page table helpers into header  (Lu Baolu)
So that they could also be used in other source files. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Liu Yi L <yi.l.liu@intel.com> Cc: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  iommu/vt-d: Enumerate the scalable mode capability  (Lu Baolu)
The Intel vt-d spec rev3.0 introduces a new translation mode called scalable mode, which enables PASID-granular translations for first level, second level, nested and pass-through modes. At the same time, the previous Extended Context (ECS) mode is deprecated (no production ever implements ECS). This patch adds enumeration for Scalable Mode and removes the deprecated ECS enumeration. It provides a boot time option to disable scalable mode even if the hardware claims to support it. Cc: Ashok Raj <ashok.raj@intel.com> Cc: Jacob Pan <jacob.jun.pan@linux.intel.com> Cc: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Sanjay Kumar <sanjay.k.kumar@intel.com> Signed-off-by: Liu Yi L <yi.l.liu@intel.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Ashok Raj <ashok.raj@intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Joerg Roedel <jroedel@suse.de>
2018-12-11  PM / devfreq: add devfreq_suspend/resume() functions  (Lukasz Luba)
This patch adds implementation for global suspend/resume for devfreq framework. System suspend will next use these functions. Suggested-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Suggested-by: Chanwoo Choi <cw00.choi@samsung.com> Signed-off-by: Lukasz Luba <l.luba@partner.samsung.com> Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com> Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2018-12-11  PM / devfreq: add support for suspend/resume of a devfreq device  (Lukasz Luba)
The patch prepares the devfreq device for handling suspend/resume functionality. The new fields will store the needed information during this process. The devfreq framework handles the opp-suspend DT entry, and there is no need for modifications in the drivers' code. It uses atomic variables to make sure no race condition affects the process. Suggested-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de> Suggested-by: Chanwoo Choi <cw00.choi@samsung.com> Signed-off-by: Lukasz Luba <l.luba@partner.samsung.com> Reviewed-by: Chanwoo Choi <cw00.choi@samsung.com> Signed-off-by: MyungJoo Ham <myungjoo.ham@samsung.com>
2018-12-10  Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next  (David S. Miller)
Daniel Borkmann says: ==================== pull-request: bpf-next 2018-12-11 The following pull-request contains BPF updates for your *net-next* tree. It has three minor merge conflicts, resolutions: 1) tools/testing/selftests/bpf/test_verifier.c Take first chunk with alignment_prevented_execution. 2) net/core/filter.c [...] case bpf_ctx_range_ptr(struct __sk_buff, flow_keys): case bpf_ctx_range(struct __sk_buff, wire_len): return false; [...] 3) include/uapi/linux/bpf.h Take the second chunk for the two cases each. The main changes are: 1) Add support for BPF line info via BTF and extend libbpf as well as bpftool's program dump to annotate output with BPF C code to facilitate debugging and introspection, from Martin. 2) Add support for BPF_ALU | BPF_ARSH | BPF_{K,X} in interpreter and all JIT backends, from Jiong. 3) Improve BPF test coverage on archs with no efficient unaligned access by adding an "any alignment" flag to the BPF program load to forcefully disable verifier alignment checks, from David. 4) Add a new bpf_prog_test_run_xattr() API to libbpf which allows for proper use of BPF_PROG_TEST_RUN with data_out, from Lorenz. 5) Extend tc BPF programs to use a new __sk_buff field called wire_len for more accurate accounting of packets going to wire, from Petar. 6) Improve bpftool to allow dumping the trace pipe from it and add several improvements in bash completion and map/prog dump, from Quentin. 7) Optimize arm64 BPF JIT to always emit movn/movk/movk sequence for kernel addresses and add a dedicated BPF JIT backend allocator, from Ard. 8) Add a BPF helper function for IR remotes to report mouse movements, from Sean. 9) Various cleanups in BPF prog dump e.g. to make UAPI bpf_prog_info member naming consistent with existing conventions, from Yonghong and Song. 10) Misc cleanups and improvements in allowing to pass interface name via cmdline for xdp1 BPF example, from Matteo. 11) Fix a potential segfault in BPF sample loader's kprobes handling, from Daniel T. 12) Fix SPDX license in libbpf's README.rst, from Andrey. ==================== Signed-off-by: David S. Miller <davem@davemloft.net>
2018-12-11  Merge branch 'regulator-4.20' into regulator-linus  (Mark Brown)
2018-12-11  usb: chipidea: add flag for imx hsic implementation  (Peter Chen)
NXP (Freescale) imx HSIC design has some special requirements; add some flags in the host code to handle them. Reviewed-by: Frieder Schrempf <frieder.schrempf@kontron.de> Tested-by: Frieder Schrempf <frieder.schrempf@kontron.de> Signed-off-by: Peter Chen <peter.chen@nxp.com>
2018-12-11  gpio: Add devm_gpiod_unhinge()  (Linus Walleij)
This adds a function named devm_gpiod_unhinge() that removes the resource management from a GPIO descriptor. I am not sure if this is the best Anglo-Saxon name for the function, no other managed resources have an equivalent currently, but I chose "unhinge" as the closest intuitive thing I could imagine that fits Rusty Russell's API design criteria "the obvious use is the correct one" and "the name tells you how to use it". The idea came out of a remark from Mark Brown that it should be possible to hand over management of a resource from devres to the regulator core, and indeed we can do that. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org>
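A hedged sketch of the intended hand-over pattern; the "enable" con_id and the function name are placeholders, only devm_gpiod_unhinge() itself comes from this entry.

    #include <linux/gpio/consumer.h>

    static int hand_over_gpio(struct device *dev, struct gpio_desc **out)
    {
            struct gpio_desc *desc;
            int ret;

            desc = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
            if (IS_ERR(desc))
                    return PTR_ERR(desc);

            /* Drop devres management; from here on the new owner (e.g. the
             * regulator core) is responsible for calling gpiod_put(). */
            ret = devm_gpiod_unhinge(dev, desc);
            if (ret)
                    return ret;

            *out = desc;
            return 0;
    }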
2018-12-11  gpio: Export gpiod_get_from_of_node()  (Linus Walleij)
This function already exists inside gpiolib; we were just reluctant to make it available to the kernel at large, as the devm_* variants seemed to be enough for anyone. However we found out that regulators need to do their own lifecycle/refcounting on GPIO descriptors and explicitly call gpiod_put() when done with a descriptor, so export this function so we can hand the refcounting over to the regulator core for these descriptors after retrieval. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Charles Keepax <ckeepax@opensource.cirrus.com> Signed-off-by: Mark Brown <broonie@kernel.org>
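For illustration, a hedged sketch of a lookup through the now-exported function; the node, property name and label are placeholders, and the caller owns the descriptor and must gpiod_put() it.

    #include <linux/gpio/consumer.h>
    #include <linux/of.h>

    static struct gpio_desc *lookup_enable_gpio(struct device_node *np)
    {
            /* Unmanaged lookup straight from a DT node: no devres involved. */
            return gpiod_get_from_of_node(np, "enable-gpios", 0,
                                          GPIOD_OUT_LOW, "regulator-enable");
    }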
2018-12-10  Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux  (Saeed Mahameed)
mlx5-next shared branch with rdma subtree to avoid mlx5 rdma vs. netdev conflicts. Highlights: 1) RDMA ODP (On Demand Paging) improvements and moving ODP logic to mlx5 RDMA driver 2) Improved mlx5 core driver and device events handling and provided API for upper layers to subscribe to device events. 3) RDMA only code cleanup from mlx5 core 4) Add helper to get CQE opcode 5) Rework handling of port module events 6) shared mlx5_ifc.h updates to avoid conflicts Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-10  net/mlx5: Remove the get protocol device interface entry  (Or Gerlitz)
This isn't used anywhere across the mlx5 driver stack, remove it. Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-10  net/mlx5: Support extended destination format in flow steering command  (Eli Britstein)
Update the flow steering command formatting according to the extended destination API. Note that the FW dictates that multi destination FTEs that involve at least one encap must use the extended destination format, while single destination ones must use the legacy format. Using extended destination format requires FW support. Check for its capabilities and return error if not supported. Signed-off-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-10  net/mlx5: E-Switch, Change vhca id valid bool field to bit flag  (Eli Britstein)
Change the driver flow destination struct to use bit flags with the vhca id valid being the 1st one. The flags field is more extendable and will be used in downstream patch. Signed-off-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-10  net/mlx5: Introduce extended destination fields  (Eli Britstein)
Extended destinations provide the ability to configure different encapsulation properties per destination on a single FTE. This is needed for use-cases such as remote mirroring over tunneled networks. Signed-off-by: Eli Britstein <elibr@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Reviewed-by: Oz Shlomo <ozsh@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2018-12-10  net/mlx5: Revise gre and nvgre key formats  (Oz Shlomo)
The GRE RFC defines a 32-bit key field. The NVGRE RFC splits that 32-bit key field into a 24-bit VSID (gre_key_h) and an 8-bit flow entropy field (gre_key_l). Define the two key parsing alternatives in a union, thus enabling both access methods. Signed-off-by: Oz Shlomo <ozsh@mellanox.com> Reviewed-by: Eli Britstein <elibr@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
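Purely illustrative of the split, not the mlx5_ifc bit-array encoding from the patch:

    /* Same 32 bits, two interpretations: a plain GRE key, or the NVGRE
     * 24-bit VSID plus 8-bit flow entropy. */
    #define NVGRE_VSID(key32)         ((key32) >> 8)      /* gre_key_h */
    #define NVGRE_FLOW_ENTROPY(key32) ((key32) & 0xffU)   /* gre_key_l */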