path: root/include
2017-08-18  genirq: Restrict effective affinity to interrupts actually using it (Marc Zyngier)
Just because CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK is selected doesn't mean that all the interrupts are using the effective affinity mask. For a number of them, this mask is likely to be empty. In order to deal with this, let's restrict the use of the effective affinity mask to those interrupts that have a non-empty effective affinity.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Paul Burton <paul.burton@imgtec.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Kevin Cernekee <cernekee@gmail.com>
Cc: Wei Xu <xuwei5@hisilicon.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Cc: Gregory Clement <gregory.clement@free-electrons.com>
Cc: Matt Redfearn <matt.redfearn@imgtec.com>
Cc: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Link: http://lkml.kernel.org/r/20170818083925.10108-2-marc.zyngier@arm.com
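For illustration, the fallback logic amounts to something like the sketch below; the helper name here is invented and is not part of the patch itself:

  /* Illustrative sketch: use the effective affinity only if the irqchip
   * actually populated it, otherwise fall back to the plain affinity mask. */
  static inline const struct cpumask *
  irq_effective_or_plain_affinity(struct irq_data *d)
  {
          const struct cpumask *m = irq_data_get_effective_affinity_mask(d);

          if (cpumask_empty(m))
                  m = irq_data_get_affinity_mask(d);
          return m;
  }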
2017-08-18  Merge branch 'x86/asm' into locking/core (Ingo Molnar)
We need the ASM_UNREACHABLE() macro for a dependent patch. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-18  ACPI / PM: Check low power idle constraints for debug only (Srinivas Pandruvada)
For a SoC to achieve its lowest-power platform idle state, a set of hardware preconditions must be met. These preconditions or constraints can be obtained by issuing a device specific method (_DSM) with function "1". Refer to the document provided in the link below.

During initialization (from the attach() callback of the LPS0 device), invoke function 1 to get the device constraints. Each enabled constraint is stored in a table. The devices in this table are used to check whether they were in the required minimum state while entering suspend. This check is done from the platform freeze wake() callback, only when the /sys/power/pm_debug_messages attribute is non-zero. If any constraint is not met and the device is ACPI power managed, its device information is printed to the kernel log. Also, if debug is enabled in acpi/sleep.c, the constraint table and the state of each device on wake are dumped to the kernel log.

Since the pm_debug_messages_on setting is used as a condition to check constraints outside kernel/power/main.c, pm_debug_messages_on is changed to a global variable.
Link: http://www.uefi.org/sites/default/files/resources/Intel_ACPI_Low_Power_S0_Idle.pdf
Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
2017-08-17Revert "pstore: Honor dmesg_restrict sysctl on dmesg dumps"Kees Cook
This reverts commit 68c4a4f8abc60c9440ede9cd123d48b78325f7a3, with various conflict clean-ups. The capability check required too much privilege compared to simple DAC controls. A system builder was forced to have crash handler processes run with CAP_SYSLOG which would give it the ability to read (and wipe) the _current_ dmesg, which is much more access than being given access only to the historical log stored in pstorefs. With the prior commit to make the root directory 0750, the files are protected by default but a system builder can now opt to give access to a specific group (via chgrp on the pstorefs root directory) without being forced to also give away CAP_SYSLOG. Suggested-by: Nick Kralevich <nnk@google.com> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Petr Mladek <pmladek@suse.cz> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
2017-08-17  quota: Reduce contention on dq_data_lock (Jan Kara)
dq_data_lock is currently used to protect all modifications of quota accounting information, consistency of quota accounting on the inode, and dquot pointers from inode. As a result contention on the lock can be pretty heavy. Reduce the contention on the lock by protecting quota accounting information by a new dquot->dq_dqb_lock and consistency of quota accounting with inode usage by inode->i_lock. This change reduces time to create 500000 files on ext4 on ramdisk by 50 different processes in separate directories by 6% when user quota is turned on. When those 50 processes belong to 50 different users, the improvement is about 9%. Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-17  fs: Provide __inode_get_bytes() (Jan Kara)
Provide helper __inode_get_bytes() which assumes i_lock is already acquired. Quota code will need this to be able to use i_lock to protect consistency of quota accounting information and inode usage. Signed-off-by: Jan Kara <jack@suse.cz>
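A minimal sketch of the resulting pair (close to, though not guaranteed identical to, the final helpers):

  /* Caller must hold inode->i_lock. */
  loff_t __inode_get_bytes(struct inode *inode)
  {
          return (((loff_t)inode->i_blocks) << 9) + inode->i_bytes;
  }

  loff_t inode_get_bytes(struct inode *inode)
  {
          loff_t ret;

          spin_lock(&inode->i_lock);
          ret = __inode_get_bytes(inode);
          spin_unlock(&inode->i_lock);
          return ret;
  }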
2017-08-17  quota: Inline functions into their callsites (Jan Kara)
inode_add_rsv_space() and inode_sub_rsv_space() had only one callsite. Inline them there directly. inode_claim_rsv_space() and inode_reclaim_rsv_space() had two callsites so inline them there as well. This will simplify further locking changes. Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-17  quota: Allow disabling tracking of dirty dquots in a list (Jan Kara)
Filesystems that are journalling quotas generally don't need tracking of dirty dquots in a list since forcing a transaction commit flushes all quotas anyway. Allow filesystem to say it doesn't want dquots to be tracked as it reduces contention on the dq_list_lock. Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-17  quota: Remove dq_wait_unused from dquot (Jan Kara)
Currently every dquot carries a wait_queue_head_t used only when we are turning quotas off, to wait for the last users to drop dquot references. Since such a rare case is not performance sensitive in any way, just use a global waitqueue for this and save space in struct dquot. Also convert the logic to use wait_event() instead of open-coding it.
Signed-off-by: Jan Kara <jack@suse.cz>
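The replacement pattern looks roughly like this (the waitqueue name is approximate, not taken from the patch):

  /* One global waitqueue instead of a wait_queue_head_t in every dquot. */
  static DECLARE_WAIT_QUEUE_HEAD(dquot_ref_wq);

  /* Quota-off path: sleep until only our own reference remains. */
  wait_event(dquot_ref_wq, atomic_read(&dquot->dq_count) == 1);

  /* dqput() path, while quotas are being turned off: */
  wake_up(&dquot_ref_wq);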
2017-08-17  drm/ttm: make ttm_mem_type_manager_func debug more useful (Christian König)
Provide the drm printer directly instead of just the callback. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-08-18  Merge tag 'omapdrm-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux into drm-next (Dave Airlie)
omapdrm changes for v4.14

* HDMI hot plug IRQ support (instead of polling)
* Big driver cleanup from Laurent (no functional changes)
* OMAP5 DSI support (only the pinmuxing was missing)

* tag 'omapdrm-4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux: (60 commits)
  drm/omap: Potential NULL deref in omap_crtc_duplicate_state()
  drm/omap: remove no-op cleanup code
  drm/omap: rename omapdrm device back
  drm: omapdrm: Remove omapdrm platform data
  ARM: OMAP2+: Don't register omapdss device for omapdrm
  ARM: OMAP2+: Remove unused omapdrm platform device
  drm: omapdrm: Remove the omapdss driver
  drm: omapdrm: Register omapdrm platform device in omapdss driver
  drm: omapdrm: hdmi: Don't allocate PHY features dynamically
  drm: omapdrm: hdmi: Configure the PHY from the HDMI core version
  drm: omapdrm: hdmi: Configure the PLL from the HDMI core version
  drm: omapdrm: hdmi: Pass HDMI core version as integer to HDMI audio
  drm: omapdrm: hdmi: Replace OMAP SoC model check with HDMI xmit version
  drm: omapdrm: hdmi: Rename functions and structures to use hdmi_ prefix
  drm/omap: add OMAP5 DSIPHY lane-enable support
  drm/omap: use regmap_update_bit() when muxing DSI pads
  drm: omapdrm: Remove dss_features.h
  drm: omapdrm: Move supported outputs feature to dss driver
  drm: omapdrm: Move DSS_FCK feature to dss driver
  drm: omapdrm: Move PCD, LINEWIDTH and DOWNSCALE features to dispc driver
  ...
2017-08-17  lsm_audit: update my email address (Stephen Smalley)
Update my email address since epoch.ncsc.mil no longer exists. MAINTAINERS and CREDITS are already correct. Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov> Signed-off-by: Paul Moore <paul@paul-moore.com>
2017-08-17  ACPICA: Make it possible to enable runtime GPEs earlier (Rafael J. Wysocki)
Runtime GPEs have corresponding _Lxx/_Exx methods and are enabled automatically during the initialization of the ACPI subsystem through acpi_update_all_gpes() with the assumption that acpi_setup_gpe_for_wake() will be called in advance for all of the GPEs pointed to by _PRW objects in the namespace that may be affected by acpi_update_all_gpes(). That is, acpi_ev_initialize_gpe_block() can only be called for a GPE block after acpi_setup_gpe_for_wake() has been called for all of the _PRW (wakeup) GPEs in it. The platform firmware on some systems, however, expects GPEs to be enabled before the enumeration of devices which is when acpi_setup_gpe_for_wake() is called and that goes against the above assumption. For this reason, introduce a new flag to be set by acpi_ev_initialize_gpe_block() when automatically enabling a GPE to indicate to acpi_setup_gpe_for_wake() that it needs to drop the reference to the GPE coming from acpi_ev_initialize_gpe_block() and modify acpi_setup_gpe_for_wake() accordingly. These changes allow acpi_setup_gpe_for_wake() and acpi_ev_initialize_gpe_block() to be invoked in any order. Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
2017-08-17  quota: Convert dqio_mutex to rwsem (Jan Kara)
Convert dqio_mutex to rwsem and call it dqio_sem. No functional changes yet. Signed-off-by: Jan Kara <jack@suse.cz>
2017-08-17  pty: fix the cached path of the pty slave file descriptor in the master (Linus Torvalds)
Christian Brauner reported that if you use the TIOCGPTPEER ioctl() to get a slave pty file descriptor, the resulting file descriptor doesn't look right in /proc/<pid>/fd/<fd>. In particular, he wanted to use readlink() on /proc/self/fd/<fd> to get the pathname of the slave pty (basically implementing "ptsname{_r}()").

The reason for that was that we had generated the wrong 'struct path' when we created the pty in ptmx_open(). In particular, the dentry was correct, but the vfsmount pointed to the mount of the ptmx node. That _can_ be correct - in case you use "/dev/pts/ptmx" to open the master - but usually is not. The normal case is to use /dev/ptmx, which then looks up the pts/ directory, and then the vfsmount of the ptmx node is obviously the /dev directory, not the /dev/pts/ directory.

We actually did have the right vfsmount available, but in the wrong place (it gets looked up in 'devpts_acquire()' when we get a reference to the pts filesystem), and so ptmx_open() used the wrong mnt pointer.

The end result of this confusion was that the pty worked fine, but if you did TIOCGPTPEER to get the slave side of the pty, the end result would also work, but have that dodgy 'struct path'. And then when doing "d_path()" on it to get the pathname, the vfsmount would not match the root of the pts directory, and d_path() would return an empty pathname, thinking that the entry had escaped a bind mount into another mount.

This fixes the problem by making devpts_acquire() return the vfsmount for the pts filesystem, allowing ptmx_open() to trivially just use the right mount for the pts dentry, and create the proper 'struct path'.

Reported-by: Christian Brauner <christian.brauner@ubuntu.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2017-08-17  Merge branches 'doc.2017.08.17a', 'fixes.2017.08.17a', 'hotplug.2017.07.25b', 'misc.2017.08.17a', 'spin_unlock_wait_no.2017.08.17a', 'srcu.2017.07.27c' and 'torture.2017.07.24c' into HEAD (Paul E. McKenney)
doc.2017.08.17a: Documentation updates.
fixes.2017.08.17a: RCU fixes.
hotplug.2017.07.25b: CPU-hotplug updates.
misc.2017.08.17a: Miscellaneous fixes outside of RCU (give or take conflicts).
spin_unlock_wait_no.2017.08.17a: Remove spin_unlock_wait().
srcu.2017.07.27c: SRCU updates.
torture.2017.07.24c: Torture-test updates.
2017-08-17  locking: Remove spin_unlock_wait() generic definitions (Paul E. McKenney)
There is no agreed-upon definition of spin_unlock_wait()'s semantics, and it appears that all callers could do just as well with a lock/unlock pair. This commit therefore removes spin_unlock_wait() and related definitions from core code. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Andrea Parri <parri.andrea@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org>
2017-08-17  soc/tegra: Register SoC device (Thierry Reding)
Move this code from arch/arm/mach-tegra and make it common among 32-bit and 64-bit Tegra SoCs. This is slightly complicated by the fact that on 32-bit Tegra, the SoC device is used as the parent for all devices that are instantiated from device tree. Signed-off-by: Thierry Reding <treding@nvidia.com>
2017-08-17  Merge branch 'core' into arm/tegra (Joerg Roedel)
2017-08-17  membarrier: Provide expedited private command (Mathieu Desnoyers)
Implement MEMBARRIER_CMD_PRIVATE_EXPEDITED with IPIs, using a cpumask built from all runqueues for which the current thread's mm is the same as that of the thread calling sys_membarrier. It executes faster than the non-expedited variant (no blocking). It also works on NOHZ_FULL configurations.

Scheduler-wise, it requires a memory barrier before and after context switching between processes (which have different mm). The memory barrier before context switch is already present. For the barrier after context switch:

* Our TSO archs can do RELEASE without being a full barrier. Look at x86 spin_unlock() being a regular STORE for example. But for those archs, all atomics imply smp_mb and all of them have atomic ops in switch_mm() for mm_cpumask(), and on x86 the CR3 load acts as a full barrier.
* Of all weakly ordered machines, only ARM64 and PPC can do RELEASE; the rest do indeed do smp_mb(), so there the spin_unlock() is a full barrier and we're good.
* ARM64 has a very heavy barrier in switch_to(), which suffices.
* PPC just removed its barrier from switch_to(), but appears to be talking about adding something to switch_mm(). So add a smp_mb__after_unlock_lock() for now, until this is settled on the PPC side.

Changes since v3:
- Properly document the memory barriers provided by each architecture.

Changes since v2:
- Address comments from Peter Zijlstra,
- Add smp_mb__after_unlock_lock() after finish_lock_switch() in finish_task_switch() to add the memory barrier we need after storing to rq->curr. This is much simpler than the previous approach relying on atomic_dec_and_test() in mmdrop(), which actually added a memory barrier in the common case of switching between userspace processes.
- Return -EINVAL when MEMBARRIER_CMD_SHARED is used on a nohz_full kernel, rather than having the whole membarrier system call return -ENOSYS. Indeed, CMD_PRIVATE_EXPEDITED is compatible with nohz_full. Adapt the CMD_QUERY mask accordingly.

Changes since v1:
- move membarrier code under kernel/sched/ because it uses the scheduler runqueue,
- only add the barrier when we switch from a kernel thread. The case where we switch from a user-space thread is already handled by the atomic_dec_and_test() in mmdrop().
- add a comment to mmdrop() documenting the requirement on the implicit memory barrier.

CC: Peter Zijlstra <peterz@infradead.org>
CC: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: Boqun Feng <boqun.feng@gmail.com>
CC: Andrew Hunter <ahh@google.com>
CC: Maged Michael <maged.michael@gmail.com>
CC: gromer@google.com
CC: Avi Kivity <avi@scylladb.com>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Dave Watson <davejwatson@fb.com>
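Schematically, and leaving out error handling, the single-threaded fast path and the exact locking, the expedited command does something like the sketch below (function shapes are illustrative, not the merged code):

  static void ipi_mb(void *info)
  {
          smp_mb();       /* The IPI is likely serializing, but be explicit. */
  }

  /* Sketch only: build a mask of CPUs currently running our mm and IPI them. */
  static void membarrier_private_expedited(void)
  {
          int cpu;
          cpumask_var_t tmpmask;

          if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
                  return;

          for_each_online_cpu(cpu) {
                  struct task_struct *p;

                  rcu_read_lock();
                  p = task_rcu_dereference(&cpu_rq(cpu)->curr);
                  if (p && p->mm == current->mm)
                          __cpumask_set_cpu(cpu, tmpmask);
                  rcu_read_unlock();
          }
          smp_call_function_many(tmpmask, ipi_mb, NULL, 1);
          free_cpumask_var(tmpmask);
  }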
2017-08-17  swait: Add idle variants which don't contribute to load average (Luis R. Rodriguez)
There are cases where folks are using an interruptible swait when using kthreads. This is rather confusing given you'd expect interruptible waits to be -- interruptible, but kthreads are not interruptible! The reason for such practice though is to avoid having these kthreads contribute to the system load average.

When systems are idle some kthreads may spend a lot of time blocking if using swait_event_timeout(). This would contribute to the system load average. On systems without preemption this would mean the load average of an idle system is bumped to 2 instead of 0. On systems with PREEMPT=y this would mean the load average of an idle system is bumped to 3 instead of 0.

This adds a proper API using TASK_IDLE to make such goals explicit and avoid confusion.

Suggested-by: "Eric W. Biederman" <ebiederm@xmission.com>
Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>
Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
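Conceptually the new variant only swaps the sleep state; a sketch (not the exact macro text that was merged):

  /* Sketch: like swait_event(), but sleep in TASK_IDLE so the kthread is
   * neither signalable nor accounted in the load average. */
  #define __swait_event_idle(wq, condition)                               \
          (void)___swait_event(wq, condition, TASK_IDLE, 0, schedule())

  #define swait_event_idle(wq, condition)                                 \
  do {                                                                    \
          if (condition)                                                  \
                  break;                                                  \
          __swait_event_idle(wq, condition);                              \
  } while (0)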
2017-08-17  rcu: Create reasonable API for do_exit() TASKS_RCU processing (Paul E. McKenney)
Currently, the exit-time support for TASKS_RCU is open-coded in do_exit(). This commit creates exit_tasks_rcu_start() and exit_tasks_rcu_finish() APIs for do_exit() use. This has the benefit of confining the use of the tasks_rcu_exit_srcu variable to one file, allowing it to become static. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
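The resulting interface is roughly the following (sketch):

  /* rcupdate.h (sketch): no-ops unless TASKS_RCU is configured. */
  #ifdef CONFIG_TASKS_RCU
  void exit_tasks_rcu_start(void);
  void exit_tasks_rcu_finish(void);
  #else
  static inline void exit_tasks_rcu_start(void) { }
  static inline void exit_tasks_rcu_finish(void) { }
  #endif

  /* do_exit() then simply brackets the relevant exit-time work: */
          exit_tasks_rcu_start();
          /* ... exit processing that RCU-tasks must still see ... */
          exit_tasks_rcu_finish();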
2017-08-17  rcu: Drive TASKS_RCU directly off of PREEMPT (Paul E. McKenney)
The actual use of TASKS_RCU is only when PREEMPT, otherwise RCU-sched is used instead. This commit therefore makes synchronize_rcu_tasks() and call_rcu_tasks() available always, but mapped to synchronize_sched() and call_rcu_sched(), respectively, when !PREEMPT. This approach also allows some #ifdefs to be removed from rcutorture. Reported-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Acked-by: Ingo Molnar <mingo@kernel.org>
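The mapping is along these lines (sketch):

  /* Sketch: without preemption, RCU-sched already waits for everything
   * that TASKS_RCU would wait for, so alias the APIs. */
  #ifdef CONFIG_TASKS_RCU
  void call_rcu_tasks(struct rcu_head *head, rcu_callback_t func);
  void synchronize_rcu_tasks(void);
  #else
  #define call_rcu_tasks          call_rcu_sched
  #define synchronize_rcu_tasks   synchronize_sched
  #endif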
2017-08-17  locking/lockdep: Explicitly initialize wq_barrier::done::map (Boqun Feng)
With the new lockdep crossrelease feature, which checks completions usage, a false positive is reported in the workqueue code:

> Worker A : acquired of wfc.work -> wait for cpu_hotplug_lock to be released
> Task B : acquired of cpu_hotplug_lock -> wait for lock#3 to be released
> Task C : acquired of lock#3 -> wait for completion of barr->done
> (Task C is in lru_add_drain_all_cpuslocked())
> Worker D : wait for wfc.work to be released -> will complete barr->done

Such a deadlock cannot happen because Task C's barr->done and Worker D's barr->done can not be the same instance. The reason for this false positive is that we initialize all wq_barrier::done at insert_wq_barrier() via init_completion(), which makes them belong to the same lock class; therefore, impossible circles are reported.

To fix this, explicitly initialize the lockdep map for wq_barrier::done in insert_wq_barrier(), so that the lock class key of wq_barrier::done is a subkey of the corresponding work_struct. As a result we won't build a dependency between a wq_barrier and an unrelated work, and we can distinguish wq barriers based on the related works, so the false positive above is avoided.

Also define the empty lockdep_init_map_crosslock() for !CROSSRELEASE to keep the code simple and free of unnecessary #ifdefs.

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170817094622.12915-1-boqun.feng@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
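In insert_wq_barrier() the initialization then looks roughly like this (sketch, not necessarily the exact hunk):

  /*
   * Sketch: derive barr->done's lockdep key from the target work item
   * instead of the single static key init_completion() would assign,
   * so unrelated barriers no longer share a lock class.
   */
  lockdep_init_map_crosslock((struct lockdep_map *)&barr->done.map,
                             "(complete)wq_barr::done",
                             target->lockdep_map.key, 1);
  __init_completion(&barr->done);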
2017-08-17  locking/lockdep: Rename CONFIG_LOCKDEP_COMPLETE to CONFIG_LOCKDEP_COMPLETIONS (Byungchul Park)
'complete' is an adjective and LOCKDEP_COMPLETE sounds like 'lockdep is complete', so pick a better name that uses a noun. Signed-off-by: Byungchul Park <byungchul.park@lge.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: kernel-team@lge.com Link: http://lkml.kernel.org/r/1502960261-16206-3-git-send-email-byungchul.park@lge.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-17  Merge tag 'keystone_dts_for_4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone into next/dt (Arnd Bergmann)
Pull "ARM: Keystone DTS for 4.14" from Santosh Shilimkar:

Contents:
- ti-sci power domain, clock and reset controller support
- DSP nodes for k2h/k2l/k2e evms
- DSP CMA memory pools for k2h/k2l/k2e/keg evms
- MMC/hsmmc support for k2g
- eDMA support for k2g
- DCAN support for k2g

* tag 'keystone_dts_for_4.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone: (22 commits)
  ARM: dts: keystone-k2g-ice: Add and enable DSP CMA memory pool
  ARM: dts: keystone-k2g-evm: Add and enable DSP CMA memory pool
  ARM: dts: keystone-k2g: Add DSP node
  ARM: dts: k2g: Add DCAN nodes
  dt-bindings: net: c_can: Update binding for clock and power-domains property
  ARM: dts: keystone-k2g-evm: Enable MMC0 and MMC1
  ARM: dts: keystone-k2g: add MMC0 and MMC1 nodes
  ARM: dts: keystone-k2g: Add eDMA nodes
  dt-bindings: ti,omap-hsmmc: Add 66AK2G mmc controller
  dt-bindings: ti,edma: Add 66AK2G specific information
  ARM: dts: keystone-k2g: Add gpio nodes
  ARM: dts: keystone-k2e-evm: Add and enable DSP CMA memory pool
  ARM: dts: keystone-k2l-evm: Add and enable common DSP CMA memory pool
  ARM: dts: keystone-k2hk-evm: Add and enable common DSP CMA memory pool
  ARM: dts: keystone-k2e: Add DSP node
  ARM: dts: keystone-k2l: Add DSP nodes
  ARM: dts: keystone-k2hk: Add DSP nodes
  ARM: dts: keystone-k2g: Add TI SCI reset-controller node
  ARM: dts: keystone-k2g: Add ti-sci clock provider node
  ARM: dts: keystone-k2g: Add ti-sci power domain node
  ...
2017-08-17  efi: Introduce efi_early_memdesc_ptr to get pointer to memmap descriptor (Baoquan He)
The existing map iteration helper for_each_efi_memory_desc_in_map can only be used after the kernel initializes the EFI subsystem to set up struct efi_memory_map. Before that we also need to iterate over map descriptors which are stored in several intermediate structures, like struct efi_boot_memmap for arch-independent usage and struct efi_info for x86 arch only.

Introduce efi_early_memdesc_ptr() to get a pointer to a map descriptor, and replace several places where that primitive is open coded.

Signed-off-by: Baoquan He <bhe@redhat.com>
[ Various improvements to the text. ]
Acked-by: Matt Fleming <matt@codeblueprint.co.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: ard.biesheuvel@linaro.org
Cc: fanc.fnst@cn.fujitsu.com
Cc: izumi.taku@jp.fujitsu.com
Cc: keescook@chromium.org
Cc: linux-efi@vger.kernel.org
Cc: n-horiguchi@ah.jp.nec.com
Cc: thgarnie@google.com
Link: http://lkml.kernel.org/r/20170816134651.GF21273@x1
Signed-off-by: Ingo Molnar <mingo@kernel.org>
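The helper itself is essentially pointer arithmetic over the raw buffer (sketch):

  /*
   * Sketch: return the n-th descriptor of a raw EFI memory map buffer.
   * desc_size comes from the firmware and need not equal
   * sizeof(efi_memory_desc_t), so plain array indexing would be wrong.
   */
  #define efi_early_memdesc_ptr(map, desc_size, n)                        \
          (efi_memory_desc_t *)((void *)(map) + ((n) * (desc_size)))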
2017-08-17  locking/refcounts, x86/asm: Implement fast refcount overflow protection (Kees Cook)
This implements refcount_t overflow protection on x86 without a noticeable performance impact, though without the fuller checking of REFCOUNT_FULL. This is done by duplicating the existing atomic_t refcount implementation but with normally a single instruction added to detect if the refcount has gone negative (e.g. wrapped past INT_MAX or below zero). When detected, the handler saturates the refcount_t to INT_MIN / 2. With this overflow protection, the erroneous reference release that would follow a wrap back to zero is blocked from happening, avoiding the class of refcount-overflow use-after-free vulnerabilities entirely. Only the overflow case of refcounting can be perfectly protected, since it can be detected and stopped before the reference is freed and left to be abused by an attacker. There isn't a way to block early decrements, and while REFCOUNT_FULL stops increment-from-zero cases (which would be the state _after_ an early decrement and stops potential double-free conditions), this fast implementation does not, since it would require the more expensive cmpxchg loops. Since the overflow case is much more common (e.g. missing a "put" during an error path), this protection provides real-world protection. For example, the two public refcount overflow use-after-free exploits published in 2016 would have been rendered unexploitable: http://perception-point.io/2016/01/14/analysis-and-exploitation-of-a-linux-kernel-vulnerability-cve-2016-0728/ http://cyseclabs.com/page?n=02012016 This implementation does, however, notice an unchecked decrement to zero (i.e. caller used refcount_dec() instead of refcount_dec_and_test() and it resulted in a zero). Decrements under zero are noticed (since they will have resulted in a negative value), though this only indicates that a use-after-free may have already happened. Such notifications are likely avoidable by an attacker that has already exploited a use-after-free vulnerability, but it's better to have them reported than allow such conditions to remain universally silent. On first overflow detection, the refcount value is reset to INT_MIN / 2 (which serves as a saturation value) and a report and stack trace are produced. When operations detect only negative value results (such as changing an already saturated value), saturation still happens but no notification is performed (since the value was already saturated). On the matter of races, since the entire range beyond INT_MAX but before 0 is negative, every operation at INT_MIN / 2 will trap, leaving no overflow-only race condition. As for performance, this implementation adds a single "js" instruction to the regular execution flow of a copy of the standard atomic_t refcount operations. (The non-"and_test" refcount_dec() function, which is uncommon in regular refcount design patterns, has an additional "jz" instruction to detect reaching exactly zero.) Since this is a forward jump, it is by default the non-predicted path, which will be reinforced by dynamic branch prediction. The result is this protection having virtually no measurable change in performance over standard atomic_t operations. The error path, located in .text.unlikely, saves the refcount location and then uses UD0 to fire a refcount exception handler, which resets the refcount, handles reporting, and returns to regular execution. This keeps the changes to .text size minimal, avoiding return jumps and open-coded calls to the error reporting routine. 
Example assembly comparison:

refcount_inc() before:

  .text:
  ffffffff81546149: f0 ff 45 f4             lock incl -0xc(%rbp)

refcount_inc() after:

  .text:
  ffffffff81546149: f0 ff 45 f4             lock incl -0xc(%rbp)
  ffffffff8154614d: 0f 88 80 d5 17 00       js     ffffffff816c36d3
  ...
  .text.unlikely:
  ffffffff816c36d3: 48 8d 4d f4             lea    -0xc(%rbp),%rcx
  ffffffff816c36d7: 0f ff                   (bad)

These are the cycle counts comparing a loop of refcount_inc() from 1 to INT_MAX and back down to 0 (via refcount_dec_and_test()), between unprotected refcount_t (atomic_t), fully protected REFCOUNT_FULL (refcount_t-full), and this overflow-protected refcount (refcount_t-fast):

2147483646 refcount_inc()s and 2147483647 refcount_dec_and_test()s:

                      cycles          protections
  atomic_t            82249267387     none
  refcount_t-fast     82211446892     overflow, untested dec-to-zero
  refcount_t-full     144814735193    overflow, untested dec-to-zero, inc-from-zero

This code is a modified version of the x86 PAX_REFCOUNT atomic_t overflow defense from the last public patch of PaX/grsecurity, based on my understanding of the code. Changes or omissions from the original code are mine and don't reflect the original grsecurity/PaX code. Thanks to PaX Team for various suggestions for improvement for repurposing this code to be a refcount-only protection.

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Eric Biggers <ebiggers3@gmail.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Hans Liljestrand <ishkamiel@gmail.com>
Cc: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: Jann Horn <jannh@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Serge E. Hallyn <serge@hallyn.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arozansk@redhat.com
Cc: axboe@kernel.dk
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/20170815161924.GA133115@beast
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-17  x86/mm, mm/hwpoison: Clear PRESENT bit for kernel 1:1 mappings of poison pages (Tony Luck)
Speculative processor accesses may reference any memory that has a valid page table entry. While a speculative access won't generate a machine check, it will log the error in a machine check bank. That could cause escalation of a subsequent error since the overflow bit will be then set in the machine check bank status register. Code has to be double-plus-tricky to avoid mentioning the 1:1 virtual address of the page we want to map out otherwise we may trigger the very problem we are trying to avoid. We use a non-canonical address that passes through the usual Linux table walking code to get to the same "pte". Thanks to Dave Hansen for reviewing several iterations of this. Also see: http://marc.info/?l=linux-mm&m=149860136413338&w=2 Signed-off-by: Tony Luck <tony.luck@intel.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Borislav Petkov <bp@suse.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Elliott, Robert (Persistent Memory) <elliott@hpe.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-mm@kvack.org Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20170816171803.28342-1-tony.luck@intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-17  Merge branch 'linus' into perf/core, to pick up fixes (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2017-08-16  scsi: sd_zbc: Write unlock zone from sd_uninit_cmnd() (Damien Le Moal)
Releasing a zone write lock only when the write command that acquired the lock completes can cause deadlocks due to potential command reordering if the lock owning request is requeued and not executed. This problem exists only with the scsi-mq path as, unlike the legacy path, requests are moved out of the dispatch queue before being prepared and so before locking a zone for a write command.

Since sd_uninit_cmnd() is now always called when a request is requeued, call sd_zbc_write_unlock_zone() from that function for write requests that acquired a zone lock instead of from sd_done(). Acquisition of a zone lock by a write command is indicated using the new command flag SCMD_ZONE_WRITE_LOCK.

Signed-off-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <Bart.VanAssche@wdc.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
2017-08-16  ipv4: better IP_MAX_MTU enforcement (Eric Dumazet)
While working on yet another syzkaller report, I found that our IP_MAX_MTU enforcements were not properly done. gcc seems to reload dev->mtu for min(dev->mtu, IP_MAX_MTU), and final result can be bigger than IP_MAX_MTU :/ This is a problem because device mtu can be changed on other cpus or threads. While this patch does not fix the issue I am working on, it is probably worth addressing it. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
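The defensive idiom is to read the mtu exactly once; a sketch of the idea (the function name is invented for illustration, not taken from the patch):

  static inline unsigned int ip_dev_mtu_capped(const struct net_device *dev)
  {
          /* Read dev->mtu once so a concurrent change on another CPU
           * cannot defeat the clamp below. */
          unsigned int mtu = READ_ONCE(dev->mtu);

          return min(mtu, (unsigned int)IP_MAX_MTU);
  }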
2017-08-16  ptr_ring: use kmalloc_array() (Eric Dumazet)
As found by syzkaller, malicious users can set whatever tx_queue_len on a tun device and eventually crash the kernel. Let's remove the ALIGN(XXX, SMP_CACHE_BYTES) thing since a small ring buffer is not fast anyway.

Fixes: 2e0ab8ca83c1 ("ptr_ring: array based FIFO for pointers")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
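The allocation change boils down to something like this (sketch):

  static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
  {
          /* kcalloc() checks size * sizeof(void *) for overflow, so a huge
           * tx_queue_len can no longer wrap the allocation size; the
           * SMP_CACHE_BYTES alignment games are simply dropped. */
          return kcalloc(size, sizeof(void *), gfp);
  }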
2017-08-16  vmbus: remove unused vmbus_sendpacket_ctl (stephen hemminger)
The only usage of vmbus_sendpacket_ctl was by vmbus_sendpacket. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  vmbus: remove unused vmbus_sendpacket_pagebuffer_ctl (stephen hemminger)
The function vmbus_sendpacket_pagebuffer_ctl was never used directly. Just have vmbus_sendpacket_pagebuffer do the work itself.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  vmbus: remove unused vmbus_sendpacket_multipagebuffer (stephen hemminger)
This function is not used anywhere in current code. Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  bpf: sock_map fixes for !CONFIG_BPF_SYSCALL and !STREAM_PARSER (John Fastabend)
Resolve issues with !CONFIG_BPF_SYSCALL and !STREAM_PARSER

  net/core/filter.c: In function ‘do_sk_redirect_map’:
  net/core/filter.c:1881:3: error: implicit declaration of function ‘__sock_map_lookup_elem’ [-Werror=implicit-function-declaration]
     sk = __sock_map_lookup_elem(ri->map, ri->ifindex);
     ^
  net/core/filter.c:1881:6: warning: assignment makes pointer from integer without a cast [enabled by default]
     sk = __sock_map_lookup_elem(ri->map, ri->ifindex);

Fixes: 174a79ff9515 ("bpf: sockmap with sk redirect support")
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-17  Merge tag 'drm-misc-next-2017-08-16' of git://anongit.freedesktop.org/git/drm-misc into drm-next (Dave Airlie)
UAPI Changes:
- vc4: Allow userspace to dictate rendering order in submit_cl ioctl (Eric)

Cross-subsystem Changes:
- vboxvideo: One of Cihangir's patches applies to vboxvideo which is maintained in staging

Core Changes:
- atomic_legacy_backoff is officially killed (Daniel)
- Extract drm_device.h (Daniel)
- Unregister drm device on unplug (Daniel)
- Rename deprecated drm_*_(un)?reference functions to drm_*_{get|put} (Cihangir)

Driver Changes:
- vc4: Error/destroy path cleanups, log level demotion, edid leak (Eric)
- various: Make various drm_*_funcs structs const (Bhumika)
- tinydrm: add support for LEGO MINDSTORMS EV3 LCD (David)
- various: Second half of .dumb_{map_offset|destroy} defaults set (Noralf)

Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Eric Anholt <eric@anholt.net>
Cc: Bhumika Goyal <bhumirks@gmail.com>
Cc: Cihangir Akturk <cakturk@gmail.com>
Cc: David Lechner <david@lechnology.com>
Cc: Noralf Trønnes <noralf@tronnes.org>

* tag 'drm-misc-next-2017-08-16' of git://anongit.freedesktop.org/git/drm-misc: (50 commits)
  drm/gem-cma-helper: Remove drm_gem_cma_dumb_map_offset()
  drm/virtio: Use the drm_driver.dumb_destroy default
  drm/bochs: Use the drm_driver.dumb_destroy default
  drm/mgag200: Use the drm_driver.dumb_destroy default
  drm/exynos: Use .dumb_map_offset and .dumb_destroy defaults
  drm/msm: Use the drm_driver.dumb_destroy default
  drm/ast: Use the drm_driver.dumb_destroy default
  drm/qxl: Use the drm_driver.dumb_destroy default
  drm/udl: Use the drm_driver.dumb_destroy default
  drm/cirrus: Use the drm_driver.dumb_destroy default
  drm/tegra: Use .dumb_map_offset and .dumb_destroy defaults
  drm/gma500: Use .dumb_map_offset and .dumb_destroy defaults
  drm/mxsfb: Use .dumb_map_offset and .dumb_destroy defaults
  drm/meson: Use .dumb_map_offset and .dumb_destroy defaults
  drm/kirin: Use .dumb_map_offset and .dumb_destroy defaults
  drm/vc4: Continue the switch to drm_*_put() helpers
  drm/vc4: Fix leak of HDMI EDID
  dma-buf: fix reservation_object_wait_timeout_rcu to wait correctly v2
  dma-buf: add reservation_object_copy_fences (v2)
  drm/tinydrm: add support for LEGO MINDSTORMS EV3 LCD
  ...
2017-08-16  Merge tag 'omap-for-v4.14/dt-v3-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/dt (Arnd Bergmann)
Pull "Device tree changes for omaps for v4.14 merge window" from Tony Lindgren:

- A series of changes for dra7 and am572 to use generic MMC vqmmc regulator
- Clean-up tps65217 internal interrupts to define them only in tps65217.dtsi
- Add dra7 iodelay pinctrl driver configuration
- Add buzzer support for am437x-gp-evm
- Disable HDMI CEC internal pull-ups as it seems that all boards have an external pull for these
- Remove unnecessary interrupt-parent for omap3
- Configure droid 4 vaudio regulator initial mode and add vibrator
- Enable NAND dma prefetch for am335x-evm, am437x and dra7
- Add pcie1 dt node for EP mode for am57x and dra7
- Add support for new dra76x SoCs and dra76x-evm

* tag 'omap-for-v4.14/dt-v3-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap: (25 commits)
  ARM: dts: nokia n900: update dts with camera support
  ARM: dts: Add support for dra76-evm
  ARM: dts: Add support for dra76x family of devices
  ARM: dts: DRA7: Add pcie1 dt node for EP mode
  ARM: dts: am335x: add support for Moxa UC-8100-ME-T open platform
  ARM: dts: dra7xx: Enable NAND dma prefetch by default
  ARM: dts: am437xx: Enable NAND dma prefetch by default
  ARM: dts: am335x-evm: Enable NAND dma prefetch by default
  ARM: dts: omap4-droid4: Add vibrator
  ARM: dts: motorola-cpcap-mapphone: set initial mode for vaudio
  ARM: dts: omap3: Remove needless interrupt-parent property
  ARM: dts: Disable HDMI CEC internal pull-ups
  ARM: dts: am437x-gp-evm: Add support for buzzer
  ARM: dts: Add dra7 iodelay configuration
  ARM: dts: tps65217: Add power button interrupt to the common tps65217.dtsi file
  ARM: dts: tps65217: Add charger interrupts to the common tps65217.dtsi file
  ARM: dts: omap*: Replace deprecated "vmmc_aux" with "vqmmc"
  ARM: dts: am572x-idk: Fix GPIO polarity for MMC1 card detect
  ARM: dts: am571x-idk: Fix GPIO polarity for MMC1 card detect
  ARM: dts: dra7: Add "max-frequency" property to MMC dt nodes
  ...
2017-08-16  qdisc: add tracepoint qdisc:qdisc_dequeue for dequeued SKBs (Jesper Dangaard Brouer)
The main purpose of this tracepoint is to monitor bulk dequeue in the network qdisc layer, as it cannot be deduced from the existing qdisc stats.

The txq_state can be used for determining the reason for zero packet dequeues, see enum netdev_queue_state_t.

Notice that not all packets necessarily activate this tracepoint: qdiscs with the TCQ_F_CAN_BYPASS flag can directly invoke sch_direct_xmit() when qdisc_qlen is zero.

Remember that perf record supports filters like:

  perf record -e qdisc:qdisc_dequeue \
    --filter 'ifindex == 4 && (packets > 1 || txq_state > 0)'

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  Merge tag 'omap-for-v4.14/soc-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/soc (Arnd Bergmann)
Pull "soc changes for omaps for v4.14" from Tony Lindgren:

SoC updates for omaps for v4.14. Most of the changes are to add support for the new dra762 SoC. The other changes are for legacy DMA code removal, and MMC quirk and iodelay config for dra7.

* tag 'omap-for-v4.14/soc-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap:
  ARM: OMAP: dra7: powerdomain data: Register SoC specific powerdomains
  ARM: dra762: Enable SMP for dra762
  ARM: dra7: hwmod: Register dra76x specific hwmod
  ARM: dra762: Add support for device identification
  ARM: OMAP2+: board-generic: add support for dra762 family
  ARM: OMAP2+: Select PINCTRL_TI_IODELAY for SOC_DRA7XX
  ARM: OMAP2+: Add pdata-quirks for MMC/SD on DRA74x EVM
  ARM: OMAP2+: Remove unused legacy code for DMA
2017-08-16  Merge tag 'renesas-drivers-for-v4.14' of https://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas into next/drivers (Arnd Bergmann)
Pull "Renesas ARM Based SoC Drivers Updates for v4.14" from Simon Horman:

Add R-Car D3 (r8a77995) support to the Renesas-specific SoC drivers
- SoC identification
- System controller
- Reset controller

* tag 'renesas-drivers-for-v4.14' of https://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
  soc: renesas: rcar-rst: Add support for R-Car D3
  soc: renesas: rcar-sysc: Add support for R-Car D3 power areas
  soc: renesas: Add r8a77995 SYSC PM Domain Binding Definitions
  soc: renesas: Identify R-Car D3
2017-08-16  Merge tag 'v4.14-rockchip-drivers-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip into next/drivers (Arnd Bergmann)
Pull "Rockchip driver changes for 4.14" from Heiko Stübner:

Powerdomain support for rk3366 and disabling of the automatic jtag/sdmmc switching for rk3328.

* tag 'v4.14-rockchip-drivers-1' of git://git.kernel.org/pub/scm/linux/kernel/git/mmind/linux-rockchip:
  soc: rockchip: power-domain: add power domain support for rk3366
  dt-bindings: add binding for rk3366 power domains
  dt-bindings: power: add RK3366 SoCs header for power-domain
  soc: rockchip: disable jtag switching for RK3328 Soc
2017-08-16  Merge tag 'tee-drv-for-4.14' of http://git.linaro.org/people/jens.wiklander/linux-tee into next/drivers (Arnd Bergmann)
Pull "Small fixes and enhancements for the TEE subsystem" from Jens Wiklander:

* tag 'tee-drv-for-4.14' of http://git.linaro.org/people/jens.wiklander/linux-tee:
  tee: optee: sync with new naming of interrupts
  tee: indicate privileged dev in gen_caps
  tee: optee: interruptible RPC sleep
  tee: optee: add const to tee_driver_ops and tee_desc structures
  tee: tee_shm: Constify dma_buf_ops structures.
  tee: add forward declaration for struct device
  tee: optee: fix uninitialized symbol 'parg'
2017-08-16  drm: Add GEM backed framebuffer library (Noralf Trønnes)
This library provides helpers for drivers that don't subclass drm_framebuffer and are backed by drm_gem_object. The code is taken from drm_fb_cma_helper. Signed-off-by: Noralf Trønnes <noralf@tronnes.org> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> Reviewed-by: Eric Anholt <eric@anholt.net> Link: https://patchwork.freedesktop.org/patch/msgid/1502631125-13557-2-git-send-email-noralf@tronnes.org
2017-08-16  SUNRPC: Don't hold the transport lock across socket copy operations (Trond Myklebust)
Instead add a mechanism to ensure that the request doesn't disappear from underneath us while copying from the socket. We do this by preventing xprt_release() from freeing the XDR buffers until the flag RPC_TASK_MSG_RECV has been cleared from the request. Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com> Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
2017-08-16  bpf: add access to sock fields and pkt data from sk_skb programs (John Fastabend)
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  bpf: sockmap with sk redirect support (John Fastabend)
Recently we added a new map type called dev map used to forward XDP packets between ports (6093ec2dc313). This patch introduces a similar notion for sockets.

A sockmap allows users to add participating sockets to a map. When sockets are added to the map, enough context is stored with the map entry to use the entry with a new helper

  bpf_sk_redirect_map(map, key, flags)

This helper (analogous to bpf_redirect_map in XDP) is given the map and an entry in the map. When called from a sockmap program, discussed below, the skb will be sent on the socket using skb_send_sock().

With the above we need a bpf program to call the helper from that will then implement the send logic. The initial site implemented in this series is the recv_sock hook. For this to work we implemented a map attach command to add attributes to a map. In sockmap we add two programs: a parse program and a verdict program. The parse program uses strparser to build messages and pass them to the verdict program. The parse programs use the normal strparser semantics. The verdict program is of type SK_SKB.

The verdict program returns a verdict SK_DROP, or SK_REDIRECT for now. Additional actions may be added later. When SK_REDIRECT is returned, expected when the bpf program uses bpf_sk_redirect_map(), the sockmap logic will consult per-cpu variables set by the helper routine and pull the sock entry out of the sock map. This pattern follows the existing redirect logic in cls and xdp programs.

This gives the flow,

  recv_sock -> str_parser (parse_prog) -> verdict_prog -> skb_send_sock
                                                       \
                                                        -> kfree_skb

As an example use case, a message based load balancer may use specific logic in the verdict program to select the sock to send on.

Sample programs are provided in future patches that hopefully illustrate the user interfaces. Also selftests are in follow-on patches.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
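As a sketch of what a verdict program might look like (the section and map names are illustrative, the usual samples/bpf helper macros are assumed, and the helper signature is the one described above):

  #include <uapi/linux/bpf.h>
  #include "bpf_helpers.h"        /* SEC(), struct bpf_map_def, helper stubs */

  struct bpf_map_def SEC("maps") sock_map = {
          .type        = BPF_MAP_TYPE_SOCKMAP,
          .key_size    = sizeof(int),
          .value_size  = sizeof(int),
          .max_entries = 20,
  };

  SEC("sk_skb/verdict")
  int prog_verdict(struct __sk_buff *skb)
  {
          /* Redirect every parsed message to the socket stored at key 0. */
          return bpf_sk_redirect_map(&sock_map, 0, 0);
  }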
2017-08-16  bpf: export bpf_prog_inc_not_zero (John Fastabend)
bpf_prog_inc_not_zero will be used by upcoming sockmap patches; this patch simply exports it so we can pull it in.
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2017-08-16  bpf: introduce new program type for skbs on sockets (John Fastabend)
A class of programs, run from strparser and soon from a new map type called sock map, is used with an skb as the context but on established sockets. By creating a specific program type for these we can use bpf helpers that expect full sockets and get the verifier to ensure these helpers are not used out of context.

The new type is BPF_PROG_TYPE_SK_SKB. This patch introduces the infrastructure and type.

Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>